Last week I
mentioned that some old test automation breaks while it is disabled.
As an example, suppose I added a test back in 2014 to check that the 22 French regions were labelled properly. It works for a year, but then France announces it will consolidate its regions in January 2016. While working on that change, I disable my test since I know it won't provide any value while the rest of the changes are being made. Then I forget to turn the test back on and don't notice that until after the change.
In this case, the fix is straightforward. I change my test to account for the real-world changes that happened while it was disabled. Here, I take out the list of the 22 regions and replace that list with the 13 new ones.
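As a sketch of what that test update might look like (the function name `get_region_labels` and the test shape are my assumptions, not actual product code), the fix is just swapping the expected data. The 13 metropolitan regions France adopted in 2016 replace the old list of 22:

```python
# Hypothetical test, updated for the 2016 French region consolidation.
# get_region_labels() is an assumed stand-in for whatever the product
# exposes; the point is that only the expected data changes.

NEW_FRENCH_REGIONS = [
    "Auvergne-Rhône-Alpes", "Bourgogne-Franche-Comté", "Bretagne",
    "Centre-Val de Loire", "Corse", "Grand Est", "Hauts-de-France",
    "Île-de-France", "Normandie", "Nouvelle-Aquitaine", "Occitanie",
    "Pays de la Loire", "Provence-Alpes-Côte d'Azur",
]

def get_region_labels():
    # Stand-in for the real lookup; returns the current region labels.
    return list(NEW_FRENCH_REGIONS)

def test_french_regions_labelled():
    labels = get_region_labels()
    # The old test expected the 22 pre-2016 regions; now we expect 13.
    assert len(labels) == 13
    assert sorted(labels) == sorted(NEW_FRENCH_REGIONS)
```

The test logic itself is untouched; only the expected list changed while the test was disabled.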
This pattern - the code being tested changes while the test is disabled - is common. In almost all cases, simply changing the test to account for the new expected behavior is all that needs to be done to enable the test. So I typically make that change, enable the test, run it a few thousand times and, if it passes, leave it enabled as part of the build system.
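A minimal sketch of that "run it a few thousand times" burn-in step (the loop and names are mine; in practice this would be a test-runner feature such as pytest-repeat rather than a hand-rolled loop):

```python
# Burn-in sketch: loop a re-enabled test many times and stop at the
# first failure, to gain confidence before leaving it in the build.

def flaky_candidate_test():
    # Stand-in for the body of the re-enabled test.
    assert 2 + 2 == 4

def burn_in(test, runs=2000):
    for i in range(runs):
        try:
            test()
        except AssertionError:
            return i  # index of the first failing run
    return None  # every run passed; safe to leave the test enabled

print(burn_in(flaky_candidate_test))  # prints None: all runs passed
```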
Sometimes the tests are more complicated than I know how to fix. In that case, I contact the team that owns the test and hand off the work of enabling it to them.
All in all, this is
a simple case to handle.
There is also the case where the test is no longer valid. Think of a test that validated Tableau worked on Windows Vista. Vista is no longer around, so that test can simply be deleted.
Other factors can
change as well, and I'll wrap this up next week.
Questions, comments, concerns and criticisms always welcome,
Another aspect of my work recently has been paying down technical debt we built up over the years. An example of technical debt would be this:
1. Imagine we are building an application that can compute the miles per gallon your car gets.
2. We create the algorithm to compute miles per gallon.
3. We add tests to make sure it works.
4. We ship it.
5. Then we are a hit in the market. But the rest of the world wants liters per 100 kilometers, so:
   a. We add that feature.
   b. As we add it, we realize we need to change our existing code that only knows about miles. We figure it will take a week to do this.
   c. During this week, we disable the tests that test the code for "miles".
   d. We finish the liters per 100 kilometers feature.
   e. We check in.
   f. We ship, and the whole world is happy.
But look back at step 5c. The tests for miles (whatever they were) were disabled and we never turned them back on. We call this "technical debt" or, in this case, since we know it is test related, "test debt." It happens when we take shortcuts like 5c - disabling a test. I'll just point out that a better practice would have been to ensure that none of the new code we wrote for the metric values could break the MPG code, so the tests never needed to be disabled at all. In the real world, the most likely reason to disable them is speed - I simply want to test my new code quickly and don't want to run all the tests over the old code I am not changing, so I disable the old tests for now and will re-enable them when I am done. (Or so I say...)
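A minimal sketch of that better practice (function names are mine; the conversion constant follows from 1 mile = 1.609344 km and 1 US gallon = 3.785411784 L): the metric feature is built on top of the existing MPG code rather than instead of it, so the old "miles" tests never need to be disabled:

```python
# Sketch of the MPG example: the metric feature reuses the MPG code,
# so the original "miles" tests keep running untouched.

KM_PER_MILE = 1.609344
LITERS_PER_GALLON = 3.785411784

def miles_per_gallon(miles, gallons):
    # Original feature; its behavior is unchanged.
    return miles / gallons

def liters_per_100km(miles, gallons):
    # New feature layered on top of the old one, not replacing it.
    mpg = miles_per_gallon(miles, gallons)
    return 100.0 * LITERS_PER_GALLON / (mpg * KM_PER_MILE)

# The old test stays enabled the whole time:
assert miles_per_gallon(300, 10) == 30.0
# And the new one is added alongside it:
assert abs(liters_per_100km(300, 10) - 7.84) < 0.01
```

Because the MPG function is never edited, the miles tests keep passing throughout the week of metric work, and there is no debt to pay down later.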
So one other task I have taken on is identifying tests that are in this state. Fortunately, there are not many of them, but every so often one slips through the process and winds up being disabled for far longer than we anticipated. Turning them back on is usually easy. Every so often, though, an older test won't pass nowadays because so much code has changed while it was disabled.
What to do in those
cases is a little trickier and I will cover that next.
Questions, comments, concerns and criticisms always welcome,
I finished my tool to look through a large set of our test code and classify our tests with respect to who owns them, when they run, and other attributes like that. My first use of it was to find "dead" tests - tests that never run, provide no validation, or otherwise are left in the system for some reason. I want to give a sense of scale for how big this type of challenge is. Looking through just over 1000 tests, I identified 15 that appeared to be dead. Closer examination of those tests took about half a day and determined that 8 of them are actually in use. This revealed a hole in my tool - there was an attribute I forgot to check.
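A toy sketch of the classification idea (the attribute names and comment format here are invented; the real tool and test metadata surely differ): scan test sources for tagged attributes and flag anything with no run schedule as a "dead" candidate for human review:

```python
# Toy test classifier: pull hypothetical "owner:"/"schedule:" tags out
# of test source and flag untagged tests as dead-test candidates.
import re

ATTR_RE = re.compile(r"#\s*(owner|schedule|suite):\s*(\S+)")

def classify(test_source):
    attrs = dict(ATTR_RE.findall(test_source))
    # No run schedule found: a candidate for closer examination,
    # not an automatic deletion.
    attrs["dead_candidate"] = "schedule" not in attrs
    return attrs

nightly = "# owner: alice\n# schedule: nightly\ndef test_export(): ...\n"
orphan = "# owner: bob\ndef test_legacy(): ...\n"

print(classify(nightly)["dead_candidate"])  # prints False - runs nightly
print(classify(orphan)["dead_candidate"])   # prints True - flag for review
```

Note the output is only a candidate list - as the 8-of-15 false positives above show, a human still has to confirm each one.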
One of the tests was actually valid and had simply been mis-tagged. I reenabled that test and it is now running again, providing validation that nothing has broken.
The other 6 tests were a bit more challenging. I had to look at each test, then look at lab results to see if anyone was actually still running them, dig through each test to see what the expected result was, and so on. In most cases, I had to go to the person who wrote the test - in 2 instances, almost 10 years ago - to see if the tests could be removed. It might seem trivial to track down 6 files out of 1000+, but this will save us build time on every build and maintenance costs over the years, as well as leaving a slightly cleaner test code base.
In 4 of the cases, the tests can be removed, and I have removed them. In the USA, this is a holiday week for us, so I am waiting on some folks to get back in the office next week to follow up on the last 2 tests.
These are all incremental steps toward squaring away our test code.
Questions, comments, criticisms and complaints always welcome,