One of the tasks on my plate is to find and fill holes in test coverage. A good question - which I am going to avoid for now - is how features ship with holes in coverage. Instead, I want to focus on holes that get created over time.
For instance, my team owns the Trend Lines feature in Tableau. Here is a sample image we have used in testing for years:
You can see multiple trend lines drawn, each with a confidence band around it. With each new version of Tableau, we validate via a simple image comparison that we have not altered this expected visualization. We also validate the math behind the feature, but I want to focus on the image for now. This test has been running and passing for years.
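Tableau's internal test harness is not public, so purely as a sketch of the idea: a pixel-level image comparison can be as simple as diffing a fresh render against a saved baseline. The Pillow-based helper and the file paths below are hypothetical stand-ins for whatever the real infrastructure uses.

```python
from PIL import Image, ImageChops

def images_match(baseline_path: str, actual_path: str) -> bool:
    """Return True if the two images are pixel-identical."""
    baseline = Image.open(baseline_path).convert("RGB")
    actual = Image.open(actual_path).convert("RGB")
    if baseline.size != actual.size:
        return False
    # difference() is black wherever the pixels agree; getbbox() returns
    # None when the entire difference image is black.
    return ImageChops.difference(baseline, actual).getbbox() is None

if __name__ == "__main__":
    # Hypothetical paths: the saved "golden" image vs. the latest render.
    assert images_match("baselines/trend_lines.png",
                        "output/trend_lines.png"), "rendering changed!"
```

Real harnesses usually tolerate small per-pixel differences (anti-aliasing varies across graphics stacks), but an exact match is the simplest version of the check.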
So how can debt accumulate in this scenario? One way it cropped up was when we added a version of Tableau that works on the Mac. When this test was created, only a Windows version of Tableau existed. Since then, we have added Tableau for the Mac, Linux (via server) and different flavors of server (Windows and Linux). So now we have to verify we draw correctly on each new platform.
The hole gets created when a new supported configuration is added. When we added Mac support, we obviously ran all our tests. The Windows test did not break, since we changed nothing for Windows. And because we did not add a test for the Mac, the non-existent test did not fail (again, obviously). A hole existed, and we are filling that gap now, along with adding Linux tests.
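One way to keep the next configuration from slipping through silently - not necessarily how we do it, just a sketch - is to drive the baseline check from a single list of supported platforms. The pytest parameterization and the PLATFORMS list below are my own hypothetical illustration: adding a platform to the list immediately demands a baseline image for it, so the missing test fails loudly instead of never existing.

```python
import os
import pytest

# Hypothetical list of supported configurations; a newly supported
# platform gets one entry here, and nothing else.
PLATFORMS = ["windows", "mac", "linux-server", "windows-server"]

@pytest.mark.parametrize("platform", PLATFORMS)
def test_trend_lines_baseline_exists(platform):
    baseline = f"baselines/{platform}/trend_lines.png"
    # A configuration with no baseline now fails the suite, rather than
    # being a test that was simply never written.
    assert os.path.exists(baseline), f"no trend-line baseline for {platform}"
```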
Questions, comments, concerns and criticisms always welcome,
John