I've been working on code coverage a bit this week. Our team uses Bullseye for gathering coverage information, and while it has its good and bad points, overall it is fairly easy to use manually.
More specifically, I have been trying to identify files that have no automated coverage at all. In theory, such a file could not even exist and we would not know. In reality, Tableau would fail to build without it, but having no automated coverage at all is still a poor state to be in.
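To make "no coverage at all" concrete: the check boils down to scanning per-file numbers for files where nothing was ever executed. Here is a minimal Python sketch of that scan. The CSV layout (a "Source" column plus a "FnCov" functions-covered count) is an assumed export format for illustration, not Bullseye's actual schema.

```python
import csv
import sys

def files_with_zero_coverage(csv_path):
    """Return source files where no function was ever executed.

    Assumes a CSV export with 'Source' and 'FnCov' (functions covered)
    columns -- the real Bullseye export format may differ.
    """
    untouched = []
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            if int(row["FnCov"]) == 0:
                untouched.append(row["Source"])
    return untouched

if __name__ == "__main__":
    # Print one untouched file per line, e.g. for piping into a report.
    for path in files_with_zero_coverage(sys.argv[1]):
        print(path)
```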
And this is where I hit my first snag. We have several different types of automation we run each day. Unit tests are an obvious starting point, and there are also end-to-end (integration) tests. Code coverage numbers for those are easy enough to gather and merge together.
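Merging those runs is conceptually just a union: a file or probe counts as covered if any suite hit it. Here is a small Python sketch of the idea, assuming each run has been reduced to a mapping of source file to a set of covered probe ids (Bullseye records richer condition/decision data and has its own merge tooling; this only illustrates the operation):

```python
from collections import defaultdict

def merge_coverage(*runs):
    """Union per-file covered-probe sets across test runs.

    Each run is a dict mapping source file -> set of covered probe ids
    (a simplified stand-in for Bullseye's condition/decision probes).
    """
    merged = defaultdict(set)
    for run in runs:
        for source, probes in run.items():
            merged[source] |= probes
    return dict(merged)

# Example: a file counts as covered if *any* suite touched it.
unit = {"query/parser.cpp": {1, 2, 5}, "util/log.cpp": set()}
e2e  = {"query/parser.cpp": {2, 7},   "render/draw.cpp": {3}}
print(merge_coverage(unit, e2e))
```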
We also run other types of tests, like security tests and performance tests. Getting coverage numbers from something like a performance test is tricky. Since we are trying to measure performance accurately, we don't want to slow the system down by also monitoring which lines of code are being hit or missed. On the other hand, the code being hit should be exactly the same code the other tests already cover - in other words, there should be no special code that only runs when we measure performance. It's hard to validate that assumption when we specifically don't want to measure code coverage for half of the equation.
In any event, we do have a few files with low coverage, and we are working to ensure that code has adequate automated tests on it moving forward. More on what "adequate" means in this context coming up.
Questions, comments, concerns and criticisms always welcome,
John