We voted this week
to dedicate an upcoming sprint to becoming more efficient as a team
rather than to any particular new Tableau functionality. The thinking here is that if we become 10%
more efficient, we can deliver 10% more features in each release over time,
so this small investment now should pay large dividends in the future.
The test team chose
to work on analyzing automation results.
For instance, if a given test is known to fail some large percentage of
the time - let's say 99.99% for the sake of argument - then if it fails tonight I
might not need to make investigating it the highest priority task on my plate
tomorrow. Conversely, a test that has
never failed before and fails tonight might very well become my most important task
tomorrow.
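To make that concrete, here is a minimal sketch of the triage idea in Python. The data shapes and test names are hypothetical - the real system would pull pass/fail history out of our automation results - but it shows how tonight's failures could be ordered so the most surprising ones land at the top of the list.

```python
# A minimal sketch of the triage idea. The history format and test
# names are hypothetical, for illustration only.

def failure_rate(passes: int, failures: int) -> float:
    """Fraction of historical runs in which the test failed."""
    total = passes + failures
    return failures / total if total else 0.0

def triage(tonights_failures: list[str],
           history: dict[str, tuple[int, int]]) -> list[str]:
    """Order tonight's failures so the most surprising come first.

    A test that almost never fails is the most interesting when it
    does; a test that fails 99.99% of the time can wait.
    """
    return sorted(
        tonights_failures,
        key=lambda name: failure_rate(*history.get(name, (0, 0))),
    )

if __name__ == "__main__":
    history = {
        "test_flaky_export": (1, 9999),   # fails almost every night
        "test_stable_login": (9999, 1),   # has failed exactly once
    }
    tonight = ["test_flaky_export", "test_stable_login"]
    print(triage(tonight, history))
    # ['test_stable_login', 'test_flaky_export']
```

Sorting ascending by historical failure rate puts the test that almost never fails at the front of the investigation queue.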
So our first step is determining the failure rate of every single test we
have. Just tying together all that data
- years' worth, times several thousand tests, times multiple runs per day, etc. -
is a large challenge. Then we have to
mine the data for the reason behind each failure.
If a failure was due to a product bug, then we need to exclude that
failure when computing how often each test
fails intermittently.
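As a sketch of what that computation might look like, here is one way to fold the product-bug exclusion into the failure-rate calculation. The record shape and the reason labels are assumptions for illustration, not our actual schema.

```python
# A sketch of the failure-rate computation, assuming each historical
# run record carries a test name, an outcome, and (for failures) a
# mined reason. The RunRecord shape and reason strings are hypothetical.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class RunRecord:
    test_name: str
    passed: bool
    reason: str | None = None  # e.g. "product_bug", "test_issue"

def intermittent_failure_rates(records: list[RunRecord]) -> dict[str, float]:
    """Per-test failure rate, excluding failures caused by product bugs.

    A failure traced to a real product bug says nothing about test
    flakiness, so that run is left out of the intermittency math.
    """
    totals = defaultdict(int)     # runs counted toward the rate
    failures = defaultdict(int)   # non-product-bug failures
    for rec in records:
        if not rec.passed and rec.reason == "product_bug":
            continue  # a legitimate catch, not flakiness
        totals[rec.test_name] += 1
        if not rec.passed:
            failures[rec.test_name] += 1
    return {name: failures[name] / totals[name] for name in totals}
```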
The data mining and
computation for all of this seem like a good, achievable goal for one
sprint. Using
that data in a meaningful way will be the (obvious) follow-on project.
Wish us luck!
Questions, comments,
concerns and criticisms always welcome,
John