Thursday, December 29, 2016
It's been a different type of week here at Tableau. Between getting Monday off for Christmas and being out Friday while we move buildings, there have only been three days to get work done.
So here is just a short list of things I have done as a tester this week:
1. Posted LOTS of small code changes for review. As part of an efficiency effort, I am removing bits and pieces of dead code that are left around in the codebase. It is not an especially difficult task, but it will help with maintainability over time. The most challenging aspect of this is not the code changes - it is finding the right set of folks to review each change I am trying to make. In some cases, this code has been around for many years, and digging through the history of the repository can be time consuming. I want to find the original people who checked in the code to validate my removal, and easily more than half my time is spent on that one task.
2. I have managed to do some testing as well. The most frustrating part of this blog is that I can seldom mention what features I am testing, since they are in development. I don't want to mention new features until they are shipping - I certainly don't want to imply we have a feature that appears to be ready to go only to find out there is some core problem with it. That creates confusion (at best) all around. But I managed to automate a test that checks for rounding errors (a sketch of the general shape follows this list) and got it running, as well as completing some manual testing tasks.
3. We are moving buildings this weekend, so I have spent some time packing my stuff. I also helped pack some common areas, like our library. The movers will be in later today, and we will be out while they move our gear around. The frustrating part is that our machines will be powered down. I won't be able to log in remotely to my machine to keep any processes flowing, but hey, I'll take the time off and enjoy myself!
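I can't show the actual test, but the general shape of a rounding-error check looks something like this minimal sketch (the scenario is invented for illustration - floating point sums are inexact, so the test compares within a tolerance rather than demanding exact equality):

#include <assert.h>
#include <math.h>

int main(void)
{
    // Summing 0.1 ten times should give 1.0, but binary floating
    // point cannot represent 0.1 exactly, so the raw sum is slightly off.
    double sum = 0.0;
    for (int i = 0; i < 10; i++)
    {
        sum = sum + 0.1;
    }

    // Compare within a small tolerance instead of using ==,
    // which would fail here due to rounding error.
    assert(fabs(sum - 1.0) < 1e-9);
    return 0;
}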
So that's it for the short week here at Tableau. Happy New Year everyone!
Questions, comments, concerns and criticisms always welcome,
John
Monday, December 19, 2016
Well, that saved some time
After getting our unit tests assigned to the correct team, I took a few days off, so no post last week. When I returned, I found a defect assigned to me saying that a unit test was failing. In this case, the test was related to filling in missing values in a column of data that the user is trying to display in a visualization. Now, to be fair, there are many different ways to try to infer what the missing data should be, and there is a larger question of whether you should impute a missing value at all. But the purpose of this test was to validate that, given a range of numbers with a missing value, if we want to use the average of all the values to fill in the gap, we compute the average correctly and actually add it to the column.
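Under the covers, the behavior being tested looks something like this minimal sketch, assuming missing values are marked with NAN (the real code is Tableau-internal, so the names here are illustrative only):

#include <math.h>

// Fill each missing value in a column (marked with NAN) using the
// average of the values that are present.
void impute_mean(double *column, int length)
{
    double sum = 0.0;
    int count = 0;
    for (int i = 0; i < length; i++)
    {
        if (!isnan(column[i]))
        {
            sum += column[i];
            count++;
        }
    }
    if (count == 0)
        return; // nothing to average
    double mean = sum / count;
    for (int i = 0; i < length; i++)
    {
        // e.g. {2.0, 4.0, NAN, 6.0} becomes {2.0, 4.0, 4.0, 6.0}
        if (isnan(column[i]))
            column[i] = mean;
    }
}

A test then just hands the function a column with a known gap and verifies the gap was filled with the correct average.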
This is not an especially difficult test, and the code that it exercises is not all that challenging either. If you ever take a class in data science, this is typically an exercise that you will be asked to complete early in the class cycle. IIRC, this was near the end of my first "semester" of this online program from Johns Hopkins. (This was a good class, by the way.) Anyway, I looked at the test and realized this was not my area - a partner team owns that codepath now. I simply assigned it over to them to investigate the source of the error and moved on to my next task.
My next task is simply TFS - Team Foundation Server - cleanup. I am our team's scrum master, so one of my duties is to stay on top of incoming defects and help keep our sprint schedule on track. This doesn't take all that much time, maybe a few hours per week, but being timely makes it much easier than letting the database build up. So I devote a small amount of time each day to scrubbing through it. After that, I will start digging into our manual test cases to see if automating any of them would be the next best step for me to take.
Questions, comments, concerns and criticisms always welcome,
John
Monday, December 5, 2016
Unit and integration tests
I've been working recently to assign ownership of our "unit tests" to the correct teams. While changing the team name that owns each test is straightforward in most cases, in some I had to step into the tests to determine exactly what was being validated. This is a great way for me to learn the code, by the way.
In any case, I soon discovered that some of the tests I had been thinking about as unit tests are actually integration tests. The difference between the two is what I want to mention today.
A unit test is the simplest form of automated testing that we write. In a simple case, suppose I am writing a calculator application and want to multiply two whole numbers. I could write a function that looks something like this:
// Multiply two whole numbers by repeated addition.
// Assumes first is non-negative.
int multiply(int first, int second)
{
    int result = 0;
    for (int i = 0; i < first; i++)
    {
        result = result + second;
    }
    return result;
}
Now when I write a unit test, I can pass in 8 and 4 and validate I get 32 as a result, and also pass in 0 and 2 to validate I get 0. Not much to this, but that is the point of a unit test - it tests just one function. If it fails, I know exactly which function needs to be investigated.
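In code, those two checks might be as simple as this (plain assert for brevity, reusing the multiply function above):

#include <assert.h>

// Each assert exercises only multiply(), so a failure here
// points straight at that one function.
void test_multiply(void)
{
    assert(multiply(8, 4) == 32);
    assert(multiply(0, 2) == 0);
}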
Then suppose I add a power function to the calculator. Raising a number to a power is just multiplying the number by itself as many times as I want the power to be, so my function for this might be:
// Raise number to a non-negative power by repeated multiplication.
// Assumes ERROR_CODE is defined elsewhere, e.g. #define ERROR_CODE -1
int power(int number, int exponent)
{
    if (exponent < 0)
        return ERROR_CODE;
    int result = 1;
    for (int i = 0; i < exponent; i++)
    {
        result = multiply(result, number);
    }
    return result;
}
I just call my multiply command as part of my power command. Reusing code is always a goal.
But now when I test my power function, I have a challenge if the test fails. A failure might be in the part of the code that is unique to my power function, or it could be in the multiply command. There could also be a case in which both are failing for different reasons. So instead of quickly finding the one bit of code I need to investigate, I now have two places to look and three investigations to complete.
Looking more at the calculator, if I added a "compute compounded interest" function, I would need to use the power function and the multiply function, and possibly a few others as well. Now if a test fails I might have dozens of locations to investigate.
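To make that concrete, here is a hypothetical sketch of such a function, sticking with the calculator's whole-number style (the name and the integer percentage rate are my invention, not real calculator code):

// Grow a whole-dollar principal at a whole-percent annual rate for a
// number of years, reusing power() and multiply() from above:
//   amount = principal * (100 + rate)^years / 100^years
// Integer math truncates and overflows quickly, so small inputs only.
int compound_interest(int principal, int rate_percent, int years)
{
    int numerator = power(100 + rate_percent, years);
    int denominator = power(100, years);
    return multiply(principal, numerator) / denominator;
}

A failing test here could implicate compound_interest itself, power, or multiply - exactly the fan-out described above.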
On the positive side, I might also discover a flaw in my code that only shows when one function calls another. This type of test is referred to as an integration test and is absolutely critical to shipping software. The most famous example of not covering this is the Mars Climate Orbiter: the spacecraft was lost because Lockheed Martin tested all their code with one set of units and everyone else tested all their code with metric units. Very loosely speaking, when the Lockheed code told the spacecraft it was "1,000,000" from Earth it meant one million miles, but when the spacecraft heard "1,000,000" it assumed kilometers, and that mismatch was the root of the loss. Integration tests should have been used to catch this type of error.
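Back in the calculator, an integration test for the power function might look like this - because power() calls multiply() under the covers, one check exercises both functions working together:

#include <assert.h>

// power() computes 2*2*2 via repeated calls to multiply(), so a
// single assert covers both functions and their interaction.
void test_power_integration(void)
{
    assert(power(2, 3) == 8);
    assert(power(5, 0) == 1); // anything to the zeroth power is 1
}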
Eagle-eyed readers will point out that my power function is useless without the multiply function. How can I isolate my unit tests to use only the code in that function instead of needing the multiply command as well? I'll cover that next time.
Questions, comments, concerns and criticisms always welcome,
John