Monday, December 11, 2017

Integration Testing, Part 2


The overall goal of software testing is to validate that the software we ship works as described.  Integration testing is a key part of that, and I want to continue with a very simple "test."



I want to continue with the muffler and engine analogy from my last post.  If the engine is designed correctly, it has a specification document.  That document would list various data about the engine, such as horsepower, the type of fuel needed and so on.  It will also list how much fuel the engine burns at each speed and, based on that, how many gallons/liters of exhaust the engine generates per second at that speed.



Let's say it generates 1 unit of exhaust per second at idle, 3 units at half speed and 12 (wow!  You are really flooring it!) at full speed.



It is our job to test that the muffler we use can process that much exhaust.  The "test" here is simple.  We look at the data sheet for the muffler and see if it can process up to 12 units of exhaust per second.  If the answer is no, we don't need to set up an engine and muffler and measure the exhaust.  We can simply say this will not work and we need to make a change (to either the engine or the muffler), or select a different engine and muffler combination.  Easy enough, but this simple test of reading the documentation is skipped often enough that I wanted to call it out as the first step to take.
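
If you want to see that "read the data sheet" test as actual code, here is a minimal sketch in C++.  The struct names and numbers below are mine, invented for illustration - they do not come from any real spec sheet.

#include <cassert>

// Hypothetical spec values pulled from the two data sheets.
struct EngineSpec {
    double exhaustAtIdle;        // units of exhaust per second
    double exhaustAtHalfSpeed;
    double exhaustAtFullSpeed;
};

struct MufflerSpec {
    double maxExhaustPerSecond;  // rated capacity from the muffler data sheet
};

// The "test" is just comparing the engine's worst case against the muffler's rating.
bool mufflerCanHandleEngine(const EngineSpec& engine, const MufflerSpec& muffler) {
    return engine.exhaustAtFullSpeed <= muffler.maxExhaustPerSecond;
}

int main() {
    EngineSpec engine{1.0, 3.0, 12.0};
    assert(mufflerCanHandleEngine(engine, MufflerSpec{15.0}));    // 12 <= 15, compatible
    assert(!mufflerCanHandleEngine(engine, MufflerSpec{8.0}));    // 12 > 8, pick different parts
    return 0;
}

Notice that the whole "test" is a comparison of numbers pulled from documentation - no engine ever needs to be started.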



This also assumes that the documentation is accurate.  That is not always the case in software, since the underlying code can change at any point and documentation updates sometimes lag.  In the mechanical engineering world, though, that does not happen as often.



More on this muffler and engine next time.



Questions, comments, concerns and criticisms always welcome,

John

Tuesday, November 28, 2017

Integration Testing, Part 1


Integration tests are the tests we use to validate that 2 or more software modules work together. 

Let me give an example by analogy.  Suppose you have a car engine and you know it works (for whatever definition of "work" you want to use).  I have a muffler, and it also works, again, using whatever definition of "works" you want to use.

Now suppose you are asked "Will the engine you make work with my muffler?"

Each component works, but how can we tell if they will work together?

Integration testing is the key here.  We know that each component works by itself, but there are no guarantees that they will work together.

For instance, one test case will be verifying that the engine's exhaust outlet is the same diameter as the muffler pipe (broadly speaking).  If the engine has a 5 inch exhaust and the muffler pipe is only 3 inches wide, we have a mismatch and they won't work together.

A second case, assuming the first passes, is connecting the 2 components.  Even if the exhaust diameter matches, if you use metric bolts and I don't, we are in a failing state again.
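
To make those first two cases concrete, here is a rough sketch of what they might look like as code.  The interface structs, field names and values are all made up for this example.

#include <iostream>
#include <string>

struct EngineInterface {
    double exhaustDiameterInches;
    std::string boltStandard;    // e.g. "metric" or "SAE"
};

struct MufflerInterface {
    double pipeDiameterInches;
    std::string boltStandard;
};

bool diametersMatch(const EngineInterface& e, const MufflerInterface& m) {
    return e.exhaustDiameterInches == m.pipeDiameterInches;
}

bool boltsMatch(const EngineInterface& e, const MufflerInterface& m) {
    return e.boltStandard == m.boltStandard;
}

int main() {
    EngineInterface engine{5.0, "metric"};
    MufflerInterface muffler{3.0, "SAE"};

    std::cout << "Diameter check: " << (diametersMatch(engine, muffler) ? "pass" : "fail") << "\n";
    std::cout << "Bolt check:     " << (boltsMatch(engine, muffler) ? "pass" : "fail") << "\n";
    return 0;
}

Each check compares one property the engine exposes against the matching property the muffler expects.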

In fact, there will be many more test cases for this.  Material compatibility (some metals don't interact well with others), weight considerations, stress cases (handling backfiring, for instance) and many, many more.

The same mentality applies to software testing and I will go deeper into that next time.

Until then, questions, comments, concerns and criticisms always welcome,
John

Monday, November 20, 2017

Updating some data about our data for our team


One of the techniques we use to develop new features (which I cannot talk about) is wrapping the new code behind what we call a Feature Flag.  A feature flag is just a variable that we set OFF while we are working on features to keep that code from running until we are ready to turn it ON.  This is relatively basic engineering and there really is not anything special about it.

As a related note, many Windows applications use a registry key to turn features on or off.  We use a text file here with other data about the flag stored in it.  For example, we not only store the name of the flag, but also the name of the team that owns it, a short description of what it is for and when we expect to be done.

In some cases, those dates can be wrong or the name of the flag needs to be changed to make its purpose clearer.  For instance, to use a contrived example, a flag named "tooltip" is not all that useful, but a flag named "ShowAdvancedAnalyticsTooltipsForTheWebClients" is a bit more explanatory.  And if work finishes early or lags behind estimated dates, those dates can change as well.
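
For the curious, here is a made-up sketch of what that flag metadata might look like if you modeled it in code.  The real file format and fields are internal, so treat every name and value below as hypothetical.

#include <string>

struct FeatureFlag {
    std::string name;          // e.g. "ShowAdvancedAnalyticsTooltipsForTheWebClients"
    bool enabled;              // OFF while the feature is still under development
    std::string owningTeam;
    std::string description;
    std::string expectedDone;  // target date, which can slip or move up
};

// New feature code stays dormant until the flag is turned ON.
void maybeShowAdvancedTooltip(const FeatureFlag& flag) {
    if (!flag.enabled) {
        return;
    }
    // ... new tooltip code would run here ...
}

int main() {
    FeatureFlag flag{"ShowAdvancedAnalyticsTooltipsForTheWebClients", false,
                     "Some Team", "Advanced analytics tooltips for web clients", "2018-01-31"};
    maybeShowAdvancedTooltip(flag);  // does nothing while the flag is OFF
    return 0;
}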

So this week I expect to update these values (metadata) for our flags.  This is very low priority work, but with US holidays approaching, now seems like the best time to knock it off the to-do list.

Questions, comments, concerns and criticisms always welcome,
John

Wednesday, October 25, 2017

Don't bother porting dead code


I have a task to move (we call it "port") some test code from one location to another.  The details are not interesting, but it did involve moving a test "Shape" object and a "Text" object.

The text object and the shape object both inherited from the same parent class, included the same set of header files, were co-located in the same directory within the project I wanted to modify and were otherwise similar in structure.  For a variety of reasons, though, the text object could be moved without much effort at all.  The shape object proved far more difficult.

At first, the compiler complained it could not find the shape.h header file.  That was a little tricky, but it boiled down to having 2 files named shape.h in the project, and the path to the file I wanted was not specified correctly.  Fixing that caused the shape object to fail to inherit from its parent class.

And thus began about 2 weeks of trying to get the darn code to build.  I would find and fix one problem only to get to the next error.  This is not unusual - we call it peeling the onion - but it is time consuming. 

For my needs, this is a medium priority task at best, so I wasn't working on it full time, just when I could fit it into my schedule for an hour or so.  I started with 27 build errors, watched that grow to about 100, then whittled it all down to 2.

But at this point I was 2 weeks into build files, linking errors, etc… and decided to try a new approach since I felt I was treating symptoms and not the underlying problem.

I put everything back where it was (reverted my changes, so to speak) and rebuilt.  I then stepped through the code to see how the Shape object was being allocated and used.

It wasn't.

Although it was referenced in the tests, it was never used.  It was dead code.

Sigh.

I was able to delete the dead code, move everything else I needed and get unblocked.

Lesson learned - do your investigation early in the process to determine what actually needs to be ported!

Now, off to a 2 week hiatus.  See you when I am back!

Questions, comments, concerns and criticisms always welcome,
John

Friday, October 20, 2017

Back from Tableau Conference 2017


What a whirlwind that was.  I started the week helping folks get their schedules straight, then did a little docent work helping people find rooms.  I also did crowd control for seating during the keynotes (which were a blast!  Adam Savage was pretty terrific) and even got to do a little security work in there.

Overall, this was a fantastic conference.  I got to meet several of our customers 1 on 1 and gained some tremendous insights into what everyone wants from us.  That alone made the conference worthwhile for me - now I have a much better idea where I need to focus my time and test efforts moving forward.

If you come to next year's conference in New Orleans, be sure to let me know!  I'd love to spend some time chatting with any Tableau users that happen to be reading this blog!

Questions, comments, concerns and criticisms always welcome,
John

PS: Yes, someone mentioned that it seemed like everywhere we walked we always walked through the casino.  I just chuckled and mentioned that means that whoever designed the walkways did the job right...

Tuesday, October 3, 2017

I so badly want to rewrite this little bit of test code, but won't


I saw something similar to this in a test script:

#ifdef Capybara
#include "A.h"
#include "B.h"
#else
#include "B.h"
#endif

Fair enough.  I can imagine that this was NOT the way this code snippet was checked in - it probably changed over time to what it is now.  I haven't yet dug into the history of it.

That seems a bit inefficient to me and I would prefer to change it to:

#ifdef Capybara
#include "A.h"
#endif
#include "B.h"

Fewer lines, and it should perform the exact same includes.

But I won't make this change and here is why:
  1. It may break.  The odds are small, but why add risk where the system is currently working?
  2. It's a small benefit overall.  Maybe 1/1000 of a second faster build times.
  3. It takes some amount of time to make the change, build, test it, get a code review, check it in, etc…  I can't see any time savings for this change in the long run.
  4. A better fix than this one change would be a tool looking for this pattern in ALL our files.  Then use the tool to make the change everywhere, and make the tool part of the regular code checks we perform constantly.
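
Here is a very rough sketch of the kind of check item 4 describes: scan files for an #ifdef/#else/#endif block that repeats the same #include in both branches.  It ignores nested blocks and other preprocessor forms a real tool would have to handle, so treat it as illustration only.

#include <fstream>
#include <iostream>
#include <set>
#include <string>

void findDuplicatedIncludes(const std::string& path) {
    std::ifstream in(path);
    std::string line;
    bool inBlock = false;
    bool inElse = false;
    int blockStart = 0;
    int lineNo = 0;
    std::set<std::string> ifBranchIncludes;

    while (std::getline(in, line)) {
        ++lineNo;
        if (line.rfind("#ifdef", 0) == 0) {
            inBlock = true;
            inElse = false;
            blockStart = lineNo;
            ifBranchIncludes.clear();
        } else if (inBlock && line.rfind("#else", 0) == 0) {
            inElse = true;
        } else if (inBlock && line.rfind("#endif", 0) == 0) {
            inBlock = false;
        } else if (inBlock && line.rfind("#include", 0) == 0) {
            if (!inElse) {
                ifBranchIncludes.insert(line);  // remember includes from the #ifdef branch
            } else if (ifBranchIncludes.count(line) > 0) {
                std::cout << path << ": include repeated in both branches of the #ifdef "
                          << "starting at line " << blockStart << ": " << line << "\n";
            }
        }
    }
}

int main(int argc, char** argv) {
    for (int i = 1; i < argc; ++i) {
        findDuplicatedIncludes(argv[i]);
    }
    return 0;
}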

But considering #2 overall, even a tool would not pay for itself.  So I am leaving this as is for now and moving on to my next task.

(But if I have the fortune of needing to edit that file, I will be sorely tempted to make this fix at the same time :) )

Questions, comments, concerns and criticisms always welcome,
John

Monday, September 25, 2017

Tableau Conference is 2 weeks away and I am getting ready


I got my schedule set for TC, which is now just around the corner.  I will be working as a Schedule Scout, helping attendees get their schedules created.  Seems like a great way to get a conversation started face to face with our customers.

For TC last year, I had the same goal of talking directly with customers.  I looked around at all the jobs we can do - at TC, ALL the jobs are performed by Tableau employees - and figured that working at the logo store would be ideal.  My thought was that I would meet a large number of customers, and I did!  The only downside was that the store was very busy and I did not have much time to interact with everyone.

This year it looks like I will get a chance to work 1:1 with people and I am looking forward to it.  I hope to meet you in Vegas!

Questions, comments, concerns and criticisms always welcome,
John