Friday, May 18, 2018

How often we ship Tableau, a test perspective

If you look at Tableau closely, it's obvious - and we make the claim - that we ship a new version of Tableau every quarter.  But from the test point of view, we ship much more often.  Here's what it looks like from our perspective.

Imagine you own a textbook publishing company. Each quarter, you release a new textbook covering some topic you have not covered before.  Examples might be Astronomy: The Mountains of Mars from Winter 2017 and Egyptian History from Spring 2018.

Just as you are ready to ship the Egyptian History book, though, the Mars explorer finds enough new data that you need to update one of the chapters of the Mars book.  So for Spring 2018, you have two books to send out the door: the new Egypt book and an updated version of the Mars book.

Your proofreaders will need to focus most of their time on the new book but still must devote some amount of time to validating the text and layout of the updated chapter of the Mars book.  Additionally, the new chapter might change the page count of the Mars book.  If so, you might need to bind the book differently.  If there are new photos, you may want to update the cover or back of the book.  The table of contents might change, and the index will likely need to be updated. 

A test case for the index might be to validate that the previous contents are intact after the new chapter's index entries are inserted.  If the index no longer fits on the current set of pages, you will need to rebind the book, shrink the index, or otherwise resolve the dilemma.  And the proofreaders, who might have naively thought they only needed to verify the new chapter's contents, potentially have to validate the entire Mars book.
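To make that index test case concrete, here is a minimal Python sketch.  The book index structure and the page numbers are made up for illustration; the point is simply that the check covers the old entries, not just the new chapter's:

```python
def insert_chapter_index(index, chapter_entries):
    """Merge a new chapter's index entries into the existing book index."""
    merged = {term: list(pages) for term, pages in index.items()}
    for term, pages in chapter_entries.items():
        merged[term] = sorted(set(merged.get(term, [])) | set(pages))
    return merged

# Existing Mars book index and the new chapter's entries (made-up data).
old_index = {"Olympus Mons": [12, 40], "regolith": [33]}
new_chapter = {"rover data": [101], "regolith": [105]}

updated = insert_chapter_index(old_index, new_chapter)

# The test: every previous entry must survive the insertion intact.
for term, pages in old_index.items():
    assert set(pages) <= set(updated[term]), f"lost pages for {term}"
```

The same shape applies to any "update" release: verify the new content, but also verify the old content was not disturbed.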

Testing is in the same position.  While our focus is typically on the new version of Tableau we release every quarter, we also continue to support the last few years' worth of releases. That means that in addition to testing the "major" release of Tableau, we have to test the updates we consistently ship as well.  So from my point of view, we always have multiple releases to validate.  And that means we ship far more often than once per quarter.

Questions, comments, concerns and criticisms always welcome,

Friday, May 11, 2018

Constantly learning

One of the requirements that comes along with working in the tech industry (or any industry, really) is to adopt a notion of constant learning.  A new computer science major today will know more than those who graduated ten years ago, and will know less than students hired ten years from now.

In order to stay current with the industry, I have found several opportunities to keep learning.  One of my favorites is online classes (MOOCs, or massive open online courses).  The University of Washington has a Data Science certificate class starting soon, and it looks like there are enough of us around here to form a study group.  For many folks, having that level of interaction is a necessity for getting the most out of the class.  The environment it creates - a useful forum for discussing the techniques being taught - really helps cement the lesson at hand.

I'm not sure what the emphasis of this class will be, though.  I hope it is more along the lines of implementing a few ML routines as opposed to using "off the shelf" solutions (which are never 100% complete - you always need to write code at some point) but it is definitely on my radar.  Let me know if you sign up for any classes in this series (the audit track is free) and maybe we can "attend" together!

Questions, comments, concerns and criticisms always welcome,

Monday, May 7, 2018

Lots of Tabpy activity

Looking back over the last few weeks I have had a ton of meetings.  One of the products my team owns is Tabpy and I have spent a good amount of time over there.

Specifically, I have been performing some code reviews coming in from the community (thanks, gang!) and even had a call with one of the developers.  I also checked in some documentation changes over there to update some error messages, to make setup problems a bit clearer.

Also on the Tabpy front, we have had some team-wide customer calls about how companies are using Python integration and how we can help them meet their goals.  Behind the scenes, we are taking notes, designing stories and entering a set of items in our backlog to track this work.  And yes, we are already working on these items, but I (obviously) can't share specifics.  That is simply a frustrating aspect of blogging about testing.

Questions, comments, concerns and criticisms always welcome,

Tuesday, April 17, 2018

Filling holes in test coverage

One of the tasks on my plate is to find and fill holes in test coverage.  A good question - which I am going to avoid for now - is how features ship with holes in coverage in the first place.  Instead, I want to focus on holes that get created over time.

For instance, my team owns the Trend Lines feature in Tableau.  Here is a sample image we have used to test for years:

You can see the multiple trend lines drawn, each with a confidence band around it.  With each new version of Tableau, we validate that this expected visualization does not change, using a simple image comparison.  We also validate the math behind this feature, but I want to focus on the image for now.  This test has been running and passing for years.
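At its core, an image comparison test like this is just a pixel diff against a stored baseline.  Here is a simplified, self-contained sketch - the tiny hand-built "images" and the tolerance parameter are illustrative assumptions, not Tableau's actual test harness:

```python
def images_match(baseline, candidate, tolerance=0):
    """Compare two images given as 2-D lists of (r, g, b) pixel tuples.

    Returns True when every pixel differs from the baseline by no more
    than `tolerance` per channel.
    """
    if len(baseline) != len(candidate):
        return False  # different height means a layout change
    for row_a, row_b in zip(baseline, candidate):
        if len(row_a) != len(row_b):
            return False  # different width
        for pa, pb in zip(row_a, row_b):
            if any(abs(a - b) > tolerance for a, b in zip(pa, pb)):
                return False
    return True

# A stored baseline rendering vs. a fresh rendering (made-up 1x2 images).
baseline = [[(255, 255, 255), (0, 0, 255)]]
fresh    = [[(255, 255, 255), (0, 0, 254)]]

assert images_match(baseline, fresh, tolerance=1)      # within tolerance
assert not images_match(baseline, fresh, tolerance=0)  # exact match fails
```

A small per-channel tolerance is a common way to absorb harmless rendering differences (anti-aliasing, font hinting) without missing real regressions.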

So how can debt accumulate in this scenario?  One way it cropped up was when we added a version of Tableau that works on the Mac.  When this test was created, only a Windows version of Tableau existed.  Since then, we have added Tableau for the Mac, Linux (via server) and different flavors of server (Windows and Linux).  So now we have to verify we draw correctly on each new platform.

The hole gets created when a new supported configuration is added.  When we added Mac support, we of course ran all our tests.  The Windows test did not break, since we changed nothing for Windows.  And because we did not add a test for the Mac, the non-existent test did not fail (again, obviously).  So a hole existed.  We are filling that gap now, along with adding the Linux tests.
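One way to keep this kind of hole from reopening is to drive the same test from a single list of supported platforms, so that adding a configuration without a baseline fails loudly instead of silently not running.  A sketch, where the platform names, the render stub, and the checksum baselines are all hypothetical:

```python
SUPPORTED_PLATFORMS = ["windows", "mac", "linux-server", "windows-server"]

def render_trend_lines(platform):
    # Stand-in for the real rendering call; returns a fake checksum here.
    return f"checksum-for-{platform}"

# Baselines recorded when each platform was brought online (made up).
BASELINES = {p: f"checksum-for-{p}" for p in SUPPORTED_PLATFORMS}

def test_all_platforms():
    # A platform with no recorded baseline is itself a failure,
    # which is exactly the "non-existent test" hole described above.
    missing = [p for p in SUPPORTED_PLATFORMS if p not in BASELINES]
    assert not missing, f"no baseline recorded for: {missing}"
    for platform in SUPPORTED_PLATFORMS:
        assert render_trend_lines(platform) == BASELINES[platform]

test_all_platforms()
```

The design point is that the platform list lives in one place: adding "mac" to the list without adding a Mac baseline turns a silent coverage gap into a visible test failure.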

Questions, comments, concerns and criticisms always welcome,

Friday, April 6, 2018

Updating the tasks I posted from 2 weeks ago

Two weeks ago I posted a laundry list of to-do items.  We are an agile team with a 2-week sprint cycle, so it seems natural to follow up on these.

Again, in no particular order:
  1. We are getting some beta feedback on our features and are working on fixes for those items.  --- I contacted the person who submitted a key piece of feedback and showed him a proposed change.  He thought it was much better, so stay tuned.

  2. Some of our unit tests use integration points to set up before the test runs.
    1. This is ongoing work.  In the past 2 weeks, I have refactored 4 tests.
    2. I also had to update a new test we added because the path to one of the files "is too long."  Sigh.

  3. Spent 2 hours performing backlog cleanup.  Consolidating related items, assigning tasks to stories, etc…

  4. I actually tested Tableau :)

  5. I also had time to jump over to Tabpy on GitHub and perform a code review and merge there.

FWIW, items 2 and 4 will be focus areas for the next sprint.  Let me know if you find these updates interesting.

Questions, comments, concerns and criticisms always welcome,

Thursday, March 29, 2018

Top 3 Outlook tips to help with day to day life

Outlook has SO much functionality built into it that it can get overwhelming at times - meetings come in, reminders fire, email pops up, etc., etc., etc.

But there are some actions you can take to make your life much easier.  I share these with each new team I join and want to put them here as well.  For Outlook "old-timers" (a group to which I belong, since I was there for Outlook 97) these may seem obvious.  If you have never heard of them before, they may really help.

  1. Create a "cooling off" rule.  I have a rule set to delay all mail I send by 2 minutes.  This helps me a few times per year when I notice a typo at the last second, see that someone else replied while I was pressing Send, or otherwise make some goof I need to correct.  Here's how:

    1. Open the Home tab | Rules | Manage Rules and Alerts…
    2. Select Apply rule on messages I send | Next

    3. The Conditions page is next.  I leave it as is, since I want the rule to apply to all my mail, so I click Next.  I get a popup warning that the rule will apply to every message I send; since that is exactly what I want, I click Yes.
    4. Now I choose the action I want Outlook to take.  I select "defer delivery by a number of minutes" and change the number to 2:
      Now all mail will wait in the Outbox for 2 minutes after I press Send.
    5. Exceptions can be useful.  You may want the rule to be "delay all mail by 2 minutes unless it is High Importance," and you can set exceptions like that here.
    6. Click Finish and the rule is saved.
  2. Second tip is helpful when an email thread is spiraling out of control.  Right click it and select "Ignore".  You will get an alert reminding you that this item and all replies, etc… will be moved to Deleted Items.  Nifty.
  3. Finally, I work remotely now and then, and it is helpful to force Outlook to sync.  Yes, it syncs automatically, but if I know I am about to lose my wifi connection, I can force a sync by hitting F9.  This mostly soothes my conscience, but it is a habit that helps me mentally, so I keep doing it.

1 and 2 are the biggies.  They don't help every single day, but when they do help, they make a HUGE difference.

Questions, comments, concerns and criticisms always welcome,

Friday, March 23, 2018

Back to work

Last week I had a trip to Palo Alto and this week I had a few days of internal training (which was quite good).  Now it is back to work.

And by work, I mean that this week I simply want to post a "laundry list" of the tasks on my plate.

So, in no particular order:
  1. We are getting some beta feedback on our features and are working on fixes for those items.  Stay tuned for more details on this as we get the work completed.
  2. Some of our unit tests use integration points to set up before the test runs.  Refactor those to be closer to unit tests.  This gives a few benefits:
    1. Slightly faster build times.  This work generally results in less code which generally builds faster.
    2. It is easier to know what happened when a test fails.  If a test has to complete three setup steps before running the actual check (step four), then a reported failure could come from any of the three setup steps or from the code you wanted to test.  Naively assuming each step is equally likely to break, the odds are 3/4 that your code is working fine and something in setup failed instead.  Fewer setup steps make it easier to identify the source of a reported failure.
  3. Our team had sprint planning this week.  I think I mentioned we do 2 week sprints.
  4. We scrubbed our backlog to keep it current and reprioritized items as needed (see item #1 above for an example of what might cause planned work to be re-prioritized).
  5. Two days of training (Changing your Viewpoint types of things)
  6. I actually tested Tableau :)
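Item 2 is easier to see in code.  Here is a sketch - all the names (`Database`, `compute`, `FakeSource`) are made up for illustration - of the same check written with integration-style setup versus a small injected fake:

```python
class Database:
    """Stand-in for a real integration point (e.g. a live service)."""
    def lookup(self, key):
        return {"answer": 42}[key]

def compute(source, key):
    # Code under test: only needs something with a .lookup() method.
    return source.lookup(key) + 1

# Integration-style: three setup steps can each fail before the real check,
# so a red test has four possible causes.
def test_with_integration_setup():
    db = Database()          # setup 1: connect to the integration point
    db.lookup("answer")      # setup 2: warm it up
    key = "answer"           # setup 3: stage the test data
    assert compute(db, key) == 43

# Unit-style: a tiny fake leaves only the code under test to fail.
class FakeSource:
    def lookup(self, key):
        return 42

def test_with_fake():
    assert compute(FakeSource(), "answer") == 43

test_with_integration_setup()
test_with_fake()
```

The refactored version is also less code to build and run, which is where the build-time benefit in item 2 comes from.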

Questions, comments, concerns and criticisms always welcome,