Thursday, March 29, 2018

Top 3 Outlook tips to help with day to day life

Outlook has SO much functionality built into it that it can get overwhelming at times - meetings come in, reminders fire, email pops up, and on and on.

But there are some actions you can take to make your life much easier.  I share these with each new team I join and want to put them here as well.  For Outlook "old-timers" (a group to which I belong, since I was there for Outlook 97) these may seem obvious.  If you have never heard of them before, they may really help.

  1. Create a "cooling off" rule.  I have a rule set to delay all mail I send by 2 minutes.  This helps me a few times per year when I notice a typo at the last second, see that someone else replied while I was pressing Send, or otherwise make some goof I need to correct.  Here's how (there is also a scripted sketch of the same idea after this list):

    1. Open the Home tab | Rules | Manage Rules and Alerts…
    2. Select Apply rule on messages I send | Next

    3. Conditions come next.  I leave this as is since I want the rule to apply to all my mail, so I click Next.  I get a popup warning that the rule will apply to every message I send, and since that is exactly what I want, I click Yes.
    4. Now I pick the action I want Outlook to take.  I select Defer delivery by a number of minutes and click the underlined value to change it to 2:
      Now all mail will wait in the Outbox for 2 minutes after I press Send.
    5. Exceptions can be useful.  You may want the rule to be Delay all mail by 2 minutes unless it is marked High Importance, and you can set exceptions like that here.
    6. Click Finish and the rule is saved.
  2. The second tip is helpful when an email thread is spiraling out of control.  Right click it and select "Ignore".  You will get an alert reminding you that the conversation and all future replies will be moved to Deleted Items.  Nifty.
  3. Finally, I work remotely now and then, and it is helpful to force Outlook to sync.  Yes, it syncs automatically, but if I know I am about to lose my wifi connection, I can force a sync by pressing F9.  This is mostly about soothing my conscience, but it is a habit that helps me mentally, so I keep doing it.
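
For the scripting-minded, here is the sketch I mentioned in tip 1: the same "cooling off" effect driven from Python instead of the Rules Wizard.  This is just an illustration I'm adding, not part of the rule above - it assumes a local Outlook install, the pywin32 package, and that pywin32 converts the Python datetime to a COM date as expected.

# A hedged sketch: create one mail item and hold it in the Outbox for two
# minutes via Outlook's COM object model (MailItem.DeferredDeliveryTime).
# The recipient and text are placeholders.
from datetime import datetime, timedelta

import win32com.client

outlook = win32com.client.Dispatch("Outlook.Application")
mail = outlook.CreateItem(0)        # 0 = olMailItem
mail.To = "someone@example.com"
mail.Subject = "Cooling-off test"
mail.Body = "This should sit in the Outbox for two minutes before it goes out."

# Same effect as the rule: delay delivery rather than sending immediately.
mail.DeferredDeliveryTime = datetime.now() + timedelta(minutes=2)
mail.Send()

The rule is still the better everyday answer since it applies to every message automatically; a script like this only covers mail it creates itself.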

1 and 2 are the biggies.  They don't help every single day, but when they do help, they make a HUGE difference.
Enjoy!
 
Questions, comments, concerns and criticisms always welcome,
John

Friday, March 23, 2018

Back to work


Last week I had a trip to Palo Alto and this week I had a few days of internal training (which was quite good).  Now it is back to work.

And by work, I mean that this week I simply want to post a "laundry list" of the tasks on my plate.

So, in no particular order:
  1. We are getting some beta feedback on our features and are working on fixes for those items.  Stay tuned for more details on this as we get the work completed.
  2. Some of our unit tests use integration points to set up state before the test runs.  We are refactoring those to be closer to true unit tests (see the sketch after this list).  This gives a few benefits:
    1. Slightly faster build times.  This work generally results in less code, which generally builds faster.
    2. It is easier to know what happened when a test fails.  If a test has to complete three setup items before running the actual check (item four), then a reported failure could be any one of the 3 setup items or a failure of the code you wanted to test.  All else being equal, the odds are 3 in 4 that your code is working fine and something in setup failed.  Fewer steps make it easier to identify the source of a reported failure.
  3. Our team had sprint planning this week.  I think I mentioned we do 2 week sprints.
  4. We scrubbed our backlog to keep it current and reprioritized items as needed (see item #1 above for an example of what might cause planned work to be re-prioritized).
  5. Two days of training (Changing your Viewpoint types of sessions)
  6. I actually tested Tableau :)
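
Here is the sketch I promised for item 2.  Every name in it (FeatureUnderTest, fetch_config, the threshold logic) is made up for illustration; the point is the pattern of replacing live setup with a test double so a failure points straight at the code under test.

# A hedged sketch of moving a test away from integration-style setup.
# All names are hypothetical; the pattern is what matters.
import unittest
from unittest import mock

class FeatureUnderTest:
    """Stand-in for real product code that reads a value through a config service."""
    def __init__(self, config_service):
        self.config_service = config_service

    def threshold(self):
        return self.config_service.fetch_config("threshold") * 2

class ThresholdTest(unittest.TestCase):
    def test_threshold_doubles_config_value(self):
        # Before: three live setup steps, any of which could be the real failure.
        # After: stub the one call we depend on, so only the product logic can fail.
        config = mock.Mock()
        config.fetch_config.return_value = 21

        feature = FeatureUnderTest(config)

        self.assertEqual(feature.threshold(), 42)
        config.fetch_config.assert_called_once_with("threshold")

if __name__ == "__main__":
    unittest.main()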

Questions, comments, concerns and criticisms always welcome,
John

Friday, March 16, 2018

A quick jaunt to Palo Alto


The bigger organization within Tableau of which I am a part has a rather large team at our office in Palo Alto.  We had an All Hands meeting there on Monday this week and I was able to attend in person.  I always like visiting Silicon Valley, and this week I decided to ramble a bit about why.

  1. The number of tech companies there is astounding.  Everywhere I look I see Microsoft, Amazon, Apple, iAm, Sophos, Salesforce and so on.  It just brings out the geek in me.
  2. The shuttle we used was a Tesla.  Repeat: the shuttle was a Tesla.
  3. The weather there is typically nicer than Seattle.  One exception: I was in San Francisco once and went from hail in the morning to 95 degrees in the afternoon.  Cue the Mark Twain quote.
  4. It's always nice to step away from my desk now and then to get some new thoughts going.
  5. Meeting the entire team in Palo Alto lets me put faces with names.

I also talked with a few folks who are facing some of the same challenges our team is facing.  Maybe we can collaborate on them and not waste time duplicating work - nothing to do with Palo Alto, just good business.

All in all, an enjoyable trip worth the time investment.

Questions, comments, concerns and criticisms always welcome,
John

Wednesday, March 7, 2018

A code coverage analogy


One of the tasks I have on my plate is code coverage tracking.  I've written about this in the past and today I want to go over an analogy on the usefulness of this statistic.

First, I can point out that it is relatively easy to track.  We have plenty of tools to do this for us, produce reports, and so on.  But just because it is easy to track does not mean it is all that useful.  I could track the contents of my glove compartment each day, but I think we can all agree that this would not be a useful piece of data to have.  There is not much I would do differently on a day to day basis based on the contents of the glove compartment.
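
To ground the "easy to track" point, here is a minimal sketch using coverage.py.  The tooling choice and the module name are my own placeholders, not a description of what we actually run at Tableau.

# A minimal sketch with coverage.py: run some code, then report the percent
# covered and the lines that never executed.  "my_feature" is hypothetical.
import coverage

cov = coverage.Coverage()
cov.start()

import my_feature            # hypothetical module under test
my_feature.do_something()    # exercise it the way the tests would

cov.stop()
cov.save()
cov.report(show_missing=True)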

Code coverage, on the other hand, can be useful.  It tells me how much code I have to validate that is not covered by automated tests - it tells me a risk level.  By itself, it is not enough (look up the Mars Climate Orbiter as a terrific case study).  It does help inform a decision, though.

Here's my analogy: code coverage is to a test engineer as a patient's temperature is to a doctor.  It is one useful metric that can help isolate risk.  By itself, a 98.6F temperature does not tell a doctor that a patient is healthy.  It can only help eliminate some possible causes of illness - certain stages of the flu, for instance.  No doctor could reasonably base a diagnosis on one temperature measurement.  It is one statistic among many others, such as blood pressure, heart rate and so on, that can help lead to a more accurate diagnosis.

Likewise, just because I have a 98.6% rate of code coverage, I cannot pronounce a feature risk free and ready to ship.  I also have to look at scenario pass rates, international/localization status, accessibility, performance goals and so on. 

Too often we get trapped in a mindset of "this statistic by itself is useless, so let's not bother tracking it."  While the first half of that sentence may be true (and is, for code coverage), the second half does not follow.  Given the choice of having a patient's temperature included in a checkup or not, doctors will always want to track that statistic.  It helps fill in part of a bigger picture.

And that is exactly what code coverage does.


Questions, comments, concerns and criticisms always welcome,
John

Thursday, March 1, 2018

Amazing amount of mileage out of this one test file


One thing that happens to testers over time is the build-up of a staggering array of test files: files that use "odd" characters in Japanese, a Finnish test script, an absolutely huge Excel file, and so on.  I tend to keep around any file that ever exposed a bug, use it in automation, and use it for ad hoc testing as time goes by.

I also have one very simple file that I use quite often.  Very basic, but very good for "smoke testing": simply ensuring a new feature doesn't crash right out of the gate.

It's just a CSV file with some simple numbers and text in it:

x,y,z,name
2,-1,4,ant
4,0,-2,bat
6,3.1,1.5,cat
-1,0.2,3,dog


Super basic, but fast to load.  It also has a mix of integers and floating point numbers, named columns, some text values and multiple rows of data.  I use it at least a half dozen times per week, and I wanted to share it since it makes a good point: sometimes a simple test case is enough to provide some value.  If my test - whatever it is - fails with this file, I know I am blocked, and I can file a bug and move on.
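
To make that concrete, a smoke test around this file can be tiny.  Here is a hedged sketch with pytest and pandas; "smoke.csv" is simply what I'll call the file here, and a real check would hand it to the feature under test instead of only loading it.

# A minimal smoke test: confirm the tiny CSV loads with the expected shape.
# In practice the feature under test would consume the file at this point.
import pandas as pd

def test_smoke_csv_loads():
    df = pd.read_csv("smoke.csv")                     # the four-row file shown above
    assert list(df.columns) == ["x", "y", "z", "name"]
    assert len(df) == 4                               # ant, bat, cat, dog all present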

And if it passes, then I know I can start to dive further into the test matrix I have.  There's no guarantee that any of those other tests will pass, but since I love analogies, I see it like this:

This simple test is the equivalent of starting a car.  If it fails, I know I won't be testing the brakes or driving on the highway.  And just because the car starts, I still can't tell whether it will work when put in gear.  But at least I know the most basic test - the engine runs, in this analogy - is passing.

Questions, comments, concerns and criticisms always welcome,
John