devXero's blog

a blog about agile, development, and automation

Posts Tagged ‘Testing’

Interesting thread on hiring Agile Testers

Posted by Mike Longin on November 3, 2009

I thought this was a thread worth following.

As I stated in the thread, it is a tough decision to hire developers to fill Agile Tester spots. More to the point, it is tough to fill Agile Tester spots in general. You are really looking for someone with good technical skills but also a true passion for testing. When you hire a developer to fill that spot you are getting the technical skills, but there are times you will find that the candidate does not have a true passion for testing. When that is the case, the work they are asked to do may not be up to the needs of the team.


Posted in Uncategorized | Tagged: , | Leave a Comment »

Internal Presentation on Reliability and Efficiency of UI tests

Posted by Mike Longin on October 28, 2009

I did a presentation internally at Ultimate Software on improving the Reliability and Efficiency of tests, and I thought it would be worth posting here as well.  I will also be making a few more posts to cover all the topics I outlined in the presentation.

Improving SWAT/UI Test Reliability and Efficiency

Posted in Uncategorized | Tagged: , , , , | Leave a Comment »

Ways to improve the reliability and efficiency of SWAT/UI tests (Part 1 – Sleeps are evil)

Posted by Mike Longin on October 4, 2009

This is the start of a series of posts I am going to be writing on how to improve both the reliability and efficiency of SWAT tests.  Any ideas I present here can be applied to any UI testing tool (SWAT, Selenium, WatiN, QTP, etc…).  The most important concept to take away from these posts will be that tests can be made both reliable and efficient with just a few simple techniques.

For the first post we will be looking at the Sleep command.  Almost all UI testing frameworks contain some version of this command, which is also sometimes known as a “Wait” command.

Let’s start out with why this command is bad:

  • The command will slow down a test suite
  • No matter what, the time you set for the sleep is always either too long or too short.  Either it is too long, and your test just sits there waiting for nothing, or it is too short, and the test moves on before it should.

Assuming you are still reading, and agree that sleeps are bad, you are probably asking what you can do to avoid sleep commands and make your tests both stronger and more reliable.
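As a preview of the alternative, the usual replacement for a fixed sleep is a polling wait: check the condition repeatedly and give up only after a timeout, so the test moves on the moment the page is ready.  A minimal sketch in Python (the `wait_until` helper and its timings are my own illustration, not SWAT’s actual API; Selenium users would reach for its built-in explicit waits instead):

```python
import time

def wait_until(condition, timeout=10.0, poll_interval=0.25):
    """Poll `condition` until it returns True or `timeout` seconds elapse.

    Returns True as soon as the condition holds, so the test continues
    immediately instead of sleeping for a fixed, guessed duration.
    Returns False if the deadline passes, letting the test fail clearly.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(poll_interval)
    return False

# Example: wait for a fake "page" object to report it has loaded.
page = {"loaded": False}

def finish_loading():
    page["loaded"] = True

finish_loading()
assert wait_until(lambda: page["loaded"], timeout=2.0)
```

The key property is that the wait costs only as long as the condition actually takes, plus at most one poll interval, instead of a fixed worst-case guess.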

Read the rest of this entry »

Posted in Uncategorized | Tagged: , , , , , , , , | Leave a Comment »

The test step sweet spot

Posted by Mike Longin on August 27, 2009

As part of Chris McMahon’s presentation “History of a Large Test Automation Project using Selenium,” he mentioned the concept of a test step sweet spot.  To him the general rule should be about 200 test steps; for those that use Fitnesse, that would be 200 assertions.  I have been thinking along these lines for a while, and his comments finally gave voice to my inner dialog.  Once a test gets too long it does two things:

  1. It starts testing much more than the initial test case and creates many more failure points
  2. It becomes unmanageable to support.  If a failure happens late in the test run, it will be next to impossible to debug and solve the issue

Every person writing tests should be looking for that sweet spot.  Looking at my suite of roughly 200 Fitnesse/SWAT tests, I can see that the average assertion count is about 210.  However, my most solid tests are all closer to 150.  It is interesting to me that my numbers are so close to Chris’s.  From now on I am going to pay more attention, forcing myself to reexamine a test once it is above 150 assertions and trying to keep it below 200.
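A rule of thumb like this is easy to turn into a quick audit.  A sketch in Python (the 150/200 thresholds come straight from the numbers above; the line-counting heuristic and the toy test text are invented for illustration, not real Fitnesse markup):

```python
def count_assertions(test_text):
    """Count lines that look like assertion steps (a crude heuristic)."""
    return sum(1 for line in test_text.splitlines()
               if line.strip().lower().startswith("assert"))

def audit(test_text, reexamine_at=150, hard_limit=200):
    """Classify a test body against the sweet-spot thresholds above."""
    n = count_assertions(test_text)
    if n > hard_limit:
        return n, "split this test"
    if n > reexamine_at:
        return n, "reexamine"
    return n, "ok"

# Example: a toy test body with three assertion steps.
toy_test = ("assert title == 'Home'\n"
            "click login\n"
            "assert user visible\n"
            "assert logout shown")
print(audit(toy_test))
```

Run over a whole suite, a report like this makes it obvious which tests have drifted past the sweet spot.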

Posted in Uncategorized | Tagged: , , , , , | Leave a Comment »

For internal builds is it acceptable to have some failing tests?

Posted by Mike Longin on December 19, 2008

So the typical answer to this question is no, usually accompanied by some harshly worded questions such as:

  • Why would you have a test in there if you don’t mind it failing?
  • Why would you even consider releasing a build with failing code?

But let me add some context.  Say you have 20+ teams using the build, the failing test is specific to one team, AND the code is for a more “minor” feature.  Does that change the equation?  Or what if it affects 5 teams, but not the other 25?  Is that ok?

I am curious what people think.  

To me it’s worth starting to think in multiple tiers.  Tier 1 tests must pass every time.  Tier 2 tests affect up to 10% of the teams; an internal build can still be deployed, but those teams must code freeze and fix the issue.  Tier 3 tests are limited to an individual team; that team would need to freeze and fix, but everyone else can continue.  I am sure someone will tell me I am wrong, but to me it’s worth at least thinking about.
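The tiering idea above can be sketched as a simple policy check.  In Python (the mapping from failure scope to tier and the function names are my own reading of the rules described, not an existing tool):

```python
def tier_of(affected_teams, total_teams):
    """Tier 3: one team; Tier 2: up to 10% of teams; Tier 1: broader."""
    if affected_teams <= 1:
        return 3
    if affected_teams <= total_teams * 0.10:
        return 2
    return 1

def can_deploy(failing_test_scopes, total_teams):
    """An internal build may ship only if no Tier 1 test is failing.

    Tier 2 and 3 failures let the build go out, but the affected teams
    must code freeze and fix before continuing.
    """
    tiers = [tier_of(scope, total_teams) for scope in failing_test_scopes]
    return all(tier > 1 for tier in tiers)

# With 30 teams: one single-team failure and one 3-team failure still ship;
# a 15-team failure blocks the build.
print(can_deploy([1, 3], 30), can_deploy([15], 30))
```

The point is not the exact cutoffs but that the deploy decision becomes mechanical once tiers are agreed on.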

Posted in Uncategorized | Tagged: , , , | Leave a Comment »

Automated test data (using setup servers or setup scripts?)

Posted by Mike Longin on August 13, 2008

So I feel this is kind of a religious debate, but I have never shied away from a little fun.

When it comes to writing an automated test (let’s discuss functional UI tests), should the data come from a setup script, or from data already sitting on a DB server intended for that purpose?  To further complicate the question, at my company there are multiple tables that would need data inserted, possibly 10 or more.

For myself I have found that there is no real hard and fast rule, but I lean toward the setup server for stock data.  By stock data I mean data that is fairly static (because no data is ever 100% static); that seems best left on a setup server.  However, data that is either specifically being tested or subject to a great deal of change should always come from a setup script.
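The split can be sketched with an in-memory SQLite database standing in for both sides: stock data is assumed to already exist on the shared setup server, while the volatile record the test will mutate is created by a per-test setup script.  (Table and column names here are invented for illustration, not my company’s schema.)

```python
import sqlite3

# Stand-in for the shared setup server: stock data is already loaded.
server = sqlite3.connect(":memory:")
server.execute("CREATE TABLE countries (code TEXT, name TEXT)")
server.executemany("INSERT INTO countries VALUES (?, ?)",
                   [("US", "United States"), ("CA", "Canada")])

def setup_script(conn, employee_name):
    """Per-test setup: create the volatile data this test will change."""
    conn.execute("CREATE TABLE IF NOT EXISTS employees "
                 "(id INTEGER PRIMARY KEY, name TEXT)")
    cur = conn.execute("INSERT INTO employees (name) VALUES (?)",
                       (employee_name,))
    return cur.lastrowid

# The test reads stock data freely but owns the record it mutates.
emp_id = setup_script(server, "Test Employee")
stock_count = server.execute("SELECT COUNT(*) FROM countries").fetchone()[0]
```

Because the test creates its own employee row, it can change or delete that row without stepping on other tests, while the countries table stays shared and read-only.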

What about you?

Posted in Uncategorized | Tagged: , , , , | Leave a Comment »