Sundaresan Krishnaswami

Architecting Your Automation Strategy - Part 2

In Part 1, we saw how to set up a fundamental test strategy.


Making automation work


Assert expected behavior:


Most automation test engineers fail to understand that automation complements your testing. When you test manually, you assert a system's behavior; your automated scripts must do the same, explicitly.


Assert what is critical to your system.


A few questions that may help you write better assertions:

1. What if this particular field does not show the right value?

2. Will it affect my system if this particular field is not entered?

3. What if an API structure/signature is changed?

4. Is this image critical to a user's decision making?

5. Are the values entered of the right data type?


Work with your Product Manager and developers to understand which UI elements are critical, and add assertions to every scenario based on the test data and inputs.
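To make this concrete, here is a minimal sketch of assertion-first thinking in JUnit 5. The Order record, its fields, and the expected values are hypothetical stand-ins for your system's API response or UI model:

```java
import org.junit.jupiter.api.Test;

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertNotNull;

class OrderAssertionTest {

    // Hypothetical response model; in practice this comes from your API or page object.
    record Order(String id, String status, double total) {}

    private Order fetchOrder() {
        return new Order("ORDER-1001", "CONFIRMED", 49.99); // stubbed system under test
    }

    @Test
    void assertsWhatIsCriticalToTheSystem() {
        Order order = fetchOrder();

        // Critical: a missing id breaks every downstream workflow.
        assertNotNull(order.id(), "order id must be present");

        // Critical: status drives the user's next decision.
        assertEquals("CONFIRMED", order.status());

        // Critical: the total must carry the right value and data type.
        assertEquals(49.99, order.total(), 0.001);
    }
}
```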

Select the right test data:


While automation helps in testing large volumes of test data, using testing techniques like Equivalence Partitioning and Boundary Value Analysis to select the right set of test data saves execution time. Too much test data can be overkill; at the same time, critical input conditions must always be validated for system integrity.
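As an illustration, here is a minimal JUnit 5 sketch of Boundary Value Analysis. The eligibility rule and the 18-to-65 age limits are hypothetical; the point is that five well-chosen values cover what dozens of arbitrary ones would:

```java
import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.CsvSource;

import static org.junit.jupiter.api.Assertions.assertEquals;

class AgeEligibilityTest {

    // Hypothetical rule under test: valid ages are 18 to 65 inclusive.
    static boolean isEligible(int age) {
        return age >= 18 && age <= 65;
    }

    @ParameterizedTest
    // One value per partition plus the boundaries, instead of all 48 valid ages.
    @CsvSource({
            "17, false",  // just below the lower boundary
            "18, true",   // lower boundary
            "40, true",   // representative of the valid partition
            "65, true",   // upper boundary
            "66, false"   // just above the upper boundary
    })
    void validatesBoundariesAndPartitions(int age, boolean expected) {
        assertEquals(expected, isEligible(age));
    }
}
```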

Run early and run often:


Once you have a decent set of API and UI scripts, select the critical tests and split them into Build Verification, Sanity, and Regression buckets.
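One lightweight way to express these buckets, assuming JUnit 5, is with tags; the tag names and test bodies below are illustrative. Maven Surefire can then select a bucket with -Dgroups=bvt, and Gradle offers an equivalent tag filter:

```java
import org.junit.jupiter.api.Tag;
import org.junit.jupiter.api.Test;

class CheckoutSuite {

    @Test
    @Tag("bvt")        // Build Verification: must pass on every build
    void applicationStartsAndLoginWorks() { /* ... */ }

    @Test
    @Tag("sanity")     // Sanity: quick pass over the core flows
    void userCanAddItemToCart() { /* ... */ }

    @Test
    @Tag("regression") // Regression: the full nightly sweep
    void discountCodesApplyAcrossAllLocales() { /* ... */ }
}
```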

Make the best use of CI/CD tools like Jenkins or GitLab, or even a cron job, to run automation every day or as frequently as possible.

Rerun failed tests:


Flakiness in automation is a given; any number of factors can affect your scripts: a poor network, environment issues, workflow changes, UI element changes, flaky tools, or race conditions in your test runs.

Use build-tool plugins like Maven Surefire/Failsafe (for example, its rerunFailingTestsCount option) or an equivalent Gradle plugin to rerun failed scenarios. This eliminates unnecessary manual reruns to determine whether a failure is due to flakiness or a genuine script defect.
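If your framework is TestNG rather than JUnit, the same effect can be achieved inside the framework itself with a retry analyzer; a minimal sketch, where the retry limit of 2 is an arbitrary choice:

```java
import org.testng.IRetryAnalyzer;
import org.testng.ITestResult;
import org.testng.annotations.Test;

public class FlakyTestRetry implements IRetryAnalyzer {

    private int attempts = 0;
    private static final int MAX_RETRIES = 2;

    @Override
    public boolean retry(ITestResult result) {
        // Returning true tells TestNG to execute the failed test again.
        return attempts++ < MAX_RETRIES;
    }

    @Test(retryAnalyzer = FlakyTestRetry.class)
    public void possiblyFlakyScenario() {
        // ... test body that may fail intermittently ...
    }
}
```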

Optimize your run time:


It is wise to automate a test case only if executing it manually costs a test engineer more time than automating it would.

Tips to reduce automation run times:

  • Use parallel tests. Most frameworks allow you to run tests in parallel using a distributed or sharded approach (a sketch follows this list)

  • Systematically review the time taken for each test case, and reduce the number of steps or unnecessary assertions

  • Remove unused test cases or tests that you think will no longer find bugs

  • Identify overheads in the code and try to address them
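As an example of the first tip, JUnit 5 can run test methods concurrently once parallelism is switched on; the class below is a minimal sketch with hypothetical, independent tests:

```java
// Parallelism must also be enabled in junit-platform.properties:
//   junit.jupiter.execution.parallel.enabled=true
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.parallel.Execution;
import org.junit.jupiter.api.parallel.ExecutionMode;

@Execution(ExecutionMode.CONCURRENT) // run the methods in this class concurrently
class IndependentSearchTests {

    @Test
    void searchByName() { /* ... independent of the other tests ... */ }

    @Test
    void searchByCategory() { /* ... no shared mutable state ... */ }
}
```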

Involve your development team:

  • Your developers will know a thing or two about optimizing code. Have them review automation code as well as pull requests

  • Learn coding best practices that could help optimize the runtime


Coach your team to make use of automation:


Have your team review daily reports and provide feedback.

Have them analyze failures or replicate them manually.

Get them interested in the testing time saved through automation, which in turn helps them write better stories for UI automation.


Measuring ROI


While the key measure for automation is the time it saves, the primary measures I would go for are:

  • The number of bugs found by automation

  • The time saved in the regression cycle

  • The coverage percentage of my automation tests
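To make the time-saved measure concrete, here is a toy calculation with entirely hypothetical numbers: engineer-hours saved per cycle, release cycles per year, and the cost of building and maintaining the suite:

```java
public class AutomationRoi {

    public static void main(String[] args) {
        double hoursSavedPerCycle = 40;     // manual regression time replaced by automation
        double cyclesPerYear = 24;          // e.g., a fortnightly release cadence
        double buildAndMaintainHours = 600; // scripting plus year-one maintenance

        double saved = hoursSavedPerCycle * cyclesPerYear;
        double roi = (saved - buildAndMaintainHours) / buildAndMaintainHours;

        System.out.printf("Hours saved: %.0f, ROI: %.0f%%%n", saved, roi * 100);
        // -> Hours saved: 960, ROI: 60%
    }
}
```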


Conclusion


Automation is key to any Software Test Life Cycle. Automation test scripts have a very short life span, while the costs involved in creating them can be huge. Whether you are adopting a test tool or bringing in an expert to build and maintain frameworks, it is important to get the maximum bang out of test automation within that short span. I hope this article helps you achieve it.
