How to Shift Left your test strategy? - Part 3

In this final instalment of the blog series, I will share ideas that can help you on the journey towards Shift Left testing. I have used and applied these in various roles across my career.

Unit tests (UT) for existing code

If you have an existing product that you are re-architecting and/or migrating to the cloud, it is highly likely that you will reuse some of that code. Here are a few ideas on developing a robust UT suite for existing code:

  • Start with files that change most often: Use Git history to find important source files that change frequently. Increasing the unit tests for these files has significant benefits: since the files change often, the related UT will also be executed more often and will give the fastest feedback about the quality of the code.
  • Break down large/complex files and files with big functions: Most static analysis tools will show you files that are large and complex. As discussed in the previous blog, UT works best when you have modular, loosely coupled code. Therefore, it is a good idea to break large/complex functions down into modular code and then write unit tests for the resulting pieces.
  • Do not use mocking just to improve code coverage: The effectiveness of UT is commonly measured as the percentage of code covered by the tests, so it is tempting to shoot for very high code coverage simply because it is easy to measure. When you have dependencies on other libraries or runtime services, it is common to mock those dependencies. Mocking allows UT to cover more code, but it also requires you to write more code. I advocate using the right abstractions to keep dependencies to a minimum. This leads to more modular code and hence more effective UT - which matters more than chasing a code coverage goal.
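To make the abstraction point concrete, here is a minimal sketch (all names are hypothetical) of replacing a mocked dependency with a small abstraction: the core logic depends on an interface, so a tiny in-memory fake serves the unit test without a mocking framework.

```python
from dataclasses import dataclass
from typing import Protocol


class RateSource(Protocol):
    """Abstraction over an external exchange-rate service."""
    def rate_for(self, currency: str) -> float: ...


@dataclass
class InvoiceConverter:
    """Core logic under test; it only sees the abstraction."""
    rates: RateSource

    def to_usd(self, amount: float, currency: str) -> float:
        return round(amount * self.rates.rate_for(currency), 2)


class FixedRates:
    """In-memory fake used by unit tests instead of a mock of an HTTP client."""
    def rate_for(self, currency: str) -> float:
        return {"EUR": 1.10, "INR": 0.012}[currency]


def test_to_usd() -> None:
    converter = InvoiceConverter(rates=FixedRates())
    assert converter.to_usd(100.0, "EUR") == 110.0


test_to_usd()
```

The production code would pass in an HTTP-backed `RateSource`; the test passes in `FixedRates`. Coverage comes from exercising real logic, not from mock plumbing.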

Approaches for Integration tests

Microservices and distributed systems have elevated the importance of integration tests. As seen in the previous blog, it is important to define what integration tests exactly mean. I recommend the following approach for integration tests:

  • Component tests: Tests that run against the API of a single component, where a component is defined as one microservice plus its related infrastructure. You will have to stub/fake the other microservices it depends on. This approach makes component tests the most amenable to hermetic testing.

  • Integration tests: Tests that run against a related set of microservices (multi-component). This is an extension of component tests, where you use actual microservices instead of stubs or mocks. The key is to pick a related set of microservices. For example, if you use a message broker like Kafka, you can run integration tests for the producers and their related code completely independently of the consumers and their services. It is even possible to run scale and performance tests with this approach.
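As an illustration of testing the producer side independently of any consumer, here is a hedged sketch (the broker stand-in and function names are invented for this example): the code under test publishes an event, and an in-memory fake broker captures it so the test can assert on what was produced - no consumer service required.

```python
import json


class FakeBroker:
    """In-memory stand-in for a message broker; captures published messages."""
    def __init__(self) -> None:
        self.topics: dict[str, list[str]] = {}

    def publish(self, topic: str, payload: str) -> None:
        self.topics.setdefault(topic, []).append(payload)


def emit_order_created(broker: FakeBroker, order_id: str, total: float) -> None:
    """Code under test: serializes and publishes an 'order created' event."""
    event = json.dumps({"type": "order.created", "id": order_id, "total": total})
    broker.publish("orders", event)


broker = FakeBroker()
emit_order_created(broker, "o-42", 99.5)
assert json.loads(broker.topics["orders"][0])["id"] == "o-42"
```

In a full integration test, the fake would be replaced by a real broker instance, and the consumer side would be validated in its own, separately runnable suite.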

Test automation and CI/CD

As mentioned in the first blog, it is easy to think of test automation as a collection of scripts. However, this thinking is outdated. I strongly advocate applying all the principles of software development and deployment to test automation. This means that you have “infrastructure” available 24x7 to run automated tests. As you develop automation code, use CI/CD pipelines for code reviews, static analysis, etc., and eventually deploy the automation code to the test automation infrastructure. That infrastructure should be able to create the right environment, run the tests and report the results.

Move away from classifications such as sanity and regression tests. This classification does not apply to the cloud, where you are continuously deploying code. Strive to run all tests, all the time. If you apply the test pyramid properly, you will have a focused suite of test cases, and parallelization will help reduce the time taken to run them.
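The effect of parallelization can be sketched as follows (an illustrative toy, not a real runner - in practice a tool such as pytest with the pytest-xdist plugin distributes tests across workers): four independent suites run concurrently and finish in roughly the time of one.

```python
import time
from concurrent.futures import ThreadPoolExecutor


def run_suite(name: str) -> str:
    """Stands in for executing one independent test suite."""
    time.sleep(0.1)  # simulated test execution time
    return f"{name}: passed"


suites = ["unit", "component", "integration", "api"]

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=len(suites)) as pool:
    results = list(pool.map(run_suite, suites))
elapsed = time.perf_counter() - start

# All four suites complete in roughly the time of one, not four.
assert all(r.endswith("passed") for r in results)
assert elapsed < 0.4
```

The same idea applies at the infrastructure level: the more independent your suites are (hermetic component tests help here), the more freely they can be fanned out.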

Multi-tenant testbeds and testing in production

A common refrain about fast, large-scale testing is the limited availability of testbeds. Even though a cloud product is inherently shared by multiple customers (multi-tenant), QA teams prefer dedicated testbeds for different types of tests. This paradox stems from the inverted pyramid, where long-running, end-to-end tests dominate the test strategy. If you really want to test a cloud product from the customer's perspective, use multi-tenancy and run different types of tests on the same testbed in parallel. In fact, each developer or test engineer can be given a dedicated account on a multi-tenant testbed. If customers can share a multi-tenant production environment, it should be possible for QA to do the same.

The natural question is: when code is under development, how can different types of tests be run on a common testbed? Canary testing solves this problem and is a necessity for cloud-based products. If you use Kubernetes, you can leverage technologies like Telepresence to inject new services into an existing testbed.

Lastly, use production to run a focused suite of tests all the time. It is impossible to create a test environment that matches production; the entropy of a production system is not replicable. Therefore, I strongly advocate running a focused suite of automated tests on the production setup. What is the value of testing something that is already released? It lets you baseline your test automation code: if the tests pass, you can safely say the test code is working properly. During development, if there is no change to the same functionality, the same production tests should also pass - this effectively regression-tests your new changes.


This concludes the three-part series about shifting your testing to the left - closer to the development phase. Cloud environments are unforgiving when it comes to quality. Each bug can potentially impact all your customers. So it is important to create a robust test strategy and integrate the testing into every stage of product development.