Steve Rabin is the Chief Technology Officer at Insight Venture Partners. He has extensive experience designing enterprise-class software and managing technology teams. At Insight, Steve’s goal is to help the technical teams of portfolio companies build best-of-breed R&D organizations and enterprise-class solutions.

Software testing is part art and part science. It’s a combination of looking at the components of a system holistically, as well as in detail. In Part 1 of this blog series, we looked at the different aspects of testing and discussed what testing is. We also considered the role context plays in software testing. 

In this blog post, we’ll discuss testing predictability, testing techniques and the artifacts required to help ensure a quality product is released. 

Testing (un)predictability

Over time, projects unfold in ways that are not predictable, and therefore requirements are always going to change. Change is a constant and one of the few things a tester can be sure of. 

Because of this, test plans are never truly complete. Rather than constantly updating test plans, testing teams should spend more of their time designing and executing tests. The goal is to test the product that already exists.

Test the product that already exists, not the hypothetical ideal

Testing techniques

Different types of defects are revealed by different types of tests. The tester’s responsibility is to find creative ways to exercise the product’s functionality in order to expose problems that are meaningful to project stakeholders.

The following is a sample list of 11 test techniques that can be applied during the product’s development and release cycles. These show the complexity of the testing team’s role:

  1. Unit Test: No regressions caused, thread safety, code coverage, etc.
  2. Build Verification Test (BVT): A quick test or smoke test to make sure the product can be tested further
  3. Full Functional Test: Do all functions in the product work as intended?
  4. User Interface (UI) Test
  5. Security Test: Vulnerabilities exposed in the code base, issues related to authentication and authorization, etc.
  6. Acceptance Test: Does the product meet the user’s acceptance criteria?
  7. Regression Test: Is there consistent functionality across builds?
  8. Load Test: Ensure the product is not overwhelmed by high volumes of users or data
  9. Platform or Environment Test: Ensure the product acts as intended across platforms, versions, patch levels, etc.
  10. Benchmark Test (by build and/or platform): May also cover network speeds, hardware configurations, and machine utilization
  11. Installation Test (by build and/or platform)
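
As a minimal sketch of technique 1, a unit test exercises one function in isolation and guards against regressions. The `apply_discount` function below is hypothetical, invented purely for illustration:

```python
# Minimal unit-test sketch. `apply_discount` is a hypothetical function
# under test, not part of any product discussed here.

def apply_discount(price: float, percent: float) -> float:
    """Apply a percentage discount and round to cents."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# pytest-style tests: plain functions with asserts, runnable via `pytest`.
def test_typical_discount():
    assert apply_discount(200.0, 25) == 150.0

def test_zero_discount_is_identity():
    assert apply_discount(99.99, 0) == 99.99

def test_invalid_percent_is_rejected():
    try:
        apply_discount(100.0, 150)
    except ValueError:
        return
    raise AssertionError("expected ValueError for an out-of-range percent")
```

Kept in the build, tests like these double as regression checks: a later change that breaks the rounding or validation rules fails immediately.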

Note 1: Where possible, automate repetitive testing tasks. 
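
Note 1 can be put into practice with a small table-driven harness; the two smoke checks below are hypothetical stand-ins for real BVT steps:

```python
# A minimal table-driven harness for repetitive checks. Both checks are
# hypothetical placeholders for real build-verification steps.

def service_starts() -> bool:
    # Placeholder: a real BVT might launch the product and poll a port.
    return True

def login_page_loads() -> bool:
    # Placeholder: a real check might issue an HTTP GET and verify status 200.
    return True

SMOKE_CHECKS = [
    ("service starts", service_starts),
    ("login page loads", login_page_loads),
]

def run_smoke_suite(checks):
    """Run every check and collect (name, passed) results."""
    return [(name, bool(check())) for name, check in checks]

results = run_smoke_suite(SMOKE_CHECKS)
failed = [name for name, ok in results if not ok]
print("BVT passed" if not failed else f"BVT failed: {failed}")
```

Adding a new repetitive check then means adding one row to the table, not writing a new script.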

Note 2: Although not a part of this discussion, a test-first approach from design through release should be considered. As mentioned in Part 1 of our blog series, quality is the primary goal.

The techniques listed above, although incomplete, provide a starting point to report on risk and help give project owners useful information. 

It’s up to you to develop expectations for each of these items in your context; if you find something during testing that doesn’t fit those expectations, figure out why. Be sure to report anything you identify as a risk to the project owners. Managing expectations should be front and center, so communication with stakeholders is key. 

Additional testing activities to build confidence and report risk

Good software testing is a challenging intellectual process; treat your testing as one. 

End to end purview: Get involved at all of the appropriate points of the project and not just at the development phase, and push to get access to the software as soon as possible. Ensuring quality is a continuous process through the full product development cycle. It’s not something that occurs towards the end of the project. Be aware of what your role is, and vocalize quality concerns early (and if necessary, often). 

Automated, but not automatic: Automated testing is not automatic manual testing. It oversimplifies the practice to talk about automated tests as if they were automated human testing. Testing tools and automation are important but not a replacement for knowing what to test and how to test it. The intellectual dimension of the process is knowing where to look and having pattern recognition to interpret the results. 

Tie artifacts to the results and plans: Test artifacts are worthwhile to the extent that they provide valuable data. A historical record alone is useful mainly from an audit perspective. Think about what’s relevant to the team and stakeholders. 

Ways to provide value and further confidence in the testing process include:

  • Making test results traceable to test plans.
  • Making test results traceable to requirements. 
  • Modeling how customers actually use the software. Techniques that may help are use cases and stories that relate to real-world scenarios. This may involve actual users or interactions within/between architecture tiers or machines. API testing is an example of this. 
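
The traceability bullets above can be sketched as a simple roll-up from test results to requirements; all IDs below are hypothetical:

```python
# Sketch: tracing test results back to requirements. Test and requirement
# IDs are hypothetical. Each result names the requirement it exercises,
# so results can be rolled up into a per-requirement view for stakeholders.

test_results = [
    {"test": "TC-101", "requirement": "REQ-1", "passed": True},
    {"test": "TC-102", "requirement": "REQ-1", "passed": False},
    {"test": "TC-201", "requirement": "REQ-2", "passed": True},
]

requirements = ["REQ-1", "REQ-2", "REQ-3"]

def traceability_report(results, requirements):
    """Group results by requirement; requirements with no tests stay empty."""
    report = {req: [] for req in requirements}
    for r in results:
        report[r["requirement"]].append((r["test"], r["passed"]))
    return report

report = traceability_report(test_results, requirements)
untested = [req for req, tests in report.items() if not tests]
# An untested requirement (here REQ-3) is itself a risk worth reporting.
```

The per-requirement view answers stakeholder questions ("is REQ-1 covered, and does it pass?") that a flat list of test results cannot.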

Contextualize acceptance testing: Acceptance testing is another activity that is often misinterpreted when there is no clarity about the user or business context. How many different user types touch the software? What are their roles and relative importance? Should acceptance testing be targeted at all of them?

Since testing time is finite, the answers to these questions depend on context, project phase, and the audience the software serves. This is where testing intelligence and collaboration with product management play their part. 
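
One way to act on these questions is to weight acceptance criteria by user role, so limited testing time goes to the most important user types first. The roles, weights, and criteria below are hypothetical:

```python
# Sketch: targeting acceptance checks by user role. Roles, weights, and
# criteria are hypothetical; weights encode relative importance.

ROLES = [
    {"role": "administrator", "weight": 3, "criteria": ["can manage users"]},
    {"role": "analyst", "weight": 2, "criteria": ["can export reports"]},
    {"role": "viewer", "weight": 1, "criteria": ["can open dashboards"]},
]

def prioritized_criteria(roles):
    """Flatten criteria, ordered by role importance (highest weight first)."""
    ordered = sorted(roles, key=lambda r: r["weight"], reverse=True)
    return [(r["role"], c) for r in ordered for c in r["criteria"]]

plan = prioritized_criteria(ROLES)
# If time runs out, the checks dropped are those for the least important roles.
```

The weights themselves are a product-management decision, which is exactly the collaboration the paragraph above describes.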

Michael Bolton, a testing specialist, brings up some interesting questions with regards to acceptance testing. 

  • Who are the people producing the item being tested?
  • Who are the people accepting it?
  • Who are the people who have mandated the testing?
  • Who is doing the testing?

According to Michael, “Acceptance testing is any testing done by one party for the purpose of accepting another party’s work. It’s whatever the acceptor says it is. The key to understanding acceptance testing is to understand the dimensions of the context.”

In summary, testing requires embracing change: test the actual product, not the product that may exist in the future. It also involves complex techniques and a mindset of curiosity and discovery. Acceptance testing depends on the context of the product’s usage and the user’s needs. This, in turn, requires an understanding of the business, and hence a holistic view of customer and business needs.

In part three of this blog, we’ll discuss knowing when to ship software, which is arguably the most important aspect of testing and quality.