A Context Driven Approach to Releasing Quality Software (Part 1 of 3)

Steve Rabin | December 12, 2017 | 1 min. read

This post was written by Steve Rabin and Adam White. Steve Rabin is the Chief Technology Officer at Insight Venture Partners. He has extensive experience designing enterprise-class software and managing technology teams. At Insight, Steve’s goal is to help the technical teams of portfolio companies build best-of-breed R&D organizations and enterprise-class solutions. Adam White is a leader in the robotics and medtech space. Having held roles in all aspects of engineering, he has developed an interesting perspective on what it takes to produce quality products on time and on budget.

Guidelines

This three-part blog series offers a set of guidelines for testers who ship commercial software, intended to help everyone involved in the software testing process understand what is required to be successful.

It’s worth noting that these guidelines are intentionally not comprehensive; it would be nearly impossible to list every item that must be considered from the R&D/testing perspective. Additionally, over time, some of the basics of software testing may be taken for granted, and some may even have been forgotten. Because of this, these guidelines are also intended to act as a refresher.

Every project environment (customers, dev teams, test teams, schedules, products, deliverables, etc.) is different, so some of the following guidelines may work for you and some may not. Teams should decide what makes sense and modify the list as needed.

Balancing context and goals

Software testing is part art and part science. It’s a combination of looking at the components of a system holistically as well as in detail. Part one of this blog series addresses different aspects of testing and discusses what testing is. 

Testing is a context-dependent activity. In some contexts, the role of software testing is to manage confidence and report on risk. In other cases, testing’s goal is to ask the right questions to move the product in the right direction at the right time. By doing this, testers indirectly help improve the quality of the product. 

Mainly, testing is about critical thinking: asking the right questions to get answers that provide value and insight to stakeholders.

Testing organizations that validate software must balance context and goals, mixing creativity with defined practices. Given the complexities and time constraints usually involved in software development cycles, it is easy to lose sight of what is truly needed to validate a software product.

Testing defined

There is no single answer to the question "What is testing?"; many possible definitions come to mind.

James Bach, co-author of Lessons Learned in Software Testing, defines testing as “questioning the product in order to evaluate it.”

Cem Kaner, another co-author of Lessons Learned in Software Testing, says, “Testing is a technical investigation of the product conducted to provide stakeholders with quality-related information.” He elaborates further: “Testing is investigation. As investigators, we must make the best use of limited time and resources to sample wisely from a huge population of potential tasks. Much of our investigation is exploratory – we learn as we go, and we continually design tests to reflect our increasing knowledge. Some, and not all, of these tests will be profitably reusable.”

This is a core concept. Teams create a repository of tests, often numbering in the thousands. Over time, tests become stale, or no longer relevant, and must be refreshed to ensure they’re asking the right questions and testing the right things. Equally important is considering the context under which the tests were created. 
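One lightweight way to keep that context visible is sketched below in Python; the record fields, names and one-year review threshold are illustrative assumptions, not something prescribed here. The idea is simply to note, alongside each test, when it was written, what it assumed and when someone last confirmed it still asks the right question, so stale tests surface for review rather than quietly rotting in the repository.

```python
from datetime import date
from typing import List, NamedTuple

# Illustrative sketch only: record the context a test was written under so it can be
# flagged for review once that context may have drifted.

class TestRecord(NamedTuple):
    name: str            # test identifier in the repository
    context: str         # the assumptions the test was written under
    written: date        # when the test was created
    last_reviewed: date  # when someone last confirmed it still asks the right question

def stale_tests(records: List[TestRecord], today: date, max_age_days: int = 365) -> List[TestRecord]:
    """Return tests whose context hasn't been re-confirmed within max_age_days."""
    return [r for r in records if (today - r.last_reviewed).days > max_age_days]

repository = [
    TestRecord("test_checkout_total", "pre-tax pricing model", date(2015, 4, 2), date(2015, 4, 2)),
    TestRecord("test_login_lockout", "current auth service", date(2017, 1, 10), date(2017, 11, 1)),
]

for record in stale_tests(repository, today=date(2017, 12, 12)):
    print(f"Review needed: {record.name} (written against: {record.context})")
```

A report like this can’t say whether a test is still valuable, but it does say which tests are due for that conversation.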

Testing context

It is entirely proper for different test groups to have different missions. A core practice in the service of one mission might be irrelevant or counterproductive in the service of another. Consider the following two project examples from www.context-driven-testing.com.

The first is developing the control software for an airplane. What "correct behavior" means is a highly technical and mathematical subject. FAA regulations must be followed. Anything you do – or don't do – would be evidence in a potential lawsuit 20 years from now. The development staff share an engineering culture that values caution, precision, repeatability and double-checking everyone's work.

An alternative project is developing a new music service. "Correct behavior" is whatever woos a vast audience of users to use your software. There are no regulatory requirements that matter (other than those governing public stock offerings). Time to market matters – 20 months from now it will all be over, for better or worse. The development team does not have, or need, a formal engineering culture comparable to the one above. In fact, the engineering vocabulary common in the first culture has little place here.

The value of any practice depends on its context. Testing is done on behalf of project owners in the service of developing, qualifying, debugging, investigating or selling a product. 

The goal is to implement testing activities that help you confidently answer the question, "Is the product ready to ship to customers?" What "ready" means depends on the context of the question and the stage of the project or product; it’s not an easy question to answer. The simple answer is that, as testers, we’re never finished focusing on product quality.

Determining “ready”

Here are a few guidelines that may help:

  • Determine what kind of coverage you intend to achieve (where coverage means "the models we have for the product, and the extent to which we have tested against them").
  • Figure out what your oracles are (where oracles are "principles or mechanisms by which we recognize problems"). For example, where do you look, and whom do you ask, debate or question when investigating the problem domain? A small sketch of oracles in this sense follows this list.
  • Asking the right questions can help the project owners see the outcome of the project. If you find something that doesn’t feel right, voice your concerns early by reporting factual problems.
  • Articulate risks and the impact of decisions on testing. 
  • Remember, project owners and key stakeholders are the ones who hold the key to quality. 
  • If development slips and the release date doesn’t move, it’s the testing team’s responsibility to give the project owners a list of features that won’t be tested. Testing is a collaborative effort, agreed between the test team, the development team and other stakeholders.
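To make the oracle idea a bit more concrete, here is a minimal sketch in Python. The function under test, sort_records, is a hypothetical stand-in (stubbed with Python's built-in sorted so the example runs); the two assertions are the oracles, each encoding a principle by which a problem would be recognized rather than a hard-coded list of expected outputs.

```python
import random
from collections import Counter

def sort_records(records):
    """Hypothetical production code under test; stubbed with sorted() for this sketch."""
    return sorted(records)

def test_sort_against_oracles():
    random.seed(7)
    data = [random.randint(-1000, 1000) for _ in range(500)]
    result = sort_records(data)

    # Oracle 1 (order): every element is less than or equal to its successor.
    assert all(a <= b for a, b in zip(result, result[1:]))

    # Oracle 2 (conservation): nothing was added, dropped or duplicated.
    assert Counter(result) == Counter(data)

test_sort_against_oracles()
print("no problems recognized by either oracle")
```

Because the oracles are principles rather than fixed answers, they stay useful as the test data and the product change, which is exactly why it pays to be explicit about what yours are.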

In addition to coverage and oracles, many other test activities are important. These include choosing techniques, configuring the product, operating the product, observing the result, evaluating the result and selecting tools (automated and otherwise).

Tests should become more challenging, or should focus on different risks, as the program becomes more stable. This is because contexts change over the life of a development project, just as the software and its objectives do. Software doesn’t live in a clean room or in stasis. Figuring out which testing techniques and activities are important in your context is the first task.

Part Two of this blog series will cover how to help ensure a quality product is released, and Part Three will help you know when to ship the software.