A Context Driven Approach to Releasing Quality Software (Part 3)

Steve Rabin | January 31, 2018 | 1 min. read

This post was written by Steve Rabin and Adam White. Steve Rabin is the Chief Technology Officer at Insight Venture Partners. He has extensive experience designing enterprise-class software and managing technology teams. At Insight, Steve’s goal is to help the technical teams of portfolio companies build best-of-breed R&D organizations and enterprise-class solutions. Adam White is a leader in the robotics and medtech space. Having held roles in all aspects of engineering, he has developed an informed perspective on what it takes to produce quality products on time and on budget.

Software testing is part art and part science. It’s a combination of looking at the components of a system holistically, as well as in detail.

In parts one and two of this blog series, we looked at stakeholder collaboration and the different aspects of testing, discussed what testing is, examined various testing techniques and considered the role context plays in software testing.

Knowing when to ship the software

Ultimately, in a market-driven company, the decision to ship is a business one. So how do you provide the right information to help manage the project community’s confidence in shipping the software? There are several signals you can use to figure this out. The items below may or may not apply to your context, market-driven or not.

  • All critical and high priority bugs are fixed
  • All areas of the product have been tested to some degree
  • Metrics fall within acceptable ranges
  • Automation is reporting proper results
  • Ask your team and stakeholders
  • Build a cross-functional checklist

No high severity issues are outstanding.

  • All high severity fixes that the development team can complete, whether requested by support, product management or other project owners, are in the product and tested. 
  • The process of keeping track of these issues is managed jointly by the support, product management, project management, development and test engineering teams; a minimal sketch of such a check appears after this list.
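
The sketch below (Python) illustrates that check: it filters an exported issue list for high severity items that are still open. The issue fields and status values are hypothetical; substitute whatever your bug tracker exports.

    # Minimal sketch: flag open high-severity issues before a release conversation.
    # The dictionaries and field names ("severity", "status", "requested_by") are
    # hypothetical stand-ins for an export from your bug tracker.
    OPEN_STATES = {"open", "in progress", "reopened"}

    def outstanding_high_severity(issues):
        """Return the issues that should dominate the ship discussion."""
        return [
            issue for issue in issues
            if issue["severity"] in {"critical", "high"}
            and issue["status"].lower() in OPEN_STATES
        ]

    issues = [
        {"id": "BUG-101", "severity": "high", "status": "Open", "requested_by": "support"},
        {"id": "BUG-102", "severity": "low", "status": "Open", "requested_by": "product"},
        {"id": "BUG-103", "severity": "critical", "status": "Closed", "requested_by": "support"},
    ]

    for issue in outstanding_high_severity(issues):
        print(f"{issue['id']} ({issue['severity']}) requested by {issue['requested_by']} is still open")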

All areas of the product have been tested to some degree.

  • A testing dashboard or risk matrix is a great way to provide this information. A tool like Jira is helpful, or it could be as simple as a spreadsheet with each feature listed in the left column and each build number across the top. 
  • Each cell holds one of three (or more) status codes: Green, Red and Grey. Green means no issues have been discovered with that feature. Red means there is a problem, and the bug number is included in the cell. Grey means more testing or investigation is needed before a reliable status can be determined. For a detailed presentation on this, see footnote. A minimal sketch of such a matrix follows this list.
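
Here is a minimal sketch of such a matrix in Python; the features, build numbers and cell values are invented examples, and in practice the data would come from your spreadsheet or a tool like Jira.

    # Minimal sketch of the feature-by-build risk matrix described above.
    # Feature names, build numbers and statuses are made-up examples.
    GREEN, GREY = "Green", "Grey"   # Red cells carry the bug number instead of a plain "Red"

    matrix = {
        "Login":    {"build 41": GREEN, "build 42": GREEN},
        "Checkout": {"build 41": "Red (BUG-208)", "build 42": GREY},
        "Reports":  {"build 41": GREY, "build 42": GREEN},
    }

    builds = ["build 41", "build 42"]
    print("Feature".ljust(10) + "".join(b.ljust(16) for b in builds))
    for feature, cells in matrix.items():
        print(feature.ljust(10) + "".join(cells.get(b, GREY).ljust(16) for b in builds))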

The usefulness and priority of the risk matrix vary during the project life cycle. The information is always available, but we don't want to overload the project community with it.

  • Near the beginning of the project, interest in whether the software could ship is generally low, so we don't send the document out to a broad audience. The week or two before release is a different story: interest in this information is very high during the last stages of the product, so we provide the document on a daily basis. 
  • Providing this information can help the test team achieve visibility, along with buy-in at all levels of project involvement. This helps the test team avoid becoming the "Quality gatekeepers."

Automation reports proper results.

  • Is your automation focused on the right areas? (A sanity check of a run is sketched after this list.)
  • Can you use your own product to test your own product?
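
One way to approach the first question, sketched below with hypothetical product areas and result fields, is to sanity-check each automation run against the areas you expect it to exercise rather than trusting a green build at face value.

    # Minimal sketch: sanity-check an automation run instead of trusting a green build.
    # The result structure and product-area names are hypothetical examples.
    expected_areas = {"api", "checkout", "login", "reports"}

    run = {
        "login":    {"executed": 120, "failed": 0},
        "checkout": {"executed": 0,   "failed": 0},   # suspicious: nothing actually ran
        "reports":  {"executed": 45,  "failed": 2},
    }

    for area in sorted(expected_areas):
        results = run.get(area)
        if results is None or results["executed"] == 0:
            print(f"{area}: no automated tests executed - a green build proves nothing here")
        elif results["failed"]:
            print(f"{area}: {results['failed']} failures out of {results['executed']} tests")
        else:
            print(f"{area}: all {results['executed']} tests passed")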

Requirements verified

  • Walk through the requirements and make sure we have test case coverage for each item; a minimal traceability check is sketched below. 
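
The traceability check might look like the following sketch; the requirement IDs and the test-case-to-requirement mapping are hypothetical.

    # Minimal sketch: verify every requirement maps to at least one test case.
    # Requirement and test-case IDs are invented examples.
    requirements = ["REQ-1", "REQ-2", "REQ-3", "REQ-4"]

    test_cases = {
        "TC-10": ["REQ-1"],
        "TC-11": ["REQ-1", "REQ-3"],
        "TC-12": ["REQ-3"],
    }

    covered = {req for reqs in test_cases.values() for req in reqs}
    uncovered = [req for req in requirements if req not in covered]

    if uncovered:
        print("Requirements with no test coverage:", ", ".join(uncovered))
    else:
        print("Every requirement has at least one test case")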

Metrics

  • The number of customer issues successfully fixed is a good one to start with.
  • There are many more; for example, performance, security, transaction integrity, 3rd party integrations, etc.
  • Bug rates during the project may provide insight. If all variables stay constant throughout the entire cycle (which rarely happens), your spidey sense should be active; a simple rate-tracking sketch follows this list. 
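
As a small illustration of the bug-rate idea, the sketch below tracks weekly found-versus-fixed counts. The numbers are invented; the value is in the trend, and a curve that never moves is itself worth questioning.

    # Minimal sketch: weekly bug arrival vs. fix rates (invented numbers).
    weekly = [
        {"week": 1, "found": 30, "fixed": 12},
        {"week": 2, "found": 25, "fixed": 22},
        {"week": 3, "found": 14, "fixed": 20},
        {"week": 4, "found": 13, "fixed": 21},
    ]

    for w in weekly:
        net = w["found"] - w["fixed"]
        trend = "backlog growing" if net > 0 else "backlog shrinking"
        print(f"week {w['week']}: found {w['found']}, fixed {w['fixed']} ({trend})")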

Test case coverage

  • This is a dangerous metric, especially when it is not viewed in context with the other heuristics. It is also risky to keep chasing a coverage percentage while the product is growing or in a state of rapid change. Reporting this metric can make those who don't understand testing feel really good about bad software.
  • Percent coverage doesn't really mean much without context. Tests may have 90% code coverage, which sounds good, but what if the remaining 10% is where most of the app's complexity lies and therefore represents a higher likelihood of defects? A risk-weighted view is sketched after this list. 
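
One way to add that context, sketched here with made-up modules and weights, is to weight coverage by where the complexity and risk actually live rather than quoting a single raw percentage.

    # Minimal sketch: risk-weighted coverage vs. a raw average (all numbers invented).
    modules = [
        {"name": "ui",      "coverage": 0.95, "risk_weight": 1},
        {"name": "billing", "coverage": 0.40, "risk_weight": 5},  # the complex, risky code
        {"name": "reports", "coverage": 0.90, "risk_weight": 2},
    ]

    raw = sum(m["coverage"] for m in modules) / len(modules)
    weighted = (
        sum(m["coverage"] * m["risk_weight"] for m in modules)
        / sum(m["risk_weight"] for m in modules)
    )

    print(f"raw average coverage:   {raw:.0%}")       # looks comfortable
    print(f"risk-weighted coverage: {weighted:.0%}")  # tells a different story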

Ask your team!

  • A very good way to figure out if you are done testing is to ask "Can we ship?" or “Would you feel comfortable giving this to customers?” or “Does this product meet the goals for the release?” The answers (and body language) will tell you their confidence in the product and how it ties back into their own work.
  • The answers you get from your team (and relevant stakeholders) can tell you a lot. This assumes there’s an open culture where everyone feels comfortable speaking their mind about quality and ship status. 

The whole is greater than the sum of its parts

Reported individually, these items do not hold much value. When the information is aggregated, it can offer useful insights, since the data comes from people with meaningful involvement in the project. Someone has to provide context for the stats, or the numbers/charts/graphs can be wildly misinterpreted. A designated release manager or release group should have the authority to do this and be accountable for it. 

Related to this is a cross-functional checklist. A release 'go' or 'no-go' decision is based on all lines on this list being green. There are very few reasons to release a piece of software if this is not the case. 
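
In practice the checklist can be as simple as the sketch below; the items and their owners are hypothetical examples, and a single non-green line means no-go.

    # Minimal sketch of a cross-functional go/no-go checklist (hypothetical items).
    checklist = {
        "No outstanding high-severity bugs (engineering)": True,
        "Release notes reviewed (product management)": True,
        "Upgrade/install scripts verified (support)": False,
        "Performance within agreed thresholds (test)": True,
    }

    blockers = [item for item, green in checklist.items() if not green]

    if blockers:
        print("NO-GO. Outstanding items:")
        for item in blockers:
            print(f"  - {item}")
    else:
        print("GO: every line on the checklist is green")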

The specific items on the list are shaped by your context, particularly in market-driven settings. For example, you can get your product to market quickly by sacrificing quality or feature-set depth. This may have meaningful market advantages if your customers are willing to compromise on those attributes. In that case, the team must be prepared to quickly address customer concerns and deal with any missed bugs or features in a timely fashion.

When making release compromises (yes, it happens!), it’s important to keep pace with customers and align with the market. A lightweight approach to test artifacts is needed because the product is changing rapidly; too much detail is wasted effort when there are higher priority tasks. It’s a balancing act.

There are risks when shipping software this way. Releasing an MVP (minimum viable product) is a good example. All signs can point to 'go,' but it’s easy to overlook something important. This may be quality, functionality or something else (release notes, install scripts, etc.). The test team might not find a bug that your most prized customer does after you ship.

Teams sometimes make decisions to ship too soon. Shipping software is a tradeoff between breadth and depth of product coverage. You have to think about what your software needs to accomplish and what quality level is acceptable; it’s all about identifying and evaluating the key risks. Teams must be comfortable with their decisions and have a clear understanding of the impact of those decisions. 

Sometimes, teams ship too late. If your metrics measure the wrong things or you’re testing the wrong pieces of functionality, you may actually be further along than the numbers indicate. Waiting for every last bit of functionality, even less significant features, or holding out for 100% test coverage also causes products to ship late. Delaying a release unnecessarily has as many impacts, good and bad, as releasing early. 

Conclusion

Software testing is an essential part of the development process. It should be integrated into every aspect of the software development cycle, from initial design/requirements planning, through coding, system validation and ongoing support/maintenance. While specific tests may be more or less important for a given application, the principles of software testing remain unchanged.

The guidelines described throughout this article are not meant to be domain-specific. The goal was to give you something that could work well across all types of applications including cloud-based, network-centric, business infrastructure, on-prem, web services heavy architectures, etc. What gets tested, who does the testing, how the tests get performed and the communication mechanisms employed to manage the validation process must be taken into consideration regardless of the purpose of the system or the deployment environment.

Testing a piece of software is not a static event. New releases (major and minor), enhancements and patches are a regular part of the software’s lifecycle, so testing is an ongoing activity. Features are added to and removed from software based on need, so there is no such thing as a steady state. Given the many moving parts and associated complexities of software, it’s recommended that every test team develop its own set of testing ideas and culture to validate its specific software product. Don’t forget: context matters regardless of what’s being tested or at what stage in the process. 

Resources:

Books:

  • Testing Computer Software, by Cem Kaner, Jack Falk and Hung Q. Nguyen
  • Lessons Learned in Software Testing, by Cem Kaner, James Bach and Bret Pettichord
