Non-Technical CEO? Here Are the 6 Metrics You Should Care About

Metrics are a proven way to bring visibility to all aspects of a business. In technology companies, however, there is no single set of metrics that can be applied across domains to provide transparency and deliver value. Product, Marketing, Finance, Sales, Engineering – what works for one domain doesn’t apply to another. 

This is also true for the CEO’s business metrics. The metrics senior executives use to manage their teams aren’t all going to be useful to the CEO. Unless the CEO’s background is in technology, many of the measures important to CTOs and VPs of R&D don’t readily transfer to the CEO’s Technology Scorecard, where they need to help the CEO manage the business and anticipate issues before they become problems. 

This article focuses on the key technical metrics that CEOs should consider reviewing on a regular basis. The goal is to provide measures that are intuitive and useful for clear communication with technical leaders about current initiatives, and relevant for reviewing changes in projects and priorities. 

CEOs and engineering leaders should note that technical metrics are not necessarily focused on business outcomes or business value creation. Business value is created through the delivery of working software that customers will use and the market will purchase. This is the ultimate and most important result of software development. At the CEO’s level, business success metrics should drive the direction of the engineering team, not the other way around.

Business Value and Tech Metrics

Starting with the premise that the engineering team’s focus is to deliver technology that aligns with the company’s business goals, the metrics used by the CTO or VP Engineering to lead the development team (e.g., engineering efficiency, team size, practice improvement, automation and quality) don’t automatically provide the data that’s important to CEOs, at least in the form often provided. Typical metrics like velocity, story points and burndowns, along with terms like tech debt and latency, aren’t measures that business leaders fully understand or relate to. Measures like NPS, production downtime and quality are more appropriate, but even these numbers are difficult to relate directly to business value. 

For example, how much quality is too much, generally, and in terms of customer expectations? What’s the cost of improving uptime or performance and how does this impact the customer experience? What does a tech debt metric really tell the CEO? Some amount of tech debt is expected because of the rate at which technology evolves, so where’s the line in the sand beyond which tech debt slows down feature development, response time or quality? 

Based on many conversations with business leaders in Insight’s portfolio companies, tech metrics can be misleading because leaders may not understand (i) what the tech metric is telling them about the business, and (ii) whether the number is at, or approaching, the level where it’s negatively impacting the business. Is 99.89% uptime acceptable, or does it need to be 99.99% to maintain customer satisfaction? Similarly, is delivering 80% of sprint commitments acceptable, or does this indicate a problem with team effectiveness? Does it signal a skill set gap, or simply reflect that the team is not good at estimating?
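One way to make uptime percentages concrete is to translate them into the downtime they allow. A minimal sketch, in Python; the helper name is an assumption for illustration, and the percentages are the kinds of thresholds discussed in this article:

```python
# Translate an uptime percentage into the downtime budget it implies
# per 30-day month. A small illustrative calculation, not a standard.

MINUTES_PER_MONTH = 30 * 24 * 60  # 43,200 minutes in a 30-day month

def allowed_downtime_minutes(uptime_pct: float) -> float:
    """Downtime budget, in minutes per month, implied by an uptime percentage."""
    return MINUTES_PER_MONTH * (1 - uptime_pct / 100)

for pct in (99.0, 99.5, 99.9, 99.99, 99.999):
    print(f"{pct:>7}% uptime allows {allowed_downtime_minutes(pct):8.2f} minutes of downtime per month")
```

Framed this way, the difference between 99.89% and 99.99% is the difference between roughly 48 minutes and 4 minutes of monthly downtime, which is a question a CEO can weigh against cost and customer expectations.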

Assigning a business value to tech metrics is one way to address this disconnect. Business value is what CEOs understand. 

Business value can be thought of as the bullseye in a target. The bullseye represents the goals of the business, for example ARR Growth. The ring adjacent to the bullseye contains actions or issues that have the most impact on the goals, positive and negative. The rings further away have less immediate impact but need to be monitored. As issues trend closer to the bullseye, actions can be determined and taken. 

From the software development perspective and for this analogy, tech issues in the ring adjacent to the bullseye represent issues that may soon impact the business. The tech issues further away from the bullseye should have less near-term impact.

Before getting to the actual measures, there is one last consideration. The CEO needs to communicate clearly with tech leaders to ensure they know what is most important in terms of business priorities, value and goals. Engineering leaders use many metrics across the tech team, and they may not know which are most valuable to business leaders. If they haven’t asked (although they should have), the CEO needs to reach out and make the requirements clear. 

The Measures

The CEO’s Tech Scorecard needs to include the select metrics that are most relevant to what the CEO cares about and are closest to the bullseye. The point of a CEO’s dashboard is to easily verify the information and its implications. Insight’s Periodic Table of Software Development Metrics contains over 60 metrics, across several domains (and it’s by no means exhaustive). A variety of metrics are useful and recommended for technology teams, but the key for the CEO is to focus on those measures that provide transparency to what matters most in the short term. 

Insight suggests that a CEO’s dashboard contain metrics related to cost, value, project success, system uptime, performance and quality. These development and production attributes are table stakes and assumed to be near the bullseye. Along with static numbers, trends matter and should be indicated: are tech costs increasing month over month, and if so, why? 

  • Cost is relatively easy to measure and understand. 
  • Value is the kicker, since technology is often an enabler of value rather than a driver and may not be obvious to the tech team. Given this, it’s the responsibility of the business stakeholder or sponsor to define value. It may be based on customer satisfaction, increased sales or feature usage. If the only accepted value measure is dollars then Finance needs to provide a means for translating the business benefits of a tech project into a useful number (and over what time period). 
  • Project success or delivery consistency can be measured as the percentage of committed stories delivered by month or quarter. 90% is a good benchmark. 100% means the team is taking few risks and is overly conservative in estimating work items. Some risk is acceptable and even desired, but too much risk causes instability. Delivery consistency below 85% is an early warning of problems the team must address. 
  • System uptime relates to the stability and scalability of the production environment. Many companies set their goal to be the oft-cited “five nines”. That is, the environment is available and performs to specifications 99.999% of the time. Five nines is a good goal, although not all solutions require this level of uptime or should incur the cost to ensure it. In fact, from 97% to 99% uptime is a pretty common and acceptable range. For certain industries/solutions, the range typically falls between 99.5% and 99.9% uptime. Technical teams and leaders should collaboratively define what is acceptable for their customers.
  • Uptime trending should be included in the measure. A problem in one month may be tolerated but if this occurs more often, corrective action is indicated. Including a simple up-down or horizontal arrow (green-red-yellow) along with the uptime percentage provides the needed transparency for this measure.
  • Performance helps identify how the system performs under different scenarios/conditions and, generally, how consistent system performance is by feature or area. Measuring performance requires a baseline to be created. Once a baseline is established, it is possible to identify if performance is improving or degrading from the baseline and in which feature or area. 
  • Quality is a tricky measure to compute. This is because companies have different appetites for how much quality is acceptable, and definitions of what is meant by quality. 100% quality or zero defects comes at a price, as does some tolerance for defects.  
  • Furthermore, not all defects are created equal. A spelling mistake does not have the same business impact as a defect that brings the system down or computes the wrong value for the ending balance of a bank statement.
  • The timing of when a defect is found is also important. Is it prior to release and discovered by the QA team or after release and found by a customer? Some defects discovered post release are acceptable, but (i) what is considered an acceptable number of these, and (ii) how severe is the defect?
  • Even with this complexity, a simple way to measure the quality of a software platform is to split the number of defects into severity categories (A, B, C and D, for example).  From the CEO’s perspective, it’s the A/B’s that are worth a look. Similar to uptime, a simple up-down or horizontal arrow (green-red-yellow) indicates defect trending. 
  • Note: the caveat here is that the number of defects will and should increase as software for a new release is being developed. This is good because it means the QA process is finding bugs prior to the software being released. 
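Several of the scorecard entries above reduce to simple calculations plus a trend arrow. A hedged sketch in Python, assuming hypothetical data structures and field names; only the 90% delivery benchmark, the A–D severity categories, and the green-red-yellow arrow convention come from this article:

```python
# Illustrative sketch of a CEO scorecard row: delivery consistency,
# uptime, and A/B defect counts, each with a simple trend arrow.
from dataclasses import dataclass

@dataclass
class MonthlySnapshot:
    committed_stories: int
    delivered_stories: int
    uptime_pct: float
    severity_counts: dict  # e.g. {"A": 1, "B": 4, "C": 12, "D": 30}

def delivery_consistency(s: MonthlySnapshot) -> float:
    """Percent of committed stories delivered; 90% is the benchmark."""
    return 100.0 * s.delivered_stories / s.committed_stories

def trend_arrow(current: float, previous: float, higher_is_better: bool = True) -> str:
    """Green/yellow/red arrow comparing this period to the last one."""
    delta = current - previous
    if abs(delta) < 0.5:           # roughly flat: yellow
        return "→ (yellow)"
    improving = delta > 0 if higher_is_better else delta < 0
    return "↑ (green)" if improving else "↓ (red)"

last = MonthlySnapshot(100, 92, 99.95, {"A": 2, "B": 6, "C": 15, "D": 20})
this = MonthlySnapshot(110, 96, 99.90, {"A": 1, "B": 4, "C": 18, "D": 25})

dc_now, dc_prev = delivery_consistency(this), delivery_consistency(last)
ab_now = this.severity_counts["A"] + this.severity_counts["B"]
ab_prev = last.severity_counts["A"] + last.severity_counts["B"]

print(f"Delivery consistency: {dc_now:.0f}% {trend_arrow(dc_now, dc_prev)}")
print(f"Uptime: {this.uptime_pct}% {trend_arrow(this.uptime_pct, last.uptime_pct)}")
print(f"A/B defects: {ab_now} {trend_arrow(ab_now, ab_prev, higher_is_better=False)}")
```

The point of the sketch is the shape, not the code: each scorecard line is one number the CEO already understands plus a direction, so the conversation with the CTO starts from "why is this arrow red?" rather than from raw engineering data.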

It’s a truism that metrics drive insights, decisions and behavior across all levels of an organization. The measures used by the software development team will be more numerous and detailed than what a business leader requires and need to be abstracted for the CEO. For the CEO’s scorecard, the measures must:

  • Be Intuitive – it’s instantly clear what the metric says about the business and where the CEO should home in.
  • Work Together – while each metric is useful, together they should tell a story.
  • Relate to Value – a metric that's too far removed from business goals is not helpful and detracts from what’s important.
  • Be Relevant – just because something can be measured doesn’t mean it should. Too much data is as bad as too little and is distracting.
  • Be Actionable – measures should ideally point to an action or specific concern that needs to be discussed with the CTO or VP of R&D. The metric shouldn’t elicit the reaction, “now what?” 
  • Be Simple – calculating a measure shouldn’t require super-human effort or be so complex that it requires tracking new data sources. 

If you’re a CEO, you need visibility into what is happening in your technical organization. These are the metrics in your Technical Scorecard. They are the basis for clear communication with technical leaders about current initiatives and enable you to review the business impact of projects and priorities that are in the bullseye. Knowing the above six metrics, how they’re measured, and why they’re being measured is a CEO’s route to understanding the relationship between technical excellence and business value. 

Steve Rabin, Chief Technology Officer

Steve Rabin joined Insight in 2003 and serves as Insight's CTO. Steve has over 20 years of experience designing enterprise-class software and managing technology teams. Prior to Insight, Steve was CTO/VP of Engineering at InterWorld and American Software. He also served in several engineering management positions at Pfizer. Steve founded and…