Secure SDLC Lessons Learned: #4 Metrics

In this blog series, I am discussing five secure software development lifecycle (SDLC) “lessons learned” from Optiv’s application security team’s conversations with our clients in 2016. Please read the previous posts in this series covering the first three lessons.

Secure SDLC Lesson 4: Metrics

As the secure SDLC program matures, vulnerabilities should be caught and remediated earlier in the lifecycle. To know if the program is truly working, organizations must capture metrics. The specific metrics chosen should support and align with the organization’s business objectives and risk management program.

Plan

Security metrics related to the SDLC are best captured at security checkpoints. These checkpoints, or gates, may include in-IDE static analysis, code-commit analysis, build-time static analysis, manual code review, dynamic scanning in QA/integration testing, and pre- and post-production penetration testing. Internal and external bug bounty programs may also be included as post-production options.

By capturing which checkpoint or tool discovered each security defect, you will be able to track trends across your application portfolio. In theory, as the secure SDLC program matures, more security bugs should be caught early in the software lifecycle; this metric lets you verify that in practice.
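
To make this concrete, here is a minimal Python sketch of that kind of checkpoint tracking. The record layout and checkpoint names are hypothetical stand-ins for whatever your bug tracker or findings repository actually exports.

```python
# A minimal sketch of checkpoint trend tracking. The record format and
# checkpoint names are hypothetical -- substitute your tracker's export.
from collections import Counter
from datetime import date

findings = [
    {"checkpoint": "ide-sast", "found": date(2016, 1, 12)},
    {"checkpoint": "build-sast", "found": date(2016, 2, 3)},
    {"checkpoint": "pentest", "found": date(2016, 2, 20)},
    {"checkpoint": "ide-sast", "found": date(2016, 5, 9)},
]

def quarter(d: date) -> str:
    # Label a date with its calendar quarter, e.g. "2016-Q1".
    return f"{d.year}-Q{(d.month - 1) // 3 + 1}"

# Count findings per (quarter, checkpoint); a maturing program should
# shift the weight toward early checkpoints such as in-IDE analysis.
by_quarter = Counter((quarter(f["found"]), f["checkpoint"]) for f in findings)
for (q, checkpoint), count in sorted(by_quarter.items()):
    print(q, checkpoint, count)
```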

In addition, proper metrics will help identify toolchain issues or, conversely, justify their expense. For example, if your static analysis tools fail to capture security defects that later surface during penetration testing, there may be a problem with code coverage or a scanner ruleset.
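
A simple cross-reference of findings by discovery source can surface those gaps. The sketch below assumes findings have been normalized to (application, vulnerability type) pairs; all of the names are illustrative.

```python
# A sketch of a toolchain gap check, assuming findings are normalized to
# (application, vulnerability type) pairs -- field values are illustrative.
sast_findings = {("billing-api", "sql-injection"), ("portal", "xss")}
pentest_findings = {("billing-api", "sql-injection"), ("billing-api", "xxe"),
                    ("portal", "idor")}

# Anything the pen test found that static analysis never reported may point
# to a code-coverage gap or a missing scanner rule (XXE and IDOR here).
missed_by_sast = pentest_findings - sast_findings
for app, vuln in sorted(missed_by_sast):
    print(f"{app}: {vuln} surfaced in pen testing but not in static analysis")
```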

Build

Beyond tracking the discovery point, there are two main classes of key performance indicators (KPIs): efficiency and risk (though some indicators may fall in the gray area between the two). Each KPI may be filtered by vulnerability type, development team or business area, the SDLC phase in which the issue was discovered, and so on, and is often reported per time period.

Efficiency indicators may already be available in your bug tracking system; a minimal sketch for computing several of them follows the list. These may include:

  • Time to Remediate (TTR) – the elapsed time between when a specific vulnerability is first identified and when it is fully remediated
  • Average TTR – mean TTR across the vulnerabilities remediated in a given time period
  • Current/Average Remediation Queue Size – how much technical security debt exists
  • Velocity – bug fixes completed per time period
  • Pipeline Stress – how many requests for services such as manual testing or code review had to be expedited
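
Here is a rough sketch of how several of these could be computed from a bug-tracker export; the issue records and field names are hypothetical.

```python
# A minimal sketch of the efficiency KPIs above, computed from a
# hypothetical list of issue records exported from a bug tracker.
from datetime import date
from statistics import mean

issues = [
    {"opened": date(2016, 3, 1), "closed": date(2016, 3, 15)},
    {"opened": date(2016, 3, 10), "closed": date(2016, 4, 2)},
    {"opened": date(2016, 4, 1), "closed": None},  # still in the queue
]

# Time to Remediate (TTR), in days, for each closed issue.
ttrs = [(i["closed"] - i["opened"]).days for i in issues if i["closed"]]
print("Average TTR (days):", mean(ttrs))

# Remediation queue size: open issues, i.e. current security debt.
print("Queue size:", sum(1 for i in issues if not i["closed"]))

# Velocity: fixes completed in a given period (here, March 2016).
velocity = sum(1 for i in issues
               if i["closed"] and (i["closed"].year, i["closed"].month) == (2016, 3))
print("Velocity (March 2016):", velocity)
```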

Other indicators can be designed to track efficiency in security operations, such as time to assign an AppSec resource for internal assurance services, time to complete them, and so on.

Risk indicators describe areas specific to the confidentiality, integrity and availability of applications. These may include the following (a short sketch of the last two follows the list):

  • Coverage – the number of applications per assurance activity (code scan, dynamic analysis) per time period; applications can be ranked according to risk or mission criticality
  • Top Vulnerability Types – per time period
  • Rate of Recurrence – how often known defects come back
  • Churn – how many times a specific vulnerability was incorrectly remediated
  • Composite Risk Rating – often shown graphically, this is a somewhat arbitrary visualization of business risk, defined by something similar to: vulnerability severity × defect count × application risk rating
  • Security Defect Ratio – the number of security defects divided by the total number of defects
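
The sketch below illustrates the last two indicators. Note that the composite weighting shown is just the illustrative formula from the list above, not a standard; calibrate any such rating against your own risk model.

```python
# A sketch of the Composite Risk Rating and Security Defect Ratio.
# The weighting (severity x count x app risk) is the illustrative
# formula from the list above, not a standard scheme.
apps = [
    # (application, vulnerability severity 1-5, defect count, app risk rating 1-5)
    ("billing-api", 4, 3, 5),
    ("portal", 2, 10, 3),
]
for name, severity, count, app_risk in apps:
    print(name, "composite risk:", severity * count * app_risk)

# Security Defect Ratio: security defects over all defects in the period.
security_defects, total_defects = 42, 600
print("Security defect ratio:", security_defects / total_defects)
```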

These metrics are usually presented on some form of dashboard, typically with drill-down and trend-reporting capabilities. Remember, though, that the technologies used to implement KPIs are much less important than the information they are intended to convey. Avoid developing KPIs and visualizations that bring no value to your secure SDLC program.

Run

Over time, there should be a reduction in post-release vulnerabilities, which, again, are the most costly to remediate and pose the most risk to the organization. Expect a downward trend in bugs caught later in the SDLC as the program matures. Additionally, vulnerabilities discovered in-cycle should be remediated more quickly. When unexpected indicator values are observed, root cause analysis is often required to address the issue. For example, if severe vulnerabilities continue to be found by an external bug bounty program, then clearly something upstream isn’t working.
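
One simple way to watch for this is to track the share of findings discovered post-release in each period; the sketch below uses hypothetical phase labels.

```python
# A sketch of the "Run" check: the share of findings discovered
# post-release should trend down as the program matures.
# The (quarter, phase) records and phase labels are illustrative.
from collections import Counter

findings = [("2016-Q1", "post-release"), ("2016-Q1", "qa"),
            ("2016-Q2", "ide"), ("2016-Q2", "qa"), ("2016-Q2", "post-release"),
            ("2016-Q3", "ide"), ("2016-Q3", "build")]

per_quarter = Counter(q for q, _ in findings)
late = Counter(q for q, phase in findings if phase == "post-release")
for q in sorted(per_quarter):
    print(q, f"post-release share: {late[q] / per_quarter[q]:.0%}")
```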

Properly designed metrics provide a great way to measure the effectiveness of the secure SDLC program over time, and they provide useful feedback when aspects of the program are tweaked. Capturing key indicators gives organizations the ability to experiment with new vendor tools and alternate approaches to testing, and to determine the impact of different types and sources of developer training.

In short, metrics can take much of the guesswork out of knowing how well your secure SDLC program is working.

In the next post, I will cover secure SDLC lesson #5: personnel.

Shawn Asmus
Practice Director, Application Security, CISSP, CCSP, OSCP
Shawn Asmus is a practice director with Optiv’s application security team. In this role he specializes in strategic and advanced AppSec program services and lends technical expertise where needed. Shawn has presented at a number of national, regional and local security seminars and conferences.