Understanding Prioritization – Patches and Vulnerabilities

At Tripwire, one of the responsibilities of VERT (Vulnerability and Exposure Research Team) is the monthly publication of the Patch Priority Index (PPI). Equal parts science and art, the PPI is released by VERT researchers who deal with vulnerabilities resolved by these patches on a daily basis. When this process first began, it prompted a very interesting discussion among the project’s stakeholders.

At first, they looked at using scoring as a method of prioritizing patches, considering both our Tripwire IP360 Scoring System and CVSSv2. Neither system is designed to assist with the prioritization of patches; instead, both are designed to describe the criticality of a vulnerability.

While some may argue that we’re dealing in semantics at this point, the concepts are, in fact, very different. When you start to discuss vulnerability prioritization, there are common considerations:

  • What level of access can be gained by a compromise?
  • What is the attack vector?
  • How easily can the vulnerability be exploited?
  • Is the vulnerability being actively exploited?

These can be considered common because they surface in multiple scoring systems and most discussions around the severity of vulnerabilities.
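As a rough illustration only, the hypothetical record below captures those considerations as data; the field names and values are mine and are not part of any real scoring system.

```python
from dataclasses import dataclass

# Hypothetical record of the common considerations listed above.
# Field names and values are illustrative only, not part of any real scoring system.
@dataclass
class VulnAssessment:
    cve_id: str
    access_gained: str        # level of access a compromise yields, e.g. "user" or "admin"
    attack_vector: str        # e.g. "network" or "local"
    easily_exploited: bool    # can the vulnerability be exploited with little effort?
    actively_exploited: bool  # is exploitation being observed in the wild?

example = VulnAssessment("CVE-0000-0000", "admin", "network", True, False)
```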

Patches, on the other hand, resolve multiple vulnerabilities, which immediately implies that any prioritization will be much more complex. One immediate thought is to simply combine the vulnerability scores. Consider the following scenario:

You have a system with four vulnerabilities: A, B, C, and D. Patch X resolves A and B, while Patch Y resolves C and D. Time is limited; which patch do you apply first?

CVSS Scores

A – 10.0 (AV:N/AC:L/Au:N/C:C/I:C/A:C)

B – 0.8 (AV:L/AC:H/Au:M/C:N/I:N/A:P)

C – 6.0 (AV:L/AC:H/Au:S/C:C/I:C/A:C)

D – 7.2 (AV:L/AC:L/Au:N/C:C/I:C/A:C)

Ignoring the vectors for a moment, assume you were combining CVSS scores with simple arithmetic. You might mistake a strict numeric inequality for a prioritization.

A+B = 10.0 + 0.8 = 10.8

C+D = 6.0 + 7.2 = 13.2

Patch Y appears to be the priority, since C+D > A+B.

It’s easy to see how one could assume that you should patch C and D first. This could be further strengthened by documentation from an ASV (Approved Scanning Vendor) indicating two vulnerabilities above the 4.0 CVSS fail threshold as opposed to one. The CVSS vectors, though, indicate that the single 10.0 vulnerability is potentially more severe than the other two vulnerabilities, and this could factor into patch prioritization. However, we’re just beginning to scratch the surface of the distinction between vulnerability prioritization and patch prioritization.
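To make the contrast concrete, here is a minimal sketch, assuming nothing beyond the scores and vectors above, that compares the naive sum against simply looking at each patch’s worst vulnerability and its vector. It is an illustration, not how VERT calculates the PPI.

```python
# A minimal sketch (not an official CVSS implementation) comparing the naive
# sum against simply looking at each patch's worst vulnerability and its
# vector. Scores and vectors are taken from the example above.
patches = {
    "X": {"A": (10.0, "AV:N/AC:L/Au:N/C:C/I:C/A:C"),
          "B": (0.8,  "AV:L/AC:H/Au:M/C:N/I:N/A:P")},
    "Y": {"C": (6.0,  "AV:L/AC:H/Au:S/C:C/I:C/A:C"),
          "D": (7.2,  "AV:L/AC:L/Au:N/C:C/I:C/A:C")},
}

for name, vulns in patches.items():
    total = sum(score for score, _ in vulns.values())
    worst_id, (worst_score, worst_vector) = max(
        vulns.items(), key=lambda item: item[1][0])
    metrics = dict(part.split(":") for part in worst_vector.split("/"))
    remote = metrics["AV"] == "N"  # "AV:N" means a network attack vector
    print(f"Patch {name}: sum={total:.1f}, worst={worst_id} "
          f"({worst_score:.1f}, {'remote' if remote else 'local'})")

# Summing ranks Y first (13.2 > 10.8); looking at the worst vulnerability and
# its vector ranks X first, because A is a remotely exploitable 10.0.
```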

Vulnerability scoring is a science. While some would have you believe that there are subjective aspects to measuring the criticality of a vulnerability (e.g. media coverage and fancy names), it simply isn’t true. There’s a reason why the 1-5 scoring system for vulnerability severity didn’t last.

Vulnerability scoring is objective; exploitation can be reproduced through repeatable steps, and its outcomes can be observed. Since we can measure this, we can say, without a doubt, that this scoring is science. (For more information on vulnerability scoring, please see our article series on the subject here, here, and here.)

The mistake that many of us make is assuming that because vulnerability scoring is a science, patch prioritization is also a science. After three separate experiences, I now feel I can say that it is not a science but an art. The first of these realizations came, as mentioned above, during the development of the PPI. The second came at RSA in 2015.

I had submitted a P2P session on vulnerability scoring with the expectation that we would discuss the math and science behind various scoring algorithms. These sessions are incredibly beneficial because, unlike regular conference talks, they have only a loose outline and grow organically based on attendance. This session shifted in ways I never could have imagined and resulted in a truly great discussion around patch management.

During the conversation, questions were raised about vendors failing to factor in reboot requirements or patch installation complexity. This surprised me, as these are not aspects of a vulnerability, nor do we have established systems for measuring topics like “installation complexity.” It became clear that we were moving away from the science of vulnerability scoring. I was fascinated by some of the revelations made during the session and wrote down a number of facts that I wanted to consider in later research.

That later research became our patch management survey, which resulted in a white paper on the concept of Patch Fatigue. One of the questions we asked was about the elements considered when prioritizing patches. The responses solidified my belief that patch prioritization is a well-practiced art.

While several pieces of objective data were referenced (CVSS, exploit availability, and reboot requirements), there were also a number of references to subjective data, including post-patch configuration, multi-stage updates, internal policies, and online resources and publications.
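As a sketch of how those inputs might sit side by side for a single patch, here is a hypothetical record and a toy weighting; the field names and weights are mine, not something taken from the survey or the PPI.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical record mixing the objective and subjective inputs mentioned above.
# Field names and weights are illustrative, not drawn from the survey or the PPI.
@dataclass
class PatchAssessment:
    patch_id: str
    max_cvss: float            # objective: worst vulnerability the patch resolves
    exploit_available: bool    # objective: a public exploit exists
    reboot_required: bool      # objective: operational cost of applying the patch
    post_patch_config: bool    # subjective: extra configuration work afterwards
    multi_stage_update: bool   # subjective: depends on other updates landing first
    policy_notes: List[str] = field(default_factory=list)  # internal policies, advisories

def rough_priority(p: PatchAssessment) -> float:
    """Toy weighting; an experienced practitioner would tune or override this."""
    score = p.max_cvss
    score += 3.0 if p.exploit_available else 0.0
    score -= 1.0 if p.reboot_required else 0.0
    score -= 1.0 if p.post_patch_config else 0.0
    score -= 0.5 if p.multi_stage_update else 0.0
    return score
```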

All of this has reinforced my belief that patch prioritization is an art and, while there’s still plenty of science we can incorporate and lots of subjective data we could distill into objective buckets, it requires an experienced practitioner to be truly effective. That’s why, with over eleven decades of combined IT and security experience, we at VERT take pride in our monthly Patch Priority Index and the effort that goes into it.

Via: tripwire

