We’ve just added a new severity level called “Critical” to our security policy. When we first introduced the policy, over a year ago, we had just three levels: “Low”, “Moderate”, and “High”. So why did we add “Critical”, and why are we not using someone else’s standard definitions?
After introducing the new policy we started giving everyone a heads-up when we were due to release OpenSSL updates that included security fixes. The heads-up doesn’t contain any details of the issues being fixed, apart from the maximum severity level and a date a few days in the future.
One problem we found is that “High” covers everything from denial-of-service right up to remote code execution. So when we gave advance notice that we were about to release an update fixing a “High” severity issue, users (and the press) jumped to the conclusion that it was the next Heartbleed. We heard of operations teams around the world standing by to patch immediately, only to find it wasn’t something they had to patch right away.
Historically, very few OpenSSL issues have needed that kind of immediate response, so for those that do we’ve defined a new Critical level:
Critical severity. This covers issues affecting common configurations which are also likely to be exploitable. Examples include significant disclosure of the contents of server memory (potentially revealing user details), vulnerabilities which can be easily exploited remotely to compromise server private keys (excluding local, theoretical, or difficult-to-exploit side channel attacks), or situations where remote code execution is considered likely in common configurations.
Why did we create our own levels and definitions rather than use an existing scoring system? Because our levels are specific to the ways that OpenSSL vulnerabilities lead to risk.
One industry standard scoring system is the Common Vulnerability Scoring System (CVSS), currently at version 2. CVSSv2 was widely criticised for scoring Heartbleed at only 5 out of 10 on its base score. The latest draft of CVSSv3 does bump it higher, to 7.5, which feels closer to the actual risk. But our definitions include two things that such scoring systems don’t: how likely the flaw is to affect the common use cases of OpenSSL, and how likely it is to be exploitable. Both of these are based on the OpenSSL development team’s expert analysis of the flaw, and lead to a more useful measure for prioritising your response.
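To see how a pure base score arrives at that “5 out of 10”, here is a minimal sketch of the CVSSv2 base equation (constants as published in the FIRST CVSS v2 guide) applied to Heartbleed’s published metric vector, AV:N/AC:L/Au:N/C:P/I:N/A:N. The function name and argument names are ours, for illustration only:

```python
def cvss2_base_score(av, ac, au, conf, integ, avail):
    """CVSSv2 base score (0-10, one decimal) from metric weights."""
    # Impact combines the three CIA sub-scores.
    impact = 10.41 * (1 - (1 - conf) * (1 - integ) * (1 - avail))
    # Exploitability combines access vector, complexity, and authentication.
    exploitability = 20 * av * ac * au
    f_impact = 0 if impact == 0 else 1.176
    return round(((0.6 * impact) + (0.4 * exploitability) - 1.5) * f_impact, 1)

# Heartbleed (CVE-2014-0160), vector AV:N/AC:L/Au:N/C:P/I:N/A:N:
# network access (1.0), low complexity (0.71), no authentication (0.704),
# partial confidentiality impact (0.275), no integrity or availability impact.
score = cvss2_base_score(av=1.0, ac=0.71, au=0.704,
                         conf=0.275, integ=0.0, avail=0.0)
print(score)  # 5.0
```

The “partial” confidentiality weight is what keeps the score so low: the formula has no way to express that the disclosed memory could contain private keys, which is exactly the kind of context our own severity judgement takes into account.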
We may in the future include CVSS scores with our advisories in addition to our own defined levels.