OpenSSL Blog

Engine Building Lesson 2: An Example MD5 Engine

Coming back after a month and two weeks, it’s time to resume with the next engine lesson; this time, we’re building an engine that implements a digest.

It doesn’t matter much which digest algorithm we choose. Being lazy, I’ve chosen one with a well-defined reference implementation: MD5 (the reference implementation is found in RFC 1321).
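To give a taste of where this lesson is headed: the heart of a digest engine is a selector function that OpenSSL calls to discover and fetch the digests an engine offers, registered via ENGINE_set_digests(). A rough sketch follows; md5_md is a placeholder for the EVP_MD we’ll actually build around the RFC 1321 code during the lesson:

    #include <openssl/engine.h>
    #include <openssl/evp.h>
    #include <openssl/objects.h>

    /* The digests this engine offers, as a zero-terminated list of NIDs. */
    static const int digest_nids[] = { NID_md5, 0 };

    /*
     * Placeholder: an EVP_MD wrapping the RFC 1321 reference
     * implementation, to be constructed later in the lesson.
     */
    static EVP_MD *md5_md = NULL;

    /*
     * OpenSSL calls this with digest == NULL to ask for the list of
     * supported NIDs, and with a specific nid to fetch that digest.
     */
    static int md5_digests(ENGINE *e, const EVP_MD **digest,
                           const int **nids, int nid)
    {
        if (digest == NULL) {
            *nids = digest_nids;
            return 1;               /* number of supported digests */
        }
        if (nid == NID_md5) {
            *digest = md5_md;
            return 1;
        }
        *digest = NULL;
        return 0;
    }

    /* In the engine's bind function: ENGINE_set_digests(e, md5_digests); */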

Engine Building Lesson 1: A Minimum Useless Engine

In this lesson, we’re going to explore minimalism, in this case in the form of the most minimal engine possible (without obfuscating it).

The least boilerplate code for an engine looks like this:

A not so complete example:

    #include <openssl/engine.h>

    IMPLEMENT_DYNAMIC_BIND_FN(bind)
    IMPLEMENT_DYNAMIC_CHECK_FN()

This example isn’t complete and will not compile. However, it contains the absolute minimum required for a module to even be recognised as an OpenSSL engine.
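For a preview of what a complete, loadable variant might look like, here’s a hedged sketch; the id and name strings are invented for illustration, and the bind function does nothing beyond identifying the engine:

    #include <stdio.h>
    #include <openssl/engine.h>

    /* Illustrative values only; pick your own. */
    static const char *engine_id = "silly";
    static const char *engine_name = "A silly engine for demonstration purposes";

    /* Called when the dynamic engine is loaded; just identify ourselves. */
    static int bind(ENGINE *e, const char *id)
    {
        if (!ENGINE_set_id(e, engine_id)) {
            fprintf(stderr, "ENGINE_set_id failed\n");
            return 0;
        }
        if (!ENGINE_set_name(e, engine_name)) {
            fprintf(stderr, "ENGINE_set_name failed\n");
            return 0;
        }
        return 1;
    }

    IMPLEMENT_DYNAMIC_BIND_FN(bind)
    IMPLEMENT_DYNAMIC_CHECK_FN()

Built as a shared library, it should be loadable with something like “openssl engine -t -c /path/to/silly.so”, since ENGINE_by_id() falls back to trying an unknown id as a dynamically loadable engine.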

Engine School, a Path to Writing Standalone Engines

For the longest time, it seems that people have wanted to have their diverse engines bundled with the OpenSSL source, as if there were no other way to build or distribute them.

Nothing could be further from the truth. Also, having the engine for some hardware bundled with the OpenSSL source presents a maintenance problem, and the better solution is for those who have an engine to maintain it themselves.

So, how is it done? That’s something that we will discover together in a series of articles, about one or two weeks apart.

First lesson up, warming up with a minimal, silly and utterly useless engine!

At all times, feel free to comment, to make suggestions, ask for specific future lessons, and so on and so forth.

FIPS 140-2: It’s Not Dead, It’s Resting

Some of you may have noticed that the upcoming 1.1 release doesn’t include any FIPS support. That omission is not by choice; it was forced on us by circumstances and will hopefully not be permanent.

The v2.0 OpenSSL FIPS module is compatible with the 1.0.x releases, in particular the 1.0.2 “LTS” release that will be supported through 2019. It has proven very popular, used both directly by hundreds of software vendors and indirectly as a model for copycat “private label” validations.

Unfortunately the restructuring done for the 1.1 release means that the v2.0 module can’t be used without contortions that don’t belong in a cleaner and better OpenSSL. We’d like to do a new FIPS module for 1.1 with a new validation to succeed the v2.0 module, but the open source based FIPS 140-2 validations present some extraordinary challenges. Only five such validations have ever been done, out of over twenty-four hundred validations, and I’ve been at ground zero for each of them.

Conventional proprietary validations of closed source software modules are relatively easy; not so for the open source based ones even when the code is exactly the same. Those open source based validations take a lot of time, manpower, and money. They also involve a large amount of risk; for those five validations to date we invested those resources without knowing if or when a validation would be obtained. Once obtained, the validations can be (or could be) extended repeatedly at lower risk and cost to include new platforms; thanks to several dozen sponsors the v2.0 module now has over a hundred platforms.

I’ve been looking for sponsorship of a new validation almost since the v2.0 validation was obtained in 2012. Those first five open source based validations each had one or more U.S. government agencies as primary sponsors, an ideal situation as the sponsor incentives align closely with ours. But, we have no such sponsorship prospects at present.

Commercial sponsorship is a possibility. I’ve spent most of this year in discussions with several commercial software vendors willing to cover the substantial costs of a new validation. Understandably enough they would do so only on the basis that we first obtain a validation that they could use themselves, leaving us with ownership of the code and documentation and free to pursue our own open source based validation after a limited period of exclusivity. That struck me as a reasonable tradeoff, and recently we came close to signing such a deal.

In the meantime however it has become increasingly clear that our prospects for obtaining a new open source based validation are very marginal. The CMVP, the government bureaucracy responsible for FIPS 140-2 validations, has instituted new practices and policies inimical to the inherently collaborative nature of open source based validations. That issue has been discussed in detail elsewhere; the TL;DR is that the risks and uncertainties have soared to the point where it is no longer logistically or economically feasible to do the kind of collaborative effort where the contributions of multiple unrelated sponsors are pooled for one shared validation.

So, what would have to happen to make a new validation effort possible? A rather forbidding set of requirements:

  • Funding to pursue a new open source based validation effort, win or lose. I estimate that will cost about a third of a million dollars, though there is huge uncertainty in that estimate. In the past we did these validations “at risk”, meaning we gambled that we’d obtain the validation and make up the cost with subsequent platform additions. I won’t speak for my colleagues but I am no longer willing to take that risk.

  • Sponsor(s) willing to wait until the open source based validation is successfully completed; this could take two or more years (longer than the typical proprietary validation, even of the same code).

  • Sponsor(s) willing and able to champion the open source based validation within the Department of Commerce (DoC). I’m not privy to all the details, nor do I want to be, but I am aware that the prior validations involved intercession by sponsors at various levels within DoC. There have always been vested interests hostile to the open source based validations, and without a counter to that opposition I do not believe the prior validations would have succeeded.

I don’t have high hopes that we’ll see that white knight sponsor anytime soon. So for now we watch and wait and hope for the right opportunity. I’m not giving up entirely, though. After tilting at the FIPS 140-2 windmill for over a decade I know endless patience is a must.

In the meantime we’re already being asked by multiple commercial interests to assist in their pursuit of proprietary closed source validations. That’s a tough one, but as much as we’d like to be all things to all people we are an organization with limited resources and need to allocate those resources effectively to benefit the greater community. We can justify expending resources for open source based validations that benefit a wide range of the user community. We can’t justify that impact for a handful of U.S. vendors. Contributions of FIPS specific code and updates to handle FIPS specific issues are unlikely to be added into an OpenSSL repository until we have an actual use for them.

New Severity Level, “Critical”

We’ve just added a new severity level called “critical severity” to our security policy. When we first introduced the policy, over a year ago, we just had three levels, “Low”, “Moderate”, and “High”. So why did we add “Critical” and why are we not using someone else’s standard definitions?

After introducing the new policy we started giving everyone a heads-up when we were due to release OpenSSL updates that included security fixes. The heads-up doesn’t contain any details of the issues being fixed, apart from the maximum severity level and a date a few days in the future.

As an example, we gave the OpenSSL announce list a heads-up just before we published an advisory about CVE-2015-1793.

One problem we found is that “High” covers issues from denial-of-service right up to remote code execution. So when we gave out advance notice that we were about to release an update fixing a “High” severity issue, users (and the press) jumped to the conclusion that it was the next Heartbleed. We heard of operations teams around the world standing by to do instant patching, only to find it wasn’t something they’d have to patch immediately.

Historically there have been very few OpenSSL issues that need that kind of immediate response, so for those issues that do we’ve defined a new critical level as:

Critical severity. This affects common configurations and is also likely to be exploitable. Examples include significant disclosure of the contents of server memory (potentially revealing user details), vulnerabilities which can be easily exploited remotely to compromise server private keys (excluding local, theoretical or difficult-to-exploit side channel attacks), or where remote code execution is considered likely in common situations.

Why did we create our own levels and definitions and not use an existing scoring system instead? We’ve created our own levels specific to the ways that OpenSSL vulnerabilities lead to risk.

One industry-standard scoring system is the Common Vulnerability Scoring System (CVSS), currently at CVSSv2. CVSSv2 was widely condemned for giving Heartbleed a base score of just 5 out of 10. The latest draft of CVSSv3 does bump it higher, to 7.5, which feels closer to the actual risk. But our definitions include two things that such scoring systems don’t: how likely the flaw is to affect the common use cases of OpenSSL, and how likely it is to be exploitable. Both of these are based on the OpenSSL development team’s expert analysis of the flaw, and they lead to a more useful measure for prioritising your response.

We may in the future include CVSS scores with our advisories in addition to our own defined levels.

OpenSSL Security: A Year in Review

Over the last 10 years, OpenSSL has published advisories on over 100 vulnerabilities. Many more were likely silently fixed in the early days, but in the past year our goal has been to establish a clear public record.

In September 2014, the team adopted a security policy that defines how we handle vulnerability reports. One year later, I’m very happy to conclude that our policy is enforced, and working well.

Our policy divides vulnerabilities into three categories, and defines actions for each category: we use the severity ranking to balance the need to get the fix out fast with the burden release upgrades put on our consumers.

  • HIGH severity issues affect common configurations and are likely to be exploitable. The most precious OpenSSL component is the TLS server, and of the four HIGH severity bugs we had in the last year, two were server DoS. The third was the RSA EXPORT downgrade attack, and the fourth a certificate forgery attack, which luckily was discovered and reported to us very fast and so only affected OpenSSL for one release cycle. When a HIGH severity report comes in, we drop whatever we were doing, investigate, develop patches and start preparing for a release. We aim to get the fix out in under a month.
  • MODERATE severity issues are likely to affect some subset of OpenSSL users in a notable way. Examples from the past year include DoS problems affecting clients and servers with client auth, crashes in PKCS7 parsing, and an array of bugs in DTLS. MODERATE issues don’t kick off an immediate release; rather, we pool them together. But we also don’t wait for a HIGH issue to come along (of course we hope one never does). We’ve been releasing pretty regularly on a bi-monthly basis to get the patches out.
  • LOW severity issues include crashes in less common parts of the API and problems with the command-line tool (which you shouldn’t be using for security purposes!). For those, we’re reasonably confident that usage patterns that could lead to exploitation are rare in practice. We still do due diligence and assign a CVE to every issue that may have a security impact, but in order to reduce the complexity of release and patch management, we commit these fixes immediately to the public git repository.

The graph below (raw data) shows the number of days from first report until the security release for each of the CVEs of the past year. You can see the policy in action: serious vulnerabilities do get fixed faster. (The first 9 issues were released in August 2014, just before adopting the new policy, and don’t have a severity ranking.)

The acceptable timeline for disclosure is a hot topic in the community: we meet CERT’s 45-day disclosure deadline more often than not, and we’ve never blown Project Zero’s 90-day baseline. Most importantly, we met the goal we set ourselves and released fixes for all HIGH severity issues in well under a month. We also landed mitigation for two high-profile protocol bugs, POODLE and Logjam. Those disclosure deadlines weren’t under our control but our response was prepared by the day the reports went public.

We’ve also made mistakes. Notably, the RSA EXPORT man-in-the-middle attack didn’t get the attention or execution speed it deserved. We underestimated the impact and gave it the LOW treatment, only reclassifying it to a HIGH in a later advisory, once we realised how prevalent EXPORT cipher suite support still was. A couple of times, we scrambled to get the release out and introduced new bugs in the process: better release testing is definitely something we need to work on, and we’re grateful to everyone who’s helped us with continuous integration tests.

Of course, the true goal is to not have any CVEs in the first place. So I can’t say it’s been a good year: too many bugs are still being found in old code. But we’re working hard to improve the code quality going forward, and we’ve set the baseline.

Finally, a special thanks to all the security researchers who’ve sent reports to openssl-security@openssl.org: the quality of reports is generally very high and your collaboration in analysing the vulnerabilities has been tremendously helpful.

New Website

We just went live with a new website. The design is based on the style included with Octopress; the new logo and some other important CSS tweaks were contributed by Tony Arcieri. The style is also mobile-friendly, so you can take us with you wherever you go. :) We still need a better “favicon.”

The text still needs more work. As someone on the team pointed out, “a worldwide community of volunteers that use the Internet to communicate, plan, and develop [OpenSSL]” … really?

The online manpages aren’t there yet. Our plan is to have all versions online. But if anyone has any suggestions on how to make pod2html work with our style, post a comment below.

And, more importantly, if you find any broken links, please let us know that, too!

License Agreements and Changes Are Coming

The OpenSSL license is rather unique and idiosyncratic. It reflects views from when its predecessor, SSLeay, started twenty years ago. As a further complication, the original authors were hired by RSA in 1998, and the code forked into two versions: OpenSSL and RSA BSAFE SSL-C. (See Wikipedia for discussion.) I don’t want to get into any specific details, and I certainly don’t know them all.

Things have evolved since then, and open source is an important part of the landscape – the Internet could not exist without it. There are good reasons why Microsoft is a founding member of the Core Infrastructure Initiative (CII).

Our plan is to update the license to the Apache License version 2.0. We are in consultation with various corporate partners, the CII, and the legal experts at the Software Freedom Law Center. In other words, we have a great deal of expertise and interest at our fingertips.

Beyond Reformatting: More Code Cleanup

The OpenSSL source doesn’t look the same as it did a year ago. Matt posted about the big code reformatting. In this post I want to review some of the other changes; these rarely affect features, but are more involved than “just” whitespace.

Logjam, FREAK and Upcoming Changes in OpenSSL

Today, news broke of Logjam, an attack on TLS connections using Diffie-Hellman ciphersuites. To protect OpenSSL-based clients, we’re increasing the minimum accepted DH key size to 768 bits immediately in the next release, and to 1024 bits soon after. We have also made several other changes to strengthen our cryptographic defaults and have updated our tools and documentation to help servers configure Diffie-Hellman ciphersuites securely; see below for details.
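For server operators who don’t want to wait for those releases, the usual advice is to generate fresh DH parameters of at least 2048 bits (for example with “openssl dhparam -out dhparams.pem 2048”) and load them explicitly. A minimal sketch, assuming a PEM parameter file at a path of your choosing:

    #include <stdio.h>
    #include <openssl/dh.h>
    #include <openssl/pem.h>
    #include <openssl/ssl.h>

    /* Load PEM-encoded DH parameters into a server SSL_CTX. */
    static int use_fresh_dh_params(SSL_CTX *ctx, const char *path)
    {
        FILE *fp = fopen(path, "r");
        DH *dh;
        int ok = 0;

        if (fp == NULL)
            return 0;
        dh = PEM_read_DHparams(fp, NULL, NULL, NULL);
        fclose(fp);
        if (dh != NULL) {
            /* SSL_CTX_set_tmp_dh() copies the parameters, so dh can be freed. */
            ok = SSL_CTX_set_tmp_dh(ctx, dh) == 1;
            DH_free(dh);
        }
        return ok;
    }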