Monday, December 22, 2014



Why the Security Community Should Focus More on API Design

Every year, billions of lines of software code are written and deployed into production. Software security experts frantically review code and conduct penetration tests on some of these applications, but only a vanishingly small percentage ever undergoes serious (or regular) security review. Unless something major changes in the current regulatory environment, this trend will continue indefinitely, as there are far more developers than security analysts in the market.

It is pretty obvious that simply playing whack-a-mole with individual vulnerabilities is not an efficient way for security experts and developers to spend their time. Perhaps, if developers understood more about security and the kinds of technical flaws that commonly rear their ugly heads, then many issues could be avoided up front. This has been a major goal of organizations like OWASP, which has been trying for over a decade to educate developers on the kinds of software vulnerabilities that are most common in a given situation and how they can best be addressed.

In my experience, developer education does work when it is well executed. In cases where I have been heavily involved with a software development team, both testing their applications and holding training sessions on the kinds of things they should watch out for, I've found that the quality of code improves a great deal. It's rarely the case that every developer on a given team takes a strong interest in security, or that the content really "clicks" with every developer. But if just one or two core developers on a team of 5-10 take it seriously, then the problems sort of clean themselves up during the development cycle. Safer frameworks are developed, APIs are changed, and the basic development "norms" improve to the point where many categories of flaws become rare.

Developer education: Good, but not as effective as we would like

After a decade of working in software security, I'm convinced that the situation isn't getting better, despite many developer education efforts. Each year, we have more and more CVEs to go with more and more software products. Sometimes entire classes of vulnerabilities become uncommon, only to be replaced by new kinds of issues that accompany the latest technology fad.

So if developer education is effective, why isn't it improving our situation? Are we simply not providing enough education? It may be true that employers are hesitant to provide extensive security education in a job market where doing so may train their employees for their next job. But do developers who receive no formal training really learn nothing from the rare security review their applications are subjected to? No. I am optimistic, in that I think most developers do care about their code and how they deliver their end product. The majority of serious software flaws are trivial to fix; I would guess that perhaps 80% consist of a one-to-five-line code change. And once a certain type of issue is understood, developers typically change their libraries to make it easy to avoid the problem in the future. So what's going on here? I suspect it is a combination of factors, but let us look at one area that I don't think has been discussed at length before...

Consider the developer job market for a moment. As a career technologist, it is easy for me to forget that many people don't stick with a highly technical development position for their whole careers. Many developers stay in the field for only a short time, perhaps 5-15 years, after which they move on to management or other technical roles, or retire. (I don't have any hard numbers on how long developers tend to stay developers; I'd love it if someone pointed me at solid statistics on this.) At the same time, each year a new batch of college graduates enters the workforce and will require training in secure programming. Wait: don't they learn security in school? No. In my experience, software security is simply not a priority of the higher education system (at least in the USA). The only computer science students who seem to come out of school with software security knowledge are those who had an intense interest in it to begin with. These students take all of the security electives and pick up the rest of what they need to know on their own. Unfortunately, such security-focused students rarely stick with software development for long, instead opting to move into a security role.

So here we are, trying to educate masses of developers on a wide array of technical issues to watch out for, while at the same time we lose previously educated developers each year. Any education we do in the industry can therefore only reach a certain level of saturation, and each year a huge amount of code is still written by developers with no security training. Changing the way universities teach computer science could certainly help here, but that isn't a change we'll see overnight. And even if security training were provided in university, the specific technical pitfalls one needs to avoid change over time with the underlying technology, making past education increasingly irrelevant.

So what else can we do, with limited resources, to reduce the quantity of vulnerabilities contained in new software each year? I believe a more effective place to focus our efforts is on providing safer software development environments. When the development world transitioned from writing most software in C/C++ to using higher-level languages, a major improvement came about in overall software stability and security. Programming languages that handle memory management for the developer eliminate a dozen or more classes of vulnerability (including buffer overflows, integer overflows, format string injection, and use-after-free). Of course, convincing developers to switch languages is not easy, to put it mildly. Fortunately, we don't need to: the development environments used today are much more dynamic than they used to be, frequently updated by the core vendors and extended through new third-party libraries. So how can we, as security professionals, guide the ongoing refinement of development platforms to provide developers with safer environments?
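Before answering that, a tiny sketch of what the memory-safety transition bought us (a contrived Python example of my own, but it runs as-is): an out-of-bounds write becomes a clean, catchable error rather than silent corruption of adjacent memory.

    # In C, a write past the end of a buffer can silently corrupt
    # adjacent memory and become an exploitable overflow. In a
    # memory-managed language, the runtime simply refuses the access.
    data = bytearray(b"hello")
    try:
        data[9] = 0x41  # attempt to write past the end of the buffer
    except IndexError:
        print("out-of-bounds write rejected")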

In my mind, what really counts, what really makes a difference in the secure development equation, are the APIs used by developers on a day-to-day basis. Whenever a developer opens a file, runs a query, or fetches a URL, they rely on the platform's APIs as a significant component of their development environment. If those APIs are poorly constructed, developers who are not familiar with the particular technology at hand are likely to make mistakes. On the other hand, if the APIs are well thought through, encourage secure data flows, and are well documented, then the frequency of vulnerabilities is greatly diminished, particularly when novice developers are at the keyboard.
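To make this concrete, here's a quick sketch using Python's standard sqlite3 module (the table and values are contrived). When placeholders are the front-and-center idiom of the query API, the obvious way to run a query is also the injection-safe way:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

    name = "x' OR '1'='1"  # attacker-controlled input

    # With a poorly designed API, hand-built query strings look like
    # the path of least resistance, and become an injection vector:
    # query = "SELECT role FROM users WHERE name = '" + name + "'"

    # With placeholders as the documented, obvious idiom, the driver
    # keeps data separate from the query and the attack is inert.
    row = conn.execute(
        "SELECT role FROM users WHERE name = ?", (name,)
    ).fetchone()
    print(row)  # prints None: the hostile input matched no user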


What makes for a good API?

This blog post is coming dangerously close to a rambling manifesto... So I'll just briefly outline the key property I believe is needed for APIs that tend to produce secure software. (In future posts I'll give concrete examples to support my assertions.)

Programming interfaces tend to encourage software security when:
The most obvious way to do something happens to be the secure way

When learning a new API, programmers tend to read the minimal amount of documentation necessary before using it. There are many reasons for this, but primarily they are rooted in the fact that, as Frederick Brooks, Jr. put it, "all programmers are optimists."

After all, from the programmer's standpoint, if an API doesn't behave the way he expects after a brief glance at the documentation, then it will probably break his software right away, before he even checks it in, or at a minimum during the Q/A cycle. Most of the time this works fine and allows the programmer to achieve his goals quickly. The problem, of course, is that programmers and Q/A staff are eternally focused on how their software is intended to be used, not on how an attacker may choose to manipulate it. This is why it is so essential to present programmers with APIs free of corner-case "gotchas", the kind that leave open attack scenarios a Q/A team's typical test suite will never catch.
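One well-known gotcha of exactly this shape comes from the third-party PyYAML library (I'm reaching for it here purely as an illustration):

    import yaml  # PyYAML, a third-party library

    doc = "greeting: hello"

    # The "obvious" call, yaml.load(doc), will also happily construct
    # arbitrary Python objects from tags such as !!python/object/apply.
    # That's a corner case no functional test suite exercises, but one
    # an attacker will find immediately.

    # The safe variant only ever builds plain data types:
    print(yaml.safe_load(doc))  # {'greeting': 'hello'}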

Often, developers use third-party APIs to save time: they don't want to learn the details of how a certain file format, database protocol, or other technology really works under the hood. They just want to achieve their goals quickly by building on what others have already accomplished. That's why being "obvious" is so important. It demands that API designers:

  • Not assume that programmers will read all of the API documentation
  • Not assume that programmers are particularly knowledgeable about the API's underlying technology
  • Ask programmers to expend a bit more effort whenever potentially dangerous features are exposed (see the sketch below)
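Here is what that explicit opt-in looks like in practice, using Python's subprocess module (the hostile filename is contrived):

    import subprocess

    filename = "report.txt; rm -rf ~"  # hostile, attacker-supplied name

    # The obvious call takes a list of arguments. No shell is involved,
    # so the metacharacters in the input are inert.
    subprocess.call(["echo", filename])  # prints the literal string

    # The dangerous mode still exists, but it requires a visible,
    # explicit opt-in: exactly the "bit more effort" asked for above.
    # subprocess.call("echo " + filename, shell=True)  # injection-prone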

This is not to say that APIs should forgo advanced features, or that they must provably prevent programmers from shooting themselves in the foot, or make any other unreasonable guarantees. This is simply about the default mode of operation: making conservative assumptions about API users' competence and trying to put yourself in your users' shoes now and then. These simple things can make a huge difference in the number of applications vulnerable to attack upon first release, which would free up the security community to do more fun and interesting things.