Secure Code: By Design? Serendipity? Or...?

Posted by Jon

While researching Struts 2 again to expand our current framework support, I became a bit more familiar with its tag library. This led to a better understanding of Struts 2's basic OGNL syntax and the creation of some test cases. During the test case development, I was amused at all the different ways one can obtain the value from a getter function. Here's a snippet for your amusement:

"%{tainted}": <s:property value="%{tainted}" escape="false" />
"getTainted()": <s:property value="getTainted()" escape="false" />
"%{getTainted()}": <s:property value="%{getTainted()}" escape="false" />
"#this.tainted": <s:property value="#this.tainted" escape="false" />
"%{#this.tainted}": <s:property value="%{#this.tainted}" escape="false" />
"top.tainted": <s:property value="top.tainted" escape="false" />

There are more ways, for sure, but getter synonyms aren't the purpose of this blog. Secure code by chance is.

Secure code isn't just having code that's devoid of vulnerabilities for whatever reasons. Just as insecure code isn't code that happens to have a couple of weaknesses. I've looked at code that had a couple of issues and it seemed relatively secure to me. Just as I've looked at code without any obvious vulnerabilities that I wouldn't consider secure. There are properties to code, hopefully not too subjective, that convey security. To me, one of these security properties is the ability to understand why something is secure or not. If you cannot understand whether something is or isn't secure, it isn't secure; it's just serendipity.


If you look at the last line in the JSP snippet above, notice the use of top as a value. This special value obtains the top value from the current ValueStack. In Struts 2 this often is a controller, which is sometimes a subclass of ActionSupport. Why's this important?

Struts 2 has been making the rounds again because of a couple of security advisories: S2-021 and S2-022. Good ol' ParametersInterceptor had a bad couple of days it seems, leading to remote code execution issues. Curiosity got the best of me and I started looking into the updated ParametersInterceptorTest test case and the updated / added ExcludedPatterns class. Looking at that list, you'll see that top isn't in it. Hmm...


So top can be a parameter name. Does the framework actually do anything special? ParametersInterceptor eventually calls CompoundRootAccessor.getProperty when top is the parameter name. This returns the root or top of the ValueStack, which again is usually the Action controller that's being accessed. Nice! So we have access to the controller via a parameter name. Let's call some methods!

Since we're calling methods with a value, in OGNL, we're setting a property. This is handled by OgnlRuntime.setProperty:

public static void setProperty(OgnlContext context, Object target, Object name, Object value)
        throws OgnlException
{
    PropertyAccessor accessor;

    if (target == null) {
        throw new OgnlException("target is null for setProperty(null, \"" + name + "\", " + value + ")");
    }
    if ((accessor = getPropertyAccessor(getTargetClass(target))) == null) {
        throw new OgnlException("No property accessor for " + getTargetClass(target).getName());
    }

    accessor.setProperty(context, target, name, value);
}

When using top the target is the specific class, which is a custom class in this case. Since there's no specific PropertyAccessor for this class, this results in an accessor field set to an instance of ObjectAccessor. This calls its super, ObjectPropertyAccessor.setProperty, which calls ObjectPropertyAccessor.setPossibleProperty:

public Object setPossibleProperty(Map context, Object target, String name, Object value)
        throws OgnlException
{
    // snip
    if (!OgnlRuntime.setMethodValue(ognlContext, target, name, value, true)) {
        result = OgnlRuntime.setFieldValue(ognlContext, target, name, value) ? null : OgnlRuntime.NotFound;
    }

    if (result == OgnlRuntime.NotFound) {
        Method m = OgnlRuntime.getWriteMethod(target.getClass(), name);
        if (m != null) {
            result = m.invoke(target, new Object[] { value });
        }
    }
    // snip
}

In this method, three different ways are used to set or write a value to a property:

  • OgnlRuntime.setMethodValue
  • OgnlRuntime.setFieldValue
  • Invoking the method with the user-provided value.

If all of these fail, the caller method ObjectPropertyAccessor.setProperty throws an exception. And when devMode is on, something like the below is logged:

Unexpected Exception caught setting '' on 'class Error setting expression '' with value '[Ljava.lang.String;@1ef1a094'

So if an exception like the above occurs, we know we hit a Whammy. Otherwise, if we don't hit a Whammy we might be in luck. :) So let's request ?top.text=%25{1*2} and get some RCE! No Whammy, no Whammy, no Whammy, and.... stop!

RCE Attempt 1

Whammy :(

Unexpected Exception caught setting 'top.text' on 'class Error setting expression 'top.text' with value '[Ljava.lang.String;@16321271'

Well, what happened? In this case, the call to OgnlRuntime.getWriteMethod returns null in ObjectPropertyAccessor.setPossibleProperty. Hmm...

public static Method getWriteMethod(Class target, String name, int numParms)
{
    // snip
    if ((methods[i].getName().equalsIgnoreCase(name)
         || methods[i].getName().toLowerCase().equals(name.toLowerCase())
         || methods[i].getName().toLowerCase().equals("set" + name.toLowerCase()))
        && !methods[i].getName().startsWith("get")) {
        // snip
    }
    // snip
}

D'oh! Notice the last part of that conditional. There's a check disallowing setting a property via any method whose name starts with get. Boo!
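To see that matching rule in isolation, here's a small re-implementation using plain JDK reflection. MyAction is a hypothetical stand-in for an Action class, and the logic is a slightly simplified mirror of the conditional quoted above, not OGNL's actual code:

```java
import java.lang.reflect.Method;

public class WriteMethodMatch {

    // Hypothetical stand-in for a Struts 2 ActionSupport subclass.
    public static class MyAction {
        public String getText(String key) { return key; }
        public boolean hasKey(String key) { return key != null; }
    }

    // Simplified mirror of OgnlRuntime.getWriteMethod's matching rule:
    // a parameter name matches a method whose name equals it (ignoring case)
    // or equals "set" + name, UNLESS the method name starts with "get".
    public static Method findWriteMethod(Class<?> target, String name) {
        for (Method m : target.getMethods()) {
            String mn = m.getName();
            if ((mn.equalsIgnoreCase(name)
                 || mn.toLowerCase().equals("set" + name.toLowerCase()))
                && !mn.startsWith("get")) {
                return m;
            }
        }
        return null;
    }

    public static void main(String[] args) {
        // "hasKey" matches: same name, and it doesn't start with "get".
        System.out.println(findWriteMethod(MyAction.class, "hasKey"));
        // "text" finds nothing: getText is excluded by the startsWith("get") clause.
        System.out.println(findWriteMethod(MyAction.class, "text"));
    }
}
```

This is exactly why ?top.text=... dead-ends: the only candidate method, getText, is filtered out by that final clause.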

OK, so any methods outside of get* can be called. Guess what, there's yet another OGNL sink on ActionSupport that doesn't start with 'get': hasKey(String)! Let's request ?top.hasKey=%25{1*2}... No Whammy, no Whammy, no Whammy, and... stop!

RCE Attempt 2

Whammy again, drat.

Unexpected Exception caught setting 'top.hasKey' on 'class Error setting expression 'top.hasKey' with value '[Ljava.lang.String;@36c63134'

Debugging shows that OgnlRuntime.getWriteMethod when called from ObjectPropertyAccessor.setPossibleProperty did return something this time, awesome! So the method was invoked by reflection via m.invoke(target, new Object[] { value}); with tainted data, nice! Except, it wasn't....

Breaking on ObjectPropertyAccessor.setPossibleProperty and stepping through shows an IllegalArgumentException thrown with the message argument type mismatch. Hmm. Looking at the log output, you can see [Ljava.lang.String;, which is the mangled name for String[]. So it seems Struts 2 stores request parameters in a String array. That makes sense, since one could specify the same parameter name with different values. And the parameter signature for hasKey expects a String, not a String[]. Mismatched argument type. :( Well, shucks. What's one to do with a vector that doesn't do anything?!
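The mismatch is easy to reproduce with plain reflection. Target and tryInvoke below are hypothetical helpers, but the failure mode is the one in the log: reflectively invoking a method that expects a String with a String[] argument throws IllegalArgumentException:

```java
import java.lang.reflect.Method;

public class ArgMismatch {

    // Hypothetical stand-in for ActionSupport's hasKey(String).
    public static class Target {
        public boolean hasKey(String key) { return key != null; }
    }

    // Invoke the named method reflectively, the way setPossibleProperty
    // ultimately does, and report what happened.
    public static String tryInvoke(Object target, String name, Object value) {
        try {
            for (Method m : target.getClass().getMethods()) {
                if (m.getName().equals(name)) {
                    m.invoke(target, new Object[] { value });
                    return "invoked";
                }
            }
            return "not found";
        } catch (IllegalArgumentException e) {
            return "IllegalArgumentException"; // message: argument type mismatch
        } catch (Exception e) {
            return e.getClass().getSimpleName();
        }
    }

    public static void main(String[] args) {
        // Struts 2 hands over request parameters as String[] ...
        System.out.println(tryInvoke(new Target(), "hasKey", new String[] { "%{1*2}" }));
        // ... but a plain String would have worked.
        System.out.println(tryInvoke(new Target(), "hasKey", "%{1*2}"));
    }
}
```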


What did we learn?

  • The top value bypasses the regular expression exclusion and passes the acceptable-name check in ParametersInterceptor.
  • top should return a reference to the instance of the Action associated with the URL.
  • It's common, although not required, for the exposed Actions to be subclasses of ActionSupport.
  • Methods that start with get* cannot be used as a parameter key. (ActionSupport.getText failed here.)
  • Methods that don't conform to the JavaBean getter / setter convention need to accept a String[] parameter. (ActionSupport.hasKey failed here.)

It's just dumb luck that RCE didn't happen. For example, if the value were massaged from a String[] to a String, as happens when the normal getters / setters are called (see XWorkBasicConverter.convertValue), then this could have been RCE. Is it obvious to anyone supporting this code that a custom public method on a class called addValues(String[] values) is accessible via ?top.addValues=value1&top.addValues=value2&... ?

Final Thoughts

I tried to think what the Struts 2 developers could do but I'm lost. I'd rather they remove the exclusion list and remove the functionality that's causing the code to be evaluated in such a way. Exclusion or black lists are like door braces trying to keep out the invading hordes; eventually the hordes break through and raid the castle. Maybe they could ensure ObjectAccessor isn't called on a parameter name, which is tainted. However, I'm guessing there are a lot of things I dunno about Struts 2 that makes this a horrible (and possibly insecure) design choice. Maybe in this case a black list is as good as it gets? If so, is the code still secure? Or is it just lucky?


'top.hasKey' vs. 'hasKey'

If you're wondering why specifying top.hasKey is different than hasKey, debug the call to OgnlRuntime.setProperty. Here's the snippet from above again:

public static void setProperty(OgnlContext context, Object target, Object name, Object value)
        throws OgnlException
{
    PropertyAccessor accessor;

    if (target == null) {
        throw new OgnlException("target is null for setProperty(null, \"" + name + "\", " + value + ")");
    }
    if ((accessor = getPropertyAccessor(getTargetClass(target))) == null) {
        throw new OgnlException("No property accessor for " + getTargetClass(target).getName());
    }

    accessor.setProperty(context, target, name, value);
}

When called, if the target is of type CompoundRoot, which it will be for hasKey, accessor is set to an instance of CompoundRootAccessor. When calling top.hasKey the target is the specific class, which is a custom class in this case. This results in an accessor field set to an instance of ObjectAccessor. These two types perform different checks when calling their setProperty methods, with the top case having some potential holes.
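The dispatch can be pictured as a per-class accessor registry with a superclass fallback. Everything below is a toy model (the class names and accessor labels are illustrative, not OGNL's actual code), but it shows why hasKey and top.hasKey take different paths:

```java
import java.util.HashMap;
import java.util.Map;

public class AccessorDispatch {

    // Toy stand-ins: CompoundRoot plays OGNL's stack-root type;
    // CustomAction plays an application Action class.
    public static class CompoundRoot {}
    public static class CustomAction {}

    // Simplified registry in the spirit of OgnlRuntime's accessor table;
    // the string values are just labels for this sketch.
    static final Map<Class<?>, String> ACCESSORS = new HashMap<>();
    static {
        ACCESSORS.put(CompoundRoot.class, "CompoundRootAccessor");
        ACCESSORS.put(Object.class, "ObjectAccessor");
    }

    // Walk up the class hierarchy until a registered accessor is found.
    public static String accessorFor(Class<?> target) {
        for (Class<?> c = target; c != null; c = c.getSuperclass()) {
            String accessor = ACCESSORS.get(c);
            if (accessor != null) {
                return accessor;
            }
        }
        return null;
    }

    public static void main(String[] args) {
        System.out.println(accessorFor(CompoundRoot.class)); // CompoundRootAccessor
        System.out.println(accessorFor(CustomAction.class)); // ObjectAccessor
    }
}
```

With a bare hasKey the target is the CompoundRoot, so the stricter CompoundRootAccessor handles it; prefixing top. makes the custom Action class the target, which falls through to the generic ObjectAccessor.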

Shenanigans with Numbers

Try out the following :)

  • ?0xdeadbeef['equals']=something
  • ?066['equals']=something
  • ?1L['equals']=something

You shouldn't notice any exceptions firing. The OGNL parser parses those as numbers and successfully calls the equals method on the respective boxed number class (e.g., Integer.equals).

Now try these:

  • ?12_['ignored']=whatever // Throws ognl.ParseException in ognl.OgnlParser.topLevelExpression()
  • ?123[066]=whatever // 066 is converted to decimal 54, and the property "54" is looked up on the Integer, which results in a "54" property not found exception.
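The octal behavior above is ordinary numeric-literal parsing; OGNL follows the same rules as Java here. A quick sanity check in plain Java:

```java
public class OgnlNumbers {
    public static void main(String[] args) {
        // A leading zero makes an integer literal octal: 066 is 6*8 + 6 = 54.
        System.out.println(066);                        // 54
        // Hex and long literals parse as numbers too.
        System.out.println(0xdeadbeefL);                // 3735928559
        // What a parser does with the digits of "066" in base 8:
        System.out.println(Integer.parseInt("66", 8));  // 54
    }
}
```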

Handling web frameworks; a case of Spring MVC - Part 1

Posted by Romain

Coverity has been known for years for its static analysis technology for C/C++ applications. A couple of years ago, we started a new project to focus on the security analysis of Java web applications. During development, one of the first issues we faced analyzing open source applications was the prevalence and diversity of web frameworks: we did not find many security defects because we lacked an understanding of how untrusted data enters the application, as well as how the control flow is affected by these frameworks. To change this, we started developing framework analyzers.

This blog post presents examples and explains what the analysis needs to understand and extract to perform a solid analysis. We focus on Spring MVC, one of the most common and complex Java web frameworks.

Our example Spring MVC application

To illustrate the framework analysis, I've created a small Spring MVC application that you can find on Github blog-app-spring-mvc. It has features that most Spring applications use: auto-binding, model attributes, JSPs, and JSON responses. The application itself is very simple: it can add users to a persistent store, there is a simple interface to query it, and we also display the latest user.

To show the different features of the framework, I will use two kinds of defects: cross-site scripting (XSS) and path manipulation. The application can be run, and it's possible to exploit these issues; there are 2 simple XSS defects and 3 path manipulation defects, present mostly to trigger findings from the analysis.

Here's the layout of the application that you can build and run using Maven.

├── java
│   └── com
│       └── coverity
│           └── blog
│               ├──
│               ├──
│               ├── beans
│               │   └──
│               └── service
│                   └──
└── webapp
    └── WEB-INF
        ├── spring
        │   ├── appServlet
        │   │   └── servlet-context.xml
        │   └── root-context.xml
        ├── views
        │   ├── error.jsp
        │   ├── home.jsp
        │   └── user
        │       └── list.jsp
        └── web.xml

The Java code lives under src/main/java, organized by our package names, while the Spring configurations, JSP files, and web.xml are under the webapp directory. This is a very common structure.

Build and run

If you're not using Maven frequently, you'll need to get it here, go to the root of the project (where the pom.xml is), and run:

  mvn package

to create the WAR file.

You can also run the application directly with a Maven command (always good for proving framework behaviors):

  mvn jetty:run

Developer view: some simple code using Spring MVC

Spring MVC is one of the most common web frameworks. If you look at the abysmal documentation, you will see it has tons of features. Spring MVC implements the model-view-controller (MVC) pattern, where its definition of the MVC is essentially:

  • Model: Passing data around (typically from the controller to the view)
  • View: Presentation of the data (e.g., rendered with JSP, JSON, etc.)
  • Controller: What is responsible for getting data and calling business code

Here's our first controller example,

@Controller
public class HomeController {
  // Being polite
  @RequestMapping(value="/hello", produces="text/plain")
  public @ResponseBody String sayHello(String name) {
    return "Hello " + name + "!"; // No XSS
  }

  // Display our view
  @RequestMapping("/index")
  public String index(User user, Model model) {
    model.addAttribute("current_user", user);
    return "home";
  }
}
In this case, the configuration is very basic. As usual, we need to specify that we want Spring's DispatcherServlet to analyze the bytecode looking for @Controller classes and to map the associated entry points (@RequestMapping annotated methods) using the configuration in servlet-context.xml (we also need to enable Spring in the container, so there are references to it in web.xml). In this class, we only have 2 entry points:

  1. HomeController.sayHello(...) which takes one parameter "name", and returns a String that contains the body of the response (what's being displayed by the browser).
  2. HomeController.index(...) which has 2 parameters and returns a String that points to the location of the view (a JSP in this case)

Workflow for HomeController.sayHello(...)

Reaching /hello?name=Doctor executes this code, and the output displayed by the browser is:

  Hello Doctor!

The browser also receives the content type as text/plain, so no markup will be rendered here.

Workflow for HomeController.index(...)

The second entry point uses a common feature of web frameworks: auto-binding. Spring MVC will instantiate a new User and pass it to the entry point, with its fields populated from the HTTP parameters; the Model is also supplied by Spring, but it's meant to be a map-like object to pass to the view for rendering.

Executing /index?name=TheDoctor& will call the HomeController.index(...) method with the first parameter being the bean User({name: TheDoctor, email:}). We later add it to the model so it will automatically be dispatched to the view and accessible from the JSP using EL.
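Conceptually, the auto-binding step looks something like the sketch below. This is not Spring's actual implementation (Spring uses its data-binder and conversion infrastructure); it's a minimal model, with a hypothetical stand-in User bean, of how HTTP parameters end up in bean fields via public setters:

```java
import java.lang.reflect.Method;
import java.util.Map;

public class AutoBindSketch {

    // Hypothetical stand-in for the application's User bean
    // (the real one lives under the beans package of the sample app).
    public static class User {
        private String name;
        private String email;
        public String getName() { return name; }
        public void setName(String name) { this.name = name; }
        public String getEmail() { return email; }
        public void setEmail(String email) { this.email = email; }
    }

    // Minimal model of auto-binding: for each HTTP parameter, find the
    // matching public setter and call it with the untrusted value.
    public static User bind(Map<String, String> params) {
        User user = new User();
        try {
            for (Map.Entry<String, String> p : params.entrySet()) {
                String setter = "set"
                        + Character.toUpperCase(p.getKey().charAt(0))
                        + p.getKey().substring(1);
                Method m = User.class.getMethod(setter, String.class);
                m.invoke(user, p.getValue());
            }
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException(e);
        }
        return user;
    }

    public static void main(String[] args) {
        User u = bind(Map.of("name", "TheDoctor"));
        System.out.println(u.getName()); // TheDoctor
    }
}
```

The property that matters for the analysis is visible here: every field reachable through a public setter is attacker-controlled, which is exactly the "deep-write" taint mentioned in the extracted facts.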

Our JSP is minimalist and contains the following code:

  <c:when test="${not empty}">
    Hello ${cov:htmlEscape(}! <%-- No XSS here --%>
    The bean field `name` is reflected, such as <a href="/blog/index?name=TheDoctor">here</a>.

where the bean current_user set from the entry point code in the Model is filled with the data populated from the HTTP request. The JSP code displays an HTML-escaped name from the current_user bean if it's not null or empty; otherwise it displays the static contents in the body of the c:otherwise tag.

Analysis view: call graph roots, etc.

When running a vanilla analysis on this type of code, not much happens. In fact, the type HomeController has no known instance and HomeController.sayHello(...) is never called in the entire program. A typical analysis would mark this type of method as dead code. The challenge of the framework analysis is to translate how Spring MVC is used in the application into a set of facts that can be acted upon by our static analysis.

The kind of properties we need to extract belong to the following areas:

  • Control flow: How can this function be reached, what's happening when the function returns
  • Data flow: How is the data provided by the framework (automatically dispatched, etc.)
  • Taint analysis: What data is tainted and how (e.g., what parts are tainted or what fields)
  • Domain specific: Facts related to HTTP requests, responses, URLs, Servlet filters in place, etc.

To achieve this, we've created a new phase in the analysis that looks for framework footprints in the source code as well as in the bytecode. The framework analyzers also require access to the configuration files in order to properly simulate how the framework will operate at runtime.

This framework analysis phase extracts the following facts for the first entry point:

  1. HomeController.sayHello(...) is an entry point (or callback)
  2. The name parameter cannot be trusted, so it is tainted (with a particular type of taint)
  3. The entry point is reachable via any HTTP method and the URL is /hello
  4. The return value of this method is the body of the HTTP response (so a sink for cross-site scripting)
  5. The response has a content-type of text/plain (so the response is not prone to XSS)

In the case of the second entry point, here are the facts we extract:

  1. HomeController.index(...) is an entry point
  2. Only its first parameter user is tainted with a deep-write (i.e., all fields with a public setter are set as tainted)
  3. The entry point is reachable via any HTTP method and the URL is /index
  4. The return value "home" of this entry point is a location of a view
    1. Inspecting the Spring configuration servlet-context.xml, "home" resolves to WEB-INF/views/home.jsp
    2. Connect the return to the _jspService method of WEB-INF/views/home.jsp through a control-flow join
  5. The model is considered a map-wrapper that contains the bean current_user which is our tainted parameter

With these types of facts integrated into the analysis, it is then possible to properly identify the full execution paths and consider what's tainted or not based on the framework's own rules. We can conceptually create a "trace" that acts as a backbone for the analysis:

The "trace" has been annotated with facts coming directly from the framework analysis.

Final words

We've seen that the properties extracted by the framework analysis are important for the analysis tool to understand how Spring MVC instantiates the entry points and maps them to URLs. That's how we are able to understand when tainted data is entering the application, how it's entering it, and where it's going.

Without this analysis, we would have to make blunt assumptions and suffer from either a high number of false-negatives when we do not understand the framework configuration, or false-positives if we are overly permissive and don't try to properly resolve what's tainted and where it can possibly go. It's actually a good way to test the capabilities of a static analysis tool: modify the framework configurations, insert EL beans that will always be null at runtime, etc.

However, the framework analysis is not limited to providing value for the taint analysis; it also provides information about how the URLs are reached and what constraints are attached to them, which is important for identifying CSRF issues, for example.

On Detecting Heartbleed with Static Analysis

Posted by Andy

Many of our customers have asked whether Coverity can detect Heartbleed. The answer is Not Yet - but we've put together a new analysis heuristic that works remarkably well and does detect it. (UPDATE: the Coverity platform now detects the Heartbleed defect) We wanted to tell our customers and readers about this heuristic and what it shows about the way we approach static analysis.

John Regehr blogged (1) last week about Coverity Scan, our free scanning service for the open source community. While there were interesting defects found in OpenSSL, Heartbleed was not among them. After adding the new heuristic designed to catch this and other similar defects, we shared our updated results with John and he was gracious enough to write a follow-up blog (2), which we think is fantastic.

Initially, we wanted to independently verify the results so we ran the latest production version of our analysis (7.0.3) against openssl-1.0.1. Coverity Scan uses a particular set of analysis options, and we wondered if different settings might cause the defect to appear. After a few experiments, we determined that analysis settings didn't make a difference for this particular defect.

So we dug into the code further to determine why. At its heart, Heartbleed is an out of bounds memory read based on tainted data being used as an argument to memcpy. The main difficulty in detecting it is in realizing the source data is tainted. Most of the descriptions of Heartbleed begin with this line:

unsigned char *p = &s->s3->rrec.data[0];

But for a static analysis, it is not obvious that the field data is tainted, and finding the evidence for this in the program can be difficult. One illustration of this is in the definition of the structure that contains data:

typedef struct ssl3_record_st
        {
/*r */  int type;               /* type of record */
/*rw*/  unsigned int length;    /* How many bytes available */
/*r */  unsigned int off;       /* read/write offset into 'buf' */
/*rw*/  unsigned char *data;    /* pointer to the record data */
/*rw*/  unsigned char *input;   /* where the decode bytes are */
/*r */  unsigned char *comp;    /* only used with decompression - malloc()ed */
/*r */  unsigned long epoch;    /* epoch number, needed by DTLS1 */
/*r */  unsigned char seq_num[8]; /* sequence number, needed by DTLS1 */
        } SSL3_RECORD;

The comments aid human comprehension, but static analysis doesn't benefit much from them. Instead, we attempt to trace the flow of tainted data from where it originates in a library call into the program's data structures. This can be difficult to do without introducing large numbers of false positives or making performance scale exponentially poorly. In this case, balancing these and other factors in the analysis design caused us to miss this defect.

Program analysis is hard and approximations and trade-offs are absolutely mandatory. We've found that the best results come from a combination of advanced algorithms and knowledge of idioms that occur in real-world code. What's particularly insightful is to analyze critical defects for clues that humans might pick up on but are hard to derive from first principles. These patterns form pieces of evidence that can then be generalized and tested empirically to make the analysis "smarter." Our experience is that this is the only way to build analyses that scale to large programs with low false positive rates, yet find critical defects. Many program analysis problems are undecidable in general, and in practice NP-complete problems and severe time/space/accuracy trade-offs crop up everywhere. Giving the analysis intuition and developer "street smarts" is key to providing high quality analysis results.

The Heartbleed bug is a perfect example of this. I sat down with one of our analysis engineers to examine whether there was any hope for finding this defect in a smarter way. It seemed bleak. The flow of tainted data into the data field was convoluted, and even manually we had a hard time understanding exactly how the code worked.

Then we noticed that the tainted data was being converted via n2s, a macro that performs byte swapping.

Byte swaps are relatively rare operations. They can occur in cryptographic and image processing code, but perhaps the most widespread use is to convert between network and host endianness (e.g. ntohs). We had a hunch that byte swaps constitute fairly strong evidence that the data is from the outside network and therefore tainted (this also applies to reading a potentially untrusted binary file format such as an image). In addition to byte swapping, we also look for the bytes being subsequently recombined into a larger integer type. We also require that the tainted value flows into a tainted sink, such as an array index or, as in this case, a length argument to a memory operation. These additional conditions help avoid false positives when byte swapping is being used in a situation which isn't tainted. For example, outgoing data that is byte swapped is unlikely to flow into a tainted sink.
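To make the idiom concrete, here is the n2s pattern transliterated into Java (the original is a C macro in OpenSSL; the record bytes below are made up for illustration): two attacker-supplied, network-order bytes are recombined into a 16-bit length that can far exceed the data actually sent.

```java
public class ByteSwapTaint {

    // Java transliteration of the n2s idiom: recombine two
    // network-order (big-endian) bytes into a 16-bit integer.
    public static int n2s(byte[] c, int off) {
        return ((c[off] & 0xFF) << 8) | (c[off + 1] & 0xFF);
    }

    public static void main(String[] args) {
        // A Heartbleed-style record: the claimed payload length is 0xFFFF,
        // but only three payload bytes actually follow the length field.
        byte[] record = { (byte) 0xFF, (byte) 0xFF, 'h', 'a', 't' };
        int claimed = n2s(record, 0);
        int actual  = record.length - 2;
        System.out.println(claimed);          // 65535
        System.out.println(claimed > actual); // true: trusting it means reading out of bounds
    }
}
```

The heuristic keys on exactly this shape: bytes swapped and recombined into a wider integer that then flows into a length or index sink.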

With that, Heartbleed revealed itself.

This heuristic bypasses the complex control-flow and data-flow path that reaches this point in the program, and instead infers and tracks tainted data near the point where it is used. It generalizes to all programs that use byte swaps so it is not overly specific to OpenSSL. Nor is this restricted to intraprocedural cases. We've added this heuristic to the derivers that compute function summaries, so any tainted data inferred is automatically propagated throughout the rest of the program. By collecting this and other similar idioms together, we can pick up large amounts of tainted data without any codebase-specific modeling.

Beyond Heartbleed, we found a handful of additional issues in OpenSSL with this heuristic which we are investigating. We believe (hope?) they are false positives. If they are, we will further tune the analysis to understand why. Even without such tuning, we have not seen an "explosion" of false positives.

The entire set of results, including the new heuristic, will be made available to a selected group of users on the OpenSSL project on Coverity Scan shortly.

We plan on performing additional experiments on a larger corpus of code including 60M+ lines of open source and some additional proprietary code to validate our assumptions and determine if there are other common idioms for use of byte swapping that do not imply taintedness. These steps are part of our standard process for vetting all analysis changes before releasing them to our customers.

A quick post on Apple Security 55471, aka goto fail

Posted by Jon

goto... fail?

If you haven't heard about the ironically named "goto fail" vulnerability, please read Adam Langley's well written article. A summary of the issue is as follows:

static OSStatus
SSLVerifySignedServerKeyExchange(SSLContext *ctx, bool isRsa, SSLBuffer signedParams,
                                 uint8_t *signature, UInt16 signatureLen)
{
    OSStatus err;
    // snip
    if ((err = ReadyHash(&SSLHashSHA1, &hashCtx)) != 0)
        goto fail;
    if ((err = SSLHashSHA1.update(&hashCtx, &clientRandom)) != 0)
        goto fail;
    if ((err = SSLHashSHA1.update(&hashCtx, &serverRandom)) != 0)
        goto fail;
    if ((err = SSLHashSHA1.update(&hashCtx, &signedParams)) != 0)
        goto fail;
        goto fail;
    if ((err = SSLHashSHA1.final(&hashCtx, &hashOut)) != 0)
        goto fail;
    // snip
fail:
    // snip
    return err;
}

See the two consecutive goto fail; statements. The second always executes, ensuring subsequent statements within the same block will not be executed, making that code unreachable. The problem, as Adam points out, is that the unreachable statement below it actually performs a security-critical function. Because it is never called, a certificate with an incorrect signature can pass verification when certain types of cipher suites are used. Again, read Adam's detailed blog post on the issue.

Would Static Analysis Find This Issue?


A successful analysis via cov-analyze, part of Coverity Quality Advisor, on the reduced code shows the UNREACHABLE checker firing. We mocked certain portions of the code so others could quickly compile and analyze the code snippet. However, the SSLVerifySignedServerKeyExchange function is structurally equivalent to the original code.

Pic or it didn't happen:

Since the code snippet is open source, it qualifies for Coverity Scan. Click here to view the project. Or if you're an existing Coverity customer, skip to the bottom of the blog for information on how to reproduce the above.

What Is the Defect?

The UNREACHABLE checker reports instances of code blocks that cannot be executed because there is no possible control flow to reach them. These defects often occur because of missing braces, which result in unreachable code after break, continue, goto, or return statements. In the affected code, the second of the consecutive goto fail; statements causes the control flow to always bypass the line below it, jumping to the fail: block.

Looking at the Linux kernel, an open source project in Scan, the UNREACHABLE checker has fired 112 times since the kernel has been in Scan. In 2013, 33 of those defects were fixed in the kernel, and 74 in total have been resolved since it's been in Scan. This is across about 9 million lines of code. The UNREACHABLE checker is a good example of a high-quality checker: even on such a large project, it fires about once every 80,000 lines. Noisy? Not at all.

Even still, developers may have hundreds to thousands of defects to fix in a large code base, especially one that hasn't had a static analysis tool run on it before. So which ones should be fixed first?

How Can Developers Prioritize Such Defects?

Any defect found via static analysis in security-component code should be treated as guilty until proven innocent. These defects should take priority in triaging and remediation. This raises the question: how will developers know what's a security component? Here's a great spot where developers and security engineers can work together.

Security engineers and application architects should work together to identify security sensitive components. Here's a short list:

  • Authentication: passwords, password resets, login / logout, session management.
  • Authorization: function and data access controls, roles, entitlement.
  • User account management.
  • Cryptographic code, such as TLS. :)
  • Common security controls, such as escapers, validators, etc.

We've blogged about quality defects occurring in security components before. MySQL also had a defect found in an authentication component, which resulted in the ability to authenticate with an incorrect password (CVE-2012-2122).

Out of all the code in an application that should be "Coverity Clean", security components come to mind. What would an UNREACHABLE do to your code base?

Showing My Math

If you're a Coverity customer, you can capture the above command with cov-build via Coverity Quality Advisor 7.0.3 since it includes clang support. You can analyze the above on older versions of Coverity Quality Advisor, such as 6.6.x. Just point to your gcc instead of the clang binary. The rest of the command should be the same.

Grab the code:

git clone AAPL-Security-55471
cd AAPL-Security-55471

Build the code, using cov-build to capture the results.

rm *.o
/Applications/cov-analysis-macosx-7.0.3/bin/cov-build --dir ../55471-ir env LANG=en_US.US-ASCII /Applications/ -x c -arch x86_64 -std=gnu99 -isysroot /Applications/ -c sslKeyExchange.c -o sslKeyExchange.o
Coverity Build Capture version 7.0.3 on Darwin 11.4.2 x86_64
Internal version numbers: 95ee7d28a9 p-fresno-push-17316.506

1 C/C++ compilation units (100%) are ready for analysis
The cov-build utility completed successfully.

And then analyze the code via cov-analyze:

/Applications/cov-analysis-macosx-7.0.3/bin/cov-analyze --dir ../55471-ir
Coverity Static Analysis version 7.0.3 on Darwin 11.4.2 x86_64
Internal version numbers: 95ee7d28a9 p-fresno-push-17316.506

Looking for translation units
[STATUS] Computing links for 1 translation unit
[STATUS] Computing virtual overrides
[STATUS] Computing callgraph
[STATUS] Topologically sorting 1 function
[STATUS] Computing node costs
[STATUS] Starting analysis run
Analysis summary report:
Files analyzed                 : 1
Total LoC input to cov-analyze : 3842
Functions analyzed             : 1
Paths analyzed                 : 17
Time taken by analysis         : 00:00:01
Defect occurrences found       : 1 UNREACHABLE

To Escape or Not to Escape, That Is The Question

Posted by Jon

ESAPI Canonicalization

From the Encoder.canonicalize JavaDoc:

Canonicalization is simply the operation of reducing a possibly encoded string down to its simplest form. This is important, because attackers frequently use encoding to change their input in a way that will bypass validation filters, but still be interpreted properly by the target of the attack. Note that data encoded more than once is not something that a normal user would generate and should be regarded as an attack.

Everyone says you shouldn't do validation without canonicalizing the data first. This is easier said than done. The canonicalize method can be used to simplify just about any input down to its most basic form. Note that canonicalize doesn't handle Unicode issues, it focuses on higher level encoding and escaping schemes. In addition to simple decoding, canonicalize also handles:

  • Perverse but legal variants of escaping schemes
  • Multiple escaping (%2526 or &amp;lt;)
  • Mixed escaping (%26lt;)
  • Nested escaping (%%316 or &%6ct;)
  • All combinations of multiple, mixed, and nested encoding/escaping (%253c or &#x26;gt;)
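To make "multiple escaping" concrete, here's a quick sketch: the %2526 example only reduces to "&" after two decoding passes. This uses java.net.URLDecoder as a stand-in, not ESAPI's actual codecs:

```java
import java.net.URLDecoder;
import java.nio.charset.StandardCharsets;

public class MultipleEscaping {
    public static void main(String[] args) {
        // "&" percent-encoded once is "%26"; encoding its "%" again gives "%2526".
        String once = URLDecoder.decode("%2526", StandardCharsets.UTF_8);
        String twice = URLDecoder.decode(once, StandardCharsets.UTF_8);
        System.out.println(once);  // %26
        System.out.println(twice); // &
    }
}
```

Note that URLDecoder is a form-decoding stand-in (it also maps "+" to a space), so it illustrates the layering, not ESAPI's exact behavior.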

What's wrong with this?

Not all data that looks doubly encoded is actually doubly encoded.

Hideous Java example:

public void getTitleSuggestionsEntryPoint(HttpServletRequest req) {
    String hint = req.getParameter("hint");
    // bug 12345, infosec told us to do this
    String safeHint = getEsapiEncoder().canonicalize(hint);
    Suggestions suggestions = dao.getTitleSuggestions(safeHint);
    // ...
}


public Suggestions getTitleSuggestions(String query) {
    try {
        Connection conn = getConnection();
        String sql = "SELECT * FROM Movies WHERE title LIKE ?";
        PreparedStatement pstmt = conn.prepareStatement(sql);
        pstmt.setString(1, query);
        ResultSet rs = pstmt.executeQuery();
        // ...
    } catch (SQLException e) {
        // ...
    }
}

OK, the above is totally contrived, I get it. Even more contrived, say the application allows its users to enter the "%" and "_" characters to assist in the query; in a SQL LIKE pattern these match "zero or more characters" and "any single character", respectively.

Now I love Kubrick films. And I want to get suggestions based on '2001 Space Odyssey' as a hint. But I can't spell odyssey without having to look it up (truth). So I enter the following: %2001 Space%. If this is sent via a GET request, the hint parameter should look like this: %252001%20Space%25. When the Java container receives the request, it decodes the first level of URL encoding, setting the hint field to the value "%2001 Space%". Then the field safeHint is set to " 01 Space%".

Wait, what?

The value %20 could be interpreted as the URL-encoded form of a space character. However, the data is actually headed for a SQL LIKE context, not a URL, so this interpretation is incorrect. Encoder.canonicalize doesn't know this. As it iterates over its codecs, it calls PercentCodec.decode, which reasons: if the sequence walks like a URL-encoded value and quacks like a URL-encoded value, it must be a URL-encoded value. So it decodes it. It doesn't understand that the data is going into a SQL LIKE context.

And here's the above as a simple test:

(jython-env)jpasski@jpasski-mac: ~/.virtualenvs/jython-env
$ jip install org.owasp.esapi:esapi:2.1.0
# ...
$ touch /Users/jpasski/.virtualenvs/jython-env/
$ jython-all
Jython 2.5.3 (2.5:c56500f08d34+, Aug 13 2012, 14:48:36)
[Java HotSpot(TM) 64-Bit Server VM (Oracle Corporation)] on java1.7.0_17
Type "help", "copyright", "credits" or "license" for more information.
>>> from org.owasp.esapi.reference import DefaultEncoder
>>> encoder = DefaultEncoder.getInstance()
>>> encoder.canonicalize("%2001 Space%")
u' 01 Space%'
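The same double decode can be reproduced without ESAPI. The sketch below uses a hypothetical lenient percent-decoder, similar in spirit to PercentCodec.decode, to show the container's decode followed by canonicalize's second, context-blind decode:

```java
public class DoubleDecode {
    // Lenient percent-decoder: decodes any valid %XX triplet and passes
    // everything else (including a lone trailing '%') through untouched.
    static String percentDecode(String s) {
        StringBuilder out = new StringBuilder();
        for (int i = 0; i < s.length(); i++) {
            char c = s.charAt(i);
            if (c == '%' && i + 2 < s.length()
                    && isHex(s.charAt(i + 1)) && isHex(s.charAt(i + 2))) {
                out.append((char) Integer.parseInt(s.substring(i + 1, i + 3), 16));
                i += 2;  // skip the two hex digits we consumed
            } else {
                out.append(c);
            }
        }
        return out.toString();
    }

    static boolean isHex(char c) {
        return Character.digit(c, 16) != -1;
    }

    public static void main(String[] args) {
        String wire = "%252001%20Space%25";    // as sent in the GET request
        String hint = percentDecode(wire);     // container: "%2001 Space%"
        String safeHint = percentDecode(hint); // canonicalize: " 01 Space%"
        System.out.println("[" + hint + "]");
        System.out.println("[" + safeHint + "]");
    }
}
```

The second pass eats the "%20" that the user meant as a LIKE wildcard followed by "20", which is exactly the surprise in the jython session above.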

So What To Do?

Don't use Encoder.canonicalize. However, do canonicalize!

Encoder.canonicalize conflates decoding / unescaping with canonicalization. It also decodes / unescapes data without regard to the data's actual context. If it only did canonicalization I'd have no issue with it. But it doesn't.

Canonicalization is reducing multiple ways of describing the same data to just one way. For example, these two UNIX-style file paths are equivalent: /foo/../bar and /bar, with the latter being the canonical form. In this example, the canonical form ought to be checked against a known trusted prefix, or else CWE-73 rears its ugly head. Using another example, these two sequences are not equivalent: &lt; and <. In an HTML context, the former is an HTML named character reference representing the escaped form of the latter. The latter character starts a tag open state. Since they aren't equivalent, they shouldn't be treated as such. However, Encoder.canonicalize makes them equivalent.
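The file-path half of that example can be sketched with java.nio.file: normalize to the canonical form, then check it against the trusted prefix so CWE-73 stays away. A minimal sketch, using the same paths as above:

```java
import java.nio.file.Path;
import java.nio.file.Paths;

public class CanonicalPath {
    public static void main(String[] args) {
        Path trusted = Paths.get("/bar");

        // "/foo/../bar" and "/bar" are equivalent; normalize() picks
        // the canonical form by resolving the ".." segment.
        Path canonical = Paths.get("/foo/../bar").normalize();
        System.out.println(canonical);                     // /bar

        // Canonicalize FIRST, then enforce the trusted prefix.
        System.out.println(canonical.startsWith(trusted)); // true
    }
}
```

Doing the prefix check before normalizing would let "/bar/../etc/passwd" sail through, which is the whole point of canonicalizing first.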

Again, what to do?

Ideally, the application developer needs to understand the context to which the data is sent. A control applied at the entry point is far away from the eventual sink contexts. One piece of tainted data could easily end up in two or more different contexts, each of which has its own security and usability requirements.

But there just isn't a silver bullet.

Canonicalize data when it can have equivalent but different forms. Validate the input against business requirements, regardless of whether canonicalization occurs. Then, at the point where the context requirements change, understand the new security requirements. This can happen via a security library or via research and custom coding, but the need doesn't change. Once a developer understands the context, no magic needs to be performed. She or he can use the correct sanitizer, be that an escaper, filter, or validator, and move on to other things, like actually adding features.
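For the SQL LIKE context from the earlier example, the correct sanitizer is an escaper for the wildcard characters, paired with an ESCAPE clause, used wherever user input should match literally. The helper below is a hypothetical sketch; the escape-character syntax varies by database:

```java
public class LikeEscaper {
    // Escape SQL LIKE wildcards so user input matches literally.
    // The backslash is escaped first, since it's the escape character.
    static String escapeLike(String input) {
        return input.replace("\\", "\\\\")
                    .replace("%", "\\%")
                    .replace("_", "\\_");
    }

    public static void main(String[] args) {
        // Bind the result via: ... WHERE title LIKE ? ESCAPE '\'
        System.out.println(escapeLike("%2001 Space%")); // \%2001 Space\%
    }
}
```

If the application deliberately lets users type "%" and "_" as wildcards, as in the contrived example above, you would skip the escaper for those characters on purpose, which is exactly the kind of context-specific decision this post is arguing for.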