A Second Helping of PIE

Posted by Ian

In a previous post I introduced PIE, with particular emphasis on how it can be used with the Java Security Manager to build a security policy and protect applications against known and unknown vulnerabilities. In this post, I'm going to elaborate on additional features of PIE: using PIE with different modules (e.g. CSP and Spring Security), using the PIE Maven plugin to verify and update your security policies as part of the software development lifecycle (SDLC), using PIE to increase your test coverage of security-sensitive parts of your application, and using PIE as an intrusion detection system (IDS).

The Many Flavors of PIE

If you take a look at the PIE group in Maven, you'll notice that there are several JARs available. This represents PIE's modular design as a framework for building and enforcing security policies. The core JAR is responsible for hooking into your container's Servlet 3.0 autoscanning (or hooking into your application another way, such as with the PieBundle for Dropwizard) and driving the policy simplification engine. However, if you included just this JAR with your application it wouldn't do anything; it needs a Policy implementation to work.

At the time of writing, PIE has two implementations out of the box: one for the Java Security Manager, and one for CSP. Each has its own Maven module so that you can include just those implementations you want to use. So if you're interested in using PIE's CSP module to protect your application against XSS attacks, just include its JAR in your container's lib directory!

To dig a bit deeper into how PIE modules work, each PIE module is responsible for implementing three things:

  • A PolicyEnforcer which is given a callback with the ServletContext as a parameter. The module can then use the ServletContext to inject itself into the web application in whichever way is appropriate. The intent is for the injected behavior to delegate decisions to the policy returned by the enforcer's getPolicy() method.
  • A Policy, which translates between the application's concrete context and the abstract policy definition used by PIE. It achieves this by defining what class to use for its root FactMetaData implementation.
  • A tree of FactMetaData implementations, which define the concrete logic of what "facts" are in the context of the security policy (such as the filename being accessed, hostname being resolved, authenticated user's permission group, etc.), how to simplify policies applied to that fact, and how to apply matching according to that fact's language (for example, matching files with "/var/lib/**" requires some different logic than matching hostnames with "*.example.com").

If you're looking to extend PIE with your own module, both CSP (pie-csp) and the Java Security Manager (pie-sm) modules of PIE work as examples. There's also an example of a more application-specific PIE module which uses Spring Security to apply method-level policy decisions about what user roles may invoke a given protected method.

Baking PIE Into Your SDLC

If you're looking at PIE's group in Maven, you'll also notice that there's a Maven plugin for PIE. This plugin allows you to rebuild your policies as part of your build lifecycle using observed policy violations from a running application instance. The summary of this workflow is:

  • Launch a development or QA instance of your application, leaving PIE in report-only mode.
  • Run your usual end-to-end tests on the latest version of your application.
  • The plugin retrieves all the policy violations from the application and updates the local version of your security policy. Optionally, the presence of any observed violations can then trigger the build to fail.

By updating the application's security policy during the development and QA process, you can easily bring these groups into the workflow of defining the security requirements of the application. The plugin empowers these groups to make local updates to the security policy, see what needs to be changed and why, and even keep that policy in source control (which makes it easy to track and audit those changes). By making the build fail when there are violations, you can also gain confidence in the correctness of the security policy before pushing it into the production part of your development lifecycle.

Bring PIE Home

If PIE has made your mouth water but you're not quite ready to commit to enforcing policies in production, there are still lots of ways you could use PIE in your environment in order to gain security insights into your application.

Improving Security Test Coverage

One use-case for PIE is as a tool for deciding how well your tests are exercising security-sensitive parts of your application. Consider the following workflow:

  • Generate a security policy using PIE based entirely on the observations seen during automated or security testing.
  • Deploy this security policy in a production environment, but leave PIE in report-only mode.
  • Look at the security violations reported by PIE as they occur in your production deployment. Any violations most likely indicate execution of security-sensitive code that wasn't exercised during testing (as opposed to an actual case of your application being exploited).

By looking at these violations, you can decide what parts of the application should be a priority for additional testing. Once new tests are written, you can repeat this cycle and over time gain confidence that there aren't any blind spots in the application that you're failing to test.

PIE for Intrusion Detection

In a similar vein, you can use PIE as a form of intrusion detection without having PIE affect the functionality of your production application. By leaving PIE in report-only mode, you can watch the log of violations and triage those reports as they come in. When the behavior turns out to be expected, you can go back and update your policy to reduce the false-positive rate (which, depending on how well your application was exercised during policy generation, may begin fairly high). However, any true positives represent a case of a user achieving unintended behavior, offering you critical insight into your application.

Still Hungry?

How PIE is best used and how effective it can be is something that is still to be defined by its users. One of our goals with PIE is to encourage the community to explore this space of runtime security tools and see where they can provide layers of defense and insight for your applications. By being free and open source, we hope you'll find ways that PIE can be better; pull requests are welcome!

A Slice of PIE

Posted by Ian

On May 21, 2015 I gave a presentation at AppSec EU discussing security policies and managers, and specifically noting their utility in blocking known and unknown exploits. I noted that these tools tend to be difficult to use, and as a feature of my presentation introduced PIE, an open source tool for the painless generation of security policies.

In this post, I'm going to discuss how PIE works, walk through a simple use-case of creating a policy for the Java Security Manager, and show how it is able to block a remote code execution vulnerability in an old version of Struts 2 without any specific knowledge of Struts 2 or this vulnerability.

The Java Security Manager

Before we dig into PIE itself, it will help to know a bit about the Java Security Manager, which is responsible for enforcing the policy we'll generate. The Java Security Manager is a part of the JVM that has existed since the first version of Java was released in 1996. Its most common use has been to sandbox untrusted code (such as web applets) so that applications can't, for example, access filesystem resources and execute other processes.

The policy definition for the Security Manager allows for very granular and precise definitions of allowed actions, including conditions such as:

  • The code source, e.g. the class requiring the permission, or less precisely the JAR containing the executing code.
  • The permission class being requested (e.g. java.io.FilePermission or java.net.SocketPermission)
  • A "name" qualifier on the permission requested (e.g. "/tmp/foo.bar" or "foo.bar.example.com")
  • An "action" qualifier on the permission requested (e.g. "read,write" or "resolve,open")

The JDK documentation has a full list of the built-in permissions known to the Security Manager and describes the risk of granting those permissions to an application. Besides those, any application can define its own permission classes and apply security checks to those permissions:

System.getSecurityManager().checkPermission(new MyCustomPermission("contextualName", "contextualAction"));
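MyCustomPermission above is not a JDK class; a minimal custom permission might simply extend java.security.BasicPermission, as in the sketch below. Note that BasicPermission ignores the actions string, so a class that needs real action semantics would extend java.security.Permission instead.

import java.security.BasicPermission;

// Minimal custom permission for the check shown above.
public class MyCustomPermission extends BasicPermission {
    public MyCustomPermission(String name, String actions) {
        super(name, actions);
    }
}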

Another very useful feature of the Java Security Manager is its application of the security policy to all "protection domains" on the stack. For our purposes, you can think of a ProtectionDomain as a class, but it more generally refers to a collection of classes with the same set of granted permissions (such as a JAR). By applying the permission check to all the protection domains on the stack, the Java Security Manager avoids the confused deputy problem (although there are ways a confused deputy may still shoot himself in the foot).

The result of this is an incredibly powerful tool that can be used not just to restrict untrusted code, but to help protect trusted code against unintended behavior. For example, by creating a fine-grained policy that removes the option of creating a classloader and disallows the "execute" action of FilePermission, the security manager can mitigate or even eliminate the exploitability of a remote code execution vulnerability. The downside of such a robust security solution is the difficulty inherent in using it. Even without considering custom permissions defined by frameworks your application may be using, writing a security policy for your application involves deciding which of the 18 built-in permissions you need to whitelist, which of the dozens of potential "names" you need to add for each of those permissions, and then making those decisions for every class/JAR in your application.

As you can imagine, creating a precise policy can be extremely difficult and result in something quite unwieldy. In practice, this means many people will create a less granular policy, or simply forgo use of the Security Manager altogether.

Time For PIE

Recognizing the value of the Java Security Manager, but also recognizing the subtleties in using it well, we embarked on a project to simply and automatically generate a policy for use by the Java Security Manager. The goal of the project was to be able to generate security policies for applications in a way that makes a policy which is not overly permissive, not overly restrictive, and which can be created without reading every last line of the Security Manager's permission documentation or having to know every detail of the application being protected.

The result is PIE -- Policy Instantiation & Enforcement -- which, similar to system-level controls such as grsecurity and SELinux, includes a learning-mode where it observes the application's behavior and generates a policy based on its required permissions.

In order to manage complex policies, and to handle permissions which may have dynamic components (such as file paths and host names), PIE also uses heuristics to collapse and simplify policies, making them easier to read, verify, and manage.
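As a made-up illustration of the kind of collapse such heuristics perform on file-path facts (the paths and the resulting wildcard are invented, and the actual PIE output format may differ):

// Observed during learning (illustrative values, not actual PIE output):
permission java.io.FilePermission "/var/cache/app/item-1001.tmp", "read,write";
permission java.io.FilePermission "/var/cache/app/item-1002.tmp", "read,write";
permission java.io.FilePermission "/var/cache/app/item-1003.tmp", "read,write";

// What the simplification heuristics might collapse that to:
permission java.io.FilePermission "/var/cache/app/*", "read,write";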

A Use Case

To demonstrate how PIE can protect a web application, we're going to walk through deploying PIE in a Tomcat container and show how its generated policy can protect against a remote code execution vulnerability. For this, we use Roller 5.0.0, which uses a version of Struts 2 that is vulnerable to CVE-2013-4212. Assuming that Roller is deployed locally, this vulnerability (an OGNL injection) can be exploited with the following request:

curl -s -X GET -G \
  http://localhost:8080/roller/roller-ui/login.rol \
  --data-urlencode "pageTitle=\${(#_memberAccess[\"allowStaticMethodAccess\"]=true,@java.lang.Runtime@getRuntime().exec('calc'),'')}"

To use PIE in a Tomcat container, all you need to do is the following:

  1. Download the PIE JARs and put them in Tomcat's lib directory.
  2. Restart Tomcat and exercise the application under intended usage. The more coverage your application gets, the more accurate the generated policy will be.
  3. Create a PIE configuration file which puts PIE in enforcement mode instead of learning mode. To do this, you can create a file in Tomcat's lib directory named pieConfig.properties with the line securityManager.isReportOnlyMode = false.
  4. Restart Tomcat. PIE protection will now be enabled!

Once you have done this, you can try running the above exploit again. This time no calculators will pop up on your screen, and if you inspect Tomcat's log you'll see the line:

Observed violation: ("ognl.OgnlInvokePermission" "invoke.com.opensymphony.xwork2.ognl.SecurityMemberAccess.setAllowStaticMethodAccess")

So without any knowledge of Struts, OGNL, Roller, or this vulnerability, PIE has effortlessly protected this application from a remote code execution attack.

A Second Helping of PIE

This article focused on introducing PIE and demonstrating the benefit of using it with the Java Security Manager. But PIE has a number of features built into it which make it useful in an even wider scope. As a first step for production deployment, you may want to use the generated policy but leave PIE in report-only mode. This way, any permission exceptions (which are likely to stem from gaps in your exercising the application) won't break the application and you can subsequently improve the security coverage of your end-to-end tests.

Additionally, PIE includes a Maven plugin which can be used to help verify, update, and maintain your application's security policy as part of the build process. PIE is also designed to be a general framework which can generate policies for more than just the Java Security Manager. Out of the box, PIE can also generate policies for CSP to protect your web application against XSS attacks. Both the Java Security Manager and CSP are written as modules for PIE, and it's easy to write modules for security managers specific to your application; included in the PIE source repository is an example of using PIE with Spring Security. In a follow-up article, I'll take a deeper dive into PIE, exploring in detail other use cases for PIE and how you can use these features.

Unicode Escaping: Is Coverity Affected?

Posted by Jon

Java Unicode Escaping Background

The Java 8 Language Specification (JLS) Section 3.3 defines the following:

A compiler for the Java programming language ("Java compiler") first recognizes Unicode escapes in its input, translating the ASCII characters \u followed by four hexadecimal digits to the UTF-16 code unit (§3.1) for the indicated hexadecimal value, and passing all other characters unchanged. Representing supplementary characters requires two consecutive Unicode escapes. This translation step results in a sequence of Unicode input characters.

This means someone can embed an escaped Unicode character in Java source code that will be unescaped when it's compiled. Searching Stack Overflow highlights the confusion that can arise from Unicode escaped values in Java source code.
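As a quick, self-contained demonstration of what that translation step means in practice (this example is mine, not from the post), the class below compiles and prints a line even though the call appears to be inside a comment: the escape sequence for a line terminator ends the // comment during the translation phase, leaving live code behind it.

public class UnicodeEscapeDemo {
    public static void main(String[] args) {
        // Looks commented out, but it runs: \u000a System.out.println("surprise!");
    }
}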

From a security standpoint, a developer could potentially hide malicious code using this technique. Jeff Williams' BlackHat 2009 presentation on "Enterprise Java Rootkits" described exactly this (see the "Abusing Code Formatting" section), in addition to other attacks. Go read that presentation, it's good!

What's Old is New

Mathias Bynens recently posted a comment regarding Java Unicode escaping, re-discovering what Jeff discussed. This was picked up by Peter Jaric, who created the Java Obfuscator - Lite microsite so people can obfuscate their code. The site had the following:

When you're programming you may sometimes feel that you need to hide some code from a coworker or a static code analysis tool.

Working at Coverity, I saw that statement as throwing down the gauntlet :) Curious, I wanted to see how Coverity's static analysis would deal with Unicode escapes. Coverity uses the Edison Design Group's (EDG) Java front end, so since we're not using Oracle's compiler, there is a small chance that EDG's front end could miss it.


I structured the test so any Coverity user could play along at home.

Create the Test Application

I'll use the Struts 2 starter archetype:

mvn archetype:generate -B -DgroupId=com.example \
                          -DartifactId=obfuscation-test \
                          -DarchetypeGroupId=org.apache.struts \
                          -DarchetypeArtifactId=struts2-archetype-starter \
                          -DarchetypeVersion=2.3.20

This created a directory called obfuscation-test for me.

Add a Sink

I'll add an OS command injection sink to the application, using Java's Runtime.exec method. Struts 2 uses getter / setter syntax on entry points to pass in user controlled data. (Read up here on a primer for Struts 2 conventions.)
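As a sketch of what that convention looks like on the action class (the field shown here is an assumption; the archetype's actual HelloWorldAction may differ), a request parameter named name is bound through the matching setter:

package com.example;

import com.opensymphony.xwork2.ActionSupport;

// Illustrative only: Struts 2 binds request parameters to action properties
// through getters/setters, so ?name=foo lands in this field before execute() runs.
public class HelloWorldAction extends ActionSupport {
    private String name;

    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
}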

Here's the evil line prior to obfuscation:

Runtime.getRuntime().exec(name);

Running that through Peter's tool gives the following:

/* TODO: fix this stuff \u002a\u002f\u0052\u0075\u006e\u0074\u0069\u006d\u0065\u002e\u0067\u0065\u0074\u0052\u0075\u006e\u0074\u0069\u006d\u0065\u0028\u0029\u002e\u0065\u0078\u0065\u0063\u0028\u006e\u0061\u006d\u0065\u0029\u003b\u002f\u002a */

Now, just add that line to src/main/java/com/example/HelloWorldAction.java file:

    public String execute() throws Exception {

        /* TODO: fix this stuff \u002a\u002f\u0052\u0075\u006e\u0074\u0069\u006d\u0065\u002e\u0067\u0065\u0074\u0052\u0075\u006e\u0074\u0069\u006d\u0065\u0028\u0029\u002e\u0065\u0078\u0065\u0063\u0028\u006e\u0061\u006d\u0065\u0029\u003b\u002f\u002a */
        return SUCCESS;
    }

Now to compile!

mvn clean package -Dmaven.test.skip=true


[INFO] ------------------------------------------------------------------------
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 1.688s
[INFO] Finished at: Fri Apr 24 11:14:56 PDT 2015
[INFO] Final Memory: 15M/302M
[INFO] ------------------------------------------------------------------------

OK, so that looks good. Now let's invoke the whole Coverity tool chain.

Coverity Analysis

I'll try to keep things brief here. Coverity captures the Java compilation by wrapping the build using a utility called cov-build. It stores what it captures into an intermediate directory. So I'll invoke that. I also have to invoke another command called cov-emit-java that stuffs the Java web application archive into the intermediate directory. Then, I'll run the actual analysis using cov-analyze, disabling all of the other checkers and only enabling the checker responsible for OS command injection, aptly named "OS_CMD_INJECTION".

  • cov-build:
$COV_BIN/cov-build --dir $IDIR mvn package -Dmaven.test.skip=true

Coverity Build Capture (64-bit) version 7.6.1.s2 on Linux 3.2.0-41-generic x86_64
Internal version numbers: 56023db682 p-jwasharmony-push-21098.188

[INFO] Scanning for projects...

[INFO] ------------------------------------------------------------------------
[INFO] Total time: 23.197s
[INFO] Finished at: Fri Apr 24 11:20:07 PDT 2015
[INFO] Final Memory: 15M/302M
[INFO] ------------------------------------------------------------------------
3 Java compilation units (100%) have been captured and are ready for analysis
The cov-build utility completed successfully.
  • cov-emit-java:
$COV_BIN/cov-emit-java --dir $IDIR --war ./target/struts2-archetype-starter.war
Coverity Java Emit version 7.6.1.s2 on Linux 3.2.0-41-generic x86_64
Internal version numbers: 56023db682 p-jwasharmony-push-21098.188

Processing webapp archive: /home/jpasski/src/obfuscation-test/target/struts2-archetype-starter.war
[STATUS] Compiling 4 JSP files
4 out of 4 JSPs (100%) have been processed successfully. See details in /store/idirs/test-obfuscation/jsp-compilation-log.txt
Done; cov-emit-java took 22s
  • cov-analyze:
$COV_BIN/cov-analyze --dir $IDIR --java --disable-default -en OS_CMD_INJECTION
Coverity Static Analysis version 7.6.1.s2 on Linux 3.2.0-41-generic x86_64
Internal version numbers: 56023db682 p-jwasharmony-push-21098.188

Using 8 workers as limited by CPU(s)
[STATUS] Starting analysis run
Analysis summary report:
Files analyzed                 : 7
Total LoC input to cov-analyze : 170
Functions analyzed             : 659
Paths analyzed                 : 6497
Time taken by analysis         : 00:00:50
Defect occurrences found       : 1 OS_CMD_INJECTION

Looks like we found something! Let's push the result to our web UI, called "Coverity Connect", and view it:

While our display of the defect could be a bit better, we still find the issue. So at least in the case of Unicode escaping, Coverity's static analysis still finds the defect. Whoop!

Closing Thoughts

If your developers are now your adversaries, you've picked very challenging adversaries! If you're an existing customer, and you think we should report on the use of Unicode escaping, especially when it looks suspicious like the above, we're interested in hearing from you.

Eric Lippert Dissects CVE-2014-6332, a 19 year-old Microsoft bug

Posted by Eric

Today's Coverity Security Research Lab blog post is from guest blogger Eric Lippert.

[UPDATE 1: The MISSING_RESTORE checker regrettably doesn't find the defect in the code I've posted here. Its heuristics for avoiding false positives cause it to suppress reporting, ironically enough. We're working on tweaking that heuristic for an upcoming release.]

It was with a bizarre combination of nostalgia and horror that I read this morning about a 19-year-old rather severe security hole in Windows. Nostalgia because every bit of the exploited code is very familiar to me: working on the portion of the VBScript engine used to exploit the defect was one of my first jobs at Microsoft back in the mid-1990s. And horror because this is really a quite serious defect that has been present probably since Windows 3.1, [Update 2: heard that Windows 3.1 is in fact not affected, so you IE 2-5 users are safe ;)] and definitely exploitable since Windows 95. Fortunately we have no evidence that this exploit has actually been used to do harm to users, and Microsoft has released a patch. (Part of my horror was the fear that maybe this one was my bad, but it looks like the actual bug predates my time at Microsoft. Whew!)

The thirty-thousand foot view is the old familiar story. An attacker who wishes to run arbitrary code on a user's machine lures the user into browsing to a web page that contains some hostile script -- VBScript, in this case. The hostile script is running inside a "sandbox" which is supposed to ensure that it only does "safe" operations, but the script attempts to force a particular buggy code path through the underlying operating system code. If it does so successfully, it produces a corrupt data structure in memory which can then be further manipulated by the script. By cleverly controlling the contents of the corrupted data structure, the hostile script can read or write memory and execute code of their choice.

Today I want to expand a bit on Robert Freeman's writeup, linked above, to describe the underlying bug in more detail, the pattern that likely produced it, better ways to write the code, and whether static analysis tools could find this bug. I'm not going to delve into the specifics of how this initially-harmless-looking bug can be exploited by attackers.

What's so safe about a SAFEARRAY?

Many of the data structures familiar to COM programmers today, like VARIANT, BSTR and SAFEARRAY, were created for "OLE Automation"; old-timers will of course remember that OLE stood for "object linking and embedding", the "paste this Excel spreadsheet into that Word document" feature. OLE Automation was the engine that enabled Word and Excel objects to be accessed programmatically by Visual Basic. (In fact the B in BSTR stands for "Basic".) Naturally, Visual Basic uses these data structures for its representations of strings and arrays. The data structure which particularly concerns us today is SAFEARRAY:

typedef struct tagSAFEARRAY {
  USHORT         cDims;         // number of dimensions
  USHORT         fFeatures;     // type of elements
  ULONG          cbElements;    // byte size per element
  ULONG          cLocks;        // lock count
  PVOID          pvData;        // data buffer
  SAFEARRAYBOUND rgsabound[1];  // bounds, one per dimension
} SAFEARRAY;

typedef struct tagSAFEARRAYBOUND {
  ULONG cElements; // number of indices in this dimension
  LONG  lLbound;   // lowest valid index
} SAFEARRAYBOUND;

SAFEARRAYs are so-called because unlike an array in C or C++, a SAFEARRAY inherently knows the dimensionality of the array, the type of the data in the array, the number of bytes in the buffer, and finally, the bounds on each dimension. How multi-dimensional arrays and arrays of unusual types are handled is irrelevant to our discussion today, so let's assume that the array involved in the attack is a single-dimensional array of VARIANT.

The operating system method which contained the bug was SafeArrayRedim, which takes an existing array and a new set of bounds for the least significant dimension -- though again, for our purposes, we'll assume that there is only one dimension. The function header is:

HRESULT SafeArrayRedim(
  SAFEARRAY      *psa,
  SAFEARRAYBOUND *psaboundNew
);
Now, we do not have the source code of this method, but based on the description of the exploit we can guess that it looks something like the code below that I made up just now.

Bits of code that are not particularly germane to the defect I will omit, and I'll assume that somehow the standard OLE memory allocator has been obtained. Of course there are many cases that must be considered here -- such as "what if the lock count is non zero?" -- that I am going to ignore in pursuit of understanding the relevant bug today.

As you're reading the code, see if you can spot the defect:

  // Omitted: verify that the arguments are valid; produce
  // E_INVALIDARG or other error if they are not.

  PVOID pResourcesToCleanUp = NULL; // We'll need this later.
  HRESULT hr = S_OK;

  // How many bytes do we need in the buffer for the original array?
  // and for the new array?

  LONG cbOriginalSize = SomehowComputeTotalSizeOfOriginalArray(psa);
  LONG cbNewSize = SomehowComputeTotalSizeOfNewArray(psa, psaboundNew);
  LONG cbDifference = cbNewSize - cbOriginalSize;

  if (cbDifference == 0)
    goto DONE;

  SAFEARRAYBOUND originalBound = psa->rgsabound[0];
  psa->rgsabound[0] = *psaboundNew;
  // continues below ...

Things are looking pretty reasonable so far. Now we get to the tricky bit.

Why is it so hard to shrink an array?

If the array is being made smaller, the variants that are going to be dropped on the floor might contain resources that need to be cleaned up. For example, if we have an array of 1000 variants containing strings, and we reallocate that to only 300, those 700 strings need to be freed. Or, if instead of strings they are COM objects, they need to have their reference counts decreased.

But now we are faced with a serious problem. We cannot clean up the resources after the reallocation. If the reallocation succeeds then we no longer have any legal way to access the memory that we need to scan for resources to free; that memory could be shredded, or worse, it could be reallocated to another block on another thread and filled in with anything. You simply cannot touch memory after you've freed it. But we cannot clean up resources before the reallocation either, because what if the reallocation fails? It is rare for a reallocation that shrinks a block to fail. While the documentation for IMalloc::Realloc doesn't call out that it can fail when shrinking (doc bug?), it doesn't rule it out either. In that case we have to return the original array, untouched, and deallocating 70% of the strings in the array is definitely not "untouched".

The solution to this impasse is that we have to allocate a new block and copy the resources into that new block before the reallocation. After a successful reallocation we can clean up the resources; after a failed reallocation we of course do not.

  // ... continued from above
  if (cbDifference < 0)
  {
    pResourcesToCleanUp = pmalloc->Alloc(-cbDifference);
    if (pResourcesToCleanUp == NULL)
    {
      hr = E_OUTOFMEMORY;
      goto DONE;
    }
    // Omitted: memcpy the resources to pResourcesToCleanUp
  }

  PVOID pNewData = pmalloc->Realloc(psa->pvData, cbNewSize);
  if (pNewData == NULL)
  {
    psa->rgsabound[0] = originalBound;
    hr = E_OUTOFMEMORY;
    goto DONE;
  }
  psa->pvData = pNewData;

  if (cbDifference < 0)
  {
    // Omitted: clean up the resources in pResourcesToCleanUp
    // Omitted: initialize the new array slots to zero
  }

  hr = S_OK; // Success!

DONE:
  // Don't forget to free that extra block.
  if (pResourcesToCleanUp != NULL)
    pmalloc->Free(pResourcesToCleanUp);
  return hr;

Did you spot the defect?

Part of the contract of this method is that when this method returns a failure code, the original array is unchanged. The contract is violated in the code path where the array is being shrunk and the allocation of pResourcesToCleanUp fails. In that case we return a failure code, but never restore the state of the bounds which were mutated earlier to the smaller values. Compare this code path to the code path where the reallocation fails, and you'll see that the restoration line is missing.

In a world where there is no hostile code running on your machine, this is not a serious bug. What's the worst that can happen? In the incredibly rare case where you are shrinking an array by an amount bigger than the memory you have available in the process, you end up with a SAFEARRAY that has the wrong bounds in a program that just produced a reallocation error anyways, and any resources that were in that memory are never freed. Not a big deal. This is the world in which OLE Automation was written: a world where people did not accidentally download hostile code off the Internet and run it automatically.

But in our world this bug is a serious problem! An attacker can make what used to be an incredibly rare situation -- running out of virtual address space at exactly the wrong time -- quite common by carefully controlling how much memory is allocated at any one time by the script. An attacker can cause the script engine to ignore the reallocation error and keep on processing the now-internally-inconsistent array. And once we have an inconsistent data structure in memory, the attacker can use other sophisticated techniques to take advantage of this corrupt data structure to read and write memory that they have no business reading and writing. Like I said before, I'm not going to go into the exact details of the further exploits that take advantage of this bug; today I'm interested in the bug itself. See the linked article for some thoughts on the exploit.

How can we avoid this defect? How can we detect it?

It is surprisingly easy to write these sorts of bugs in COM code. What can you do to avoid this problem? I wrote who knows how many thousands of lines of COM code in my early days at Microsoft, and I avoided these problems by application of a strict discipline. Among my many rules for myself were:

  • Every method has exactly one exit point.
  • Every local variable is initialized to a sensible value or NULL.
  • Every non-NULL local variable is cleaned up at the exit point
  • Conversely, if the resource is cleaned up early on a path, or if its ownership is ever transferred elsewhere, then the local is set back to NULL.
  • Methods which modify memory locations owned by their callers do so only at the exit point, and only when the method is about to return a success code.

The code which I've presented here today -- which I want to emphasize again I made up myself just now to illustrate what the original bug probably looks like -- follows some of these best practices, but not all of them. There is one exit point. Every local is initialized. One of the resources -- the pResourcesToCleanUp block -- is cleaned up correctly at the exit point. But the last rule is violated: memory owned by the caller is modified early, rather than immediately before returning success. The requirement that the developer always remember to re-mutate the caller's data in the event of an error is a bug waiting to happen, and in this case, it did happen.

Clearly the code I presented today does not follow my best practices for writing good COM methods. Is there a more general pattern to this defect? A closely related defect pattern that I see quite often in C, C++, C# and Java is:

someLocal = someExternal;
someExternal = differentValue;
//... lots of code ...
if (someError) return;
//... lots of code ...
someExternal = someLocal;

And of course the variation where the restoration of the external value is skipped because of an unhandled exception is common in C++, C# and Java.
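In Java that variation is usually closed off with a try/finally so the restoration cannot be skipped. A minimal made-up illustration (the Config class and doWork method are just stand-ins, not from the post):

class Config {
    private String mode = "normal";
    String getMode() { return mode; }
    void setMode(String m) { mode = m; }
}

public class RestoreExample {
    static void doWork() { throw new RuntimeException("boom"); }

    public static void main(String[] args) {
        Config config = new Config();
        String saved = config.getMode();
        config.setMode("temporary");
        try {
            doWork();              // throws here; without the finally, the restore below is skipped
        } finally {
            config.setMode(saved); // runs even when doWork() throws
        }
    }
}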

Could a static analyzer help find defects like this? Certainly; Coverity's MISSING_RESTORE analyzer finds defects of the form I've just described. (Though I have not yet had a chance to run the code I presented today through it to see what happens.)

There are a lot of challenges in designing analyzers to find the defect I presented today; one is determining that in this code the missing restoration is a defect on the error path but correct on the success path. This real-world defect is a good inspiration for some avenues for further research in this area; have you seen similar defects that follow this pattern in real-world code, in any language? I'd love to see your examples; please leave a comment if you have one.

Detecting SSLv3 in Java

Posted by Jon

Our SAST product, Security Advisor, recently released a couple of new checkers and updated a couple existing ones. One of the checkers, RISKY_CRYPTO, is now looking for SSLv3. SSLv3 should be considered beyond deprecated because of POODLE, so its use is truly risky at this point. The checker looks for its use implicitly (e.g. some JRE defaults) or explicitly, in either client or server sockets.

An example defect is in org.alfresco.encryption.ssl.AuthSSLProtocolSocketFactory.createSocket, from the Alfresco project. The new version of the analysis flags a defect on line 175, when the socket is bound. Implicitly, the SSLv3 protocol is allowed in the JVM, so this socket potentially exposes itself to the POODLE vulnerability. (If CBC isn't allowed, then this would be a false positive.)

Not bad remediation advice, eh?

As a recovering security consultant, I really hated any tool that reported general crypto usage of say MD5 or something. While RISKY_CRYPTO does this, because sadly people do ask for us to look for this, we're also releasing a smarter crypto checker called WEAK_PASSWORD_HASH.

Take the method com.laoer.bbscs.comm.Util.hash from a Java web application called BBS Community System.

public synchronized static final String hash(String data) {
    if (digest == null) {
        try {
            digest = MessageDigest.getInstance("MD5");
        } catch (NoSuchAlgorithmException nsae) {
            System.err.println("Failed to load the MD5 MessageDigest. " +
                               "We will be unable to function normally.");
        }
    }
    // Now, compute hash.
    digest.update(data.getBytes());
    return encodeHex(digest.digest());
}

The RISKY_CRYPTO checker will flag the "MD5" as being bad, mmmkay. The issue here isn't that MD5 is being used, it's that it's being used to hash a password. Now pointing out MD5 might be good enough for a security professional. It's like blood in the water. However, just stating XYZ algorithm is in use isn't necessarily evil. Developers want more. If you're a security person, be ready to answer: "why is it bad in this context, in this piece of code?"

Glossing over some of the details, the new WEAK_PASSWORD_HASH checker flags data it thinks is a password. It then tracks the password data flow until it reaches a hashing sink it thinks is not adequate. (There's a bit about salts in there I'm skipping, but you get the idea.)

Case in point, WEAK_PASSWORD_HASH correctly infers the Struts 2 entry point com.laoer.bbscs.web.action.Cpasswd.setOldPassword as a source of password data. It tracks the data flow of this field to line 99:

UserInfo ui = this.getUserService().findUserInfoById(this.getUserSession().getId());
if (ui != null) {
    String op = Util.hash(this.getOldpasswd());
    if (!op.equals(ui.getRePasswd())) {
        return INPUT;
    }
}

... where the unsafe Util.hash method is called. Now that's a defect I'd rather see than RISKY_CRYPTO, or whatever your SAST tool's checker is called, flagging the use of MD5. Now, your developers have the answer to their question without involving anyone. Devs like that.
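For contrast, here is a sketch (mine, not from the post) of the kind of sink the checker would treat as adequate, using the JDK's built-in PBKDF2 support; the iteration count and key length are illustrative, not a recommendation:

import java.security.SecureRandom;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;

public class PasswordHashing {
    // Derives a key from the password with PBKDF2; the salt must be stored
    // alongside the resulting hash so the check can be repeated at login.
    public static byte[] hashPassword(char[] password, byte[] salt) throws Exception {
        PBEKeySpec spec = new PBEKeySpec(password, salt, 10000, 256);
        SecretKeyFactory skf = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA1");
        return skf.generateSecret(spec).getEncoded();
    }

    public static byte[] newSalt() {
        byte[] salt = new byte[16];
        new SecureRandom().nextBytes(salt);
        return salt;
    }
}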

Understanding Python Bytecode

Posted by Romain

I've been working with Python bytecode recently, and wanted to share some of my experience working with it. To be more precise, I've been working exclusively on the bytecode for the CPython interpreter, and limited to versions 2.6 and 2.7.

Python is a dynamic language, and running it from the command line essentially triggers the following steps:

  • The source is compiled the first time it is encountered (e.g., imported as a module or directly executed). This step generates the binary file, with a pyc or pyo extension depending on your system.
  • The interpreter reads the binary file and executes the instructions (opcodes) one at a time.

The Python interpreter is stack-based, and to understand the dataflow we need to know what the stack effect of each instruction (i.e., opcode and argument) is.

Inspecting a Python Binary File

The simplest way to get the bytecode of a binary file is to unmarshall the CodeType structure:

import marshal
fd = open('path/to/my.pyc', 'rb')
magic = fd.read(4) # python version specific magic num
date = fd.read(4)  # compilation date
code_object = marshal.load(fd)

The code_object now contains a CodeType object which represents the entire module from the loaded file. To inspect all nested code objects from this module (meaning class declarations, methods, etc.), we need to recursively inspect the const pool of the CodeType; that means doing something like this:

import types

def inspect_code_object(co_obj, indent=''):
  print indent, "%s(lineno:%d)" % (co_obj.co_name, co_obj.co_firstlineno)
  for c in co_obj.co_consts:
    if isinstance(c, types.CodeType):
      inspect_code_object(c, indent + '  ')

inspect_code_object(code_object) # We resume from the previous snippet

In this case, we'll print a tree of code objects nested under their respective parents. For the following simple code:

class A:
  def __init__(self):
    pass
  def __repr__(self):
    return 'A()'

a = A()
print a

We'll get the tree:


For testing, we can get the code object from a string that contains the Python source code by using the compile directive:

co_obj = compile(python_source_code, '<string>', 'exec')

For more inspection of the code object, we can have a look at the co_* fields from the Python documentation.

First Look Into the Bytecode

Once we get the code objects, we can actually start looking at their disassembly (in the co_code field). Parsing the bytecode to make sense of it means:

  • Interpreting what the opcode means
  • Dereferencing any argument

The disassemble function in the dis module shows how to do that. It will actually provide the following output from our previous code example:

2   0 LOAD_CONST        0 ('A')
    3 LOAD_CONST        3 (())
    6 LOAD_CONST        1 (<code object A at 0x42424242, file "<string>", line 2>)
    9 MAKE_FUNCTION     0
   12 CALL_FUNCTION     0
   15 BUILD_CLASS
   16 STORE_NAME        0 (A)

8  19 LOAD_NAME         0 (A)
   22 CALL_FUNCTION     0
   25 STORE_NAME        1 (a)

9  28 LOAD_NAME         1 (a)
   31 PRINT_ITEM
   32 PRINT_NEWLINE
   33 LOAD_CONST        2 (None)
   36 RETURN_VALUE

Where we get:

  • The line number (when it changed)
  • The index of the instruction
  • The opcode of the current instruction
  • The oparg, which the opcode uses to resolve the actual argument; where to look is determined by the opcode. For example, with a LOAD_NAME opcode, the oparg will point to the index in the co_names tuple.
  • The resolved argument in parentheses

As we can see at the index 6, the LOAD_CONST opcode takes an oparg that points to which object should be loaded from the co_consts tuple. Here, it points to the type declaration of A. Recursively, we can go and decompile all code objects to get the full bytecode of the module.

The first part of the bytecode (index 0 to 16) relates to the type declaration of A while the rest represents the code where we instantiate an A and print it. Even in this code, there are constructs that are not relevant unless you plan on modifying the bytecode and changing types, etc.

Interesting Bytecode Constructs

Most opcodes are fairly straightforward, but a few cases can seem weird because they come from:

  • Compiler optimizations
  • Interpreter optimizations (therefore leading to extra opcodes)

Variables Assignment with Sequences

In the first category, we can have a look at what happens when the source assigns sequences of variables:

(1) a, b = 1, '2'
(2) a, b = 1, e
(3) a, b, c = 1, 2, e
(4) a, b, c, d = 1, 2, 3, e

These 4 statements produce quite different bytecode.

The first case is the simplest one since the right-hand side (RHS) of the assignment contains only constants. In that case, CPython can create the tuple (1, '2'), use UNPACK_SEQUENCE to put 2 elements on the stack, and create a STORE_FAST for each variable a and b:

0 LOAD_CONST               5 ((1, '2'))
3 UNPACK_SEQUENCE          2
6 STORE_FAST               0 (a)
9 STORE_FAST               1 (b)

The second case, however, introduces a variable on the RHS, so the generic path is taken where an expression is fetched (here, a simple one with a LOAD_GLOBAL). The compiler does not need to create a new tuple from the values on the stack and use an UNPACK_SEQUENCE; it's sufficient to call ROT_TWO (at index 18), which swaps the 2 top elements of the stack (it might have been enough to switch 19 and 22, though):

12 LOAD_CONST               1 (1)
15 LOAD_GLOBAL              0 (e)
18 ROT_TWO
19 STORE_FAST               0 (a)
22 STORE_FAST               1 (b)

The third case is where it becomes really strange. Putting the expressions on the stack uses exactly the same mechanism as in the previous case, but afterwards it first swaps the 3 top elements, then swaps the 2 top elements again:

25 LOAD_CONST               1 (1)
28 LOAD_CONST               3 (2)
31 LOAD_GLOBAL              0 (e)
34 ROT_THREE
35 ROT_TWO
36 STORE_FAST               0 (a)
39 STORE_FAST               1 (b)
42 STORE_FAST               2 (c)

The final one represents the generic case, where no more ROT_*-play seems possible: a tuple is created and UNPACK_SEQUENCE is then called to put the elements on the stack:

45 LOAD_CONST               1 (1)
48 LOAD_CONST               3 (2)
51 LOAD_CONST               4 (3)
54 LOAD_GLOBAL              0 (e)
57 BUILD_TUPLE              4
60 UNPACK_SEQUENCE          4
63 STORE_FAST               0 (a)
66 STORE_FAST               1 (b)
69 STORE_FAST               2 (c)
72 STORE_FAST               3 (d)

Call Constructs

The last set of interesting examples is around the call constructs and the 4 different opcodes used to create calls. I suppose the number of opcodes is there to optimize the interpreter code, since Python is not like Java, where it makes sense to have invokedynamic, invokeinterface, invokespecial, invokestatic, and invokevirtual.

In Java, invokeinterface, invokespecial and invokevirtual originally come from the static typing of the language (and invokespecial is only used for calling constructors and superclass methods, AFAIK). invokestatic is self-describing (no need to put the receiver on the stack), and there is no such concept in Python (down at the interpreter level, as opposed to through decorators). In short, Python calls could always be translated with an invokedynamic.
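For comparison, here is a small Java example (my own, not from the post) annotated with the invoke opcode each call typically compiles to:

import java.util.ArrayList;
import java.util.List;

public class InvokeKinds {
    static void helper() {}                     // a plain static method

    public static void main(String[] args) {
        List<String> list = new ArrayList<>();  // constructor call -> invokespecial
        list.add("x");                          // call through an interface type -> invokeinterface
        ArrayList<String> concrete = new ArrayList<>();
        concrete.add("y");                      // call through a class type -> invokevirtual
        helper();                               // static call -> invokestatic
        Runnable r = InvokeKinds::helper;       // method-reference bootstrap -> invokedynamic
        r.run();                                // then the call itself -> invokeinterface
    }
}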

The different CALL_* opcodes in Python are indeed not there because of typing, static methods, or the need for special access to constructors. They all target how a method call can be specified in Python; from the grammar:

  Call(expr func, expr* args, keyword* keywords,
       expr? starargs, expr? kwargs)

The call structure allows for code like this:

func(arg1, arg2, keyword=SOME_VALUE, *unpack_list, **unpack_dict)

The keyword arguments allow for passing formal parameters by name and not just position, the * puts all elements from the iterable as arguments (inlined, not in a tuple), and the ** expects a dictionary of keywords with values.

This example actually uses all possible features of the call site construction:

  • Variables argument list passing (_VAR): CALL_FUNCTION_VAR, CALL_FUNCTION_VAR_KW
  • Keyword based dict passing (_KW): CALL_FUNCTION_KW, CALL_FUNCTION_VAR_KW

The bytecode looks like this:

 0 LOAD_NAME                0 (func)
 3 LOAD_NAME                1 (arg1)
 6 LOAD_NAME                2 (arg2)
 9 LOAD_CONST               0 ('keyword')
12 LOAD_NAME                3 (SOME_VALUE)
15 LOAD_NAME                4 (unpack_list)
18 LOAD_NAME                5 (unpack_dict)
21 CALL_FUNCTION_VAR_KW   258

Usually, a CALL_FUNCTION takes as oparg the number of arguments for the function. Here however, more information is encoded. The first byte (0xff mask) carries the number of arguments and the second one ((value >> 8) & 0xff) the number of keyword arguments passed. To compute the number of elements to pop from the stack, we then need to get:

na = arg & 0xff         # num args
nk = (arg >> 8) & 0xff  # num keywords
n_to_pop = na + 2 * nk + CALL_EXTRA_ARG_OFFSET[op]

where CALL_EXTRA_ARG_OFFSET contains an offset specific to the call opcode (2 for CALL_FUNCTION_VAR_KW). Here, that gives us 6, the number of elements to pop before accessing the function name.

As for the other CALL_* opcodes, it all depends on whether the code uses list passing, dictionary passing, or both; it's all about the combination!

Building a Minimal CFG

For understanding how the code actually works, it's interesting to build a control-flow graph (CFG) so we can follow which unconditional sequences of opcodes (basic blocks) will be executed, and under what conditions.

Even if the bytecode is a fairly small language, building a reliable CFG requires more details than this blog post can allow, so for an actual implementation of a CFG construction, you can have a look at equip.

Here, we'll focus on loop/exception free code, where the control flow only depends on if statements.

There are a handful of opcodes that carry a jump address (for non-loop/exceptions); they are:

  • JUMP_FORWARD: Relative jump in the bytecode. Takes the number of bytes to skip.
  • JUMP_IF_FALSE_OR_POP, JUMP_IF_TRUE_OR_POP, JUMP_ABSOLUTE, POP_JUMP_IF_FALSE, and POP_JUMP_IF_TRUE all take absolute index in the bytecode.

Building the CFG for a function means creating basic blocks (sequences of opcodes that execute unconditionally -- except when an exception can occur), and connecting them in a graph that carries conditions on its branches. In our case, we only have True, False, and Unconditional branches.

Let's consider the following code example (which should never be used in practice):

def factorial(n):
  if n <= 1:
    return 1
  elif n == 2:
    return 2
  return n * factorial(n - 1)

As mentioned before, we get the code object for the factorial method:

module_co = compile(python_source, '<string>', 'exec')
meth_co = module_co.co_consts[0]

The disassembly looks like this (minus my annotations):

3           0 LOAD_FAST                0 (n)
            3 LOAD_CONST               1 (1)
            6 COMPARE_OP               1 (<=)
            9 POP_JUMP_IF_FALSE       16              <<< control flow

4          12 LOAD_CONST               1 (1)
           15 RETURN_VALUE                            <<< control flow

5     >>   16 LOAD_FAST                0 (n)
           19 LOAD_CONST               2 (2)
           22 COMPARE_OP               2 (==)
           25 POP_JUMP_IF_FALSE       32              <<< control flow

6          28 LOAD_CONST               2 (2)
           31 RETURN_VALUE                            <<< control flow

7     >>   32 LOAD_FAST                0 (n)
           35 LOAD_GLOBAL              0 (factorial)
           38 LOAD_FAST                0 (n)
           41 LOAD_CONST               1 (1)
           44 BINARY_SUBTRACT
           45 CALL_FUNCTION            1
           48 BINARY_MULTIPLY
           49 RETURN_VALUE                            <<< control flow

In this bytecode, we have 5 instructions that change the structure of the CFG (adding constraints or allowing for a quick exit):

  • POP_JUMP_IF_FALSE: Jumps to absolute index 16 or 32,
  • RETURN_VALUE: Pops one element from the stack and returns it.

Extracting the basic blocks becomes easy since these control-flow instructions are the only ones we're interested in detecting. In our case, we don't have jumps that impose no fall-through, but JUMP_FORWARD and JUMP_ABSOLUTE do that.

Example code to extract such structure:

import opcode

# Assumed helpers (not shown in the original snippet): resolve the opcode
# values we need by name.
RETURN_VALUE = opcode.opmap['RETURN_VALUE']
JUMP_ABSOLUTE = opcode.opmap['JUMP_ABSOLUTE']
FALSE_BRANCH_JUMPS = (opcode.opmap['POP_JUMP_IF_FALSE'],
                      opcode.opmap['JUMP_IF_FALSE_OR_POP'])

def find_blocks(meth_co):
  blocks = {}
  code = meth_co.co_code
  finger_start_block = 0
  i, length = 0, len(code)
  while i < length:
    op = ord(code[i])
    i += 1
    if op == RETURN_VALUE: # We force finishing the block after the return,
                           # dead code might still exist after though...
      blocks[finger_start_block] = {
        'length': i - finger_start_block - 1,
        'exit': True
      }
      finger_start_block = i
    elif op >= opcode.HAVE_ARGUMENT:
      oparg = ord(code[i]) + (ord(code[i+1]) << 8)
      i += 2
      if op in opcode.hasjabs: # Absolute jump to oparg
        blocks[finger_start_block] = {
          'length': i - finger_start_block
        }
        if op == JUMP_ABSOLUTE: # Only uncond absolute jump
          blocks[finger_start_block]['conditions'] = {
            'uncond': oparg
          }
        else:
          false_index, true_index = (oparg, i) if op in FALSE_BRANCH_JUMPS else (i, oparg)
          blocks[finger_start_block]['conditions'] = {
            'true': true_index,
            'false': false_index
          }
        finger_start_block = i
      elif op in opcode.hasjrel:
        # Essentially do the same, with i + oparg as the jump target...
        pass

  return blocks

And we get the following basic blocks:

Block  0: {'length': 12, 'conditions': {'false': 16, 'true': 12}}
Block 12: {'length': 3, 'exit': True}
Block 16: {'length': 12, 'conditions': {'false': 32, 'true': 28}}
Block 28: {'length': 3, 'exit': True}
Block 32: {'length': 17, 'exit': True}

With the current structure of the blocks:

Basic blocks
  start_block_index :=
     length     := size of instructions
     condition  := true | false | uncond -> target_index
     exit*      := true

we have our control flow graph (minus the entry and implicit return blocks), and we can for example convert it to dot for visualization:

def to_dot(blocks):
  cache = {}

  def get_node_id(idx, buf):
    if idx not in cache:
      cache[idx] = 'node_%d' % idx
      buf.append('%s [label="Block Index %d"];' % (cache[idx], idx))
    return cache[idx]

  buffer = ['digraph CFG {']
  buffer.append('entry [label="CFG Entry"]; ')
  buffer.append('exit  [label="CFG Implicit Return"]; ')

  for block_idx in blocks:
    node_id = get_node_id(block_idx, buffer)
    if block_idx == 0:
      buffer.append('entry -> %s;' % node_id)
    if 'conditions' in blocks[block_idx]:
      for cond_kind in blocks[block_idx]['conditions']:
        target_id = get_node_id(blocks[block_idx]['conditions'][cond_kind], buffer)
        buffer.append('%s -> %s [label="%s"];' % (node_id, target_id, cond_kind))
    if 'exit' in blocks[block_idx]:
      buffer.append('%s -> exit;' % node_id)

  buffer.append('}')
  return '\n'.join(buffer)

To produce the source of that graph:

print to_dot(find_blocks(meth_co))

Why Bother?

It's indeed fairly rare to only have access to the Python bytecode, but I've had this case a few times in the past. Hopefully, this information can help someone starting a reverse engineering project on Python.

Right now, however, I've been investigating the ability to instrument Python code, and especially its bytecode, since there are no facilities for doing so in Python (and instrumenting source code often results in very inefficient instrumentation code with decorators, etc.). That's where equip comes from.

Shell Shock in Java Apps

Posted by Ian

A few weeks ago, security researchers disclosed a vulnerability in Bash, a shell commonly installed on most Unix-style operating systems. This vulnerability, commonly referred to as "Shell Shock," has the potential to allow arbitrary code execution on the target system. Furthermore, the ubiquity of Bash means that the majority of web servers are potentially vulnerable to this issue.

The vulnerability in Bash is caused by the shell executing the entirety of environment variables which represent functions. By appending commands to a function definition, an attacker could execute arbitrary commands on the system. However, for this vulnerability to be exploited the server must present some interface for an attacker to control environment variables before a shell is launched. The most direct avenue for such an attack is CGI which directly places user-specified values into environment variables before executing a target script or command. However, there are many other server applications which pass user-supplied values into environment variables such as Postfix and OpenVPN. SSH is also vulnerable to this attack when depending on a restrictive command in the authorized_keys file (in which case the original command is placed in the SSH_ORIGINAL_COMMAND environment variable). Gitolite is a very common use case for this and can also be exploited.

The original defect in Bash is not one that could be detected through a general static analysis. Although it likely wasn't, for all an analyzer could tell this semantic behavior of Bash could very well have been intentional, with the developer holding the expectation that any process executing it would set a sane environment before invoking the shell. To make the distinction an automated tool would require a formal specification of the intended behavior of Bash. On the other hand, server applications passing user-controllable data into environment variables is something that can be detected by static analysis, and it is worth looking for since this would provide precisely what an attacker needs in order to exploit a defect such as Shell Shock.

Allowing user-controllable data into environment variables has always been a vulnerability, but the ability to exploit it has been raised dramatically with the disclosure of Shell Shock. Given the raised impact of this defect, we at Coverity set upon the task of answering the question "Is it a common anti-pattern for Java web applications to pass user-controllable data into environment variables when spawning applications?" To answer this question, we built a new checker which looked for tainted data (i.e. input from an HTTP request, database, filesystem, etc.) flowing into the environment variables of a new process.

To implement the new checker we utilized our existing dataflow analysis tools which are used for our other checkers such as SQL injection, XSS, and OS command injection. The sinks for this dataflow are the JDK interfaces for creating processes: java.lang.Runtime and java.lang.ProcessBuilder. The former was a simple dataflow analysis, with the second parameter of the various Runtime.exec() methods acting as the sink. Analyzing ProcessBuilder, on the other hand, is a slightly more complicated task since the sink for tainted data is the Map.put() and Map.putAll() methods returned from the ProcessBuilder.environment() method. Just specifying the methods on the Map interface would be too general, but we found a number of applications which passed the Map returned from environment() into other methods (which themselves make no reference to ProcessBuilder), so we also cannot just rely on the contextual presence of a ProcessBuilder.
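For reference, a minimal sketch (my own, with made-up class and parameter names) of the kind of code this checker is designed to flag:

import java.io.IOException;
import java.util.Map;

import javax.servlet.http.HttpServletRequest;

// Hypothetical example of the anti-pattern: an HTTP parameter flowing into
// the environment of a child process.
public class EnvInjectionExample {
    public void handle(HttpServletRequest request) throws IOException {
        ProcessBuilder pb = new ProcessBuilder("/usr/bin/some-tool");
        Map<String, String> env = pb.environment();
        env.put("USER_SUPPLIED", request.getParameter("name")); // tainted value reaches the sink
        pb.start();
    }
}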

To handle this we modeled ProcessBuilder.environment() as the source of a new taint type. To report a defect on Map.put() and Map.putAll(), we required both that the Map have a dataflow path through which this new taint type was propagated, and that the second parameter have a dataflow path in which it received an untrusted source of taint (such as a servlet request). That these dataflow paths are considered independently is an over-approximation that could lead to false positives. For example:

public void entryPoint(HttpServletRequest request) {
  doPut(request.getParameter("name"), new HashMap<String, String>());
  doPut("name", new ProcessBuilder().environment());
}

private void doPut(String name, Map<String, String> env) {
  env.put(name, "value"); // false positive reported here
}

This code would result in a false positive at the env.put(name) call, since it has the two requisite dataflow paths described above. Because we did not engineer a solution in which our dataflow engine understands that the two searches must overlap, we may see false positives such as the one above; for the sake of an experiment, however, this approach let us build the checker from our existing tools with only half a day of work.

With the checker in-hand, we ran an analysis on a test suite of 76 Java applications. Although the checker did find some defects, the only source of taint flowing into the new environment variables was from the JVM's own environment variables. In some cases — such as in Bash — environment variables should be considered an untrusted source, but this is not usually part of the threat model for long-running Java web applications. With this taint source ignored, we were pleased to discover that no additional defects were detected (in spite of the aforementioned over-approximation). Although our necessarily limited search is not going to be a representative sampling of all Java applications, it suggests that it is not a typical pattern for Java applications to pass user-controlled data to other processes through environment variables. So although developers should remain vigilant in their handling of user-controllable inputs in all contexts, environment variable injection is unlikely to be a high-frequency defect in Java web applications.

Have you encountered Java applications which put user-controllable data into environment variables? If you have seen examples or believe this to be common, let us know!

Secure Code: By Design? Serendipity? Or...?

Posted by Jon, Comments

While researching Struts 2 again to expand our current framework support, I became a bit more familiar with its tag library. This led to a better understanding of Struts 2's basic OGNL syntax and the creation of some test cases. During the test case development, I was amused at all the different ways one can obtain the value from a getter function. Here's a snippet for your amusement:

"%{tainted}": <s:property value="%{tainted}" escape="false" />
"getTainted()": <s:property value="getTainted()" escape="false" />
"%{getTainted()}": <s:property value="%{getTainted()}" escape="false" />
"#this.tainted": <s:property value="#this.tainted" escape="false" />
"%{#this.tainted}": <s:property value="%{#this.tainted}" escape="false" />
"top.tainted": <s:property value="top.tainted" escape="false" />

There are more ways, for sure, but getter synonyms aren't the purpose of this blog. Secure code by chance is.

Secure code isn't just code that happens to be devoid of vulnerabilities, just as insecure code isn't code that happens to have a couple of weaknesses. I've looked at code that had a couple of issues and still seemed relatively secure to me, just as I've looked at code without any obvious vulnerabilities that I wouldn't consider secure. There are properties of code, hopefully not too subjective, that convey security. To me, one of these properties is the ability to understand why something is or isn't secure. If you cannot understand whether something is secure, it isn't secure; it's just serendipity.


If you look at the last line in the JSP snippet above, notice the use of top as a value. This special value obtains the top value from the current ValueStack. In Struts 2 this often is a controller, which is sometimes a subclass of ActionSupport. Why's this important?

Struts 2 has been making the rounds again because of a couple of security advisories: S2-021 and S2-022. Good ol' ParametersInterceptor had a bad couple of days it seems, leading to remote code execution issues. Curiosity got the best of me and I started looking into the updated ParametersInterceptorTest test case and the updated / added ExcludedPatterns class. Looking at that list, you'll see that top isn't in it. Hmm...


So top can be a parameter name. Does the framework actually do anything special? ParametersInterceptor eventually calls CompoundRootAccessor.getProperty when top is the parameter name. This returns the root or top of the ValueStack, which again is usually the Action controller that's being accessed. Nice! So we have access to the controller via a parameter name. Let's call some methods!

Since we're calling methods with a value, in OGNL, we're setting a property. This is handled by OgnlRuntime.setProperty:

public static void setProperty(OgnlContext context, Object target, Object name, Object value)
        throws OgnlException
{
    PropertyAccessor accessor;

    if (target == null) {
        throw new OgnlException("target is null for setProperty(null, \"" + name + "\", " + value + ")");
    }

    if ((accessor = getPropertyAccessor(getTargetClass(target))) == null) {
        throw new OgnlException("No property accessor for " + getTargetClass(target).getName());
    }

    accessor.setProperty(context, target, name, value);
}

When using top, the target is the specific Action class, a custom class in this case. Since there's no specific PropertyAccessor registered for that class, the accessor field ends up set to an instance of ObjectAccessor. This calls its super, ObjectPropertyAccessor.setProperty, which calls ObjectPropertyAccessor.setPossibleProperty:

public Object setPossibleProperty(Map context, Object target, String name, Object value)
        throws OgnlException
{
    // snip
    if (!OgnlRuntime.setMethodValue(ognlContext, target, name, value, true)) {
        result = OgnlRuntime.setFieldValue(ognlContext, target, name, value) ? null : OgnlRuntime.NotFound;
    }

    if (result == OgnlRuntime.NotFound) {
        Method m = OgnlRuntime.getWriteMethod(target.getClass(), name);
        if (m != null) {
            result = m.invoke(target, new Object[] { value });
        }
    }
    // snip
}

In this method, three different ways are used to set or write a value to a property:

  • OgnlRuntime.setMethodValue
  • OgnlRuntime.setFieldValue
  • Invoking the method with the user-provided value.

If all of these fail, the caller method ObjectPropertyAccessor.setProperty throws an exception. And when devMode is on, something like the below is logged:

Unexpected Exception caught setting 'top.foo' on 'class com.coverity.testsuite.property.PropertyAction: Error setting expression 'top.foo' with value '[Ljava.lang.String;@1ef1a094'

So if an exception like the above occurs, we know we hit a Whammy. Otherwise, if we don't hit a Whammy we might be in luck. :) So let's request ?top.text=%25{1*2} and get some RCE! No Whammy, no Whammy, no Whammy, and.... stop!

RCE Attempt 1

Whammy :(

Unexpected Exception caught setting 'top.text' on 'class com.coverity.testsuite.property.PropertyAction: Error setting expression 'top.text' with value '[Ljava.lang.String;@16321271'

Well, what happened? In this case, the call to OgnlRuntime.getWriteMethod returns null in ObjectPropertyAccessor.setPossibleProperty. Hmm...

public static Method getWriteMethod(Class target, String name, int numParms)
{
    // snip
    if ((methods[i].getName().equalsIgnoreCase(name)
         || methods[i].getName().toLowerCase().equals(name.toLowerCase())
         || methods[i].getName().toLowerCase().equals("set" + name.toLowerCase()))
        && !methods[i].getName().startsWith("get")) {
    // snip
}

D'oh! Notice the last part of that conditional: there's a check preventing a property from being set through any method whose name starts with get. Boo!

OK, so any methods outside of get* can be called. Guess what, there's yet another OGNL sink on ActionSupport that doesn't start with 'get': hasKey(String)! Let's request ?top.hasKey=%25{1*2}... No Whammy, no Whammy, no Whammy, and... stop!

RCE Attempt 2

Whammy again, drat.

Unexpected Exception caught setting 'top.hasKey' on 'class com.coverity.testsuite.property.PropertyAction: Error setting expression 'top.hasKey' with value '[Ljava.lang.String;@36c63134'

Debugging shows that OgnlRuntime.getWriteMethod, when called from ObjectPropertyAccessor.setPossibleProperty, did return something this time. Awesome! So the method was invoked by reflection via m.invoke(target, new Object[] { value }) with tainted data, nice! Except, it wasn't....

Breaking on ObjectPropertyAccessor.setPossibleProperty and stepping through shows an IllegalArgumentException thrown with the message argument type mismatch. Hmm. Looking at the log output, you can see [Ljava.lang.String;, which is the mangled name for String[]. So it seems Struts 2 stores request parameters in a String array. That makes sense, since one could specify the same parameter name with different values. But the parameter signature for hasKey expects a String, not a String[]. Mismatched argument type. :( Well, shucks. What's one to do with a vector that doesn't do anything?!
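
Here's a minimal standalone sketch (not Struts code; the class and values are invented) of why the reflective call blows up when the declared parameter is a String but the supplied value is a String[]:

import java.lang.reflect.Method;

public class ArgumentMismatchDemo {

    // Stand-in for a method that expects a single String,
    // as ActionSupport.hasKey(String) does.
    public boolean hasKey(String key) {
        return key != null;
    }

    public static void main(String[] args) throws Exception {
        Method m = ArgumentMismatchDemo.class.getMethod("hasKey", String.class);

        // Struts 2 stores request parameters as String[], so the value handed
        // to m.invoke() is an array, not a String...
        Object value = new String[] { "%{1*2}" };

        // ...which makes reflection throw IllegalArgumentException:
        // "argument type mismatch".
        m.invoke(new ArgumentMismatchDemo(), new Object[] { value });
    }
}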


What did we learn?

  • The top value bypasses the regular expression exclusion list and passes the acceptable-names check in ParametersInterceptor.
  • top should return a reference to the instance of the Action associated to the URL.
  • It's common, although not required, for the exposed Actions to be subclasses of ActionSupport.
  • Methods that start with get* cannot be used as a parameter key. (ActionSupport.getText failed here.)
  • Methods that don't conform to the JavaBean getter / setter convention need to accept a String[] parameter. (ActionSupport.hasKey failed here.)

It's just dumb luck that RCE didn't happen. For example, if the value were massaged from a String[] to a String, as happens when the normal getters / setters are called (see XWorkBasicConverter.convertValue), then this could have been RCE. Is it obvious to anyone supporting this code that a custom public method on a class called addValues(String[] values) is accessible via ?top.addValues=value1&top.addValues=value2&... ?
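
As an illustration of that last point, a hypothetical Action like the following (invented for this post, not real application code) would expose addValues() to any request that names it as a parameter:

// Hypothetical Action subclass: because addValues() does not start with "get"
// and accepts a String[], a request such as
// ?top.addValues=value1&top.addValues=value2 could reach it by reflection.
public class ValuesAction extends com.opensymphony.xwork2.ActionSupport {

    public void addValues(String[] values) {
        // business logic the developer never intended to expose over HTTP
    }
}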

Final Thoughts

I tried to think what the Struts 2 developers could do, but I'm lost. I'd rather they remove the exclusion list and remove the functionality that causes the code to be evaluated in such a way. Exclusion or black lists are like door braces trying to keep out the invading hordes; eventually the hordes break through and raid the castle. Maybe they could ensure ObjectAccessor isn't called on a parameter name, which is tainted. However, I'm guessing there are a lot of things I dunno about Struts 2 that make this a horrible (and possibly insecure) design choice. Maybe in this case a black list is as good as it gets? If so, is the code still secure? Or is it just lucky?


'top.hasKey' vs. 'hasKey'

If you're wondering why specifying top.hasKey is different than hasKey, debug the call to OgnlRuntime.setProperty. Here's the snippet from above again:

public static void setProperty(OgnlContext context, Object target, Object name, Object value)
        throws OgnlException
{
    PropertyAccessor accessor;

    if (target == null) {
        throw new OgnlException("target is null for setProperty(null, \"" + name + "\", " + value + ")");
    }

    if ((accessor = getPropertyAccessor(getTargetClass(target))) == null) {
        throw new OgnlException("No property accessor for " + getTargetClass(target).getName());
    }

    accessor.setProperty(context, target, name, value);
}

When called, if the target is of type CompoundRoot, as it is for hasKey, accessor is set to an instance of CompoundRootAccessor. When calling top.hasKey, the target is the specific class, a custom class in this case, so accessor is set to an instance of ObjectAccessor. These two types perform different checks in their setProperty methods, and the top case has some potential holes.

Shenanigans with Numbers

Try out the following :)

  • ?0xdeadbeef['equals']=something
  • ?066['equals']=something
  • ?1L['equals']=something

You shouldn't notice any exceptions firing. The OGNL parser is parsing those as numbers and successfully calling the equals method on the respective boxed number class (e.g. Integer.equals).

Now try these:

  • ?12_['ignored']=whatever // Throws ognl.ParseException in ognl.OgnlParser.topLevelExpression()
  • ?123[066]=whatever // 066 converted to decimal 54, and Integer.54() is called, which results in a "54" property not found exception.

Handling web frameworks; a case of Spring MVC - Part 1

Posted by Romain, Comments

Coverity has been known for years for its static analysis technology for C/C++ applications. A couple of years ago, we started a new project focused on the security analysis of Java web applications. During development, one of the first issues we faced when analyzing open source applications was the prevalence and diversity of web frameworks: we did not find many security defects because we lacked an understanding of how untrusted data enters the application, as well as how these frameworks affect control flow. To change this, we started developing framework analyzers.

This blog post walks through examples and explains what the analysis needs to understand and extract in order to produce solid results. We focus on Spring MVC, one of the most common and complex Java web frameworks.

Our example Spring MVC application

To illustrate the framework analysis, I've created a small Spring MVC application that you can find on Github: blog-app-spring-mvc. It has features that most Spring applications use: auto-binding, model attributes, JSPs, and JSON responses. The application itself is very simple: it can add users to a persistent store, there is a simple interface to query it, and we also display the latest user.

To show the different features of the framework, I will use two kinds of defects: cross-site scripting (XSS) and path manipulation. The application can be run, and it's possible to exploit these issues; there are 2 simple XSS defects and 3 path manipulation defects, present mostly to trigger findings from the analysis.

Here's the layout of the application that you can build and run using Maven.

├── java
│   └── com
│       └── coverity
│           └── blog
│               ├── HomeController.java
│               ├── UsersController.java
│               ├── beans
│               │   └── User.java
│               └── service
│                   └── UsersService.java
└── webapp
    └── WEB-INF
        ├── spring
        │   ├── appServlet
        │   │   └── servlet-context.xml
        │   └── root-context.xml
        ├── views
        │   ├── error.jsp
        │   ├── home.jsp
        │   └── user
        │       └── list.jsp
        └── web.xml

The Java code lives under src/main/java following our package names, while the Spring configuration, JSP files, and web.xml are under the webapp directory. This is a very common structure.

Build and run

If you're not using Maven frequently, you'll need to get it here, go to the root of the project (where the pom.xml is), and run:

  mvn package

to create the WAR file.

You can also run the application directly with a Maven command (always good for proving framework behaviors):

  mvn jetty:run

Developer view: some simple code using Spring MVC

Spring MVC is one of the most common web frameworks. If you look at the abysmal documentation, you will see it has tons of features. Spring MVC implements the model-view-controller (MVC) pattern, where its definition of the MVC is essentially:

  • Model: Passing data around (typically from the controller to the view)
  • View: Presentation of the data (e.g., rendered with JSP, JSON, etc.)
  • Controller: What is responsible for getting data and calling business code

Here's our first controller example, HomeController.java:

@Controller
public class HomeController {
  // Being polite
  @RequestMapping(value="/hello", produces="text/plain")
  public @ResponseBody String sayHello(String name) {
    return "Hello " + name + "!"; // No XSS
  }

  // Display our view
  @RequestMapping("/index")
  public String index(User user, Model model) {
    model.addAttribute("current_user", user);
    return "home";
  }
}

In this case, the configuration is very basic. As usual, we need to tell Spring's DispatcherServlet to scan for @Controller classes and map the associated entry points (annotated with @RequestMapping) according to the configuration in servlet-context.xml (we also need to enable Spring in the container, so web.xml references it); a rough Java-config sketch of this wiring appears after the list below. In this class, we only have 2 entry points:

  1. HomeController.sayHello(...) which takes one parameter "name", and returns a String that contains the body of the response (what's being displayed by the browser).
  2. HomeController.index(...) which has 2 parameters and returns a String that points to the location of the view (a JSP in this case)
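
The sample project wires this up in XML (servlet-context.xml and web.xml). As a rough, hypothetical Java-config equivalent of what that XML expresses (controller scanning plus resolving view names like "home" to JSPs under WEB-INF/views), consider:

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.ComponentScan;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.servlet.config.annotation.EnableWebMvc;
import org.springframework.web.servlet.view.InternalResourceViewResolver;

// Hypothetical Java-config sketch; the actual application uses XML configuration.
@Configuration
@EnableWebMvc
@ComponentScan("com.coverity.blog")
public class WebConfig {

    // Resolves a logical view name such as "home" to /WEB-INF/views/home.jsp
    @Bean
    public InternalResourceViewResolver viewResolver() {
        InternalResourceViewResolver resolver = new InternalResourceViewResolver();
        resolver.setPrefix("/WEB-INF/views/");
        resolver.setSuffix(".jsp");
        return resolver;
    }
}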

Workflow for HomeController.sayHello(...)

Requesting /hello?name=Doctor produces the following output in the browser:

  Hello Doctor!

The browser also receives the content type as text/plain, so no markup will be rendered here.

Workflow for HomeController.index(...)

The second entry point uses a common feature of web frameworks: auto-binding. Spring MVC will instantiate a new User and pass it to the entry point, with its fields populated from the HTTP parameters; the Model is also supplied by Spring, but it's meant to be a map-like object used to pass data to the view for rendering.

Executing /index?name=TheDoctor&email=doc@future.com will call the HomeController.index(...) method with the first parameter being the bean User({name: TheDoctor, email: doc@future.com}). We later add it to the model so it will automatically be dispatched to the view and accessible from the JSP using EL.
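
For reference, a minimal version of the auto-bound User bean might look like this (a sketch; only the two fields used in the example request are shown):

// Sketch of the auto-bound bean: Spring MVC populates each field from the
// HTTP parameter of the same name through its public setter.
public class User {
    private String name;
    private String email;

    public String getName() { return name; }
    public void setName(String name) { this.name = name; }

    public String getEmail() { return email; }
    public void setEmail(String email) { this.email = email; }
}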

Our JSP is minimalist and contains the following code:

  <c:when test="${not empty current_user.name}">
    Hello ${cov:htmlEscape(current_user.name)}! <%-- No XSS here --%>
    The bean field `name` is reflected, such as <a href="/blog/index?name=TheDoctor">here</a>.

where the bean current_user, set in the Model by the entry point code, is filled with the data populated from the HTTP request. The JSP code displays the HTML-escaped name from the current_user bean if it's not null or empty; otherwise it displays the static contents in the body of the c:otherwise tag.

Analysis view: call graph roots, etc.

When running a vanilla analysis on this kind of code, not much happens. In fact, the type HomeController has no known instance and HomeController.sayHello(...) is never called anywhere in the program, so a typical analysis would mark this kind of method as dead code. The challenge of the framework analysis is to translate how Spring MVC is used in the application into a set of facts that our static analysis can act upon.

The kind of properties we need to extract belong to the following areas:

  • Control flow: How can this function be reached, and what happens when the function returns
  • Data flow: How is the data provided by the framework (automatically dispatched, etc.)
  • Taint analysis: What data is tainted and how (e.g., what parts are tainted or what fields)
  • Domain specific: Facts related to HTTP requests, responses, URLs, Servlet filters in place, etc.

To achieve this, we've created a new phase in the analysis that looks for framework footprints in the source code as well as in the bytecode. The framework analyzers also require access to the configuration files in order to properly simulate how the framework will operate at runtime.

This framework analysis phase extracts the following facts for the first entry point:

  1. HomeController.sayHello(...) is an entry point (or callback)
  2. The name parameter cannot be trusted, so it is tainted (with a particular type of taint)
  3. The entry point is reachable via any HTTP method and the URL is /hello
  4. The return value of this method is the body of the HTTP response (so a sink for cross-site scripting)
  5. The response has a content-type of text/plain (so the response is not prone to XSS)

In the case of the second entry point, here are the facts we extract:

  1. HomeController.index(...) is an entry point
  2. Only its first parameter user is tainted with a deep-write (i.e., all fields with a public setter are set as tainted)
  3. The entry point is reachable via any HTTP method and the URL is /index
  4. The return value "home" of this entry point is a location of a view
    1. Inspecting the Spring configuration servlet-context.xml, "home" resolves to WEB-INF/views/home.jsp
    2. Connect the return to the _jspService method of WEB-INF/views/home.jsp through a control-flow join
  5. The model is considered a map-wrapper that contains the bean current_user which is our tainted parameter

With these types of facts integrated into the analysis, it is then possible to properly identify the full execution paths and determine what's tainted or not based on the framework's own rules. We can conceptually create a "trace" that acts as a backbone for the analysis:

The "trace" has been annotated with facts directly coming directly from the framework analysis.

Final words

We've seen that the properties extracted by the framework analysis are important for the analysis tool to understand how Spring MVC instantiates the entry points and maps them to URLs. That's how we are able to understand when tainted data is entering the application, how it's entering it, and where it's going.

Without this analysis, we would have to make blunt assumptions and suffer either a high number of false negatives when we do not understand the framework configuration, or false positives if we are overly permissive and don't try to properly resolve what's tainted and where it can possibly go. It's actually a good way to test the capabilities of a static analysis tool: modify the framework configurations, insert EL beans that will always be null at runtime, etc.

However, the framework analysis is not limited to supporting the taint analysis; it also provides information about how URLs are reached and what constraints are attached to them, which is important for identifying CSRF issues, for example.

On Detecting Heartbleed with Static Analysis

Posted by Andy, Comments

Many of our customers have asked whether Coverity can detect Heartbleed. The answer is Not Yet - but we've put together a new analysis heuristic that works remarkably well and does detect it. (UPDATE: the Coverity platform now detects the Heartbleed defect) We wanted to tell our customers and readers about this heuristic and what it shows about the way we approach static analysis.

John Regehr blogged (1) last week about Coverity Scan, our free scanning service for the open source community. While there were interesting defects found in OpenSSL, Heartbleed was not among them. After adding the new heuristic designed to catch this and other similar defects, we shared our updated results with John and he was gracious enough to write a follow-up blog (2), which we think is fantastic.

Initially, we wanted to independently verify the results so we ran the latest production version of our analysis (7.0.3) against openssl-1.0.1. Coverity Scan uses a particular set of analysis options, and we wondered if different settings might cause the defect to appear. After a few experiments, we determined that analysis settings didn't make a difference for this particular defect.

So we dug into the code further to determine why. At its heart, Heartbleed is an out of bounds memory read based on tainted data being used as an argument to memcpy. The main difficulty in detecting it is in realizing the source data is tainted. Most of the descriptions of Heartbleed begin with this line:

unsigned char *p = &s->s3->rrec.data[0]

But for a static analysis, it is not obvious that the field data is tainted, and finding the evidence for this in the program can be difficult. One illustration of this is in the definition of the structure that contains data:

typedef struct ssl3_record_st
        {
/*r */  int type;               /* type of record */
/*rw*/  unsigned int length;    /* How many bytes available */
/*r */  unsigned int off;       /* read/write offset into 'buf' */
/*rw*/  unsigned char *data;    /* pointer to the record data */
/*rw*/  unsigned char *input;   /* where the decode bytes are */
/*r */  unsigned char *comp;    /* only used with decompression - malloc()ed */
/*r */  unsigned long epoch;    /* epoch number, needed by DTLS1 */
/*r */  unsigned char seq_num[8]; /* sequence number, needed by DTLS1 */
        } SSL3_RECORD;

The comments aid human comprehension, but static analysis doesn't benefit much from them. Instead, we attempt to trace the flow of tainted data from where it originates in a library call into the program's data structures. This can be difficult to do without introducing large numbers of false positives or causing analysis time to blow up. In this case, balancing these and other factors in the analysis design caused us to miss this defect.

Program analysis is hard and approximations and trade-offs are absolutely mandatory. We've found that the best results come from a combination of advanced algorithms and knowledge of idioms that occur in real-world code. What's particularly insightful is to analyze critical defects for clues that humans might pick up on but are hard to derive from first principles. These patterns form pieces of evidence that can then be generalized and tested empirically to make the analysis "smarter." Our experience is that this is the only way to build analyses that scale to large programs with low false positive rates, yet find critical defects. Many program analysis problems are undecidable in general, and in practice NP-complete problems and severe time/space/accuracy trade-offs crop up everywhere. Giving the analysis intuition and developer "street smarts" is key to providing high quality analysis results.

The Heartbleed bug is a perfect example of this. I sat down with one of our analysis engineers to examine whether there was any hope for finding this defect in a smarter way. It seemed bleak. The flow of tainted data into the data field was convoluted, and even manually we had a hard time understanding exactly how the code worked.

Then we noticed that the tainted data was being converted via n2s, a macro that performs byte swapping.

Byte swaps are relatively rare operations. They can occur in cryptographic and image processing code, but perhaps the most widespread use is to convert between network and host endianness (e.g. ntohs). We had a hunch that byte swaps constitute fairly strong evidence that the data is from the outside network and therefore tainted (this also applies to reading a potentially untrusted binary file format such as an image). In addition to byte swapping, we also look for the bytes being subsequently recombined into a larger integer type. We also require that the tainted value flows into a tainted sink, such as an array index or, as in this case, a length argument to a memory operation. These additional conditions help avoid false positives when byte swapping is being used in a situation which isn't tainted. For example, outgoing data that is byte swapped is unlikely to flow into a tainted sink.
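
Heartbleed itself is C, but the idiom the heuristic keys on is language-agnostic. A minimal Java sketch (invented for illustration, not OpenSSL code) of the pattern, two wire bytes recombined into a larger integer that then bounds a memory operation, looks like this:

public class ByteSwapIdiom {

    // Two big-endian wire bytes are recombined into a host integer
    // (the "n2s"-style byte swap), and that tainted value then flows
    // into a memory operation as a length.
    public static byte[] readPayload(byte[] packet, byte[] record) {
        int payloadLength = ((packet[0] & 0xff) << 8) | (packet[1] & 0xff);

        // In C, a memcpy bounded by an attacker-controlled length reads past
        // the record buffer (the Heartbleed over-read); Java throws instead.
        byte[] payload = new byte[payloadLength];
        System.arraycopy(record, 0, payload, 0, payloadLength);
        return payload;
    }
}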

With that, Heartbleed revealed itself.

This heuristic bypasses the complex control-flow and data-flow path that reaches this point in the program, and instead infers and tracks tainted data near the point where it is used. It generalizes to all programs that use byte swaps, so it is not overly specific to OpenSSL. Nor is it restricted to intraprocedural cases. We've added this heuristic to the derivers that compute function summaries, so any tainted data inferred is automatically propagated throughout the rest of the program. By collecting this and other similar idioms together, we can pick up large amounts of tainted data without any codebase-specific modeling.

Beyond Heartbleed, we found a handful of additional issues in OpenSSL with this heuristic which we are investigating. We believe (hope?) they are false positives. If they are, we will further tune the analysis to understand why. Even without such tuning, we have not seen an "explosion" of false positives.

The entire set of results, including the new heuristic, will be made available to a selected group of users on the OpenSSL project on Coverity Scan shortly.

We plan on performing additional experiments on a larger corpus of code including 60M+ lines of open source and some additional proprietary code to validate our assumptions and determine if there are other common idioms for use of byte swapping that do not imply taintedness. These steps are part of our standard process for vetting all analysis changes before releasing them to our customers.