On Testing

Yesterday, I refurbished an old joke to fit a current problem I had faced. This is 'edge case' testing: posting values to a system that really don't belong there. It came to mind because of a problem I had encountered in a system I was working on earlier. There is a procedure that accepts a FirstName and a LastName, and generates an Active Directory UserName from them. The original code read:
CREATE PROCEDURE [dbo].[GetNextADUserName]
-- Add the parameters for the stored procedure here
@FirstName as VARCHAR(50),
@LastName as  VARCHAR(50),
And the new code read:
CREATE PROCEDURE [dbo].[GetNextADUserName]
-- Add the parameters for the stored procedure here
@FirstName as VARCHAR(7),
@LastName as  VARCHAR(7),
The engineer who altered the code was just trying to make it better. It was written very loosely for a stored procedure, and he simply tightened it up. That's not a problem in itself, but the front-end designers didn't know, and most importantly, all of our test names were under 7 characters. We would never have found this in testing.
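Edge cases like this fall out naturally from boundary-value test data. A quick sketch (JavaScript here, but any test harness works) of generating names just around a suspected limit; the 7 mirrors the VARCHAR(7) parameter above:

```javascript
// Sketch: generate test names just under, at, and just over a suspected
// length limit. The limit of 7 mirrors the VARCHAR(7) parameter above.
function boundaryNames(limit) {
  return [limit - 1, limit, limit + 1].map(n => "A".repeat(n));
}

const names = boundaryNames(7);
console.log(names); // [ 'AAAAAA', 'AAAAAAA', 'AAAAAAAA' ]
```

Had the test data included the 8-character name, the silent truncation would have surfaced immediately instead of in production.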
As it turns out, there are a lot of people who are all about this. The replies to my tweet over the last 24 hours covered a lot of ground, but by far the most numerous were those that wanted to push edge testing to the max - and I love it.
Oh, and then, of course:

Surprising how many people like a little violence with their testing:

Some QA engineers piped in with thoughts on structured testing:

A lot of security folks follow me, so a lot of application pentesting tweets showed up as well. If you are a developer and don't recognize these, shoot me an email:

And some of the business people had some not-so-nice things to say about the QA process, most of which I agree with:

But what, of course, really happens?

And that's kind of the crux of it. Making your testing as real-world as possible is an important part of QA. Don't let anyone tell you otherwise. Be it unit testing, integration, QA or pentesting, assuring that all tests push the edges of what happens in the real world will make your software better. And your Twitter timeline!

How the AWS CloudHSM Eases the Pain of Security Audits

Amazon offers a large selection of security products that help with compliance, privacy and data protection. IAM, intra-VM encryption and a swath of other products help make your users and your auditors breathe easier. There is still the problem of key storage. CloudHSM brings a reliable solution to that problem.

What exactly is CloudHSM?

CloudHSM is a dedicated hardware security appliance in the Amazon cloud that provides secure key storage and cryptographic operations to a single, specific user.

A hardware appliance

Most of Amazon Web Services is based on virtualization. Virtualization allows for a software-only instance of something – like a server, router, or switch – to be created within a larger computing infrastructure. CloudHSM is not virtualized – it is a standalone piece of hardware that only you have access to.

Specifically, CloudHSM is a Luna SA HSM appliance from Safenet. The Luna SA is Federal Information Processing Standard (FIPS) 140-2 and Common Criteria EAL4+ standard compliant.

Storage in the Amazon cloud for your encryption keys

CloudHSM provides a cryptographic partition for the storage of keys related to your AWS infrastructure. For instance, if a particular application requires a key to access a database stored in S3, it can retrieve that key from the hardware appliance.

How does it help with compliance?

Various regulatory agencies have very strict requirements when it comes to encryption.

Separation of concerns

With most AWS systems, Amazon has credentials to the underlying server that could allow an administrator access to the data. Not so with the CloudHSM. Amazon has administrative credentials that would allow them to repurpose the device, but those credentials cannot be used to retrieve the keys on the device. That privilege is only for the client user.


PCI DSS

Simply put, PCI has remarkably strict key management standards. CloudHSM is on the list of AWS services validated for the 2013 PCI DSS compliance package. Specifically, just using CloudHSM in your key storage program will meet the requirements of PCI DSS 3.5 and 3.6.


HIPAA

In order to meet the HIPAA requirements for storage of personal medical data, data at rest must be encrypted. This previously required a local storage component for personally identifiable information, significantly slowing any cloud initiative. Adding CloudHSM to the mix allows data at rest within the Amazon cloud to be safely encrypted and still meet the key storage requirements of HIPAA.

What do I need to know?

There are always a few caveats to any new technology and CloudHSM is no different.

You need to have a VPC

CloudHSM doesn't work on the open cloud. You'll need to be using a Virtual Private Cloud to make it all come together. Fortunately, a VPC is very easy to set up, and you might already be using one. It is part of the package for a number of AWS suite systems.

Can I use CloudHSM with my custom applications?

You bet! Many of the AWS applications have the capability to use keys from CloudHSM. EBS volume encryption and S3 object encryption are two that have the most obvious benefit for custom applications.

CloudHSM helps with security compliance

A reliable hardware appliance, well implemented, will help with your security compliance. Getting CloudHSM configured and integrated requires some effort, but the end result is as secure as your own data center.

We want YOU at the CodeMash Security Track

Earlier this year I was asked by the incomparable Rob Gillen to manage the Security track at CodeMash. That's a pretty big deal, seeing as how developer outreach is so important to the security community.

CodeMash is as well known for its awesome content as it is its awesome community. The vibe there is just beyond compare, and it's because so many awesome people get together in one place and just relax.

If you, gentle reader, have application security research that you would like to present, I would ask you to put in a submission on the CodeMash Call For Speakers. I'd like to build a value-filled track. This is a great chance to get in front of 2000 developers from all over the Midwest, and talk the good talk.

Cracking and Fixing REST APIs

REpresentational State Transfer, or REST, is more of a force on the web than most think. It is essentially a Web Service implementation of the HTTP protocol that runs the entire World Wide Web. I'm not here to talk about REST, though; others have done that better than I can.

I'm here to talk about breaking REST.

When pentesting, I see the same pattern over and over. Organizations that went with a Service Oriented Architecture in the 2005 time frame had all of their business logic available as services in 2010 when the mobile boom hit. To make sure the iPhone app had the same functionality as the web app, they pushed the services through the DMZ without sufficient testing.

In this post, I’ll cover some of the common vulnerabilities that I find in REST APIs, and how to fix them. There are three main messages I want to get across: REST can be attacked like the rest of the web, REST can be attacked in special ways, and REST has special architectural considerations.

REST Can be attacked like the rest of the web

A REST API isn’t much different from a website. You start with a URL:


And then you get some markup back. Unlike normal web sites, however, we get JSON back rather than HTML.

{
    "data": {
        "translations": [
            {
                "translatedText": "Hallo Welt"
            }
        ]
    }
}
This means that we can use all of the attack vectors that one would use on a normal website. They might look a little different, but they end with essentially the same result.


SQL Injection and Cross Site Scripting (browser injection) are possible because the parameters of a REST API call are what we would usually think of as directories in a normal web request. As long as we remember to check those path segments, the usual tests apply.

Parameters themselves can be tested too, if they are used by the API. You never know how they might be used.



How can this be fixed? On the SQL side, parameterized queries, as usual. XSS is a little tougher, but really it is just a matter of using the same techniques that one would use for a usual website.
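A sketch of the difference between concatenating and parameterizing (the `pool.query` call shape assumes a node-postgres-style client):

```javascript
// Naive concatenation: the attacker's input becomes part of the SQL text.
function naiveQuery(name) {
  return "SELECT * FROM users WHERE name = '" + name + "'";
}

console.log(naiveQuery("x' OR '1'='1"));
// SELECT * FROM users WHERE name = 'x' OR '1'='1'  -- the WHERE is now always true

// With a parameterized API the value is sent separately and never joins
// the SQL text (call shape from a node-postgres-style client):
// pool.query("SELECT * FROM users WHERE name = $1", [name]);
```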

Information Disclosure

REST responses use the same header format as regular web browser responses. Leaving unneeded information in those headers leads to information leakage:

HTTP/1.1 200 OK
Date: Thu, 07 Aug 2014 17:09:34 GMT
Server: Apache/2.2.17 (Unix) mod_ssl/2.2.17 OpenSSL/0.9.7l DAV/2 PHP/5.2.15
Last-Modified: Mon, 07 May 2012 17:58:32 GMT
ETag: "1a273c-37e-4bf7604a7e200"
Accept-Ranges: bytes
Content-Length: 894
MS-Author-Via: DAV
Keep-Alive: timeout=15, max=500
Connection: Keep-Alive

The same is true of error messages. REST services should use HTTP response codes, and avoid pushing default web server errors to the clients, like this:



Authentication

Authentication in REST is a bit of a pain. When there is No Human Involved, you can't just use a username and password in an SSL-protected POST. There has to be something at play that is automated.

One thing you don’t want to do is put the secret key right in the URL. Even under SSL this is a bad idea.




What you do want to do is look into HMAC.








HMAC, or Hashed Message Authentication Code, is a process where the client and server both know a public key and a secret key. The client creates a request, concatenates it with the secret key, and hashes it. Then the request (without the secret key, of course) is sent, along with the hash. The server will then accept the request, use the public key to look up that secret key, concatenate the request and key, and hash. If the hash the server computes matches the hash the client sent, the request is genuine.

Session Management

Session Management is hard in web development, but the server just has to know a little about you to give you a smooth browsing experience. This isn’t true for REST API calls. There is simply no reason to keep a session alive.







Just authenticate every time. It will save you so many headaches.

REST Can Be Attacked In Special Ways

Bad enough that all of the usual tricks work with REST, but there are special attacks and weaknesses.

Management of Secrets

The API key that effectively acts as a password needs to be protected, and this is very hard when you are running a JavaScript-only application. For instance, in a Windows 8 app or an AngularJS site, you might just end up leaving your keys hanging out in the open:

twitterTimeout = 20000;
var twitterClientSecret = "kXFKUW9t2spHa3zgJtYX77aaRKfT1swvF9yfFC2tX34";
var twitterConsumerKey = "3NgwT8Xc0BcHJtH60h4cvw";
var twitterAppsUrl = "https://twitter.com/settings/applications";
(function () {
    "use strict";
    var roamingSettings = Windows.Storage.ApplicationData.current.roamingSettings;

    WinJS.Namespace.define("Twitter", {
        Service: WinJS.Class.define(function Twitter_ctor() {

So what are we going to do? We need to exchange the secret key for a single session token on the server, then write that token to the JavaScript. Facebook does this very well. When the client app requests the login page, the server generates a unique token based on information sent in the request. The information used is always something the server knows, something the client knows, and something both know. So, for example, the server can generate a unique key based on user agent + current time + secret key. The server generates a hash based on this information and then stores a cookie containing only the hash on the client machine.



Cross Site Request Forgery

CSRF is a big topic that is handled very well by OWASP. I'll let them explain it if you aren't familiar, but you need to know that if your REST API depends on the site's session cookies, it is completely and totally susceptible to CSRF. Just don't use state on your REST APIs.

Unused HTTP Verbs

REST uses the HTTP verbs, usually GET, PUT, POST and DELETE, as the action words in your API's domain language. Trouble is, there are a lot of HTTP verbs, and we probably aren't going to use them all. That said, if you had to open up PUT and DELETE on your server, you might have opened up others (like, all of them).

Once others are opened, if they aren't specifically in your authorization configuration, then an attacker can use a verb like HEAD to bypass authentication and perhaps get a token:

telnet www.example.com 80
HEAD /admin/pageIwannaSee.aspx HTTP/1.1

So what do we do? Turn off the unused verbs, or include them in your authorization configuration.
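The core of the fix is a default-deny check on the method, sketched here (the allowed verb list is an assumption; in practice this lives in your server or framework configuration):

```javascript
// Default-deny on HTTP verbs: anything not explicitly allowed gets a
// 405 Method Not Allowed instead of falling through to server defaults.
const allowedVerbs = new Set(["GET", "POST"]);

function statusFor(method) {
  return allowedVerbs.has(method.toUpperCase()) ? 200 : 405;
}

console.log(statusFor("GET"));  // 200
console.log(statusFor("HEAD")); // 405
```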

Direct Object Reference

Direct object reference is not specific to REST but it is particularly relevant to REST. For instance, take a look at a call to the Facebook API at https://graph.facebook.com/v1.0/1138975845:

{
  "id": "1138975845", 
  "first_name": "Mary", 
  "gender": "female", 
  "last_name": "Loaiza", 
  "link": "https://www.facebook.com/mary.loaiza.921", 
  "locale": "es_LA", 
  "name": "Mary Loaiza", 
  "updated_time": "2014-06-08T01:32:22+0000", 
  "username": "mary.loaiza.921"
}

And then check out the next integer, https://graph.facebook.com/v1.0/1138975846:

{
  "id": "1138975846", 
  "first_name": "Chase", 
  "gender": "male", 
  "last_name": "Krywaruchka", 
  "link": "https://www.facebook.com/chase.krywaruchka", 
  "locale": "en_US", 
  "name": "Chase Krywaruchka", 
  "updated_time": "2013-09-17T03:17:00+0000", 
  "username": "chase.krywaruchka"
}

With parameters in the URL at this level, you need to be especially careful to use unique identifiers, like GUIDs perhaps, for your object references. Otherwise you run the risk of someone downloading your entire user list – not to give you any ideas.

Mass Assignment Vulnerability

The Mass Assignment Vulnerability is a special flaw in ActiveRecord, a database access pattern commonly used in REST APIs. For instance, take this Person object, right from Uncle Bob:






It is possible, in many languages, to instantiate a new Person in such a way that it just sucks in all of the correctly named form fields. However, a malicious user can change the client side JavaScript to set other fields that might not be on the form:

params[:person] = { isFlaggedForAudit: true}

This is because ActiveRecord will autogenerate the underlying class on the API side, and it won't distinguish between fields that the user should be able to set and those the user shouldn't have write access to. For publicly accessible classes, it's recommended that you explicitly exclude sensitive fields, as shown.

[Bind(Exclude = "IsFlaggedForAudit")]
public class User
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public int NumberOfDependents { get; set; }
    public bool IsFlaggedForAudit { get; set; }
}
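The same idea works on any platform: bind only a whitelist of fields and drop the rest. A generic sketch (the field names are illustrative):

```javascript
// Whitelist binding: copy only the fields a user is allowed to set.
function bind(input, allowedFields) {
  const out = {};
  for (const key of allowedFields) {
    if (key in input) out[key] = input[key];
  }
  return out;
}

// The attacker added isFlaggedForAudit to the posted form data...
const payload = { firstName: "Mary", isFlaggedForAudit: true };
console.log(bind(payload, ["firstName", "lastName"]));
// { firstName: 'Mary' }  -- the sensitive field never reaches the model
```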

REST has special Architectural Considerations

Aside from the general good practices and specific code protection, there are a few overriding considerations that should go into planning an API.

Carefully Consider Your Authentication

Consider the audience for your API and then plan for authentication early. If you have a small number of discrete users, consider using digital certificates for authentication. They are a pain to set up, but it doesn’t get more secure. If you have a public audience, then look into HMAC. It is supported on most platforms.

Treat your API Keys like PKI

If you go with HMAC, make sure your user base understands to treat their secret key like a private key in a PKI environment. It is literally the key to the kingdom. I can't even begin to tell you how often I have found the secret key in the URL (not secret) or in the comments of a JS file (also not secret).

Treat Your URL Like A Method

In REST, the URL is your method signature. Plan them that way. Name them intelligently, and make sure they aren’t leaking information that shouldn’t be in there.








Burp even has a special tool for fuzzing REST style parameters. Just because we don’t have the parameter names doesn’t mean they can’t be fuzzed.

Treat Your API Like A Web Site

Finally, don’t assume that because your web site was tested, that you can just go and expose previously internal services as external REST APIs. The API needs to be tested and reviewed separately. Treat it like a site of its own.

Building a tool for REST testing

I am working on a BURP plugin for checking for several of these. You can find the project on Google Code. If you’d like to be in on the project, please let me know and you can take a piece and work on it.

How I make pickles

We’re going to step away from web application security and identity access management for a little while and talk about pickles. I make my own pickles, and enough people have asked me about it that I thought I should write it down.

In short form, I use pickling cucumbers, press them in a pickle press for 36 hours, then can them in widemouth quart Ball jars with dill, salt, vinegar, garlic, and peppercorns.

The pickle press

The pickle press I use is this little one from Amazon. It only holds about 12 cucumbers at once.

I cut the pickles in half first to help them lose water.







Then put three tablespoons of salt on them and shake them around. Put the press on and stir it and tighten the press every 6 hours or so.







When they are done, take the top off.







Get your stuff ready. You’ll need 2 jars with lids, 8 garlic cloves, 3 tablespoons of pepper, 6 sprigs of dill, and 4 tablespoons of salt.







Get the liquid ready

Boil some water. I use a hot pot and boil 7 cups – that’s usually enough. Then arrange the dill, garlic and pepper in the jars. By arrange, I mean dump them in there. Then start stuffing the cukes from the press in the jar.







Once you have them all in, put the jars in the sink and fill them all the way to the top with the boiling water. Pour some on the lids too to sterilize them.

Once the water is in, pour half of it out. I know, I know. Add the salt, divided evenly between the jars. Replace the lost liquid with the vinegar of your choice. I use basic white vinegar, but rice or apple works well too. Then put the lids on and shake.







Put them immediately in the fridge.

This is not a real canning procedure. They can’t go in the pantry. If you leave them out, they will probably spoil. Fridge them, and eat them. They are better fairly fresh. Give them maybe 5 days to soak up spices and whatnot. And then enjoy!!

The OWASP .NET project

I'm helping OWASP with the awesome but recently neglected .NET project. There is a lot of great .NET security stuff out there <cough> troyhunt </cough> and I am helping them organize and broaden it.

There is a roadmap started and I would like the community's feedback. There is a lot of work to do and we are going to need a lot of help doing it. 

Feel free to email me, use the contact form, contact OWASP, sign up for the .NET Project email list, tweet me, or do what ever makes the most sense. We need your input.

Why you do vulnerability assessments on internal sites

As both a software architect and a vulnerability assessor, I am often asked why bother to test applications that are inside the firewall. 

It's a pretty valid question, and one that I asked a lot when working in the enterprise space. To the casual observer, network access seems to be an insurmountable hurdle to getting to an application. For years, I argued against even using a login on internal sites, to improve usability. That perspective changed once I started learning about security in the 90s, but I still didn't give applications that I knew would be internal to the firewall due rigor until I started testing around 2002.

This all comes down to the basic security concept of Security In Depth. Yes, I know it is a buzzword (buzzphrase?) but the concept is sound - layers of security will help cover you when a mistake is made. Fact is, there are a fair number of reasons to make sure internal apps meet the same rigor as external apps. I have listed a few below. If you can think of any more, list them in the comments below.

The network is not a barrier

Protecting the network is hard. Just like application vulnerabilities are hard to glean out, network vulnerabilities are hard to keep up with. Unlike application vulnerability management, handling network vulnerabilities is less about ticket management and more about vendor management.

A lot of attacks on companies are through the network. Aside from flaws in devices and software, we have social attacks too.


Fact is, the network layer isn't a guarantee against access. It is very good, but not perfect. If there is a breach, then the attackers will take advantage of whatever they find. Now think about that: once I have an IP address, I am going to look for a server to take over. Just like if I am on the Internet: finding a server to own is the goal. Once I am inside your network, the goal stays the same.

People who shouldn't have access often do

You probably heard about the Target breach. If not, read up. The whole thing was caused by a vendor with existing VPN access getting breached, and then that VPN access being used to own the Point Of Sale systems. Here's a question for you:

How did an HVAC vendor have access to the POS systems?

It's possible to give very specific access to users. It's just hard. Not technically hard, just demanding. Every time something in the network changes, you have to change the model. Because there are a limited number of hours in the day, we let things go. After we have let a certain number of things go, the authentication system becomes a little more like a free for all.

Most vendors have a simple authentication model - you are in or you are out. Once you have passed the requirements for being 'in' you have VPN access and you are inside the firewall. After that, if you want to see what your ex-girlfriend's boyfriend is up to, then it is up to you. The network isn't going to stop you.

You can't trust people inside the network

In the same vein, even employees can't totally be trusted. This gets into the social and psychological sides of this business where I have no business playing, but there is no question that the people that work for you have a vested interest in the data that is stored. Be it HR data or product information, there are a number of factors that could persuade your established users to have a let us say 'gathering interest.' I know it is hard to hear - it is hard for me to write. Fact is, the people that work for you need to be treated with some caution. Not like the enemy, mind you, but certainly with reasonable caution. 

Applications are often moved into the DMZ

From the developer's perspective, frankly this is the biggest issue. Applications, particularly web applications, are often exposed after time. A partner needs it, the customers need it, some vendor needs it, we have been bought, we bought someone, whatever. Setting up federated identity usually doesn't move at the speed of business, and middle managers will just say 'put it in the DMZ.'

This happens a LOT with web services. Around 2004 everyone rewrote their middle tier to be SOAP in order to handle the requests of the front end devs, who were trying to keep up with the times. Around 2011, when the services were old and worn and everyone was used to them serving the web tier under the covers, the mobile boom hit.

Then you needed The App. You know that meeting: the CIO had played with her niece's iPhone over Memorial Day and prodded the CEO, and he decided The App must be done. But the logic for the app was in the services, and the CIO said 'that's why we made services! Just make them available to the app!'

But. Were they tested? Really? Same rigor as your public web? I bet not. Take a second look.

Just test everything 

Moral of the story is: just test everything. Any application is a new attack surface, with risk associated. If you are a dev, or in QA, or certainly in security, just assume that every application needs to be tested. It's the best overall strategy.

Encoding. It's not just a good idea ... well, yeah, it's just a good idea.

Having worked primarily in ASP.NET for the last 13 years, I didn't think much about Cross Site Scripting. The AntiXSS tools in ASP.NET are best of breed. Even without any input encoding, it's really, really tough to get XSS vectors into an ASP.NET site - especially Web Forms.

ASP.NET isn't the only platform out there, as it turns out. In fact, as far as the open web goes, it isn't even close to the most popular. Ruby on Rails, PHP, and JSP still show up everywhere, and they are not, by default, protected from XSS. What's more, misconfigured ASP.NET sites are more, not less, common.

With the power of today's browsers, XSS is more of a threat than ever. It used to be virtual spray paint; a method for defacing a site. Now it can be used to steal credentials, alter the functionality of a site, or even take over parts of the client computer. It's a big deal.

You can make it all go away by simply encoding your inputs and outputs. There are some simple rules to help make this happen.

First, never put untrusted data in a script, inside an HTML comment, in an attribute name, in a tag name, or in styles. There is no effective way to protect those parts of a page, so don't even start.

OK, now that we have that covered, sometimes you DO need to put untrusted data in an HTML document. If you are putting data into an HTML element, such as inside a div, use the HTML encoding that is built into your platform. In ASP.NET it is Server.HtmlEncode. Just do it. Build it into your web controls, whatever. Assume that the data coming in and going out is bad, and encode it.
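As an illustration of what such an encoder does, here is a hand-rolled sketch; in practice, use your platform's built-in, like Server.HtmlEncode:

```javascript
// Minimal HTML-entity encoder for element content. Ampersand must be
// replaced first so the later entities aren't double-encoded.
function htmlEncode(s) {
  return s.replace(/&/g, "&amp;")
          .replace(/</g, "&lt;")
          .replace(/>/g, "&gt;")
          .replace(/"/g, "&quot;")
          .replace(/'/g, "&#x27;");
}

console.log(htmlEncode('<script>alert("xss")</script>'));
// &lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt;
```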

That's for HTML content. How about the attributes? Width or color or whatnot? Attribute encoding. There is a good reference implementation in ESAPI.

If you need to put content into a trusted JavaScript element, say for instance the text of an Alert, then JavaScript encoding is what you want. Most platforms have unicode escaping - that's your best bet. In Ruby, just use .encode() to handle it. But handle it.

In general, the idea is this: different parts of HTML pages require a different encoding style. The canonical reference for this is the OWASP XSS Prevention Cheat Sheet. When you are looking at your user control library for that next project, or a current one, or an old one, whatever: take a look. Does it encode inputs? Does it encode them correctly?

Applied Application Security - followup

I had a number of requests for my slides at CodeMash for the Applied Application Security class.

Those slides are from the deck that I use to give my paid training, so I don't toss them around lightly. They have more detail in them than we had time to get into at the con.

Instead, I thought I would edit up the whitepaper that I provide, so that it covers the materials that we went over. You can find it attached to this post.

Codemash 2014 Applied Application Security Whitepaper.pdf (657.56 kb)

The exercises were all completed using the OWASP Web Goat on the SamuraiWTF virtual machine.

If you have any questions, feel free to contact me using the contact form above.


The importance of securing file upload

On a recent vulnerability assessment, I discovered an unsecured file upload and was challenged by the client.

"Was the file uploaded to the webroot, where it was executable?"

"Well, no." I said. "I don't know where it was uploaded to, but there weren't the necessary protections in place, I can assure you that."

"What can happen if it isn't executable by a web user, then?"

Oh, let me count the ways.

What we are talking about

Allowing file upload from a web page is a common activity. Everything from web email clients to interoffice management systems allows for file upload. Profile pictures are uploaded from client machines. Data is uploaded to support websites. Anytime you click a Browse button and look at your local hard drive from a web browser, you are uploading a file.

Keep in mind, file upload by itself isn’t a security risk. The web developer is taking advantage of built-in browser functionality, and the site itself doesn’t have any insight into your PC’s internals. File upload is governed by RFC 1867 and has been around for a long time. The communication is one way.

The code for a file upload just has to invoke an enctype of multipart/form-data, and that gives us an input type of ‘file’ as seen in this example snippet:

<form action="http://localhost/handlefile.pl"
      enctype="multipart/form-data" method="post">
  Please select a file:<br>
  <input type="file" name="datafile" size="40">
  <input type="submit" value="Send">
</form>

If the upload isn’t intrinsically bad, then what is? Well, what you upload can be bad – aiming to exploit something later, when the file is accessed.

The classic case – upload to the web root

The case that my clients were interested in is the classic problem. Back in the day, we used to let users upload files into the web directory. So, for instance, you might be able to save an image directly to the /img/ directory of the web server.

How is that a problem? If I upload a backdoor shell, I can call it with http://yourserver/img/mywebshell.php and you are pwned. Anytime I can write and execute my own code on your server, it is a bad thing. I think we can all agree on that. As a community of people that didn't want our servers hacked, we stopped allowing upload of files to the web root, and the problem (largely) went away.

But the problem of unrestricted file upload didn’t go away. There are still a number of things that pentesters and bad guys both can do to get your server under their control. Here we’ll look at a handful of them.

Uploading something evil for a user to find later

The first, most common and probably most dangerous act is most likely the uploading of malware. Often, business and social sites both allow for uploading of files for other user’s use. If there is insufficient protection on the file upload, an attacker just uploads evilmalware.docx.exe and waits until the target opens the file. When the file is downloaded, the .exe extension isn’t visible to the user (if on Windows) and bang, we got them.

This attack isn't much different from phishing by sending email messages. Since the site is trusted by the user, however, the malware executable might have a little more chance of getting executed.

Mitigation is fairly straightforward. First, in a Windows environment, check extensions. If you are expecting a docx file, rename the file with the extension. Whitelist the extensions you expect. Second, run a pattern analyzing virus scanner on your server. There are a number of products that are designed to be run on web servers for just this case.
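A sketch of the extension whitelist check (the allowed set is an assumption; use whatever your application actually expects):

```javascript
// Whitelist upload extensions, comparing against the LAST extension so a
// double-extension trick like "evilmalware.docx.exe" is caught.
const ALLOWED_EXTENSIONS = new Set(["docx", "pdf", "png"]);

function isAllowedUpload(filename) {
  const ext = filename.toLowerCase().split(".").pop();
  return ALLOWED_EXTENSIONS.has(ext);
}

console.log(isAllowedUpload("report.docx"));          // true
console.log(isAllowedUpload("evilmalware.docx.exe")); // false
```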

Taking advantage of parsers

A far more subtle attack is to directly affect the server by attacking file handlers on the host machine itself. Adobe PDF, Word and images are all common targets of these kinds of attacks.

What happens is that some flaw in the underlying system – say, a PDF parser that reads a PDF form and uses the information to correctly forward the document – is exploited by a malware author. Then documents can be created that exploit this flaw and uploaded somewhere the attacker knows they will be opened.

Let's talk about just one of these, as an example. MS13-096 is a Microsoft security advisory that warns us that TIFF images could cause a buffer overflow in Windows. This overflow, then, could be used to execute arbitrary code in the user's context. Remember what we said before: anytime I can write and execute my code on your machine, bad things will happen.

All of that said, doesn't it take a lot of technical expertise to write an exploit for that? I mean, you would need to make an image that exploits the flaw, safely put it into a Word document, then write more code to be injected into the overflow, and then have something that takes advantage of whatever we ran.

Well, the answer to that is yes, it is hard to write. So enter Metasploit. This penetration testing tool has a framework for the enablement, encoding, delivery and combination of exploits. Specifically, this one vulnerability can be baked in with already existing tools to deliver, run and get a remote shell using this exploit.

In fact, one already has.

Stealing hashes with UNC pathed templates

Not everything that Metasploit does revolves around a flaw in a software system. Sometimes you can use Metasploit to take advantage of the way software is SUPPOSED to work. For example, did you know it was possible to build a word document that uses a template that is on a UNC path? Neither did I!

Of course, in order to use that feature, Word will have to send its NTLM session hashes to the share. That's only a problem if there is a listener collecting them on the other end of the wire. That's exactly what Metasploit will do for us.

For this particular example, I use the word_unc_injector Metasploit module. This module allows me to create a Word document that calls to a template at an address of my choosing; I usually use an AWS instance for the target. This example is done in my local lab.

Once the document is made, I try and upload it into the site, as shown in Figure 1. Since it is expecting a Word document, and it got a Word document, we are in good shape.

Uploading the sample file

Now we wait. When the user opens the file, Word will call out to the template as seen in Figure 2.

The Word opening dialog calling the network share

If it doesn't find it, it just gives up after a while. Meanwhile, though, the NTLM session hashes are being stored by my 'template' server, as in Figure 3.

Woah Hashes

So what do we do about this and other document parser problems? Honestly, there isn’t much you can do. General security principles are best here:

  • Whitelist so you only are allowing the document types you want uploaded.
  • Don’t process files on the server.
  • Run virus scanners on all workstations, and the web server.

DoS with massive files

Sometimes installing or stealing things isn’t the best path at all. Sometimes, I just want to make your server freak out and crash, so I can see the error messages. I keep a Digital Download of Disney’s Brave on my testing box just for that purpose – 3.4 gigabyte file to upload a few times, just to see how your box handles it. Usually it isn’t good.
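The mitigation here is mundane: enforce a size cap before the server ever buffers the body. A sketch (the 10 MB limit is an arbitrary assumption; most web servers expose this as configuration, like maxRequestLength in ASP.NET):

```javascript
// Reject oversize uploads up front, using the Content-Length header,
// instead of letting a multi-gigabyte body exhaust the server.
const MAX_UPLOAD_BYTES = 10 * 1024 * 1024; // 10 MB cap (an assumption)

function acceptUpload(contentLength) {
  const size = Number(contentLength);
  return Number.isFinite(size) && size > 0 && size <= MAX_UPLOAD_BYTES;
}

console.log(acceptUpload(894));   // true
console.log(acceptUpload(3.4e9)); // false: the Brave download bounces
```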

Only the beginning

These core categories of attacks are the most common, but they are 1) not everything and 2) much broader than presented here. There are several dozen parser exploits in Metasploit, and those are just the ones that are deemed worth the effort. It is much safer to just follow slightly more-strict-than-usual security policy when it comes to file upload and save yourself the trouble later.

Bill Sempf

Husband. Father. Pentester. Secure software composer. Brewer. Lockpicker. Ninja. Insurrectionist. Lumberjack. All words that have been used to describe me recently. I help people write more secure software.


