The Pen Tester's Framework: nikto

I have a confession: nikto is one of my favorite tools. It is a web server scanner that checks for nearly 7,000 known vulnerabilities in server software, and also for CGI applications with known flaws. Even if the developer of the application I am testing has no idea that the vulnerable software is on the server, an attacker can use it to get a certain level of access on the server, and use that to get into the application.

We are talking about things like checking for known default credentials, looking for unprotected CGI files, fishing for hidden content, deep header enumeration, and even provoking exception error messages. It's a neat package, and it is customizable to a degree few tools match: the underlying tests are all stored in a database, so no code changes are needed to add new checks.

One thing you should know, though: the version in the PTF is not always the newest. nikto is constantly updated, so update it through the PTF if you haven't recently. After that, run sudo nikto -update to get the latest versions of the attacks in the database.

And as it turns out, Nikto is stupid easy to use. If you have a new host or a new server, you can do a quick check with 

nikto -host <hostname> -output domain.txt

That will find a load of stuff on anything other than Azure or EC2. Seriously, if you have an application that someone decided to 'put on that server over there' then you should run this check. Just takes a second.  Well, OK, a few minutes. Still.

Some of the important parameters that I have used in the past to solve particular problems include:
  • -ssl or -nossl is essential if the site only supports HTTPS or doesn't support it at all. Telling nikto which up front can cut 30 minutes from your scan time.
  • -id is super important if you have an authenticated site. The parameter takes username:password.
  • -no404 is vital when a misconfigured server returns 200 for everything.
  • -evasion will push past an intrusion detection system. If you are getting a ton of timeouts, try this. Parameters include:
    • 1     Random URI encoding (non-UTF8)
    • 2     Directory self-reference (/./)
    • 3     Premature URL ending
    • 4    Prepend long random string
    • 5     Fake parameter
    • 6     TAB as request spacer
    • 7     Change the case of the URL
    • 8     Use Windows directory separator (\)
    • A     Use a carriage return (0x0d) as a request spacer
    • B     Use binary value 0x0b as a request spacer
  • -nocache turns off the response cache. Oh goodness this is major sometimes.
Understand what you are testing. Watching responses from something like nikto, and then tweaking the options until you get it working, is the key to a good scan.
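To tie those options together, here is a small Python sketch of a wrapper that assembles a nikto command line. The helper function is my own illustration, not part of nikto, but the flags it emits are the real options described above:

```python
# Hypothetical helper: assemble a nikto command line from common options.
# The function is illustrative; only the flags are real nikto options.
def build_nikto_cmd(host, output, use_ssl=None, creds=None,
                    no404=False, evasion=None, nocache=False):
    cmd = ["nikto", "-host", host, "-output", output]
    if use_ssl is True:
        cmd.append("-ssl")        # the site is HTTPS-only
    elif use_ssl is False:
        cmd.append("-nossl")      # the site has no HTTPS at all
    if creds:
        cmd += ["-id", creds]     # username:password for authenticated scans
    if no404:
        cmd.append("-no404")      # server answers 200 to everything
    if evasion:
        cmd += ["-evasion", evasion]  # IDS evasion technique(s), e.g. "1" or "167"
    if nocache:
        cmd.append("-nocache")    # turn off the response cache
    return cmd

print(" ".join(build_nikto_cmd("www.example.com", "scan.txt",
                               use_ssl=True, no404=True, evasion="1")))
# → nikto -host www.example.com -output scan.txt -ssl -no404 -evasion 1
```

From there you could hand the list straight to subprocess.run, or just copy the printed line into a shell.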

The Pen Tester's Framework

I've spent much of the last several years doing application vulnerability analysis. It's a lot like pentesting, except instead of finding one path through an environment to take something of value, we find anything in an application that could be used to create that path. It's pretty cool.

Anyway, those of us who do vulnerability assessment use a lot of tools that help find known vulnerabilities.  There are many problems that are well understood - both known flaws in existing software, and patterns that cause problems in new software.  These tools take the drudgery out of testing for all these known flaws, so we can spend our time looking for things that can't be scripted out easily, like business logic flaws.

There are a lot of tools out there, and many of them are crap.  TrustedSec, Dave Kennedy's security consultancy, has compiled a cool installer that collects the best of the best tools, all of their prerequisites, and those weird mappings that we all need to make our lives easy. It's called the Pen Tester's Framework.

For the next several weeks, I am going to walk through most of the tools in the Pen Tester's Framework and give a developer's view into them. I'll break into lab machines, review code, look at usability, and generally make a mess of things. I hope you'll join me.

My environment will consist of my Acer Aspire S7 with Ubuntu 15 as the test machine, and mostly the OWASP BWA as the target.  I'll also be scanning the FALE lab occasionally, and probably my own dev lab as well; I'll describe those as I use them.

To install the Pen Tester's Framework, you need to get it from Github.  For the simplest solution in Linux, just install git (sudo apt-get install git) and then clone the repository with:

git clone https://github.com/trustedsec/ptf

In Windows, try the download link in Github. The URL you want to start with is https://github.com/trustedsec/ptf.

Once you have gotten the files in place you just need to execute the installer in Linux.  That just means:

sudo ./ptf
use modules/install_update_all

That will put everything in /pentest. One thing I have to recommend is running that install_update_all command often.  These tools are updated constantly, and you should respect that.  I ran into all kinds of problems with out-of-date tools while writing this series.  The tools need their updates!

The PTF is divided into several sections, and I am going to start with ... you guessed it ... vulnerability analysis! Pretty awesome, eh? I thought so.  Some of these tools I know, and some are totally new. Looking forward to getting started.

Developers: Care and Feeding

I just finished giving my new talk on the care and feeding of your developers in a security culture at DerbyCon, and I wanted to lay out some of the thoughts I presented in written words as well.

There is a solvable set of problems causing the divide between infosec and development.  Almost all of it revolves around risk-averse infosec practitioners and risk-accepting developers seeing the same issue from different sides.  For instance, infosec may see a feature like frameability as a risk (due to UI redress attacks) where developers see it as a nice thing to offer users. Even when a security flaw is found, differences abound. Infosec views and reports the flaw as a vulnerability, whereas to developers it is just a defect.  Severity is an issue too; infosec inflates the severity of questionable vulnerabilities for legal or headline-grabbing reasons, but developers just don't see the urgency.  This language gap is exacerbated by the divergent personality types in infosec and development.

That's not all, though.  Infosec fails to understand the basics of how a developer lives their life. The Software Development Lifecycle, or SDLC, be it a more traditional document-driven style or a newer agile-style process, rules the developer's day-to-day existence. Introduction of non-customer-impacting bugs just doesn't have a slot in that process.  Also, information security needs to understand their products' underlying platform, along with its strengths and weaknesses.

So now that we have a grip on the problem, what do we do? 

First, we automate. Some things in development just need to be automated.  Testing, deployment, and confirmation, for instance, are three areas where good, well-vetted automation makes it a whole lot easier to focus on the really hard things. Give dev and QA the tools to automate their jobs, and teach them to use them.  When it comes time for application vulnerability analysis, put your static code analysis in the QA build, and let it focus on what it does best; same with the dynamic analysis.  Don't make someone push a button; just automate it.

But some things a human does well. You can't have a report run from an automated scanner and then just throw it over the wall to the devs.  Find the vulnerabilities that matter, focus on those, speak about them in terms of defects, and add them to bug tracking.

There are other things that humans do too, like pentesting and code review; these are the human side of dynamic and static analysis. It must be understood that a grasp of the platform you are testing, and the business case in question, makes all of the difference.  A report from someone with an imperfect understanding of your environment, especially someone outside the company, will be of little use to the developers.

A third thing that humans do well is apply foresight.  Use of tools like the OWASP Proactive Controls or Application Security Verification Standard will make the transition into a security culture so much easier because the developer can see a path to more secure software.  They look ahead - way better for a dev than looking behind.

So what should you do today? Well, remember that devs like spicy food and expensive beer. Get your dev staff together over a few beers and don't just train - teach! A pattern that has worked well for me is to start with a 'test yourself' approach, then talk about secure coding, and then tie it down with security principles.  Or start with the principles.  Or start with the coding - just make sure to hit for the cycle. There are very few appsec specialists out there, so it behooves us all to make a few. Giving developers the tools, and then teaching them how to use them, is more likely to create an appsec person than just about anything.

We're in this fight together, folks, take it or leave it.  The end goal is to make usable AND secure software, in a reasonable amount of time and under a reasonable budget.  All of us do different things well, so if we agree to sit around the table and communicate, we can all get the job done.  Remember that security isn't a developer's first concern, and it shouldn't be.  Give them the tools, teach them how to use them, and provide reasonable, valuable analysis of the apps, and you'll see things get better soon.

A year of "On Testing"

So a year ago, while debugging a SQL statement in an identity system, I jotted a stupid joke into Twitter.

It has garnered some popularity, for some reason.

I probably should use this time to discuss the ins and outs of software testing, how it integrates with security, and why there is such a response to the joke. But that's all been done.

Instead, I'd just like to take a moment to marvel at the insane power of social media. I mean, that joke has touched over four MILLION people. That's a lot. And it's still going! If I go right now and look at my notifications, 42 more folks have retweeted it.

I have had to turn off my notifications on all devices, otherwise everything buzzes constantly. It's nuts!

But take a second and compare this 15 seconds of fame to the larger issue.  Take Ahmed Mohammed, who went from arrested at school to a White House invite in what, 36 hours? The whole internet stood up! I couldn't believe how aligned my timeline was.  But now we start to learn that there might be two sides to that story, and that it might have all been a setup. How about that? Someone playing the Social Network? Like a fiddle? Say it ain't so!

With great power comes great responsibility, but what if that power is distributed? And anonymous? Who bears the responsibility? There are some things you just "don't do," but they get done all the time. Someone - someone anonymous, in the network - doxxes someone who the Social Network has decided is worth contempt and then WHOOPS, we were wrong. But now their life is in shambles, and the horde moves on to the next worthy adversary.

I don't really have a solution, but having had a taste of the immense power of the network in a very small way (seriously, 610 replies!) I can just imagine what it would have been like if one of my more off-color or politically or morally charged posts caught the interest of the horde.

Be careful what you share. Make sure your family does as well. You never know what's gonna catch on.

Taking it to the people

This has been quite a year of community. I have been honored to present at a load of user groups and OWASP meetups this year, and I still have quite a few to go. Here are some of the talks I have given so far this year:

CodeMash - Developer Security Training
Cleveland OWASP - Cracking and Fixing REST Services
Central Ohio Infosec Summit - Developer's Guide to Pentesting
The State of Security Podcast - Working with Developers 
Arena Tech Night - Why the Web is Broken
Columbus ISSA - Weaving Security into the SDLC
CodePaLOUsa - Weaving Security into the SDLC and Developer Security Training
CircleCityCon - Developer Security Training
BSides Cleveland - Why the Web is Broken
Great Lakes Area .NET User Group - Developer's Guide to Pentesting
Converge Detroit - Weaving Security into the SDLC and Cracking and Fixing REST Services
Pittsburgh OWASP - Cracking and Fixing REST Services

The rest of the year looks to be just as eventful, and I am very much looking forward to it!

Central Ohio .NET User Group - Developer's Guide to Pentesting (August 27)
DerbyCon - Developers: Care and Feeding (September 25-27)
DogFoodCon - Why the Web is Broken (October 7-8)
OSU CyberSecurity Day - Developer Security Training
Vermont Coder's Connection - Developer's Guide to Pentesting

Hope to see you at one of these awesome events!

Timing attacks in account enumeration

Yesterday, Troy Hunt posted a very well written article showing how account enumeration can cause information disclosure. Essentially, in an attempt to be useful, your site inadvertently tells an attacker who is and isn't a user of your site.

For instance, if you use the email address as the username, when a user fails to log in your site has the option to tell them whether their email address or their password was incorrect.  If you are specific: "Email address not found" or "Password incorrect", then account enumeration is possible. I can send my list of 144 million email addresses with a password of 'asdf' to your login page, using the ZAP Fuzzer. Responses that said the password was incorrect mean the email WAS correct, and now I have a list of valid accounts.

As an aside, 20% of user accounts use the top 100 most popular passwords. If I can bypass your account lockout procedure with timing or parameter tampering, and send those 100 passwords to each of the legit accounts that I have enumerated, I will statistically gain access to one in five. I have tried this four times, and it has worked all four times.

So, anyway, I am here to tell you that simply making sure you post a generic 'Your credentials are incorrect' message isn't enough.  Troy found a really subtle indicator in his post, and to find those I recommend sending both responses - the failed username and the failed password - to a comparer for a bitwise comparison. I have found slightly different stylesheets, an extra linefeed, all kinds of things.  But with the prevalence of hashing, I have discovered something even more interesting.
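Before we get to that, here is a minimal sketch of that comparison step, assuming you have captured the two raw responses yourself; the helper is my own illustration, not any particular tool's API:

```python
import difflib

def compare_responses(resp_bad_user: bytes, resp_bad_pass: bytes):
    """Return the differing lines between two raw HTTP responses.
    An empty list means the responses are byte-identical."""
    if resp_bad_user == resp_bad_pass:
        return []
    a = resp_bad_user.decode(errors="replace").splitlines()
    b = resp_bad_pass.decode(errors="replace").splitlines()
    # Keep only the +/- lines from a unified diff, skipping the file headers.
    return [line for line in difflib.unified_diff(a, b, lineterm="")
            if line.startswith(("+", "-"))
            and not line.startswith(("+++", "---"))]

# A slightly different stylesheet link is exactly the kind of oracle to look for.
print(compare_responses(b'<link href="site.css">\nBad login\n',
                        b'<link href="site2.css">\nBad login\n'))
# → ['-<link href="site.css">', '+<link href="site2.css">']
```

Any non-empty result is a candidate enumeration oracle worth investigating by hand.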

"But Hashing, Bill? I thought hashing passwords was a good thing!!"

You are right.


Let's look at a login procedure.  In the POST action, you have something that looks a little like this:

checkCreds(username, password):
    user = LookupUser(username)
    if user then:
        hash = hashPass(password)
        if hash == user.hash then:
            return success
    return failure

Follow me? If the user exists then hash the password and compare. Seems like a pretty sensible model.

There is a problem, and I discovered it because I usually run tests with F12 tools loaded up. When I passed in the wrong username, the response took about 100 milliseconds. If I passed in the right username and the wrong password, the response took around 500 milliseconds. The time it took to hash that password resulted in account enumeration.

So I tossed my list of email addresses against it with a password of "password". Those that took 100 milliseconds I discarded. Those that took 500 milliseconds were valid accounts.
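Here's that timing check sketched in Python. The login function below is a stub that simulates the vulnerable server-side behavior (so the delays are fake), but the classification logic is the real technique:

```python
import time

def timed(fn, *args):
    """Return the wall-clock seconds fn takes to run."""
    start = time.perf_counter()
    fn(*args)
    return time.perf_counter() - start

# Stub standing in for a POST to the login page: hashing only happens
# for accounts that exist, which is exactly the leak described above.
VALID = {"alice@example.com"}          # made-up account for the simulation

def fake_login(email, password):
    if email in VALID:
        time.sleep(0.05)               # simulated password-hashing cost
    return False                       # wrong password either way

candidates = ["alice@example.com", "bob@example.com"]
likely_valid = [e for e in candidates
                if timed(fake_login, e, "password") > 0.02]
print(likely_valid)  # → ['alice@example.com']
```

Against a real site you would replace fake_login with an actual HTTP request and pick the threshold from the two clusters of observed response times.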

What's the take-home from this? Well, first, get an application vulnerability analysis from a qualified auditor; these things are hard to find. Second, very carefully review the code for your login procedure. For the average attacker, this is the only page that is available for attack. Don't give away the freebies.
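One common fix, sketched below under the assumption of a PBKDF2-style password hash: spend the same hashing cost whether or not the user exists, so both paths take the same time. This is an illustration with a toy in-memory user store, not production-ready authentication code:

```python
import hashlib, hmac, os

def hash_pass(password, salt):
    # PBKDF2 is deliberately slow; that cost is what leaks the timing signal.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

# Toy user store with a made-up account; in reality this is your database.
_salt = os.urandom(16)
USERS = {"alice@example.com": (_salt, hash_pass("s3cret", _salt))}
DUMMY_SALT = os.urandom(16)

def check_creds(username, password):
    record = USERS.get(username)
    if record is None:
        # Unknown user: hash anyway so this path costs the same time.
        hash_pass(password, DUMMY_SALT)
        return False
    salt, stored = record
    # compare_digest avoids a second, byte-by-byte timing leak.
    return hmac.compare_digest(hash_pass(password, salt), stored)

print(check_creds("alice@example.com", "s3cret"))   # True
print(check_creds("nobody@example.com", "guess"))   # False, but same cost
```

The error message and HTTP response should also be byte-identical in both failure cases, for the reasons above.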

Some exciting news - now find my Application Security training on Wintellect Now

Ninja edit: OK, NOW it is live!

I'm so proud to announce that you can find my application security training on Wintellect Now!

My first course is my Developer's Guide to Security, now titled 'Writing Secure Applications, Part 1: Threats, Principles, and Fundamentals'.

Soon, I will put  up the rest of my training for application developers, including threat analysis and remediation for injection flaws, information disclosure, data protection and more.

If you have enjoyed my training at CodeMash or elsewhere in the past, you should certainly check out this growing series. If you haven't heard me before, give it a try! All you can get is more secure out of the bargain.

Using the OWASP ASVS for secure software development

Monday, at BSides Columbus, I premiered a new talk about using the OWASP Application Security Verification Standard as the basis for a secure SDLC, or a software security test plan, or a code review guide, or anything else your company needs to get off the starting blocks with regard to application security. I think the talk was well received, and I was asked to put a synopsis on 'paper' for reference.

A system for verifying security controls
The purpose of the ASVS is to provide a standard of communication between software vendors and customers. The customer can ask 'How secure are you?', the vendor can answer 'THIS secure,' and everyone is on the same page.

By nature, the ASVS is platform independent and free of technical detail. It is simply a listing of security controls, subcategorized by topic and ordered by relative difficulty to implement. This lends itself tremendously well to supporting the development of an application security platform for any software - not just for communication with tool vendors.

Embedded security principles

The ASVS is tightly integrated with two projects that are core to OWASP: The Top 10 and the Security Principles Project. The Top 10 is nothing new, but the integration of security principles into the core of a security program is strong sauce that isn't easy to make. Using the ASVS to help you integrate the core principles into your program brings a lot of value, virtually for free.
  • Defense in Depth
  • Positive Security Model
  • Fail Securely
  • Principle of Least Privilege
  • Separation of Duties
  • “Security by Obscurity”
  • Do Not Trust the Client
Four verification levels
Since the ASVS is designed to let the vendor inform the customer as to 'how secure' they are, it makes sense that there would be 'levels' for the standard. These include:
  • Level 0 (Cursory) indicates that the application has undergone some type of certification.
  • Level 1 (Opportunistic) indicates that the application adequately defends against security vulnerabilities that are easy to discover.
  • Level 2 (Standard) indicates that the application adequately defends against prevalent application security vulnerabilities whose existence poses a moderate to serious risk.
  • Level 3 (Advanced) indicates that the application adequately defends against all advanced application security vulnerabilities.

Thirteen verification requirements
The thirteen verification requirements - and their sub-points - represent some of the best thinking I have ever seen on distilling application security principles into actionable items without specifying a platform. These are the core of the standard, and should be used to map to your individual needs. From there you can build a secure SDLC, a test plan, whatever you need.

  • Authentication
  • Session management
  • Access control
  • Input handling
  • Cryptography at rest
  • Error handling and logging
  • Data protection
  • Communication security
  • HTTP security
  • Malicious controls
  • Business logic
  • Files and resources
  • Mobile security
Going forward
Going forward, I recommend a five-step plan for getting the ASVS installed in your development process:
  1. Approach Management, and tell them you have a plan for application security
  2. Determine your starting level. I recommend Level 1.
  3. Match the requirements to your software - the hardest part. Go point by point and figure out where in your software you need to implement changes based on the requirements.
  4. Assign responsibility to development staff, even if you have to break out Microsoft Project.
  5. Implement. Pull the trigger.

Crushing bugs in OWASP ZAProxy

At the latest OWASP meeting in Columbus, we got set up to crush some bugs in ZAProxy, the OWASP attack proxy project. ZAP is written in Java, and the project is run by Simon Bennetts and sponsored by the Mozilla Foundation.

So for the dev crowd, ZAP is an attack proxy. Attack proxies are pentesting tools, used to observe the raw HTTP requests and responses in a web application. It sits between your browser and the web server and allows you to interact with the traffic. It works just like Fiddler, except instead of the value added tools revolving around debugging, they revolve around security.

To get set up, we followed the Building OWASP ZAP using Eclipse IDE for Java document, version 3.0. I strongly recommend you start here, even if you have other Java or Subversion tools installed. I did and I am so happy. One thing if you are on Windows 8 – run Eclipse as Administrator. Just like with Visual Studio, you’ll have trouble if you don’t.

Once I followed that document, I could build ZAP. Awesome.  Now I needed something to do. Off to the bug list I went. Simon has things organized well, so you can do as I did and search for ‘idealfirstbug’ to get some bugs that are good to start with. I picked Issue 1145, which was a simple tooltip. Good for me, because as you probably know, I am not a Java programmer.




Alright, I need to find that text. In Eclipse, I went to Search –> Search and clicked on the File Search tab. Then I entered the faulty text, ‘Show Tab Names and Tab Icons’ and searched. Aaaaand, I got 27 responses. That’s not good at all.

But wait! They are all properties files! And they are different languages. Guess there are only so many ways to say 'Tab Names' or something. Anyway, I figured I could search the property name 'showNames' and see the logic that uses it, right?

Bingo! Searching for that got me right to it. There in JToggleButton, the properties are being set, just the reverse of how the logic works.

[sourcecode language='java']
btnShowTabIconNames = new ZapToggleButton();
btnShowTabIconNames.setIcon(new ImageIcon(MainToolbarPanel.class.getResource("/resource/icon/ui_tab_icon.png")));
btnShowTabIconNames.setSelectedIcon(new ImageIcon(MainToolbarPanel.class.getResource("/resource/icon/ui_tab_text.png")));
[/sourcecode]

I switched the two property names, showIcons and showNames, and bang, the bug is squashed.

Now I have to get it into the code base. This is where Subclipse really shines: the ability to quickly create patches. If you come from a forking Github world, patches might be foreign, but they are a really simple, straightforward, manageable way to submit changes to an open source project using Subversion.

First, though, you have to find the file you are working on in the Package Explorer. This is where I discovered my new favorite button, the Link With Editor button. It syncs the view in the Package Explorer with the code view, so the file I am editing is highlighted.






Seriously, new favorite button.

Anyway, right click on the changed file and select Team –> Create Patch from the context menu. The Create Patch dialog will confirm which files you are building a patch for.







I’d advise saving to a file, and then click Next. Then select Project as the scope (Workspace is just for multiproject workspaces, which is rare) and click Finish. That’s it, you’re done.

At that point, I went back to Google Code and added a comment to the bug, with the patch file attached. I'm not a committer on the project, so that's what you do - just like a pull request in git. Hopefully, by the time you read this, the bug will be closed, and all will be right with that toggle button, at least.

Why Geography Matters When Using Amazon Web Services

When setting up an EC2 instance or configuring a profile, you have the choice to set the Region and Availability Zone. If you were wondering how that mattered, you aren’t alone.

What’s the difference between Region and Availability Zone?

Regions are actual physical locations of Amazon computers. While they would like us to think of the cloud as some magical server in the sky, in reality there are big buildings all over the world full of servers. The Regions, shown in Table 1, are the actual physical locations of these servers.

Table 1: Regions and Availability Zones

Location               Region          Availability Zones
Northern California    US West         6 zones, shared with us-west-2
                       US West         6 zones, shared with us-west-1
                       US East         5 zones
                       Asia Pacific    2 zones
                       Asia Pacific    2 zones
                                       2 zones
                                       3 zones
Sao Paulo              South America   2 zones
Availability zones are isolated areas inside regions that are designed to protect against failures in other availability zones. They have separate power, cooling and physical space.

Why should I pay attention?

Amazon is designed for global access. Their web site is global, and their servers are global. If you are using AWS you have the option to support a truly global architecture as well.

There are requirements that may cause you to carefully consider the location of your servers. These requirements are why you have the ability to choose your region.

Legal considerations

There are a number of privacy laws on the books – especially in Europe and the U.S. – that restrict the passing of government data outside the bounds of a region. AWS supports this, implicitly in the regional settings, and explicitly with GovCloud.

GovCloud is a physically restricted cloud service that is designed to explicitly prevent data from leaving the borders of the US. When building a governmental web application in the U.S. that’s probably your best path.

Regions implicitly segregate data, too. While regions are connected via the open internet, if you select the EU for your S3 instance, that’s where your data will be stored.

Guarding against failure

AWS does go down. It isn't very common, but it happens. Regions and Availability Zones were created to protect against exactly that, but you must architect your application to take advantage of them.

Regions are not duplicated among themselves by default. In order to create a truly fault-tolerant application, you must set up something like the Cross Region Read Replicas in Amazon RDS.
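As a concrete example, with RDS a cross-region read replica comes down to a single API call. Here's a hedged sketch: all the identifiers are made up, and the real call (boto3's create_db_instance_read_replica) is left behind a guard so the snippet runs without AWS credentials:

```python
# Sketch only: parameters for a cross-region RDS read replica.
# Every identifier below is made up; a real call needs boto3 and credentials,
# e.g. boto3.client("rds", region_name="eu-west-1")
#           .create_db_instance_read_replica(**replica_params)
replica_params = {
    "DBInstanceIdentifier": "myapp-replica-eu",  # new replica, target region
    "SourceDBInstanceIdentifier":
        "arn:aws:rds:us-east-1:123456789012:db:myapp-primary",
    "SourceRegion": "us-east-1",                 # where the primary lives
}

def create_replica(params, rds_client=None):
    """If an RDS client is supplied, make the real API call;
    otherwise just return the parameters so the sketch runs offline."""
    if rds_client is None:
        return params
    return rds_client.create_db_instance_read_replica(**params)

print(create_replica(replica_params)["DBInstanceIdentifier"])  # → myapp-replica-eu
```

The point is simply that cross-region fault tolerance is something you set up explicitly; it does not happen for free.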

Plain old bits on the wire

Of course there is a much more straightforward reason for the correct management of regions: the actual physical distance between your application and your customers. Those bits still have to travel the wire, so make sure your application is close to the folks that use it.

Geographic restrictions

Under some circumstances, your content might be restricted to users only living in one geographic area. For instance, some content can’t – by law – be exported outside of the European Union.

I for one was surprised that the Region can't be used for this restriction. CloudFront offers geographic restriction, but it is on a country-by-country basis, and you set it up separately from the region.

How to make a plan

Long before you set up that EC2 instance, take careful stock of your situation. Consider your users' location, your needs for fault tolerance, and the legal landscape of your application. Then you can map out your regions to best make use of the AWS servers.

Bill Sempf

Husband. Father. Pentester. Secure software composer. Brewer. Lockpicker. Ninja. Insurrectionist. Lumberjack. All words that have been used to describe me recently. I help people write more secure software.
