Application Security is a Solved Problem

The vulnerabilities you hear about aren’t really the problem much of the time.

I don’t want to dismiss the OWASP Top 10, because that’s what started the focus on application security. And that’s important - it really is. That said, we are kinda past that now. There are applications that harbor the flaws described in the Top 10, and many of those vulnerabilities matter, but the way the list is derived is no longer relevant to the way the world works. If you dig into the data behind the list, you’ll find that 98% of it is unexploited static analysis findings. Anyone who has spent time with static analysis will assure you that the only way static analysis is workable in a development environment is with triage. And I’m sure you have guessed by now how much application owner triage goes into these results: 0%. These findings might be legit, but they are probably not exploitable.

What’s number one on the Top 10?  Injection.  I agree, injection sucks.  Anytime you can make an application run code that the developer didn’t intend to run, you’re gonna have a bad day.

In reality? It’s a unicorn these days.  SQL injection, command injection, browser injection (which is explicitly a separate entry in the Top 10) - it doesn’t matter, vulnerability analysts just don’t find exploitable versions that often.  99.4% of the injection vulnerabilities cited in the data behind the Top 10 are from static analysis.  Did someone break in? No, it was just possible.  Was there a mitigating factor - the fabled Security Onion? We’ll never know.

“OK, Bill, what ARE the Top 10 then?”

I don’t know. I’m an N=1 case, so I can only speak for myself and for what I read.  That said, it seems like a lot of people are using social engineering to get access to things.

“But Bill, that’s not an application security problem!”

Actually, it is. About a third of common application security vulnerabilities can be exploited with a social engineering attack, and yet that’s the third we most frequently dismiss, because it’s hard to demo on stage at Black Hat.  And guess what?  Those are the vulnerabilities I see most frequently.

But it isn’t the vulnerabilities that are the problem.  It’s the bugs.  We are always talking about the outcomes, not the causes, but here I want to talk about causes.  We can get to the outcomes later.  Let’s look at a breakdown:

  1. Out of date components: By far the biggest one I see is jQuery, which is arguably a big deal.  Mostly that is a DOM XSS problem, or info disclosure, and often the application isn’t using the feature that is exposed.  That said, there are a lot of holes here.  It is VERY hard to test all of the DOM XSS possibilities.  It’s far easier to just get rid of the sources and sinks.  
  2. Information Disclosure: Not really on the Top 10 at all, but used by attackers to build the phishing messages that get sysadmins to answer emails. A6 Sensitive Data Exposure doesn’t cover it - this is an account number in the URL, or a password returned on a change screen.  I am talking about a block of JavaScript commented out because it was “causing a problem on the backend,” or a developer name left in the HTML comments.  
  3. Cross Site Request Forgery: CSRF is a very complicated vulnerability with a very complicated exploit that has the simplest fix ever, and I have no idea why the fix isn’t part of every framework on the planet.  We already have a session cookie - that’s the problem. If we also have a session-bound token in the form post, the request can’t be forged (unless there is XSS).  Bang.  Done.  Why don’t we all do this?  Well, multihoming makes it hard, for starters, but I’ll have more on this later.
  4. Cross Site Scripting: Yeah, OK, it’s in the Top 10 and it is still a problem.  But you know where I see it? In the DOM! Those payloads never make it back to the server at all - no logging, no encoding, no nothing - what a pain in the butt.  And they still trash your CSRF protection.  XSS does not make me happy, which is why I put it on the report even if I can’t write a POC (which I rarely have time for).  Yes, I know that is grouchy and makes you dig through your JavaScript.  Sorry.
  5. Insufficient cookie protection: There is absolutely no reason to fail to add SECURE and HttpOnly to your session cookie.  It’s like one line of config code. Oh, I’m sorry, you have a fancy JavaScript session management scheme?  Too bad, rewrite it.  It’s likely broken anyway (from the security perspective).  Let your servers manage session, stop doing the JavaScript thing when it comes to your sessions.
  6. Vertical privilege escalation: The problem with VPE is that it requires some existing knowledge of the application.  In a 100% custom written application, that isn’t likely, aside from an insider attack.  The thing is, there aren’t that many 100% custom written applications.  Most projects start SOMEWHERE that is known. The authorization system is understood (along with weaknesses) or the framework has known page URLs (like WordPress) or something of the sort.  If the attacker knows where they are going and the authorization isn’t perfect, people can get to the administrator pages.
  7. Unpatched servers: So I probably don’t need to say Equifax but … Equifax.  Seriously, if the Struts flaw doesn’t convince you that actively exploited flaws in your framework are a risk, then nothing will.  When your vendor - open source or otherwise - tells you that you need to patch right now, you need to patch right now.  Not after your test cycle.  Not when management gives the OK.  Right now.  I’m a dev, I know it doesn’t work like that, but it has to, and soon.  This is getting bad.
  8. Horizontal privilege escalation: When a developer keeps important account information somewhere an attacker can edit it, and that information is used to decide what the attacker is looking at, the attacker might be able to see things they shouldn’t.  Appropriate authorization solves this problem, but it is very hard to do right.  It’s a lot better to not give the user a chance to change this value at all.  Insecure Direct Object Reference is the flaw in question, and it’s still out there.
  9. Lack of Input Validation: A positive security model is a requirement for every application that faces the internet (and really for internal ones as well, but that’s another topic). Every time you can, you should be checking the input against the set of possible inputs.  Can it not be negative?  Is it?  Reject it.  Is there a list?  Is the input on it? No?  Reject it.  Is it a free text field?  Fine - use the HTMLEncoder from OWASP.  Everyone (myself included) needs to do a better job looking for flaws in input validation, and fixing them.
  10. Weird stuff: There is so much weird stuff.  I got an application to give me account details because of a malformed USER AGENT.  Found a user’s role tacked onto the session ID in Base64.  Discovered an application that parsed a Word document - including a call to a template at a random IP on the internet - in order to allow editing.  From there to here, from here to there, funny things are everywhere.  If you find yourself thinking “Hey, that’s a weird, neat, compelling way to solve that problem,” look for something simpler.  Complexity is the enemy of security.
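The token fix from item 3 is the classic synchronizer token pattern. A minimal sketch in Python - framework-agnostic, and the function names are mine, not from any particular framework:

```python
import hmac
import secrets

def issue_csrf_token(session):
    # mint one random token per session and keep the copy server-side
    token = session.get("csrf_token")
    if token is None:
        token = secrets.token_urlsafe(32)
        session["csrf_token"] = token
    return token  # goes into a hidden <input> on every form

def check_csrf_token(session, submitted):
    # constant-time compare of the posted value against the session copy
    expected = session.get("csrf_token")
    return expected is not None and hmac.compare_digest(expected, submitted or "")
```

A forged cross-site POST can't include the token, because the attacking page can't read the victim's form, so the check fails and the request dies.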
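And the positive model from item 9 boils down to a few small checks per field. A toy sketch with made-up field rules, using the stdlib's html.escape as a stand-in for OWASP's HTMLEncoder:

```python
import html

ALLOWED_STATES = {"OH", "PA", "WV"}  # hypothetical allow-list for a state field

def validate_quantity(value):
    # can it not be negative? is it? reject it.
    n = int(value)  # garbage that won't parse raises ValueError - rejected
    if n < 0:
        raise ValueError("quantity cannot be negative")
    return n

def validate_state(value):
    # is there a list? is the input on it? no? reject it.
    if value not in ALLOWED_STATES:
        raise ValueError("state not on the list")
    return value

def accept_free_text(value):
    # free text field? fine - output-encode it instead
    return html.escape(value)
```

The point is that every check starts from what is allowed, not from a blocklist of known-bad strings.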

I should probably talk about mobile.  The biggest thing that developers need to understand about mobile is that the compiled app is not invulnerable to being analyzed.  Don’t put anything in there that you wouldn’t want the user to have.  API keys, private encryption keys, and paths to functions the user shouldn’t have are three of the most common.  Just the other night someone stole the keys to send alerts from an app and sent random messages to all 100,000 users.  At 3AM.  I was not amused.
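If you doubt how easy those secrets are to pull out of a shipped binary, consider that an attacker doesn't even need a disassembler - simple string and entropy scanning finds most embedded keys. A toy sketch of the idea (function names and the threshold are mine, and the sample key below is made up):

```python
import math
import re

def entropy(s):
    # Shannon entropy in bits per character
    probs = [s.count(c) / len(s) for c in set(s)]
    return -sum(p * math.log2(p) for p in probs)

def likely_secrets(text, threshold=4.0, min_len=20):
    # flag long, high-entropy tokens - API keys and the like stand out
    tokens = re.findall(r"[A-Za-z0-9+/=_\-]{%d,}" % min_len, text)
    return [t for t in tokens if entropy(t) > threshold]
```

Run that over the strings in a decompiled app and the embedded keys tend to float right to the top.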

Anyway, this is just my take.  Again, I am not ripping on the Top 10, I love the Top 10.  It’s just useful to get the perspective from other-than-static-analysis companies.  Manual dynamic analysis has its place.  So in that spirit, I’d like to take you on a tour of how I go about performing vulnerability analysis - looking for the bugs that I have listed here. I’m no expert BUT I do it every day, so you might be able to glean something interesting out of my stories.

On Application Vulnerability Analysis

We live in a world where applications run the technology that we all use.  There was, once, a time where hardware was custom developed to solve certain problems, but these days we have general use hardware and applications designed to solve our problems.  Everything from apps on our phones to websites to alarm systems to the management screens for our internet gateways are applications, coded in common languages, using common protocols.

For every 100 lines of code, there are five security vulnerabilities.

The average application is 15,000 lines of code.
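Do the math: 15,000 lines at five vulnerabilities per 100 lines is 750 potential security vulnerabilities in the average application.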

Let’s take a minute to talk about pentesting.  Pentesting - “penetration testing,” to expand the vernacular - is the art and science of finding a path through the security of a system in order to achieve a goal.  It is different from red teaming: rather than a continuous series of tests like contemporary attackers would run, penetration testing is a scheduled event designed to move from point A to point B.

Vulnerability analysis is different.  The goal is to take an application, find every single thing that could be used to circumvent its security, then report on those items and provide a solution.  It is similar to penetration testing in that it is a scheduled event. It is easier than penetration testing because you are far less likely to go to jail for the night.  It is harder than pentesting because you don’t have to find just one flaw - you have to find all of the flaws.

Quality assurance and vulnerability assessment have a lot in common. Both practices have the goal of making the application in question as good as it can be; quality assurance just focuses on the end user experience.  To that end, they both use a test plan. The plan has a starting state, an end state, and steps to get from one to the other.  With QA, it is best if those tests succeed.  With vulnerability analysis, it is best if those tests fail.

That test plan is key.  In quality assurance, the business owners give a detailed description of how the application should respond under every circumstance.  If the user submits a valid application, it should be sent to the processing center.  If it has an invalid date, then this error will be presented.  If it fails in processing, then this message will be sent.

In vulnerability analysis, the test plan is determined by the attackers.  Whatever the flavor of the week is, it’s added to twenty years of attacks on the HTTP protocol, language specifics, side channel attacks, and other weirdness.  Rather than checking how the application responds to valid business requests, the analyst checks how the application responds to this huge collection of known threats.  Not just the ones that work - all of them.  The goal is to find all of the vulnerabilities.

It is true, not all of the vulnerabilities can be found. There are a lot of tests, and a lot of fields, and a lot of POSTs, and a lot of URLs.  They can’t all be checked and fixed with machines (at least not yet), and there isn’t the time or money to check them all by hand. Things will be missed, and that’s how we end up with 81% of breaches in the last 10 years having an application security aspect. This is why we still have SQL injection, even in the age of ORMs.  This is why we still see CSRF even though it is a well understood vulnerability.  There are far too many legacy applications and far too few talented developers.

So why am I writing about this?  I performed my first vulnerability analysis in 2002.  That seems recent to me - I wrote my first paid application in 1986 - but application security is a young enough field that 2002 counts as early.  The folks who wrote the Internet didn’t design it for security. That wasn’t the goal - sharing was the goal. I am writing this because I participated in the process of the web becoming the hub for commerce and communication - where security mattered.

And I, along with a boatload of others, failed miserably.

I got paged at 11PM on a Saturday in 1997 by a trigger I’d set up on a web server because the hard drive was full.  Long story short: the drive was full of German porn because of a SQL injection flaw I’d written into an application running on that server. Someone had broken in and set up a convenient FTP server.

At the time I didn’t know SQL Injection was even possible.
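For anyone else who didn't know: the difference between the bug and the fix is one line. A sketch with Python's sqlite3 (the table and data are made up; my 1997 code was not, alas, Python):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user_unsafe(name):
    # string concatenation: the input becomes part of the SQL itself
    return conn.execute(
        "SELECT role FROM users WHERE name = '" + name + "'").fetchall()

def find_user_safe(name):
    # parameterized query: the driver treats the input as data, never as SQL
    return conn.execute(
        "SELECT role FROM users WHERE name = ?", (name,)).fetchall()
```

Feed the classic `x' OR '1'='1` payload to the first function and it returns every row; the second returns nothing.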

I’ll leave the path from then to now to the reader, but suffice it to say, I was the security guy on every project from then on.  I still strive to teach developers how to write more secure code - I’m the Security Track advisor to three conferences, and I speak to developers monthly about security awareness.  I train a thousand folks a year on secure coding standards.

That’s not why I am writing today, though.  There are way too few people checking applications for vulnerabilities, and OWASP isn’t making things obvious enough.  They have a greater reach than I, and that’s awesome, but I wanted to put together this book to lay out how I see vulnerability analysis in plain language, in hopes that it would help a few other folks get into the field.

What’s in this guidance certainly isn’t the only way.  It probably isn’t the best way.  It might be a bad way. I’m not sure, but it has worked for me, and I’m including things that you won’t hear in some breakdowns, like client management and report writing. Feel free to ignore everything, or take just the pieces that you like.  And send me feedback!  I’m more public than I should be on Twitter (@sempf) and LinkedIn, and I’m on Skype if you want to tell me how awful it was privately. I’ll take your perspective in any form.

ABC interviewed me about being on the good guys team

Bryant Maddrik at ABC6 interviewed me and Todd Whittaker at Franklin about the plight of the good guys in the information security wars. Here's the link to the post:

We met at the Idea Foundry, where I am a member, and it went well, I thought.  The Smart Columbus kickoff was happening in the main room though, and they had a live band! Can't hear it at all in the recording though.

Love to hear your thoughts about who is winning the battles.

The fork debacle

Monday this week, I was at lunch with the family and Adam (my son) kept bumping Gabrielle (my wife) with his elbow while cutting up his breakfast burrito.

"You are on the left corner of the table! Why are you manspreading to the right?"

"I'm cutting!"

"With your right hand?"

"... yeaaaaahhhh?"

Then she looked at me.  I also, as a left hander, fork with my left and cut with my right. Apparently after 25 years together we never noticed that we did this differently.  Gabrielle, as a right hander, uses the OLD European style of cutting with the knife in the right hand, then moving the fork to the right hand to eat.  She's a switcher.

So I did what I do, and asked Twitter.

It's only 24 hours later currently, and I have 70ish replies - more than enough to do some analysis, so here goes.

Not a surprise, 80% of left handers do not switch.  Some folks cut left and some cut right, but most of us do not change hands while eating.  You know what they say, left handers are in their right minds.

What WAS a surprise is that 70% of RIGHT handed respondents also do not switch. This is survey bias of my followers, though. My community is, well, made up of geeks.  Fully half of the folks that replied admitted that they taught themselves not to switch because it is more efficient. The other half - still biased: they are European. Which leads to a whole new question: why?

As it turns out, there have been a few articles written about this. Originally, in Europe, switching was cool, and then it wasn't.  But like with measuring, Americans didn't get the memo. I understand that. It used to be hip to put two spaces after a period. Things change.

What I don't understand is why, my results notwithstanding, it appears that left handed people keep their fork in one hand by default, and right handed people switch by default (at least in the U.S.). If you have any feedback on that, I'd appreciate insight in the comments.


The Pen Tester's Framework: Skipfish

Skipfish is another web-mapping vulnerability scanner, along the lines of my preferred Nikto.  Skipfish brings three specific things to the table: performance with very large sites, super easy use, and a very well designed set of rules for edge case vulnerabilities.  Huh, I am kinda convincing myself - I should be using this more.

Written in C by Michal Zalewski, Niels Heinen, and Sebastian Roschke, Skipfish is one of the best architected tools I have seen. There are some weird things, though: most tests live in a database, except this one, which is hard-coded in the source:

 struct lfi_test lfi_tests[] = {
   { { "file:///etc/hosts", 0 },
     "", "File /etc/hosts was disclosed." },

   { { "file:///etc/passwd", 0 },
     "root:x:0:0:root", "File /etc/passwd was disclosed." },

   { { "file:///boot.ini", 0 },
     "[boot loader]", "File boot.ini was disclosed." },

Like Nikto, though, most of the tests are in an editable database, so you can customize things and keep the tests up to date. It is highly configurable, too, without busting up the rulesets.  For instance, in the config you have control over the wordlists.

## Dictionary management

# The read-only wordlist that is used for bruteforcing
wordlist = dictionaries/medium.wl

# The read-write wordlist and where learned keywords will be written
# for future scans.
#rw-wordlist = my-wordlist.wl

# Disable extension fuzzing
no-extension-brute = false

# Disable keyword learning
no-keyword-learning = false

Anyway, running skipfish is (as they wanted) stupid easy.

skipfish -o ~/sempfnet -S medium.wl

Then you get one of the best dashboards in all of open source software, in my opinion.  Seriously, if you need to look good AND get good results, skipfish is your tool.

What's more, skipfish puts together a nice report, and I'm a big fan. I don't give clients these reports, though - I write my own - but if you are doing quick tests this might be good enough!  Just double check for false positives.

All in all, a solid tool and a competitor to Nikto. Good addition to the PTF.

The Pen Tester's Framework: crackmapexec

As I mentioned in my intro post, I have started with the vulnerability-analysis modules and just went in alphabetical order.  

So we start with crackmapexec

Crackmapexec is a recon tool for when you are on a Windows network. It has been called the Swiss Army knife of pentesting tools, and I agree. As the docs say, there is a selection of core functions that can be called using arguments, but you can read a man page, right?

  • Credential Gathering
  • Mapping
  • Enumeration
  • Spidering
  • Command Execution
  • Shellcode injection
  • File System Interaction
So this is just a scanning tool. I am going to run it on my POINT network, and not just because the FALE VPN is down. AGAIN. (Dammit, Matt, you had. ONE. JOB.)  How much damage can it cause, right?

Oh, awesome. I ran it and nothing happened.

It turns out that the alias that PTF puts in for crackmapexec doesn't work. I am not sure why; I consistently got a 'too few arguments' error when I tried to use it.  Instead, I needed to run /pentest/vulnerability-analysis/crackmapexec/ directly (as su) to make it work.

So once I got that far, everything seemed smooth. The basic commands worked well and scanned my network for shares here at POINT.

sudo python -t 10 --shares

Interestingly, it had some problems with my NAS

[+] is running  (name:BAGOFHOLDING) (domain:BAGOFHOLDING)
[-] SMB SessionError: STATUS_USER_SESSION_DELETED(The remote user session has been deleted.)

If I go dig into the underlying Impacket, which drives crackmapexec, I find that there probably is an SMB issue:

    def smb2Close(self, connId, smbServer, recvPacket):
        connData = smbServer.getConnectionData(connId)
        # We're closing the connection trying to flush the client's
        # cache.
        if connData['MS15011']['StopConnection'] is True:
            return [smb2.SMB2Error()], None, STATUS_USER_SESSION_DELETED
        return self.origsmb2Close(connId, smbServer, recvPacket)

Fascinating stuff, but did it get the shares? No, I apparently need to give it credentials. That's interesting.  There aren't very many things that you can do without creds as it turns out, so you at least need to know one user's login.  That's kinda a bummer, but I guess it depends on what you are looking for. As it is, I would file this under post-exploitation, rather than vulnerability-analysis, since it is for use after you are already in the network.  Maybe that's just me.

And it really is just for Windows PCs.  It saw my three Windows 10 workstations, a client's Windows 7 box, and strangely my NAS, which is a Synology DS412+. I'm sure the SMB shares are what kicked it off, and the non-Windows underlying environment is what caused the user session failure.  Interestingly, crackmapexec didn't see my Windows 10 Raspberry Pi 2, or the Windows Phones that were on my network. Wonder why.

So, in the final analysis, crackmapexec looks like a really slick tool, almost could be a management tool for Windows environments given the ability to run a command on a number of machines. If the underlying code were a little more modular then some really great stuff could be done with it at the scripting level.  And it turned me on to Impacket, with which I was not familiar.

The Pen Tester's Framework: ftpmap

It's true, FTP isn't something that you think of first when conducting an assessment. We look at web servers. FTP is something that people used in the 90s, right?

Well, no. As it turns out a lot of organizations use FTP to move files from app to app, from partner to partner, from location to location. Legacy apps exist, and they are a large percentage of the vulnerable applications out there. FTP is a reality that everyone checking security needs to consider.

ftpmap "scans remote FTP servers to identify what software and what versions they are running" according to the man page.  It's certainly something that should be in your 'hey, let's look at this server' test group.  So let's give it a try.

Interestingly, it wasn't installed! The configuration file was there, but there is no ftpmap directory in the vulnerability-analysis directory for the compiled code. So, well, I'll just do it manually!

sudo git clone
cd ftpmap
sudo ./configure
sudo make
sudo make install

Waaaiiit a minute. Error city. This shouldn't be happening! Guess that's why the PTF didn't put it on my system - something weird is missing:

CDPATH="${ZSH_VERSION+.}:" && cd . && /bin/bash /pentest/vulnerability-analysis/ftpmap/missing aclocal-1.15 
/pentest/vulnerability-analysis/ftpmap/missing: line 81: aclocal-1.15: command not found
WARNING: 'aclocal-1.15' is missing on your system.
         You should only need it if you modified 'acinclude.m4' or
         '' or m4 files included by ''.
         The 'aclocal' program is part of the GNU Automake package:
         It also requires GNU Autoconf, GNU m4 and Perl in order to run:
make: *** [aclocal.m4] Error 127

Well, crap. A quick run of apt-get shows that I have the latest version of automake, so there must be something subtly wrong. Aah well, I don't see THAT many FTP sites out there ...

Wikistrat predictions for 2016

Some of you know that I am the curator of the Information Security desk at Wikistrat, a virtual strategy consulting company. We have fun over there, and a recent project was collating some predictions for 2016.

It's a little buzzword-rich, as these things are wont to be, but there is some cool stuff in there if you are into geopolitics. I also submitted my take on the direction of cybercrime and malware, with an assist from Brent Huston.

Take a read, and let me know what you think.

The Pen Tester's Framework: dotdotpwn

There are a few vulnerabilities that are so complex that it is best to use a special tool to test for them. SQL injection is a great example, and sqlmap is the tool. Another of these is directory traversal - flaws in server setup or application configuration that allow a user to access files and directories stored outside the web root folder. For that, dotdotpwn is the tool.  Referenced in the OWASP Testing Guide and included in Kali Linux, as well as the Pen Tester's Framework, dotdotpwn is the tool of choice for directory traversal.

Dotdotpwn is designed to test for paths to interesting files outside of the web root using intelligent fuzzing of protocols like HTTP, FTP, and TFTP, as well as software running on top of those protocols - blogs, ERP, CMS, and others.  It uses a comprehensive ruleset, combined with a database of known flaws in that software, to find files that could be accessible outside of the usual use of an application.

Oh, and it is written in Perl by the way.
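The traversal engine is, at heart, pattern multiplication: encodings of "../" times depths times target files. A toy sketch in Python (not Perl, sorry) - the encodings and target files here are a tiny illustrative subset of what the real tool carries:

```python
# toy version of a traversal pattern generator, dotdotpwn-style
ENCODINGS = ["../", "..\\", "..%2f", "..%5c"]  # plain and URL-encoded dot-dot-slash
TARGETS = ["etc/passwd", "boot.ini"]           # canary files per OS family

def traversal_patterns(max_depth=6):
    # every encoding, repeated 1..max_depth times, against every target file
    return [enc * depth + target
            for enc in ENCODINGS
            for depth in range(1, max_depth + 1)
            for target in TARGETS]
```

Four encodings times six depths times two targets is already 48 requests; the real engine's 21,144 tests (see the run below) come from the same multiplication with far bigger lists.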

Using dotdotpwn is super easy - you just need to give it a URL and a protocol, and it goes to town. It is a database-centered script, testing paths that are known to be a problem.  When I sent it at my own site it didn't find much (but I'm hosted on Azure, so that's not a huge surprise).

sempf@sempf-Aspire-S7-391:/pentest/vulnerability-analysis/dotdotpwn$ perl -m http -h
#                                                                               #
#  CubilFelino                                                       Chatsubo   #
#  Security Research Lab              and            [(in)Security Dark] Labs   #
#                      #
#                                                                               #
#                               pr0udly present:                                #
#                                                                               #
#  ________            __  ________            __  __________                   #
#  \______ \    ____ _/  |_\______ \    ____ _/  |_\______   \__  _  __ ____    #
#   |    |  \  /  _ \\   __\|    |  \  /  _ \\   __\|     ___/\ \/ \/ //    \   #
#   |    `   \(  <_> )|  |  |    `   \(  <_> )|  |  |    |     \     /|   |  \  #
#  /_______  / \____/ |__| /_______  / \____/ |__|  |____|      \/\_/ |___|  /  #
#          \/                      \/                                      \/   #
#                               - DotDotPwn v3.0 -                              #
#                         The Directory Traversal Fuzzer                        #
#                                       #
#                                              #
#                                                                               #
#                               by chr1x & nitr0us                              #

[+] Report name: Reports/sempf.net_11-04-2015_22-36.txt

[========== TARGET INFORMATION ==========]
[+] Hostname:
[+] Protocol: http
[+] Port: 80

[=========== TRAVERSAL ENGINE ===========]
[+] Creating Traversal patterns (mix of dots and slashes)
[+] Multiplying 6 times the traversal patterns (-d switch)
[+] Creating the Special Traversal patterns
[+] Translating (back)slashes in the filenames
[+] Adapting the filenames according to the OS type detected (generic)
[+] Including Special sufixes
[+] Traversal Engine DONE ! - Total traversal tests created: 21144

[=========== TESTING RESULTS ============]
[+] Ready to launch 3.33 traversals per second
[+] Press Enter to start the testing (You can stop it pressing Ctrl + C)

[*] HTTP Status: 403 | Testing Path:
[*] HTTP Status: 403 | Testing Path:
[*] HTTP Status: 403 | Testing Path:

There you have it - it will just test path after path.  That's what pentesting tools do well: patience. This tool will simply poke through everything on every platform to get a path to a file that isn't protected.  There are a lot of options for altering how the scan works, but I am not going to copy them all here.  Check out the examples here:

Path traversal is a very common vulnerability and should be checked on every application.  dotdotpwn is constantly updated, and does the job well. All in all, this is a good match.

The Pen Tester's Framework: nikto

I have a confession: nikto is one of my favorite tools. It is a web server scanner that checks for nearly 7,000 known vulnerabilities in server software, and also for CGI applications with known vulnerabilities. Even if the developer of the application I am testing has no idea that the vulnerable software is on the server, an attacker can use it to get a certain level of access on the server, and use that to get into the application.

What we are talking about is things like checking for known default credentials, looking for unprotected CGI files, fishing for hidden content, deep header enumeration, and even pushing for exceptional error messages. It's a neat package, and it is customizable at a level most tools aren't - the underlying tests are all stored in a database, so no code changes are needed to add new checks.
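That database-driven approach is simple to picture: each record is a path to request plus what a hit looks like. A toy sketch of the idea - these entries are illustrative, not real nikto database records, and fetch is injected so any HTTP client (or a test stub) will do:

```python
# toy sketch of a database-driven server check, nikto-style
CHECKS = [
    # (path, status that signals a hit, finding description) - illustrative entries
    ("/phpmyadmin/", 200, "phpMyAdmin console exposed"),
    ("/.git/HEAD", 200, "Git repository metadata disclosed"),
    ("/server-status", 200, "Apache mod_status page exposed"),
]

def run_checks(fetch):
    # fetch(path) -> HTTP status code; wire in urllib, requests, or a stub
    findings = []
    for path, hit_status, description in CHECKS:
        if fetch(path) == hit_status:
            findings.append((path, description))
    return findings
```

Adding a new check is adding a row, which is exactly why nikto's database grows without the code changing.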

One thing you should know, though: the version in the PTF is not the newest. nikto is constantly updated, and you should update the PTF module if it hasn't been updated recently.  After that, you should run sudo nikto -update to get the latest versions of the attacks in the database.

And as it turns out, Nikto is stupid easy to use. If you have a new host or a new server, you can do a quick check with 

nikto -host -output domain.txt

That will find a load of stuff on anything other than Azure or EC2. Seriously, if you have an application that someone decided to 'put on that server over there' then you should run this check. Just takes a second.  Well, OK, a few minutes. Still.

Some of the important parameters that I have used in the past to solve particular problems include:
  • -ssl or -nossl is essential: use -nossl if the site doesn't support HTTPS, and -ssl if it only supports HTTPS. This will cut 30 minutes from your scan time.
  • -id is super important if you have an authenticated site. Username:password is the parameter.
  • -no404 is so important if a misconfigured server returns 200 to everything.
  • -evasion will push past an intrusion detection system. If you are getting a ton of timeouts, try this. Parameters include:
    • 1     Random URI encoding (non-UTF8)
    • 2     Directory self-reference (/./)
    • 3     Premature URL ending
    • 4    Prepend long random string
    • 5     Fake parameter
    • 6     TAB as request spacer
    • 7     Change the case of the URL
    • 8     Use Windows directory separator (\)
    • A     Use a carriage return (0x0d) as a request spacer
    • B     Use binary value 0x0b as a request spacer
  • -nocache turns off the response cache. Oh goodness this is major sometimes.
Understand what you are testing. Watching responses from something like nikto, and then tweaking the options until you get it working, is key to a good scan.

Bill Sempf

Husband. Father. Pentester. Secure software composer. Brewer. Lockpicker. Ninja. Insurrectionist. Lumberjack. All words that have been used to describe me recently. I help people write more secure software.
