Reconnaissance means something different for pentesters than it does for vulnerability analysts. It is, truthfully, the first obvious break between the two forms of testing. For vulnerability analysts, reconnaissance means doing all of the research required to understand what an app does, and how it works. For pentesters, reconnaissance means doing the vulnerability analysis. Remember, their job is to exploit the vulnerabilities. Finding the vulnerabilities is part of the reconnaissance! For vulnerability analysts, though, we have to go a little deeper into researching the application itself, in order to find every possible vulnerability.
I have quite a list - reconnaissance usually takes me a whole day of a week-long test. I recommend not just running a scan - the goal is to understand what the app does, and how well it does it. If I can, I schedule a chat with the dev lead to talk about what the app should do, and then I write the application summary section of the report after I have indexed the app. But I am getting ahead of myself.
Reconnaissance actually starts before you ever touch the application. Usually when I start, I just have a URL and credentials, just like an attacker would (we'll assume he stole the credentials from a user, because that's pretty easy). The absolute first thing I want to do is look at the network and the server. Fortunately, there are two awesome free tools that will help you do this.
First, though, you will need an environment to run them in. If you read my earlier posts you will know that I like Windows 10 for testing - putting me at odds with most of the industry. I do, however, acknowledge the weaknesses of the platform for some kinds of testing, and running Perl and Python scripts is among them. Therefore, I usually use a Kali Linux VM for scripts, because nearly everything I use comes preinstalled. And there is a new tool on the horizon, PentestBox, which does everything in a modified command prompt right in Windows. It's pretty slick.
Start with the network (for one thing, this will give you the IP of the server). Realistically, depending on your scope, we will probably just "network scan" one machine, but that's OK. The tool you want to use is nmap. It's a fantastic open source network scanner, and its scripting engine will also check the web server and its surroundings for known vulnerabilities. I got a favorite set of parameters from the awesome Jon Welborn, and I recommend it:
nmap -sS -Pn -v --script=default,vuln,safe -oA nameOfOutputFile IPofTarget
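If you're wondering what all of that does, here is a quick breakdown of the flags (these come straight from the nmap documentation):
# -sS        TCP SYN ("half-open") scan - fast and relatively quiet
# -Pn        skip host discovery; treat the target as up
# -v         verbose output while the scan runs
# --script=  run the default, vuln, and safe NSE script categories
# -oA        write results in all three formats (.nmap, .gnmap, .xml)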
For the web server itself, there are a couple of options, and I still like Nikto. I like Nikto because I have a custom ruleset for it - and no, you can't have it, because it has client information in it. That said, there are a LOT of other tools, including a very interesting and more recently updated one for Windows (see my earlier posts for my thoughts on that) that might suit you better. Still, Nikto is easy to run, and it catches a lot of stuff, especially on older installs - and even on installs that aren't all that old. It's still pretty slick.
nikto -host IPofTarget
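That minimal run works, but in practice I usually point it at the HTTPS port and save the results - something like this (tune the port and output filename to your target):
nikto -host IPofTarget -port 443 -ssl -output nikto-results.txt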
Another part of recon is getting details about the SSL/TLS certificate being used to protect communication with the browser. If the site is public facing, I use Qualys's free SSL Test. If the site is on an internal network, though, SSL Test can't see it, so you have to use another script. My usual go-to is sslscan, which is also super easy to run in either Kali or PentestBox.
sslscan IPofTarget
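If the service is listening on a nonstandard port, sslscan accepts host:port syntax, so something like this works:
sslscan IPofTarget:8443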
OK, enough of running scripts. Next I turn to my proxy. There are two I use, Burp Suite and ZAP. Burp Suite is a paid, closed source application with a company backing it, complete with researchers and devs and everything. It is well supported and has lots and lots of features. ZAP is a free, open source application with a community backing it, complete with researchers and devs and everything. It is well supported and has lots and lots of features. It's your call. However, if you are working for an organization with an established vulnerability program, the results are often expected to be turned in as a Burp file. Otherwise, ZAP is a fantastic tool.
What we are going to do with the proxy is index the site. This works differently in Burp and ZAP, so I am just going to talk in generalities here. First, with the proxy running, exercise the application completely. Some folks like to make a separate proxy file for each role, but I just use the labeling feature to label important interactions with their role. This is the admin login. This is the admin adding a user. This is a regular user adding a user. WHOOPS. Shouldn't be able to do that. You get the idea.
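If you want to confirm traffic is actually flowing through the proxy (or replay a request from the command line), you can point curl at the proxy's listener - 8080 is the default port for both Burp and ZAP, and -k skips certificate validation since the proxy re-signs the TLS connection:
curl -x http://127.0.0.1:8080 -k https://IPofTarget/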
Next we want to write the Application Summary. This is the part of the report that tells what the application does. Why do we need to tell the developer what their own application does? To make sure we are on the same page. You would not believe the number of times I have had the developer say "Didn't you test the Admin screens?" WHAT admin screens? Those weren't in the scope! Well, they were supposed to be. What you do business-wise from there is up to you, but it establishes a level set as to what the scope really was.
Then it is time to brute force. I use FuzzDB to get a list of known web directories and files, and then use the brute force tools to try them all. In ZAP, it's just called Forced Browse, but in Burp you need to use Intruder and get fancy: I load up the first GET (for /), put the weird § signs right after the slash, and run a directory scan, then a file scan. Then I run a file scan on any interesting-looking directories. You will not BELIEVE what you will find, sometimes. There are 30,000 words in FuzzDB's medium directory listing alone.
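If you'd rather drive this from the command line instead of a proxy, wfuzz (preinstalled on Kali) does the same job. A sketch, assuming you've cloned FuzzDB to your home directory - the exact wordlist path may differ depending on your FuzzDB version:
wfuzz -c --hc 404 -z file,$HOME/fuzzdb/discovery/predictable-filepaths/filename-dirname-bruteforce/raft-medium-directories.txt http://IPofTarget/FUZZ/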
Finally, spidering. This is just like the old days, or like the Google spider - the proxy will look for any URLs and attempt to follow them. The nice thing about attack proxies is that they will look in comments, JavaScript, CSS, text files - anything they can - for URLs. Then they add them to the site map.
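If you prefer a terminal for this too, the third-party zap-cli tool can kick off ZAP's spider against a running ZAP instance - this assumes you have zap-cli installed and ZAP listening on its default port:
zap-cli spider http://IPofTarget/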
Once we have a solid picture of the application, we scan. I don't suggest just right-clicking on the host and selecting Active Scan. It takes forever, generates a crapload of extra stuff in the Burp file, and won't earn you much. Instead, look for interesting POSTs, or GETs with neat stuff in the URL. Stuff that gets edited. That's where the magic is.
While the scanning is happening, we do the "insight" portion of the recon. First, get the comments. In Burp, that's under Engagement Tools; ZAP uses an add-on. Read them. Look for developer names, open source packages, interesting stuff. Then hit Google. No, I'm not kidding. Look for existing, known vulnerabilities. Do they have a vulnerable version of jQuery? Look up the devs. Their LinkedIn, their Facebook. Then get their StackOverflow profiles. Asked any security questions recently? Any code in there? Are you smelling what I'm cooking here?
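For the library-version check specifically, most JavaScript libraries announce their version in a banner comment at the top of the file, so a quick peek is enough (the script path here is hypothetical - use whatever the site actually serves):
curl -s http://IPofTarget/js/jquery.js | head -n 3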
This is also the time when you look for weird stuff. Is there a file upload? Notate it. Method name in a URL? Point that out. Redirects, like a URL in a querystring? Make a note. Encoded strings? Weird hidden HTML INPUT fields? Cookies you've never seen? Those are all Things That Are Weird. You need to add them to the file, and check on them in the analysis.
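Those encoded strings, by the way, are very often just Base64, so it's worth a quick check from the shell before assuming anything fancier (the value here is made up):
echo 'dXNlcj1hZG1pbg==' | base64 -d
That one decodes to user=admin - exactly the kind of thing you want in your notes.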
Last thing - you need to take apart anything binary. Flash? Java applets? ActiveX (please no)? Take them apart. There are decompilation tools out there, and at the very least run strings (it's in Kali) to see what's in there. You'd be amazed.
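Even without a decompiler, a pass like this will often shake loose URLs, internal hostnames, and the occasional credential (the jar name is just a placeholder - unpack first, since strings won't see much inside a compressed archive):
unzip -q mystery-applet.jar -d applet-contents
find applet-contents -name '*.class' -exec strings -n 8 {} + | grep -i -E 'http|passw|secret'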
That's just about it for recon. I store everything in Evidence, and then step away from the app, even if just overnight. Fresh eyes and all that. The analysis part will likely take a while, so we will break that up over several posts.