On reporting

When you are all finished with your testing and have collected all the evidence, it is time for the report.  The report exists expressly to make it easier for the development team to fix the bugs.  A lot of people don't like reporting.  I am distinctly not one of those people.  I love reporting.  Of course, I wrote a bunch of books, so I have good patterns for writing, and I am sure that makes a difference.  Here is my pattern.

Never start with a blank page.  Always have a template.  If you are starting your own report format, you might be able to get something out of this post.  

Start the report with an executive summary.  Believe it or not, the executives do read these.  It should explain:

  • What a vulnerability assessment is
  • When it happened
  • What exactly was tested
  • A high level overview of the findings

Next you want to get into the details.  Start this with a technical narrative.  This can be as complex or as simple as fits your process.  I just list the tools I used and devices - if any - and leave it at that.  Many go whole hog, though, and include screenshots of the tools in use during the various tests, what was and wasn't found during each, and the like.  I don't bother with this.  I include the proxy log as part of the evidence of my test.  If anyone wants to see what I did, they can look there.  I've never had anyone ask for more than that.

This is, however, a good place to talk about report length.  Vulnerability assessments are expensive, often between four and twelve thousand dollars.  Companies large enough to have relationship managers of some kind - program managers or sales staff - feel bad turning in short reports for that much money.  I get that, I really do.  I'd personally rather turn in a short but complete report than deliver a tome that will never get read.  Different strokes for different folks.

Now we get to the nitty gritty - the findings themselves.  After making a basic list of the findings that is easy to read - finding, one-sentence description, severity - I get straight into the meat and potatoes.

  • Personally, I like a page break at the beginning of every finding.  This makes it easier for a development lead to print a finding, hand it to a staff developer, and say "Here, fix this" if they happen to not have a bug tracking system. Remember, the whole point of the report is to make it easy to fix the bugs.
  • Title the finding, give it a reference number, and list the severity.  I like to list how easy it is to fix, too.
  • Describe the finding.  "DOM Cross Site Scripting is browser injection.  It could allow an attacker to acquire the session information of the logged in user.  DOM XSS is especially dangerous because there is no server-side check possible, as it entirely happens in the JavaScript in the browser."  Like that. Remember that you know all of the findings and what they mean, but some of your end customers will not. It is worth having a template of common finding descriptions for copy and paste purposes (I have a spreadsheet).  
  • Give the technical details.  I do this in a Request and Response format.  For Cross Site Scripting, as an example, I would show the request with the payload painted in red, the response with the unencoded attack painted in red, and then a screenshot of the outcome (if I am in a hurry, you often get an alert(1);).  I do NOT use screenshots of code.  Don't let the sales folks talk you into that.  Yes, I know, having a table of figures is cool and all that, but you can't copy and paste example code from a screenshot.  Just say no to screenshots, unless it is an alert box or something.
  • Finally, and most importantly, remediation advice.  This is how to fix the problem.  Here is where a coding background is helpful.  Sure, you can Google XSS and then type "validate your inputs" in the remediation advice, but that is useless.  The advice is where you really earn your money, so take the time to figure it out.  For instance, in the DOM XSS above - have you ever tried to validate inputs using JavaScript?  It's a screaming pain in the ass.  Find out the latest libraries that make it easier, and recommend them.
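As an aside, the request-and-response evidence above boils down to one question: did the payload come back unencoded?  Here is a minimal Python sketch of that check - my own illustration, not from any particular engagement, and the probe string and response bodies are made up for the example:

```python
import html

def payload_reflected_unencoded(payload: str, response_body: str) -> bool:
    """Return True when the raw payload appears in the response body
    without HTML encoding - the telltale sign of a reflected XSS finding."""
    # If only the escaped form is present, the app encoded its output correctly.
    return payload in response_body and payload != html.escape(payload)

# A classic probe payload; on a vulnerable page it comes back verbatim.
probe = '<script>alert(1);</script>'

vulnerable_body = '<p>Results for <script>alert(1);</script></p>'
safe_body = '<p>Results for &lt;script&gt;alert(1);&lt;/script&gt;</p>'

print(payload_reflected_unencoded(probe, vulnerable_body))  # True
print(payload_reflected_unencoded(probe, safe_body))        # False
```

The same two snippets - raw payload in the request, unencoded reflection in the response - are exactly what gets painted red in the report.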

Finally, this is not a marketing paper.  You don't need to tell 'em what you're gonna tell 'em, tell 'em, and then tell 'em what you told 'em.  The conclusion should just be an invitation to schedule a readout call, where you can walk the development staff through the findings, and that's it.  After that, proofread, save as a PDF, and call it a day.

If you liked this, start at the beginning of the series with On Tools.

Vulnerability Analysis is just fancy QA

Testing an application for vulnerabilities is just like testing an application for meeting the business requirements.  In both cases, the analyst has to have access to the application, an understanding of how the application works, and a test plan.  Let's take them one at a time.

First is access.  While testing with credentials seems like cheating, it isn't.  First, discovery of credentials usually takes time and social engineering, which we don't have and which is often out of scope.  Second, the analyst needs to work from the assumption of an insider threat.  I usually ask for two sets of credentials for each role in the system (two Admin accounts and two User accounts, for example).  This makes it a lot easier to test for privilege escalation.
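The privilege escalation check itself is simple: replay a request that should require the higher role under the lower-privileged session and compare outcomes.  A minimal Python sketch of that decision - the endpoint name and status codes are illustrative, not from any real engagement:

```python
# Hypothetical admin-only endpoint, replayed under both credential sets.
admin_only_endpoint = "/admin/users/export"

def flags_privilege_escalation(admin_status: int, user_status: int) -> bool:
    """Flag a finding when the lower-privileged session gets the same
    successful response the admin session does on an admin-only endpoint."""
    return admin_status == 200 and user_status == 200

# The admin session should succeed; the user session should be refused
# (typically 403 Forbidden, or a redirect to the login page).
print(flags_privilege_escalation(200, 403))  # False - access control held
print(flags_privilege_escalation(200, 200))  # True  - write it up!
```

With two accounts per role, you can also swap session cookies between the pairs to check for horizontal escalation the same way.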

Second is understanding how the application works.  Usually the QA team on an application has been building the test plan from the start of development, alongside the business analysts.  Vulnerability analysts don't have that benefit, so we have to do recon.  This is my recon list:

  • Qualys SSL Test
  • Scan network with nmap
  • Run Content Discovery in Burp
  • Index site with proxy: highlight important interactions
  • Analyze URLs for sensitive data
  • Brute force: FuzzDB directory and file scan
  • Search Google/SO/Pastebin for selected filenames in the application/names in comments
  • Take apart anything binary
  • Isolate common hidden form fields, cookies and URL Parameters
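For that last step, most of the isolation work can be scripted against the pages the proxy has already captured.  A small Python sketch, using only the standard library - the sample page and URL are stand-ins for real proxied traffic:

```python
from html.parser import HTMLParser
from urllib.parse import urlparse, parse_qs

class HiddenFieldFinder(HTMLParser):
    """Collect the names of hidden <input> fields from an indexed page."""
    def __init__(self):
        super().__init__()
        self.hidden_fields = []

    def handle_starttag(self, tag, attrs):
        attributes = dict(attrs)
        if tag == "input" and attributes.get("type") == "hidden":
            self.hidden_fields.append(attributes.get("name"))

# Illustrative page and URL, standing in for proxy-captured traffic.
page = ('<form><input type="hidden" name="csrf_token" value="abc">'
        '<input type="text" name="q"></form>')
url = "https://example.com/search?q=test&debug=1"

finder = HiddenFieldFinder()
finder.feed(page)
print(finder.hidden_fields)                    # ['csrf_token']
print(sorted(parse_qs(urlparse(url).query)))   # ['debug', 'q']
```

Parameters like that `debug=1` are exactly the sort of thing this step is meant to surface.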

It helps out a lot.

Finally, there is the test plan.  Again, we haven't been building a test plan as the application was developed, but that's OK.  We don't need to.  We are almost always testing for exactly the same things.  So, for this, I use the OWASP ASVS.  For most applications I use the Level One requirements in the ASVS, and the spreadsheet helpfully provided.

I want to remind everyone - the ASVS is open source.  It doesn't spring out of thin air.  It needs help from the community: you and me.  Check out the issues list on GitHub and see where you can help out.

Hmm, I did forget something - reporting.  I suppose that will be the third in this series, as there is a lot to consider there too.  Sometimes you need to use the client's format, sometimes the contractor's, and sometimes your own.  I'll share mine, and you all can go from there!

Good testing, all.  If you liked this, start with On Tools earlier.

On Tools

Not too long ago, I was asked to do a technical interview for a set of tests.  This isn't unheard of, but it is odd. Usually, folks have heard about me from someone and that is good enough.  In this case, however, there was a special reason.  They were trying to avoid testers that were overly reliant on tools.

That's something I can get behind.  Too often I have come in on a forensic test only to find out that the last tester just ran a report out of ZAP or Burp and turned it over - no triage, no nothing.  That is, admittedly, more or less useless.  I was confident that I could explain the realities of the situation.

ZAP and Burp are "proxies" in the application sense.  They sit between the web browser and the web server and capture traffic for analysis.  The developer tools (F12 in the web browser) do the same thing, but ZAP and Burp are designed with vulnerability analysis in mind, and as such, they have a lot of tools to help a tester out.  For instance, right now I am using a tool inside Burp to check commonly used usernames and passwords on the login screen.  There are 1,340,656 combinations.  Could I type those all in?  Of course!  Can I have a week for just that test?  No?  OK, well, I'll script it then.  That's a tool.

There are a lot of tools available for Burp and ZAP.  The one in the example above is built in, and it is called Intruder.  Everything Intruder does, the tester can do manually.  However, Intruder will save you a lot of time.  What's most important is that the tester understands what they are doing with Intruder.  It's not enough to just push buttons - everything the tester does with the ZAP or Burp toolset should be something they could do manually, with a clear understanding of exactly what is being done.
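To make the "could do it manually" point concrete, what Intruder does in that credential check is just pair every username with every password and submit them one at a time.  A Python sketch of the same idea - the wordlists here are tiny, made-up examples (real lists produce counts like the 1,340,656 above):

```python
from itertools import product, islice

# Illustrative wordlists - real engagements use much larger ones.
usernames = ["admin", "administrator", "root", "test"]
passwords = ["password", "Password1", "admin", "letmein"]

# Lazily generate every username/password pair, the same way an
# Intruder-style attack would submit them one request at a time.
combos = product(usernames, passwords)

print(len(usernames) * len(passwords))  # 16 combinations for these lists
for user, pw in islice(combos, 3):      # peek at the first few
    print(user, pw)
```

Each pair would become one login request through the proxy; the tool is only automating the typing, not the thinking.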

I have an upcoming post with part of an answer to knowing exactly what is being done - a good test plan. Sneak Peek: I heartily recommend the Application Security Verification Standard from OWASP. But I have said too much already - that's for another time.

What is my toolset then, assuming I know what all of them are doing? It looks like this:

  • Most of my corporate customers expect Burp Suite history as evidence for the test, so I use Burp a lot. I'm a big fan of the addin model, and have even written a couple.  Right now, I have several addins installed from the BApp Store - which is under the Extensions tab in the main tool.
  • If I have a choice in the matter, I will often use ZAP. It has a very slick API that allows for even more automation - yes, tools for tools.  The results from the proxy are the same, of course.  It's just the output from the web server and the input from the web browser.
  • PowerShell.  Yep, you heard me right.  I run a Windows shop, and PowerShell has a robust set of tools for testing services, handling certificates, and whatnot.  If you don't know it, I highly recommend digging in a little.
  • Python. Like Powershell, it has a robust collection of tools for services and manipulation of requests.
  • Nikto.  This is a Perl application that tests for a boatload of known flaws in web servers and supply chain components.  Again, could I test each one manually?  Of course.  Do I want to?  It's not that I don't want to; I just don't have the time.  People like me, but they won't pay me for a year for one test.

That's about it.  In my next post, "Vulnerability Analysis is just fancy QA" I'll talk a lot more about knowing what you are actually testing for, so this post and that one kinda go together.  Either way, I hope you got something from this info.

Postscript: I didn't get the job I mentioned at the top of the post. I use too many tools.

Application Security This Week for April 25

A fun tool that finds weak Active Directory passwords, and then notifies the user.

https://github.com/AdrianVollmer/Crack-O-Matic

 

Signal pwned Cellebrite with pure Moxie.

https://signal.org/blog/cellebrite-vulnerabilities/

 

Sad news, Dan Kaminsky has left us.  He was known for his extraordinary research into DNS cache poisoning, but most importantly, he was a great person. He will be missed.

https://en.wikipedia.org/wiki/Dan_Kaminsky

 


Application Security This Week for April 18

Pwn2Own had some interesting browser vulnerability results:

https://www.zerodayinitiative.com/blog/2021/4/2/pwn2own-2021-schedule-and-live-results

 

Reddit (a social network) has started a bug bounty program:

https://www.reddit.com/r/redditsecurity/comments/mqse9a/announcing_reddits_public_bug_bounty_program/?sort=qa

I am user #63 on that site, and the oldest active member who isn't an admin, so I might give it a shot.

 

A good person wrote a rule set for semgrep that searches for secrets in public repos (or really any code) using some really well-written filters.  Check it out:

https://r2c.dev/blog/2021/dont-leak-your-secrets/

 

Hope everyone has a secure week!

Application Security This Week for April 11

Surprisingly good article from the BBC about firmware attacks.

https://www.bbc.com/news/business-56671419

 

Some really interesting code related to the Windows RPC attack

https://iamelli0t.github.io/2021/04/10/RPC-Bypass-CFG.html

 

One of my favorite topics - insecure API endpoints - presented at BSides

https://blog.assetnote.io/2021/04/05/contextual-content-discovery/

 

Have a secure week, everyone.

Application Security This Week for March 28

Guess who forgot to do a newsletter last week?

 

Cool file upload attack to get access to SSH unauthenticated.

https://blog.fadyothman.com/cve-2021-28379-gaining-rce-via-ssh-backdoor-in-vestacp/

 

Neat tool to MITM an iOS device.  The code is worth a look.

https://github.com/doronz88/harlogger

 

There is a new release of a (new to me) tool to test SAML implementations.

https://blog.compass-security.com/2021/03/saml-raider-release-1-4-0/

 

More cool HTTP2 vulnerabilities exploited.

https://blog.assetnote.io/2021/03/18/h2c-smuggling/

 

TLS 1.0 and 1.1 are formally deprecated.  These become High findings on reports now.

https://datatracker.ietf.org/doc/rfc8996/

 

Retire.js, one of my favorite tools, has been updated.

https://retirejs.github.io/retire.js/

 

And finally, spend your Sunday patching OpenSSL.

https://thehackernews.com/2021/03/openssl-releases-patches-for-2-high.html

 

Have a secure week, everyone.

Application Security This Week for March 14

Happy pi day!

 

Missive on the insecurity of C as a programming language.

https://daniel.haxx.se/blog/2021/03/09/half-of-curls-vulnerabilities-are-c-mistakes/

 

Regex is easily exploitable for denial of service attacks.

https://blog.doyensec.com/2021/03/11/regexploit.html

 

It might be too late to register, but Veracode is holding a Capture The Flag competition for students.

https://www.veracode.com/events/hacker-games

 

Have a secure week.

Application Security This Week for March 7

This is a pop culture article about why mobile applications can be insecure (from Wired), but it is well written.  It might be behind a paywall for some of you; if so, I'm sorry.

https://www.wired.com/story/ios-android-leaky-apps-cloud/

 

Good writeup on the Apache Velocity vulnerability.

https://securitylab.github.com/advisories/GHSL-2020-048-apache-velocity

 

Look, more supply chain problems!  Yay!  3,500 PyPI packages corrupted, and a tool to discover them.

https://github.com/pypa/pypi-support/issues/923

 

And finally, a series that begins with DLL Search Order Hijacking, something similar to what I have added to this newsletter before. Worth keeping an eye on.

https://github.com/pypa/pypi-support/issues/923

 


Application Security This Week for February 28

PortSwigger published their Top 10 Hacking Techniques for 2020.

https://portswigger.net/research/top-10-web-hacking-techniques-of-2020

 

Vulnerabilities in malware!

https://malvuln.com/advisory/4932471df98b0e94db076f2b1c0339bd.txt

 

GitHub is doubling down on security tools, which I think is awesome.

https://venturebeat.com/2021/02/26/github-cso-pledges-more-security-tools-features-for-developers/amp/

 

Have a great week!

Husband. Father. Pentester. Secure software composer. Brewer. Lockpicker. Ninja. Insurrectionist. Lumberjack. All words that have been used to describe me recently. I help people write more secure software.

 

 

