AppSec, Enterprise Architecture

What to report and what to discard on assessments

POINT is a small company, just four employees, and I am the only tester.  We do most of our work for other firms that have overflow work, or for larger infosec departments that need appsec expertise occasionally.  Either way, 80% of our tests use someone else's report format.  POINT's format is very simple: almost no template text about why we do what we do, and a lot of custom text about the vulnerabilities and remediation advice.

Even with a slimmer report format, which I talk about at length in On Reporting, I am constantly considering what to include and what to discard in the vulnerabilities list.  Many of POINT's partners craft a custom report and still take all of the raw output from scanners like Burp, nmap, and Nessus and give it to the client.  Everything, even informational findings, is included.  I understand that at the rates appsec people command, organizations might feel like they have to turn in these big glitzy packages to make it seem like it was worth the money.  Hell, the first assessment I did all by myself for POINT was delivered in a three-ring binder!  But I am feeling differently these days.

You see, I was an enterprise developer for many, many years.  I worked on a lot of very large, high-profile projects with many attack surfaces.  I worked on TravelPort, which moved all of the airline booking from the VT100 terminal system it used to be into a common SOAP web service over a flexible data tier.  I worked on the OHIID project for the state of Ohio, which provides sign-ins for Ohio citizens and access to everything that the state manages, from elder care providers to small businesses filing sales tax.  Those projects were tested a LOT, and we would get 450-page reports full of listings of data and no useful remediation advice.  The meetings would start with "If you could turn to page one of the report ..." and I, the senior architect on both projects at the time, would immediately turn my brain off.  Yes, you did a lot of scanning.  Yes, it produced many lines of logs.  What do I need to fix?

Today, now that I am the tester, I still feel that pain, and I try not to pass it on to the next generation of developers.  One week isn't always enough time to formally test the exploitability of everything that shows up in reconnaissance, so I depend on my experience as a developer to determine what actually matters.  XSS got through the service tier, but does the UI encode it properly on output?  If so, skip it - it means nothing.  Verbose error when trying command injection?  Pretty good sign something more is there.
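That "does the UI encode it?" check is the kind of judgment call I mean.  Here is a minimal sketch of it in Python - the payload, the helper name, and the sample pages are all my own illustration, not output from any real tool:

```python
import html

# Hypothetical marker payload we injected through the service tier.
PAYLOAD = '<script>alert(1)</script>'

def reflection_is_encoded(page_text: str, payload: str = PAYLOAD) -> bool:
    """True if the payload only comes back HTML-encoded (harmless reflection)."""
    encoded = html.escape(payload)  # &lt;script&gt;alert(1)&lt;/script&gt;
    return encoded in page_text and payload not in page_text

# The service tier let the payload through, but the UI encoded it on output:
safe_page = f"<p>You searched for {html.escape(PAYLOAD)}</p>"
print(reflection_is_encoded(safe_page))   # True - skip it, nothing to report

# Raw reflection, no encoding - this one goes in the report:
vuln_page = f"<p>You searched for {PAYLOAD}</p>"
print(reflection_is_encoded(vuln_page))   # False
```

The point isn't the code; it's that the scanner flags both responses identically, and only the first one is actually nothing.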

Those determinations are hard, and they're not something one could write a book about, or even general guidance.  It's entirely case-by-case.  With every report I write, I struggle with the includability (Is that a word?  The spell check thinks so.) of a finding.  Today it was a cacheable HTTPS response.  Usually that is a low-severity finding, and it can be a big deal for some things.  This was an SVG file, though, and everything else had the proper caching headers.  Other than the "Why does this one file not have the caching headers?" question, there is no need to report it, yet folks do.  Just to prove they ran a scanner?  That, to me, just adds noise.  Another example (from the same test) was a missing content-sniffing header (X-Content-Type-Options: nosniff).  It's a JSON API; no one cares.  It's noise.
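If I had to write those two judgment calls down, they'd look something like this little triage filter.  Everything here is illustrative - the finding names, the rule predicates, and the dictionary shape are my own, not any scanner's actual output format:

```python
# Hypothetical noise rules capturing the two calls from this test:
# (finding name, predicate on the finding's context, reason to drop it).
NOISE_RULES = [
    ("Cacheable HTTPS response",
     lambda f: f.get("content_type", "").startswith("image/"),
     "static asset; everything else already sends caching headers"),
    ("X-Content-Type-Options header missing",
     lambda f: f.get("content_type") == "application/json",
     "JSON API; content sniffing is not a realistic risk here"),
]

def triage(findings):
    """Split scanner findings into (report, discard) lists."""
    report, discard = [], []
    for f in findings:
        for name, predicate, reason in NOISE_RULES:
            if f["name"] == name and predicate(f):
                discard.append((f, reason))
                break
        else:
            report.append(f)
    return report, discard

findings = [
    {"name": "Cacheable HTTPS response", "url": "/logo.svg",
     "content_type": "image/svg+xml"},
    {"name": "X-Content-Type-Options header missing", "url": "/api/orders",
     "content_type": "application/json"},
    {"name": "SQL injection", "url": "/search", "content_type": "text/html"},
]

report, discard = triage(findings)
print([f["name"] for f in report])  # ['SQL injection']
```

Of course, the real version of this filter lives in my head and changes with every engagement - which is exactly why it's so hard to turn into written guidance.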

What do you include and discard when writing reports?  Or, if you are on the consuming side, how do you dig through the cruft?  Do you like it?  Does management?  Are you getting, and finding, remediation advice that actually works?  Do you look for criticals, and everything else gets tossed in the (virtual) drawer?  Curious to know your thoughts.  Leave a comment or hit me up on Mastodon.