Originally, we had web forms. Well, I suppose we still have web forms. The basis of HTTP is request-response, and nothing's really changed there. We've changed how we make requests, but the basic format and idea of a request hasn't altered much at all. There are still URL parameters and POST parameters; it's pretty normal.
Responses are a little different, though, because sometimes they come back as JSON to be consumed by a JavaScript user interface. If they're set up in a formal way, they might even be considered an API. That means we've got a JavaScript front end calling some JSON back end via a standard set of formats that move the data in an expected way, so we know more or less what's going on.
But that isn't really where it stops, because anything that's returning JSON in the response isn't a web page, right? It's an API call. You're returning data to be used by some user interface; in our case, it's in a browser and written in JavaScript. Fine. But what if it's weird? What if it returns way more data than it should, or way less, or extra fields you never asked for? What if it returns unexpected data when it gets unexpected requests? How are you going to handle that in your user interface?
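To make that concrete, here's a rough sketch of what defensive handling can look like on the front end. The /api/profile endpoint, the field names, and the parseProfile helper are all made up for illustration; the point is just to check the shape of the response before letting it anywhere near the UI.

```typescript
// A minimal sketch of defensive response handling, assuming a hypothetical
// /api/profile endpoint that is supposed to return { id, name, email }.
export interface Profile {
  id: number;
  name: string;
  email: string;
}

export function parseProfile(payload: unknown): Profile | null {
  if (typeof payload !== "object" || payload === null) return null;
  const p = payload as Record<string, unknown>;
  // Reject anything missing the fields we expect, no matter what extra
  // fields the server decided to tack on.
  if (typeof p.id !== "number" || typeof p.name !== "string" || typeof p.email !== "string") {
    return null;
  }
  return { id: p.id, name: p.name, email: p.email };
}

export async function loadProfile(): Promise<Profile> {
  const res = await fetch("/api/profile");
  const profile = parseProfile(await res.json());
  if (profile === null) {
    // Surface a controlled error instead of letting weird data flow into the UI.
    throw new Error("Unexpected response shape from /api/profile");
  }
  return profile;
}
```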
Recently I wrote about writing failing unit tests. This fits in well with that, because your front end needs to be able to handle unexpected JSON responses, but more importantly, your security scanner has to be able to handle them too. At this time, I have to tell you, the outlook for that is bleak.
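That defensive code is exactly the kind of thing you can write failing tests against first. Here's a quick sketch, assuming a Vitest-style runner and that the hypothetical parseProfile helper above is exported from a profile module:

```typescript
// Failing-first tests for the made-up parseProfile helper: feed it the weird
// responses we worry about and insist it refuses them.
import { describe, it, expect } from "vitest";
import { parseProfile } from "./profile";

describe("parseProfile", () => {
  it("rejects a response with missing fields", () => {
    expect(parseProfile({ id: 1 })).toBeNull();
  });

  it("rejects a response with the wrong types", () => {
    expect(parseProfile({ id: "1", name: 42, email: null })).toBeNull();
  });

  it("rejects a response that is not JSON-shaped at all", () => {
    expect(parseProfile("<html>502 Bad Gateway</html>")).toBeNull();
  });
});
```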
The issue is one of architecture. The security companies are working very hard to sell you products that will accurately and reproducibly find security vulnerabilities in your code. When they tell you that an API call has a cross-site scripting vulnerability, you have to wonder if they really understand how the web works. Not to be dissuaded, they started dividing their products into web scanners and API scanners. Problem solved, right?
As I'm sure you can imagine, no, that isn't right. The way we develop web applications today makes the API an integral part of the application, which is different from how APIs were built in the client-server days. A COM component, for instance, with a set of objects you can spin up to get things done, is how many architects who have been around a long time still think of contemporary web APIs.
But you see, that isn't how it works anymore. As time has passed, APIs have become an integral part of the application itself, tied in directly with intermediary products like GraphQL, document databases, or just files in the web application that return JSON to be handled by React or some other JavaScript framework on the front end. Or an app. Or a watch. Or a cron job. Like in the diagram, which I admit I made in Eraser because online diagramming tools hate me.
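To put it another way, the same JSON endpoint might be consumed by the React front end, a scheduled job, or anything else that speaks HTTP; there is nothing in the request that says which one is on the other end. A tiny, made-up illustration:

```typescript
// The same hypothetical /api/reports endpoint feeds everything: a React
// component in the browser, a cron job on a server, a mobile app, a watch.
async function fetchReports(): Promise<unknown> {
  const res = await fetch("https://app.example.test/api/reports");
  return res.json();
}

// In the browser, a component would call fetchReports() and render the result.
// In a cron job, the very same call runs headless and writes the result to disk
// or a queue. From the server's point of view, it is all just the API.
fetchReports().then((reports) => console.log(JSON.stringify(reports, null, 2)));
```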

Because of this, dynamic scanners become extremely confused. It's simply not clear to them what is risky from an API-call perspective and what is risky from a web-application perspective. The vulnerabilities in HTTP have not changed. The ways an attacker can exploit them have, and the security companies are simply not keeping up with this accurately enough to make the default install of any scanner worth its weight in anything.
So, what are we going to do? The first thing you're going to want to do is learn the ins and outs of your scanner so that you can teach it exactly which parts of the application should be scanned in which ways, and which content types coming back have to be looked at differently. You don't need your scanner telling you over and over again that you're missing a cross-site request forgery token on an API call; that just doesn't make any sense. You don't need to hear about cross-site scripting in your API call either; that's a front-end activity because it's about encoding. The scanner just doesn't know the difference.
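Here's a small, made-up example of why that's a front-end concern: the API handing back a nasty string isn't the cross-site scripting problem; how the front end renders that string is.

```typescript
// A sketch of why XSS lives in the front end rather than in the API response.
// The payload below is hypothetical; the same string is safe or dangerous
// depending entirely on how the UI renders it.
const payload = { comment: '<img src=x onerror="alert(1)">' };

function renderUnsafely(el: HTMLElement) {
  // If a scanner flags the JSON response, this is the line it is actually
  // worried about: raw HTML injection in the browser.
  el.innerHTML = payload.comment;
}

function renderSafely(el: HTMLElement) {
  // Treating the value as text encodes it on output, which is where the
  // defence really lives.
  el.textContent = payload.comment;
}
```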
You can convince the scanner to work with a specific web application, but it is not a point-and-click operation. I've found a lot of benefit in manually indexing the site with a tool like Selenium or Burp Suite Professional and then feeding that site map into my scanner to drive the dynamic scanning from there. Are you lucky enough to have a nice test suite that the QA team has put together specifically to exercise the underlying API, a real API? Fantastic. Proxy that through Burp too. Give that a try. Rather than feeding the actual Postman project into your scanning tool, try the Burp history with your tests in it instead. That way the scanner understands how an individual page call interacts with whatever back-end system is there, and it doesn't need nearly as much interaction.
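As a rough sketch of what that can look like in practice: a tiny smoke run of made-up API endpoints pushed through Burp's proxy so its history has real traffic in it for the scanner to work from. It assumes Burp is listening on 127.0.0.1:8080 and that Node trusts Burp's CA certificate (for example via NODE_EXTRA_CA_CERTS); the base URL, endpoints, and the use of axios are all just for illustration.

```typescript
// Push an existing API test pass through Burp so the proxy history, not a
// Postman export, is what feeds the scan.
import axios from "axios";

const api = axios.create({
  baseURL: "https://app.example.test",
  // Burp's default proxy listener.
  proxy: { protocol: "http", host: "127.0.0.1", port: 8080 },
});

async function exerciseApi(): Promise<void> {
  // Every one of these calls lands in Burp's proxy history.
  await api.get("/api/orders");
  await api.post("/api/orders", { sku: "ABC-123", quantity: 2 });
  await api.get("/api/orders/1");
}

exerciseApi().catch((err) => {
  console.error("API smoke run failed:", err.message);
  process.exit(1);
});
```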
The second thing you're going to want to do is handle all of the authentication and authorization pieces in your scanner. No fewer than six times in the last two years, I have found scans that had been running for months or even years with absolutely no login capability, and because of that they weren't fully scanning the application. This really comes into play when an API uses a separate authentication mechanism from the website itself. Contemporary authentication and authorization systems like Ping and Okta are definitely complicated to scan against. There is lots of good documentation out there, but you have to actually get in and read the docs, or even go look at sample configurations on the respective vendors' websites.
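As a sketch of what the scanner's session handling (or a pre-scan script) has to be able to do, here's a bare-bones client-credentials token fetch. The token URL, client ID and secret, and scope are placeholders; the real values and the grant type come out of your Okta or Ping configuration.

```typescript
// Fetch a bearer token the way the scanner's login macro would need to,
// assuming an OAuth 2.0 client-credentials grant. All endpoints and scopes
// below are hypothetical placeholders.
async function getAccessToken(): Promise<string> {
  const body = new URLSearchParams({
    grant_type: "client_credentials",
    client_id: process.env.OAUTH_CLIENT_ID ?? "",
    client_secret: process.env.OAUTH_CLIENT_SECRET ?? "",
    scope: "api.read",
  });

  const res = await fetch("https://idp.example.test/oauth2/default/v1/token", {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body,
  });
  if (!res.ok) throw new Error(`Token request failed: ${res.status}`);
  const json = (await res.json()) as { access_token: string };
  return json.access_token;
}

// Without this header on every API request, the scan isn't really scanning
// the application at all.
async function authedGet(path: string): Promise<Response> {
  const token = await getAccessToken();
  return fetch(`https://app.example.test${path}`, {
    headers: { Authorization: `Bearer ${token}` },
  });
}
```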
Third would be looking at the actual output from the scans. I can't believe how much detail is in there that no one ever really looks at. The last several times a problem has come up in a scan, I downloaded the full output, loaded it up in Excel, made a pivot chart, and figured out exactly where it was going wrong. Honestly, that's what led me to points 1 and 2 - realizing that the scanner didn't know where to go to actually test things and that it wasn't authenticating properly. The output is ugly and complicated, with lots of text in it; honestly, it looks like a web server log. It's hard to dig through, but it brings a tremendous amount of value, not only in solving problems but also in really understanding what the scanner is doing.
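If Excel isn't your thing, the same pivot is a few lines of code. This sketch assumes the scanner can export its findings as a simple CSV with url, issue, and severity columns and no embedded commas; real exports vary, and anything serious wants a proper CSV parser.

```typescript
// A rough stand-in for the Excel pivot: count findings by issue type from a
// hypothetical scan-results.csv export with "url,issue,severity" columns.
import { readFileSync } from "node:fs";

const rows = readFileSync("scan-results.csv", "utf8")
  .trim()
  .split("\n")
  .slice(1) // drop the header row
  .map((line) => {
    const [url, issue, severity] = line.split(",");
    return { url, issue, severity };
  });

// Pivot: issue type -> number of findings.
const byIssue = new Map<string, number>();
for (const row of rows) {
  byIssue.set(row.issue, (byIssue.get(row.issue) ?? 0) + 1);
}

// Sort so the noisiest findings (often CSRF warnings on API calls) float to the top.
for (const [issue, count] of [...byIssue.entries()].sort((a, b) => b[1] - a[1])) {
  console.log(`${count}\t${issue}`);
}
```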
There's my two cents on dynamic application scanning for APIs. It's not easy. But it can be easier if you're just willing to work with the scanning tool a little bit and realize it's not just going to work out of the box. You're going to have to change the way you interact with the tool so it can change the way it interacts with the underlying application.