I>S+D! – Interactive Application Security Testing (IAST), Beyond SAST/DAST (OWASP AppSecUSA Presentation Review)

This talk, by Ofer Maor, CTO of Quotium (@quotium on Twitter), at AppSecUSA 2012, addressed something I see as an up-and-coming issue: interactive, in-memory code testing. There are several talks about it and a lot of ambient chatter in the security community. As someone who works for a company that offers an automated application security (DAST) tool that prides itself on being as automatic as possible, I have a very particular perspective on this: specifically, that manual or partially manual security testing reasserts itself in each new generation of testing paradigms.

In this case, rather than running a tool that looks for potential buffer overflows and the like by statically analysing source code and then relying entirely on its report, one (possibly in addition to that) runs a tool that hooks the program dynamically, observes its behaviour as it happens, and guides the tool to the next attack accordingly. At NTO we have tried to avoid requiring user interaction as much as possible, using it only when absolutely necessary, and our tool is a web application scanner, not a source or binary code analyser (not primarily, anyway, though it does a bit of that in order to get deeper into the site). So our tool belongs to what the speaker groups under the established static and dynamic (SAST/DAST) approaches, on the dynamic side. In the field of static source code analysis, until someone comes up with a magic, halting-problem-defying analysis algorithm, I am inclined to buy into the speaker's implicit assertion that a fair amount of user (pen-tester) interaction with the tool may be necessary to maximize results on code reviews.
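To make the contrast concrete, here is a minimal sketch (in Python, and emphatically not the speaker's actual instrumentation) of what "hooking a program and observing its behaviour as it happens" looks like; `run_query` is a hypothetical stand-in for an application call a tester might want to watch:

```python
# Minimal sketch (not the speaker's tool): observe a call at runtime by
# wrapping it, instead of relying only on a static report about the source.
# `run_query` is a hypothetical stand-in for a real application function.

import functools

def hook(fn):
    """Wrap a function so each call is observed as it happens."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        print(f"[hook] {fn.__name__} called with {args!r}")
        result = fn(*args, **kwargs)
        print(f"[hook] {fn.__name__} returned {result!r}")
        return result
    return wrapper

def run_query(sql):
    # Pretend database call.
    return f"rows for: {sql}"

# Install the hook, then exercise the code path the way a tester would,
# using what is observed to decide on the next attack.
run_query = hook(run_query)
run_query("SELECT * FROM users WHERE name = 'alice'")
```

The point of the sketch is only the workflow: the observation happens at run time, on the live code path, and the human drives what gets exercised next.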

So the speaker’s tool is not a static or dynamic one, but an interactive code analyser. The calls it can hook include HTTP requests/responses, database queries, file system calls, string operations, memory operations, third-party libraries, external application calls, and so on. It is binary-based rather than source-code-based, and therefore needs a good database of API calls and the like to hook. Probably the definitive example, though certainly not the only one, of what one can do with the speaker’s tool is tainted-input tracking: following an SQL injection (for example) through every stage of its propagation down to the database.
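As a rough illustration of the tainted-input idea, here is a small sketch (again Python, with hypothetical `http_param` and `execute_sql` names, not the speaker's API): input arriving at a hooked HTTP source is marked tainted, the mark survives string operations, and a hooked database sink flags the data when it arrives.

```python
# Minimal taint-tracking sketch. The hooked source (http_param) and hooked
# sink (execute_sql) are hypothetical stand-ins, not the speaker's API.

class Tainted(str):
    """A string marked as attacker-controlled; concatenation stays tainted."""
    def __add__(self, other):
        return Tainted(str(self) + str(other))
    def __radd__(self, other):
        return Tainted(str(other) + str(self))

def http_param(value):
    """Hooked HTTP source: everything arriving here is marked tainted."""
    return Tainted(value)

def execute_sql(query):
    """Hooked database sink: flag tainted data that reaches the query."""
    if isinstance(query, Tainted):
        print(f"[taint] tainted input reached the database: {query!r}")
    else:
        print(f"[ok] query looks clean: {query!r}")

# Follow an injection-style input through string operations down to the sink.
name = http_param("' OR '1'='1")
execute_sql("SELECT * FROM users WHERE name = '" + name + "'")
```

A real IAST engine does this by instrumenting the runtime (binary or bytecode hooks) rather than by subclassing strings, but the source-to-sink propagation it reports is the same in spirit.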

I was very impressed with the technical achievement the speaker’s tool represents. And I agree with the implicit notion that no single paradigm (automatic black-box testing, fully manual pen-testing, tool-assisted semi-manual pen-testing) is sufficient on its own to maximize security; they need to be used in concert. If budget prescribes a triage approach to security, however, I would say an automatic tool doing black-box testing of the low-hanging fruit is likely to achieve the greatest coverage within that limitation.
