Secure Code Reviews, Magic or Art (OWASP AppSecUSA Presentation Review)

Continuing my series of write-ups on the talks I attended at AppSecUSA this year.

Sherif Koussa (@Skoussa), a Principal Application Security Consultant at Software Secured, presented this talk on source code reviews and proposed a methodology for conducting them. Companies tend to take a “happens to someone else, never to us” attitude. There can also be “accidental security”: inputs that happen to be protected against, for example, SQL injection only because they are converted to integers along the way. Security has to be more proactive and deliberate than that. The methodology goes like this: enumerate the inputs, auto-scan for low-hanging fruit, review manually, weed out false positives, and communicate the findings to developers. Note the combination of automated and manual assessment for maximum coverage.
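To make the “accidental vs. deliberate” distinction concrete, here is a minimal hypothetical sketch (not from the talk): both functions resist SQL injection, but the first only as a side effect of an undocumented `int()` conversion, while the second uses a parameterized query on purpose.

```python
import sqlite3

def get_user_accidental(conn, user_id):
    # "Accidental security": int() happens to reject any injection
    # payload with a ValueError, but nothing documents that this
    # conversion is the only thing standing between input and query.
    uid = int(user_id)
    return conn.execute(f"SELECT name FROM users WHERE id = {uid}").fetchone()

def get_user_deliberate(conn, user_id):
    # Deliberate security: a parameterized query, safe by design
    # regardless of what type user_id arrives as.
    return conn.execute("SELECT name FROM users WHERE id = ?", (user_id,)).fetchone()
```

If someone later changes the accidental version to accept a string id, the “protection” silently disappears; the deliberate version survives that refactor.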

The notion of a “trust boundary” was also discussed. This is something I have contemplated extensively when architecting software, not just for security but also for handling bad input that can crash the software. One implicitly or explicitly draws a trust boundary: a boundary behind which you do not bother checking inputs, trusting the code outside the boundary to have done so. It is simply not feasible to wrap every line of code in a validating if statement; if for no other reason, you would have to wrap the ifs in ifs, and those ifs in ifs, and so on until you stack-fault the universe.
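A minimal sketch of the idea, with hypothetical function names of my own: one function sits on the trust boundary and validates everything; code inside the boundary then trusts its arguments rather than re-checking them at every layer.

```python
def parse_quantity(raw: str) -> int:
    # Trust boundary: all external input passes through here once.
    qty = int(raw)               # raises ValueError on non-numeric input
    if not 1 <= qty <= 100:
        raise ValueError(f"quantity out of range: {qty}")
    return qty

def price_order(qty: int, unit_price: int) -> int:
    # Inside the boundary: qty is already validated, so this code
    # trusts it instead of wrapping itself in another if.
    return qty * unit_price
```

The range check (1–100) is an arbitrary illustration; the point is that validation happens exactly once, at the boundary.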

Koussa also recommended naming identifiers in ways that make the security implications obvious, so those issues are not lost in subtlety. This is a subset of a greater principle to which I subscribe but, not being a 900-pound-gorilla security company, am presently at a loss to coerce the industry to practice: convince developers to write code that makes it easy for automated tools and manual reviewers to find problems, without compromising the expressive power of the code. At NT OBJECTives, we build automated application security scanning tools. Many of the things I have seen fool scanners, or that we have had to make our scanner resilient to, were unnecessarily complicated, such as multiple assignments and copies of variables in JavaScript that cause the tool to get lost while attempting to profile the inputs. Developers just do this without thinking about how it makes an automated tool’s or a manual reviewer’s job more difficult. If there is a compelling reason to code that way, then so be it, but if the code can be made simpler, by all means do so.
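A hypothetical illustration of both points (obvious naming, and data flow that does not lose the tool): both functions below are equally injectable, but in the first the tainted value is laundered through several aliases before reaching the sink, while in the second a single well-named variable makes the untrusted source and the flaw obvious to a reviewer or a taint-tracking scanner.

```python
# Hard to follow: the tainted value passes through aliases and a list
# before reaching the sink, obscuring the source-to-sink path.
def build_query_obscure(params):
    a = params.get("name")
    b = a
    parts = [b]
    c = parts[0]
    return "SELECT * FROM users WHERE name = '" + c + "'"

# Easy to follow: one obviously named variable, one obvious sink.
def build_query_clear(params):
    untrusted_name = params.get("name")
    return "SELECT * FROM users WHERE name = '" + untrusted_name + "'"
```

Neither version is safe; the point is that the second makes the vulnerability cheap to find and fix.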

Koussa wrapped up by calling for findings to be reported to developers with articulate reasons why the code is vulnerable, the business impact, and how to remediate. It is difficult to argue with this, of course, but I also understand why it needs saying, obvious as it may seem. A lot of reports are information glut; if they present the information this speaker was calling for at all, it is a needle in the haystack of that glut.

About M. J. Power