Observations on Smoke Tests – Part 1


After performing more than 100 application smoke tests over the last year with a variety of commercial scanning tools, I thought I’d share some of the more interesting observations.


Smoke testing, in its traditional definition, is most often used to assess the functionality of key software features and determine whether they work as intended. In the context of application security, smoke testing is used in a slightly different way: to quickly evaluate the security of web applications. More specifically, Optiv performs smoke tests to reveal common security issues within applications and their respective environments. To do that, we first scan the application and its environment, then manually validate any issues the scanner identifies.


Compared to more comprehensive dynamic application assessments, smoke testing requires less manual effort and less time, and it identifies common vulnerabilities that are widely known. That’s why we offer smoke testing as a service for clients who have limited budget, time, or resources, or who simply need to establish a security baseline for their web applications.


Key Observations from Testing Results


Smoke testing is good for quick security validation of your web applications, but we caution our clients to keep in mind that automated scanning tools often miss security issues that can be found only through manual testing. One example is improper authorization between accounts with different privilege levels. A scanner will usually not identify this issue because multi-level authorization rules are very difficult, if not impossible, to define within the tool. By design, smoke testing trades test coverage for scan speed.
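
To give a concrete sense of the kind of manual check that’s hard to express in a scanner, here’s a minimal sketch in Python. The endpoint, cookie name, and session values are placeholders; the idea is simply to replay a request for a privileged-only resource with a non-privileged user’s session and see whether the server refuses it.

    import requests

    # Hypothetical endpoint that should be reachable only by privileged users.
    ADMIN_ENDPOINT = "https://app.example.com/admin/users"

    # Session cookies captured by logging in as each test account (placeholder values).
    PRIVILEGED_SESSION = {"session": "cookie-from-admin-login"}
    NON_PRIVILEGED_SESSION = {"session": "cookie-from-regular-login"}

    def can_access(cookies):
        # With redirects disabled, a 200 means the resource was served directly;
        # a 302 to the login page or a 401/403 means access was refused.
        resp = requests.get(ADMIN_ENDPOINT, cookies=cookies, allow_redirects=False)
        return resp.status_code == 200

    # If the privileged account gets in (sanity check) and the non-privileged
    # account gets the same result, access control is likely broken.
    if can_access(PRIVILEGED_SESSION) and can_access(NON_PRIVILEGED_SESSION):
        print("Possible broken access control:", ADMIN_ENDPOINT)

If the non-privileged session receives the same 200 response as the privileged one, that’s exactly the kind of finding a manual tester validates and a scanner tends to miss.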


Additionally, many clients will scope a smoke test to a single application with two user roles: privileged and non-privileged. However, when that application shares a code base with one or more other applications, this scope can under-report the impact of a finding. Say, for example, that a privileged user has access to two similar applications, Web App A and Web App B, while a non-privileged user has access only to Web App A. If we find that the non-privileged user can access resources reserved for privileged users in Web App A, chances are that Web App B is also affected. Smoke testing a single application will usually fail to detect this.
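
Extending the sketch above, the same replay check can be looped over every application that shares the code base. Again, the base URLs, path, and cookie are placeholders:

    import requests

    # Hypothetical applications that share one code base.
    BASE_URLS = ["https://app-a.example.com", "https://app-b.example.com"]
    PRIVILEGED_PATH = "/admin/users"   # placeholder privileged-only resource
    NON_PRIVILEGED_SESSION = {"session": "cookie-from-regular-login"}

    for base in BASE_URLS:
        resp = requests.get(base + PRIVILEGED_PATH,
                            cookies=NON_PRIVILEGED_SESSION,
                            allow_redirects=False)
        if resp.status_code == 200:
            print("Possible broken access control at", base + PRIVILEGED_PATH)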


Another general observation is that scanners tend to report many false positives, which need to be identified and set aside so they don’t take unnecessary attention away from legitimate vulnerabilities. And like all software tools, scanners do what they’re configured to do, so the configuration options you choose will affect the results. Our extensive product experience gives us the deep product knowledge needed to assist clients with tool configuration, which in turn helps them maximize their tool investment.
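
Triage itself is easy to script once findings have been reviewed. The sketch below assumes a scanner that can export findings as JSON, plus a hand-maintained list of finding IDs already confirmed as false positives; the file names and export format are hypothetical.

    import json

    # Hypothetical JSON export from the scanner: a list of findings,
    # each with "id", "name", and "url" fields.
    with open("scan_findings.json") as f:
        findings = json.load(f)

    # Hand-maintained list of finding IDs already reviewed and
    # confirmed to be false positives.
    with open("false_positives.json") as f:
        false_positive_ids = set(json.load(f))

    legitimate = [item for item in findings if item["id"] not in false_positive_ids]

    print(len(findings), "reported;",
          len(findings) - len(legitimate), "known false positives filtered;",
          len(legitimate), "left to validate")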


For example, applying a scan policy at the group level to a number of applications, versus maintaining something more granular per application, is less prone to human error. Usually it’s not necessary to change the policy on every scan. However, we might suggest changing the scan policy in certain situations. If the client doesn’t want us to attack the database, for example, I’d remove SQL injection (SQLi) tests from the scan policy.
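
Scan policies are configured through each product’s own UI or file format, so the following is only an illustration of the idea in Python: a policy defined once at the group level, with SQL injection checks pulled out for a client who doesn’t want the database touched. The field and check names are made up.

    # Illustrative group-level scan policy (field and check names are hypothetical).
    group_policy = {
        "name": "standard-web-app-smoke-test",
        "applies_to": ["web-app-a", "web-app-b", "web-app-c"],
        "checks": [
            "cross_site_scripting",
            "sql_injection",
            "path_traversal",
            "information_disclosure",
        ],
    }

    # Client-specific exception: don't attack the database.
    group_policy["checks"].remove("sql_injection")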


In several situations, I was able to customize the policy in advance (e.g., specify server, database, and language information), which reduced scanning time by 10 to 50 percent and produced more accurate results. In one instance, the default policy reported a “Windows internal path disclosed” issue for a Linux server; the issue wouldn’t have been reported if I had specified the server type. In another case, I scanned the same application with and without the “Page content varies based on user-agent” policy option enabled. A number of issues were reported only when this option was on. As a result, I was eventually able to find a mobile login page using a custom user-agent value, and that mobile login page turned out to have a severe authentication vulnerability.
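
Outside of any scanner, you can check whether content varies by user-agent with a couple of requests and a diff. The URL and user-agent strings below are placeholders:

    import requests

    URL = "https://app.example.com/login"   # placeholder

    USER_AGENTS = {
        "desktop": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
        "mobile": "Mozilla/5.0 (iPhone; CPU iPhone OS 16_0 like Mac OS X)",
    }

    bodies = {}
    for label, ua in USER_AGENTS.items():
        bodies[label] = requests.get(URL, headers={"User-Agent": ua}).text

    # In practice you'd strip dynamic content (CSRF tokens, timestamps)
    # before comparing; a raw diff is still a useful first pass.
    if bodies["desktop"] != bodies["mobile"]:
        print("Content varies by user-agent; crawl the site with both.")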


Based on my observations, smoke testing is not meant to be a substitute for a comprehensive security assessment, and careful configuration of scanner policies is critical. In the next blog article, I’ll talk about the pros and cons of using cloud-based versus desktop-based scanners.

Security Consultant, Application Security
Raina Chen is a security consultant on Optiv’s application security team. In this role she delivers a variety of service offerings, including web application assessments and web service assessments.