Observations on Smoke Tests – Part 1

March 14, 2018

After performing more than 100 application smoke tests over the last year with a variety of commercial scanning tools, I thought I'd share some of my interesting observations.

In its traditional definition, smoke testing is most often used to assess the functionality of key software features and determine whether they work as intended. In the context of application security, smoke testing is used in a slightly different way: to quickly evaluate the security of web applications. More specifically, Optiv performs smoke tests to reveal common security issues within applications and their respective environments. To do that, we first scan the application and its environment, then manually validate any issues identified by the scanner. Compared to more comprehensive dynamic application assessments, smoke testing takes less manual effort and less time, and it identifies common, widely known vulnerabilities. That's why we provide smoke testing as a service for clients who have limited budget, time or resources, or who simply need to establish a security baseline for their web applications.

Key Observations from Testing Results

Smoke testing is good for quick security validation of your web applications, but we caution our clients to keep in mind that automated scanning tools often miss many security issues that can be found only through manual testing. One example is improper authorization between accounts with different privilege levels. A scanner will usually not identify this issue because multi-level authorization rules are very difficult, if not impossible, to define in-tool. By design, smoke testing trades test coverage for scan speed.

Additionally, many clients will scope smoke tests to include one application with two user roles: privileged and non-privileged. However, when the application shares a code base with one or more other applications, that scope may under-report the impact of a finding. Say, for example, that a privileged user has access to two similar applications, Web App A and Web App B, while a non-privileged user has access only to Web App A. If we find that the non-privileged user can access resources reserved for privileged users in Web App A, chances are that Web App B is also affected. Smoke testing a single application will usually fail to detect this. Because this class of issue rarely shows up in scanner output, we validate it by hand, as sketched below.
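The snippet below is a minimal sketch of that kind of manual check, not Optiv tooling or a scanner configuration: it simply replays privileged-only requests using a non-privileged user's session. It assumes the Python requests library, and every URL, endpoint and cookie value shown is hypothetical.

```python
# Minimal sketch: replay privileged-only endpoints with a non-privileged
# session to spot improper authorization. All URLs, endpoints and cookie
# values below are hypothetical placeholders.
import requests

# Session cookie captured by logging in as the non-privileged user
# through a browser or intercepting proxy.
NON_PRIV_COOKIES = {"session": "<non-privileged-session-token>"}

# Endpoints that should be reachable only by privileged users.
PRIVILEGED_ONLY = [
    "https://webapp-a.example.com/admin/users",
    "https://webapp-a.example.com/admin/reports/export",
    # Repeat the check against the sibling app that shares the code base.
    "https://webapp-b.example.com/admin/users",
]

for url in PRIVILEGED_ONLY:
    resp = requests.get(url, cookies=NON_PRIV_COOKIES,
                        allow_redirects=False, timeout=10)
    # A 200 with admin content (rather than a 401/403 or a redirect to
    # the login page) suggests the authorization check is missing or broken.
    print(f"{resp.status_code}  {url}")
```

Running the same list against both applications is what surfaces the shared-code-base problem described above; scoping the test to Web App A alone would leave Web App B unexamined.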
Another general observation is that scanners tend to report many false positives, which need to be identified and set aside so they don't pull attention away from legitimate vulnerabilities. And like all software tools, scanners do what they're configured to do, so the configuration options you choose will affect the results. Our extensive product experience gives us the deep product knowledge needed to help clients configure their tools, which in turn helps them maximize their tool investment.

For example, applying a scan policy at the group level for a number of applications, versus something more granular, is less prone to human error, and usually it isn't necessary to change the policy on every scan. However, we might suggest changing the scan policy in certain situations. If the client doesn't want us to attack the database, for example, I'd remove SQL injection tests from the scan policy. In several situations I was able to customize the policy in advance (e.g., by specifying server, database and language information), which reduced scanning time by 10 to 50 percent and produced more accurate results. In one instance, the default policy reported a "Windows internal path disclosed" issue for a Linux server; the issue wouldn't have been reported if I had specified the server type.

In another case, I scanned the same application with and without the "Page content varies based on user-agent" policy selected. A number of issues were reported only when this policy was enabled. As a result, I was eventually able to find a mobile login page by using a custom user-agent value, and that page turned out to have a severe authentication vulnerability. A rough sketch of this kind of user-agent comparison follows.
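As a simple illustration of why that policy matters, the sketch below fetches the same page with a desktop and a mobile User-Agent header and flags any difference. The target URL and user-agent strings are assumptions for the example; a real scanner's user-agent policy performs this comparison far more systematically across the whole site.

```python
# Minimal sketch: check whether page content varies by user-agent.
# The URL and user-agent strings are illustrative only.
import requests

URL = "https://webapp.example.com/"

USER_AGENTS = {
    "desktop": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
    "mobile": "Mozilla/5.0 (iPhone; CPU iPhone OS 11_0 like Mac OS X)",
}

responses = {}
for label, ua in USER_AGENTS.items():
    resp = requests.get(URL, headers={"User-Agent": ua}, timeout=10)
    responses[label] = resp.text
    print(f"{label:8s} status={resp.status_code} length={len(resp.text)}")

# A large difference in response size or structure hints that the
# application serves alternate content (e.g., a mobile login page)
# that deserves its own scan with the matching user-agent configured.
if responses["desktop"] != responses["mobile"]:
    print("Content differs by user-agent; scan each variant separately.")
```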
Based on my observations, smoke testing is not meant to be a substitute for a comprehensive security assessment, and careful configuration of scanner policies is critical. In the next blog article, I'll talk about the pros and cons of using cloud-based versus desktop-based scanners.

By: Raina Chen, Security Consultant, Application Security

Raina Chen is a security consultant on Optiv's application security team. In this role she delivers a variety of service offerings, including web application assessments and web service assessments.