How are you testing your APIs?
To get a glimpse into how organizations are tackling API security, we interviewed one of Synopsys’ principal consultants, diving into best practices, challenges, and key areas of focus.
Modern approaches to software development increasingly involve microservices. Unlike clunky monolithic systems, microservices offer agility, productivity, and velocity. But they also require you to reconsider testing strategies. Testing in a partitioned environment, where components are independently deployable, is complex, and this is especially true for APIs. APIs are not new, but their complexity demands that organizations consider how to properly handle and secure them.
How does an organization best address API security? We asked David Johansson, principal consultant at Synopsys, about API testing and compiled some key takeaways.
Critical challenges of dynamic API testing
Testing microservices is much more complex than testing older monolithic web applications, largely due to issues of scale. Applications built on microservices typically entail dozens of different services, many of which are often not available to test at the same time. There are two overarching challenges that cause the majority of issues organizations face today.
API inventory: Even before automated testing tools can be pointed at APIs, the problem of API discovery and documentation has to be addressed. The dynamic nature of microservices deployment means that APIs can change on a daily basis, so maintaining detailed, up-to-date API inventory tracking is critical. And because the APIs involved in an application all work independently of one another, it can be challenging for testers to keep up with updates and their implications for the overall application.
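One way to keep an inventory honest is to diff it against what a service currently exposes. The sketch below is a minimal illustration of that idea, assuming the inventory is a recorded set of "METHOD /path" strings and the live surface comes from an OpenAPI document; the endpoint names are hypothetical and no specific tool is implied.

```python
# Sketch: detect drift between a recorded API inventory snapshot and the
# endpoints currently listed in a service's OpenAPI document.
# The inventory format and endpoints here are illustrative assumptions.

def endpoints_from_spec(spec: dict) -> set[str]:
    """Flatten an OpenAPI 'paths' object into 'METHOD /path' strings."""
    methods = {"get", "post", "put", "patch", "delete"}
    return {
        f"{m.upper()} {path}"
        for path, ops in spec.get("paths", {}).items()
        for m in ops
        if m in methods
    }

def inventory_drift(recorded: set[str], live_spec: dict) -> dict:
    """Report endpoints added or removed since the last inventory snapshot."""
    live = endpoints_from_spec(live_spec)
    return {"added": sorted(live - recorded), "removed": sorted(recorded - live)}

# Example: one endpoint was added and one removed since the snapshot.
recorded = {"GET /users", "POST /users", "GET /orders"}
live_spec = {
    "paths": {
        "/users": {"get": {}, "post": {}},
        "/payments": {"post": {}},
    }
}
print(inventory_drift(recorded, live_spec))
# {'added': ['POST /payments'], 'removed': ['GET /orders']}
```

Running a check like this on every deployment surfaces new or retired endpoints early, before they become untested, untracked attack surface.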
Business logic: Another key challenge posed by API testing involves the limitations of testing solutions. Methods like DAST and penetration testing can generate nonrelevant findings unless they are specifically tailored for API testing and configured appropriately for the context. These nonrelevant findings can be a false positive or a finding that’s unimportant when examined in the context of business logic or developer intent. Automated testing tools can’t offer the human insight necessary to understand the full picture.
Managed penetration testing can help take business logic and intent into consideration, performing risk analysis and prioritizing critical vulnerabilities. Whether you use professional services or perform this critical analysis yourself internally, there’s no simple way to automate deeper analysis of business logic. The human touch is necessary to ensure accurate results.
Johansson also provided some key practices and considerations that should be evaluated and used to ensure a well-functioning software security initiative (SSI). He recommended the following:
Focus on automation enablement
It’s important to get automated tools to a state where they can be used by development as part of their daily work. An assessment of the tools and automation processes currently in place should include:
- Perform a pilot and gather data to learn how noisy your tooling is.
- Identify areas in need of improvement and calibration.
- Consider getting rid of any tool that isn’t working as desired.
- Fine-tune the tool and work with development to find the best implementation blueprint.
- Teach the broader development organization and Security Champions how to successfully implement the tool and calibrate it further.
- Get development feedback and make improvements as necessary.
Consider business logic during your review
When testing APIs, it’s important to perform a manual review of all findings. Not every finding is relevant or a reason for concern. This is where human oversight is needed to view findings through the lens of development’s intentions.
When you review these findings, make sure you have a clear understanding of the business context. You should have a firm grasp of whether the API supports system-to-system integration or whether it supports an end-user application, such as a single-page application. This type of information will help identify possible points of exposure as well as relevant attack scenarios.
Tackle your inventory
Make sure you have proper documentation of your APIs along with technical specifications that can be imported by an automated tool to generate test cases. The OpenAPI Specification is the leading specification for documenting REST APIs that most tools support, and Web Services Description Language (WSDL) is used for SOAP-based web services.
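As a minimal illustration of the kind of specification a scanner can import to generate test cases, an OpenAPI document might look like the following (the service and endpoints are hypothetical):

```yaml
openapi: 3.0.3
info:
  title: Orders API        # hypothetical service
  version: "1.0"
paths:
  /orders:
    get:
      summary: List orders
      responses:
        "200":
          description: A list of orders
    post:
      summary: Create an order
      responses:
        "201":
          description: Order created
```

Even a spec this small gives an automated tool the endpoint paths, methods, and expected responses it needs to build a baseline set of requests.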
Additional documentation describing use cases and key business logic flows may be necessary to help ensure that multistep flows are carried out correctly and aren’t inadvertently bypassed. Lack of an appropriate technical specification will often lead to poor coverage and can even make it impossible to run automated security tests without substantial manual effort to map out the API endpoints and configure the tools in question. This documentation also makes it easier to effectively exercise the API and the intended functionality in a time-boxed assessment.
Make informed tooling choices
When it’s time to select the appropriate tooling and actually perform the tests, the key focus should be on doing what makes the most sense on a case-by-case basis. Depending on the type of API you’re testing (SOAP, REST, etc.), you can decide what type of testing is appropriate.
To streamline the testing process, it can help to make a checklist of items that automated scanners work well on. Leave these items to the automated scanner and prioritize other items for manual review.
When selecting your testing solution, keep in mind that automated tools look at common classes of vulnerabilities but don’t provide coverage for issues specific to your API and business logic. Incorporate manual reviews because they can cover more complex and context-specific attack scenarios.
Given the scaling challenge posed by a microservices environment, security teams need proper and active API discovery mechanisms in order to keep track of which APIs have been assessed. You should consider when and how APIs are being tested: Are you using an automated API security tool that triggers testing each time the API is built to ensure a baseline security expectation, or is the API relying exclusively on a managed or professional security testing service that is triggered by some other development activity? Using just one of these is likely insufficient because of limited test cases. And it’s inefficient because it doesn’t catch low-hanging fruit early, so testers may have to spend time on issues an automated tool could have caught.
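The first option, build-triggered scanning, can be wired into CI so every change to the API re-runs the baseline checks. The sketch below assumes GitHub Actions; the scanner invocation and paths are placeholders, not a specific product.

```yaml
name: api-security-scan
on:
  push:
    paths:
      - "api/**"           # re-run whenever the API implementation changes
jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Placeholder step: invoke whatever automated API scanner you use,
      # pointing it at the OpenAPI spec checked into the repository.
      - name: Run API scanner
        run: ./scripts/run-api-scan.sh openapi.yaml
```

Pairing a gate like this with periodic manual or managed testing covers both the low-hanging fruit and the deeper business-logic scenarios.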