|Organizers||Claudio Camerino , Francisco Novo , Rafael Jimenez|
|Participants||Ajy Gupta , Ben Schofield , Bjoern Kimminich , Dominik de Smit , Emma Fang , Felipe Zipitria , Foteini Karantoni , Gabor Pek , Jannik Hollenbach , Jay Mbolda Yamdjeu , Martin Rock-Evans , OSS Bot , Sean Siford , Tom Ling|
|Remote Participants||André Rainho , Camilo Cota , Pablo Garcia|
Security testing is vital to validate the correct implementation of security controls and requirements. To scale security testing across what are often hundreds of different software products, many organisations now rely on automated tools. In this hands-on working session we’ll learn how to build a working DevSecOps POC and, more importantly, how to deal with the myriad of security findings it generates.
- Slot 1 (AM1) - Architecture and Setup
- Slot 2 (PM2) - Objectives and Challenges
- Slot 3 (PM3) - Outcomes
Thanks to the proliferation of automated security scanning tools, we are generating a phenomenal amount of security findings. As part of this session we tackle the following goals:
- Increase Visibility - You can’t secure what you can’t see. Why it is important to test early in the SDLC and map tests to QA business flows.
- Define Accountability - Creating a feedback loop with your Devs. Why it is important to flag findings to their respective owners and incorporate Devs’ feedback into testing policies.
- Improve Noise Removal - Accuracy drives credibility. Devs are more likely to triage and action reputable findings, starting with tighter scan policies.
- Achieve Scalability - Running tools and managing processes manually is not an option when dealing with hundreds of products. How to scale generation, collection and triaging of security findings.
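The noise-removal and scalability goals above both start with collapsing duplicate findings programmatically. A minimal sketch of that idea follows; the finding field names (`tool`, `rule_id`, `file`, `line`) are illustrative assumptions, not the schema of any particular scanner:

```python
import hashlib

def finding_key(finding):
    """Build a stable fingerprint for a finding so that identical results
    from repeated scans collapse into a single entry.
    The field names used here are illustrative, not tool-specific."""
    raw = "|".join(str(finding.get(k, "")) for k in ("tool", "rule_id", "file", "line"))
    return hashlib.sha256(raw.encode()).hexdigest()

def deduplicate(findings):
    """Keep only the first occurrence of each fingerprint."""
    seen, unique = set(), []
    for f in findings:
        key = finding_key(f)
        if key not in seen:
            seen.add(key)
            unique.append(f)
    return unique

raw = [
    {"tool": "zap", "rule_id": "10038", "file": "/login", "line": 0},
    {"tool": "zap", "rule_id": "10038", "file": "/login", "line": 0},  # duplicate result
    {"tool": "sonarqube", "rule_id": "S2076", "file": "app.py", "line": 42},
]
print(len(deduplicate(raw)))  # 2
```

In practice a platform like Defect Dojo performs this deduplication for you, but the fingerprinting idea is the same: the tighter the key, the fewer false merges.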
- Explore the automated testing workflow; participants will be encouraged to take part and share their experiences.
- What selection of tools and test types should be used to generate security findings as part of a DevSecOps program.
- Recommended security testing approaches for:
  - Frontend vs backend applications
  - Static vs runtime
- Why it is important to have a single source of truth for multiple testing tools
- AppSec testing integration with QA - user stories vs abuse cases and how to leverage QA processes to drive ZAP.
- Integration with Jira - how to raise and populate SEC type tickets and track their lifecycle.
- Continuous improvement - how to tune security policies as result of the triage process
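On the Jira integration point above, raising SEC-type tickets automatically amounts to building a create-issue payload per finding. A hedged sketch, assuming the standard Jira REST API create-issue format; the `SEC` project key, issue type, and labels are placeholders to adapt to your own Jira configuration:

```python
def sec_ticket_payload(finding, project_key="SEC"):
    """Build a Jira REST API create-issue payload for a security finding.
    Project key, issue type and labels are assumptions, not fixed values."""
    return {
        "fields": {
            "project": {"key": project_key},
            "summary": f"[{finding['tool'].upper()}] {finding['title']}",
            "description": (
                f"Severity: {finding['severity']}\n"
                f"Location: {finding['location']}\n"
                "Raised automatically by the DevSecOps pipeline."
            ),
            "issuetype": {"name": "Bug"},
            "labels": ["security", "devsecops", finding["tool"]],
        }
    }

payload = sec_ticket_payload({
    "tool": "zap",
    "title": "X-Frame-Options Header Not Set",
    "severity": "Medium",
    "location": "https://example.test/login",
})
print(payload["fields"]["summary"])  # [ZAP] X-Frame-Options Header Not Set
```

Tracking the ticket lifecycle then becomes a matter of storing the returned issue key alongside the finding’s fingerprint, so re-scans update the existing ticket instead of opening a new one.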
- Build and run a working DevSecOps POC lab from open source tools
- Define ruleset for programmatic removal of noise (e.g. duplicates, fixes in progress and easy to spot false positives)
- Learn how to adapt/hack OSS tools like ZAP and Defect Dojo for enterprise-level automation.
- Define roles and responsibilities for an appsec pipeline based on common industry roles (QA, Del Svcs, Engineering etc.)
- Create CD scripts to automate generation, collection and allocation of findings.
- Generation of:
  - ZAP scan policies, contexts and ZEST scripts
  - SonarQube SAST quality profiles
  - Dependency-Check configuration
  - Defect Dojo/Jira integration scripts
  - Jenkins Groovy scripts to tie it all together
- Publish Repo with Automation scripts
- Pull Request for Defect Dojo
- Document process to handle findings
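As an illustration of the collection step these outcomes describe, the snippet below assembles (but does not send) a Defect Dojo `import-scan` call. The endpoint and field names follow the Defect Dojo v2 API; the host, token and engagement id are placeholders:

```python
def import_scan_request(report_path, engagement_id, scan_type="ZAP Scan"):
    """Assemble the pieces of a Defect Dojo /api/v2/import-scan/ call.
    Host, token and engagement id are placeholders; the field names
    follow the Defect Dojo v2 API."""
    url = "https://defectdojo.example.test/api/v2/import-scan/"
    headers = {"Authorization": "Token <api-token>"}
    data = {
        "engagement": engagement_id,
        "scan_type": scan_type,      # e.g. "ZAP Scan", "SonarQube Scan"
        "minimum_severity": "Low",
        "active": True,
        "verified": False,
    }
    # In a CD script you would attach the report file and POST, e.g.:
    #   requests.post(url, headers=headers, data=data,
    #                 files={"file": open(report_path, "rb")})
    return url, headers, data

url, headers, data = import_scan_request("zap-report.xml", engagement_id=12)
print(data["scan_type"])  # ZAP Scan
```

Each scanner in the pipeline (ZAP, SonarQube, Dependency-Check) can funnel its reports through the same call, which is what makes Defect Dojo the single source of truth discussed earlier.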
The target audience for this Working Session is:
- Security professionals
- DevOps / DevSecOps
- Security champions
- AppSec leaders
Here are the current materials for this session:
Previous Summit Working Session