Working towards a unified way of describing threat models
Description of session
We want to create a model of a threat model. It needs to be generic so that it fits all threat modeling methodologies. Later we can use this model to create graph-based websites detailing the different techniques and methodologies, as well as linking the examples we will have to the different elements of a threat model.
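As a sketch of what such a "model of a threat model" might look like, a generic metamodel could be a small set of linked entities that any methodology can populate. All class and field names below are illustrative assumptions, not an agreed standard:

```python
from dataclasses import dataclass, field

# Hypothetical metamodel sketch: names and fields are assumptions,
# not an agreed standard from the session.

@dataclass
class Element:
    """Anything a methodology models: a process, data store, actor, scenario..."""
    name: str
    kind: str  # e.g. "process", "data-store", "actor", "scenario"

@dataclass
class Threat:
    """A threat linked to one or more elements, regardless of methodology."""
    description: str
    affects: list            # the Element(s) this threat applies to
    mitigations: list = field(default_factory=list)

@dataclass
class ThreatModel:
    """A methodology-agnostic container linking elements and threats."""
    methodology: str         # e.g. "STRIDE", "PASTA", "user-story"
    elements: list = field(default_factory=list)
    threats: list = field(default_factory=list)

# The same container can hold a DFD-oriented model...
db = Element("customer DB", "data-store")
dfd_model = ThreatModel("STRIDE", elements=[db],
                        threats=[Threat("SQL injection via web form", [db])])

# ...or a user-story-oriented one, because the metamodel is generic.
story = Element("user resets password", "scenario")
story_model = ThreatModel("user-story", elements=[story],
                          threats=[Threat("reset token is guessable", [story])])
```

Because every methodology's artefacts reduce to elements and threats with links between them, a graph-based website could traverse those links to connect techniques, methodologies, and examples.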
Creating a reference diagram that people can use to help understand models. -> Solves communication issues some teams experience.
- Having a taxonomy of terms helps to ensure we're all speaking the same language
- Steven shared a diagram from Toreon that explains components of a map
- This type of model helps to ensure the creation of models that are machine-readable / tool-supported
- A consistent language helps when creating tools that can allow features such as JIRA ingestion.
- When creating a DSL, it shifts the question to “what is useful input?”, as this will also vary for everyone, e.g. interactions for DFD-oriented TM, ‘scenarios’ for user story-oriented TM; some will want to include details on data, etc.
- Integrating with scrum teams was an interesting way of demystifying security and increasing collaboration both during and after threat modeling
- Create a non-linear framework -> a perspective on reality rather than an attempt to model reality (which cannot comprehend the totality of what it is trying to model)
- Further discussions on whether there should be Cynefin collaboration on threat modelling with Dave Snowden
- Further work to create a Typology for threat modelling
- A DSL may not be the right approach at this moment in time, as it may not cover all of the components/considerations required
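The “what is useful input?” question above could be sketched as a single DSL schema that accepts different entry kinds per methodology. The schema and field names here are hypothetical assumptions, purely to illustrate the point:

```python
# Hypothetical sketch: one top-level DSL schema accepting different input
# kinds depending on the methodology. All field names are assumptions.

def validate_entry(entry):
    """Accepts an interaction (DFD-oriented TM) or a scenario
    (user story-oriented TM); both may carry optional data details."""
    kinds = {
        "interaction": {"source", "destination"},  # DFD-style data flow
        "scenario": {"story"},                     # user-story style
    }
    required = kinds.get(entry.get("kind"))
    if required is None:
        raise ValueError(f"unknown entry kind: {entry.get('kind')!r}")
    missing = required - entry.keys()
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")
    return True

# DFD-oriented input: an interaction between two elements.
validate_entry({"kind": "interaction",
                "source": "web app", "destination": "customer DB",
                "data": "credit card numbers"})  # optional data details

# User story-oriented input: a scenario.
validate_entry({"kind": "scenario",
                "story": "As a user I reset my password"})
```

Even this toy example shows the tension discussed in the session: every kind added to the schema grows the DSL, which is why a typology of perspectives may be more workable than one all-covering language.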
Who is the audience of the DSL and how does this help us make threat modeling more accessible?
Is there a different DSL for business vs tech?
Is the challenge actually that people don't understand the terms used in threat modeling vs the model itself?
How do threat models capture weak signals / micro-anomalies in our models? - Oddities / ‘low’ threats that someone has spotted or thought of but either not captured, or captured but not put in front of the right people with the right level of importance.
- How do you capture data that may not make sense at the time, but where the pattern of that data raises its importance and relevance? How do you see those patterns? And how do you recognise a false signal?
Is it possible to define specific core components that are most important/useful/required?
How does a DSL deal with the risk of cognitive dissonance / intellectual fatigue? Does a DSL support the intent of keeping the audience engaged and aware?
Is a typology (a selection of perspectives) a better approach than a taxonomy, where things like boundary conditions can be missed, or a DSL that might be overcomplicated?
How do we use stories to accurately convey what the threats are?
How do we ensure threat modeling isn't so arduous, complicated, or intimidating that people choose not to contribute? - How do we better facilitate data capture ‘in the field’ or after the fact?
- https://cognitive-edge.com/ - for further resources on decision making/ leadership (Dave Snowden)