The Dirty Dozen vs The Magnificent Seven
Trustworthy & Responsible Generative AI (Gen AI) is tough - Full Stop. Agreeing on what it is, or more importantly what it isn’t, is not easy either. Perhaps that is the root of all the confusion. Without debating the merits of any one stakeholder’s position, we can pick one definition and then compare it against real-world mission statements, Service Level Agreements (SLAs), Warranties and Guarantees.
I have become fond of calling the 12 risk categories associated with Generative AI, as described in NIST AI 600-1, “The Dirty Dozen”. Distilled down to its essence, the document describes in detail how human beneficiaries could be harmed if a generative AI system fails. It has become my lens of choice when assessing Gen AI systems.
The 12, listed out:
1. CBRN Information
2. Confabulation
3. Dangerous or Violent Recommendations
4. Data Privacy
5. Environmental
6. Human-AI Configuration
7. Information Integrity
8. Information Security
9. Intellectual Property
10. Obscene, Degrading, and/or Abusive Content
11. Harmful Bias or Homogenization
12. Value Chain and Component Integration
Humans love stories. Armed with the Dirty Dozen, I can have impactful and productive conversations with various stakeholders when discussing curated threat catalogs and control affinities. This approach has proven very effective when communicating complex concepts like “AI Hallucinations” (i.e., Confabulation) to the people responsible for securing these systems. Further, it allows me to be very prescriptive when discussing reasonable ways to address residual risk with compensating controls.
A curated threat catalog is simply a list of bad things that have happened, or could happen, to an organization and that would harm its stakeholders. Historically, organizations have focused more on risk management than on threat catalogs. However, from a storytelling perspective, people seem to gravitate to the threats themselves, regardless of how likely the bad thing is to happen. A proper threat catalog distills the world of threats into “stories” (aka threat scenarios) covering the threat items most relevant to your organization and stakeholders. What’s in your threat catalog?
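To make that concrete, here is a minimal sketch, in Python, of what a curated threat catalog entry might look like. The field names and the example scenario are mine, invented for illustration; they are not drawn from NIST AI 600-1 or any other standard.

```python
from dataclasses import dataclass

@dataclass
class ThreatScenario:
    """One 'story' in a curated threat catalog."""
    scenario_id: str           # e.g. "TC-002"
    title: str                 # the one-line story stakeholders will remember
    dirty_dozen_category: str  # which NIST AI 600-1 risk it maps to
    harmed_stakeholders: list[str]
    likelihood: str            # "low" | "medium" | "high"
    impact: str                # "low" | "medium" | "high"

# Example entry: a Confabulation story told in stakeholder terms.
catalog = [
    ThreatScenario(
        scenario_id="TC-002",
        title="Chatbot confidently invents a refund policy; customers act on it",
        dirty_dozen_category="Confabulation",
        harmed_stakeholders=["customers", "support staff", "the brand"],
        likelihood="high",
        impact="medium",
    ),
]
```

Each entry is a story first and a record second; the title is the part your stakeholders will actually remember.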
System confidence is a combination of trust and control. In the absence of trust, control is all you have. By assessing specific threat catalog items against the harm they could cause, we can develop “structured choice”: supplying lists or groups of controls that have a high degree of affinity for addressing the harm a given threat item could cause.
Once an organization decides to address its threat catalog items, it must actually choose the controls it will use to address residual risk. [Residual risk is the difference between the organization’s current risk profile and the risk profile the organization wants.] The organization can then leverage its understanding of control affinities to choose the best controls to mitigate the likelihood or impact of a bad outcome. Compensating controls allow organizations to “treat” residual risk.
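Continuing the hypothetical catalog above, here is a minimal sketch of structured choice and residual risk. The controls, affinity scores and risk numbers are all invented for illustration; no framework prescribes them.

```python
# Map each threat scenario to candidate controls, each with an invented
# affinity score (0-1): how well that control addresses the harm.
control_affinities = {
    "TC-002": [  # the Confabulation scenario sketched earlier
        ("Retrieval grounding with cited sources", 0.8),
        ("Human review of customer-facing answers", 0.7),
        ("Output disclaimer banner", 0.3),
    ],
}

def structured_choice(scenario_id: str, min_affinity: float = 0.5) -> list[tuple[str, float]]:
    """Return the short list of high-affinity controls worth presenting."""
    candidates = control_affinities.get(scenario_id, [])
    return sorted((c for c in candidates if c[1] >= min_affinity),
                  key=lambda c: c[1], reverse=True)

# Residual risk: the gap between the risk profile you have and the one you want.
current_risk, target_risk = 0.7, 0.3  # invented scores on a 0-1 scale
residual_risk = round(current_risk - target_risk, 2)

print(structured_choice("TC-002"))  # the two controls scoring >= 0.5
print(residual_risk)                # 0.4 left to "treat"
```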
When we put it all together, these are the types of informed conversations I can now have.
Client: “We want to use Generative AI to do something cool. But we want to make sure our system doesn’t tell people to hurt themselves or others (bad things). We want to make sure that our system does not discriminate against, exclude or insult its users (our stakeholders). We want to make sure we are good stewards of the world’s limited resources (see Hammers & Nails below). We also want it to be cost effective, safe, secure and easy to operate.” [No tall order here ;)]
Me: “It sounds like you want to implement a new productivity tool and have a holistic view of Trustworthy and Responsible AI. Assuming you already have a mature governance foundation in place, you should start by validating your business case, agreeing on a list of bad things you want to protect against, and putting controls in place that provide a high degree of confidence in how the system is operated.”
Client: “Yeah, that sounds about right.”
We now have a reasonable starting point and can move on to control selection. It’s beyond the scope of this blog to cover all the types of controls available to organizations. Suffice it to say, one size does not fit all, and there are many controls that can provide the required system confidence. Much like threat catalogs, organizations should consider building their own curated control catalogs: lists of controls that currently exist in the organization, with some insight into their cost and maturity.
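A control catalog entry can be just as lightweight as the threat entry above. Another minimal sketch, with invented field names and figures:

```python
from dataclasses import dataclass

@dataclass
class ControlCatalogEntry:
    """One entry in a curated control catalog."""
    control_id: str
    name: str
    exists_today: bool    # already deployed in the organization?
    annual_cost_usd: int  # rough, invented figure for illustration
    maturity: int         # e.g. 1 (ad hoc) through 5 (optimized)

control_catalog = [
    ControlCatalogEntry("CTL-07", "Human review of customer-facing answers",
                        exists_today=True, annual_cost_usd=120_000, maturity=3),
    ControlCatalogEntry("CTL-12", "Retrieval grounding with cited sources",
                        exists_today=False, annual_cost_usd=60_000, maturity=1),
]

# Quick triage: which candidate controls already exist and are reasonably mature?
quick_wins = [c.name for c in control_catalog if c.exists_today and c.maturity >= 3]
```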
One set of controls that is NOT often discussed, but should be considered, is Service Level Agreements (SLAs), Warranties and Guarantees. These are controls that attempt to boost system confidence via commercial remedies and assertions. This is where it gets interesting.
Besides being a most excellent western, The Magnificent Seven is also what Bank of America analyst Michael Hartnett calls the market-dominating tech companies. I wondered: what do the leaders in technological change, dominance and influence consider Trustworthy and Responsible? More importantly, do they put their shareholders’ money where their mouths are? So, I decided to collect and review their words:
1. Alphabet
2. Amazon
3. Apple
4. Meta
5. Microsoft
6. NVIDIA
7. Tesla
My initial read is that there is an overabundance of platitudes and “good words” and little in the way of commercial remedy or legal recourse should the vendor fail to deliver on the obligations in these assurances. This exercise reminded me of another awesome movie, “The Good, the Bad and the Ugly”. Please make sure to check out my deep dive review of The Good, the Bad and the Ugly in my next blog, coming to a small screen near you because…AI Matters!
AI Matters: Hammers & Nails
Ever heard the old adage, “When all you have is a hammer, every problem looks like a nail”? It pops into my head every time I have a conversation about using AI (the hammer) to solve “Wicked” problems (the nails) in Cyber Operations. Don’t get me wrong, I am stoked by the potential of (Generative) AI. The buzz is contagious. In some ways it feels like we are living in a modern Renaissance. A fountainhead of creativity, experimentation and insight, if you will. However, I am also a weathered practitioner and aspiring curmudgeon who knows there is no such thing as a silver bullet. Responsible AI is not easy or cheap. Full Stop!
Shout out to Chris Roberts, as this post was inspired by a conversation we had and this line from his recent LinkedIn post: “Do we need A.I. to solve our problem? If so, why?”
To that end, I’ve compiled a short list of observations and possible approaches you should consider when determining whether using AI will actually solve a problem or simply create new problems to solve.
1. Inertia and status quo - In my experience, most organizations struggle with the fundamentals. Once they find a path that “works”, they resist change because it’s “good enough” #statusquo. This is sad, because people lose their ability to imagine the world of possibilities, building a perceptual blind spot that hides the alternatives. Improving the fundamentals is always a good thing. Incremental changes are often more valuable than “big bang” projects or transformations that take months or years to complete. Optimizing existing workflows as a normal practice can yield more benefit and value, at higher frequency. However, you must proceed with caution: the way to fail at scale is to automate a broken business process. Please don’t make that mistake.
2. It Takes a Village – The key to creating a pipeline of cyber operations optimization candidates [aka Innovations] is to understand the “Wicked Problem” you are trying to solve. I mean really understand it. You should be able to explain it to a 12-year-old or an executive in a suit ;) If you need to role-play that conversation, this would be a good time to confer with your favorite chat bot. [Insert Phun Prompt Here: “Explain to me, as if I am a 12-year-old, why passwords are important but not executive friendly.”] Next, you will need to identify your core stakeholder community. Three to five stakeholders will typically give you the perspectives needed to find critical mass and, more importantly, governance & resourcing support. Finally, you need to understand what motivates your key stakeholders, what they value and how they are graded. Ensure your Innovation candidate selection criteria and KPIs resonate with your stakeholders and that your stories are tailored to them. Make it real to them when describing the possibilities of a new approach. Be disciplined and ruthless once selection criteria are agreed. Move quickly, with purpose. Don’t get distracted by “bright shiny things” that do NOT align with your selection criteria. Each use case should tell a story that appeals to at least 75% of your stakeholder community; less than that, and inertia will make it too hard to realize value in a timely manner. Move on from candidates that do not make the cut, without regret, as there should be a continuous pipeline of other opportunities to review.
3. Latent Capabilities – Take the easy wins first! Once you have a vetted pipeline of innovation candidates, it’s time to determine if you can extract incremental value using the AI features or embedded capabilities already in place. This is most easily accomplished by reading the tin and looking for keywords like BIG DATA, Behavioral Analysis, Machine Learning, Neural Network, LLM or (Generative) AI (a toy sketch of this kind of scan follows this list). It’s very likely you have been using AI-enabled tools already. If so, how are they performing? If they are available but you aren’t using them, why not? Of course, if capabilities were available and you weren’t aware of them, or there are commercial considerations, then you need to quickly determine what level of effort is needed to implement them. If it doesn’t pass the sniff test, move on to the next candidate. Leverage your suppliers to educate you, but temper their claims with real-world testing and assessment.
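Here is the toy sketch mentioned in item 3: a few lines of Python that flag tools whose marketing copy already claims AI-ish capabilities. The tool names, blurbs and keyword list are all invented for illustration.

```python
# Toy "read the tin" scan: flag tools whose marketing copy already
# claims AI-ish capabilities you may not be using yet.
AI_KEYWORDS = {"big data", "behavioral analysis", "machine learning",
               "neural network", "llm", "generative ai", " ai "}

tool_blurbs = {  # invented examples
    "LogThing": "Scalable big data pipeline with machine learning anomaly detection.",
    "TicketPro": "A simple, reliable ticketing system for busy help desks.",
}

def latent_ai_candidates(blurbs: dict[str, str]) -> list[str]:
    """Return tools whose descriptions mention AI-related keywords."""
    hits = []
    for name, blurb in blurbs.items():
        text = f" {blurb.lower()} "  # pad so ' ai ' matches at the edges
        if any(kw in text for kw in AI_KEYWORDS):
            hits.append(name)
    return hits

print(latent_ai_candidates(tool_blurbs))  # ['LogThing']
```

A keyword hit is only a lead, of course; it tells you where to go ask the "how is it performing?" and "why aren't we using it?" questions.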
What next? Look for opportunities to integrate, optimize and automate at scale for operational improvement, which could include AI force multipliers such as Security Co-Pilots, organizationally aware GPT agents or automated Hackbots. We will look at these use cases in future blogs.
So, do we need AI to solve our “Wicked” problem? The simple answer is: probably not. However, there are “Wicked” problems that AI can address, and you should be looking for those opportunities too. Because AI Matters. Stay tuned.
Day-Con XVII: Summit Notes
Once again, neighbors came together to discuss wicked problems and how the community could & should address them. The 17th annual event’s tagline was "Chlorine for your soul!". Check out the summit notes HERE to learn more.
Future Shock: The Future of Fraud Today
Below is the abstract and link to the referenced material, including the presentation, from the Taste of IT conference on Nov 8, 2023. This is the first release in the Future Shock series. [FYI: Mo was unable to attend, so I presented his slides]. Enjoy!
"The presenters will discuss the evolution of Organizational Identity Fraud and the abuse of organizational identity assets from the beginning of the World Wide Web to its current incarnation. They will assess the state of organizational identity asset protection programs and answer the question “Are organizations prepared for the world of software defined everything, nation state threat actors and coexisting with the Internet of Dangerous Things?”
The presentation will update the definition of Corporate Identity Assets and introduce relevant, novel, and forward-thinking threat catalog items associated with Organizational Identity Fraud. The presenters will articulate control affinities and practical life-cycle management practices for consideration, positing how transformational trends in mobility, computing and social media conspire to make organizations more vulnerable, while demonstrating how marketing, security and operations can join forces to turn the tables on their adversaries by becoming “hard targets”.
Building on prior work published in 2006 (https://www.sans.org/reading-room/whitepapers/engineering/corporate-identity-fraud-life-cycle-management-corporate-identity-assets-1650) and 2021 (https://www.sans.org/white-papers/corporate-identity-fraud-ii-future-fraud-today/), the presenters will share new research & insight demonstrating the multi-domain mayhem caused by abusing organizational identity assets and exploiting (hidden) dependencies! Further, they will share their methodology, findings, and novel & emerging threat catalog items (aka relevant use cases).
Presentation Link HERE
Innovation Matters: Invisibility Cloak
“Wicked” Business Problem:
The simple act of connecting a (high-value) device to a network (wired or wireless) exposes it to more risk.
Innovative Approach:
Provide dynamic connectivity protection and resource sharing in hostile environments, with no externally visible attack surface.
Vetted Solution:
BYOS’ human-friendly approach uses a physical dongle to effectively “Cloak” personal computing devices, rendering them invisible and allowing them to connect with confidence.
Call To Action:
If you are interested in learning more, DM me on LinkedIn or check them out @ BYOS [Tell them Bryan sent you ;)]