#1720057: Managing Risk Has Been a Priority Ever Since You Asked About It (LIVE in NYC)
Description:
Full Transcript

Intro 0:00.000

[Voiceover] Biggest mistake I ever made in security. Go.

[Saket Modi] Thinking security is a technical problem and not a business problem.

[Voiceover] It’s time to begin the CISO Series Podcast, recorded in front of a live audience in New York City.

[David Spark] Welcome to the CISO Series Podcast. My name is David Spark. I am the host and producer of the CISO Series Podcast. Sitting to my immediate left is the guest co-host for today’s episode. He’s actually done this show as a guest co-host before. Please, a warm round of applause for Matt Southworth, CSO of Priceline. Thank you. All right, Matt, just quickly, we are at the FAIRCON25 Conference. You come to many of these conferences. Here’s my quick question for you. When you come to an event like this, you talk to your colleagues. Everyone talks about the conversations being the most important part of the show. My question to you is, what are the questions you ask your colleagues when you come to an event like this?

[Matt Southworth] A few things. Everyone likes to talk about the new vendors, but I like to talk about what are you not doing? What vendor, what service, what process are you dropping? I think that’s interesting. And sometimes people aren’t even aware of what they’re dropping. It’s always good to find out what everybody else’s board is asking about. Somebody mentioned quantum crypto to us, and that’s a project for next year. And what are the open source tools your engineers love to play with?

[David Spark] That’s very good. All right. Let me also introduce our sponsored guest for today’s episode. Very thrilled they’re on board. One of the reasons why many of you are here today, to my far left, it is the CEO of Safe Security, Saket Modi. Let’s hear it for Saket.

[Saket Modi] Thank you, David.

It’s time to measure the risk 1:55.273

[David Spark] “What’s actually driving risk in your environment right now?” This is Lisa Begando of Health Catalyst.
She argues that too many security teams never answer that very critical question. Instead, they’re consumed with busywork, filling out third-party questionnaires and chasing compliance scores. She proposes a popular trend: modernizing your GRC system to automatically pull real-time control data from your environment and feed it into a risk engine that quantifies how control failures impact risk as they happen. Now, if you throw a rock, you’ll hit any number of AI-powered tools promising continuous visibility into your control health. But if AI gives us a genuinely better understanding of our security posture, how should that change your legacy GRC program? I’m throwing to you, Matt, first. What’s becoming antiquated? This references what you just said at the beginning of the show. What becomes antiquated in our traditional GRC that we should simply stop doing?

[Matt Southworth] I feel like this is somewhere I’ve always struggled: what we do here, measuring risk, and how do we talk about that? What I think we can stop doing is questionnaires, everything that our vendors and our partners are looking for from us, because the truth is those are going to be filled out by an LLM, whether we tell our vendors that or not. For that to work, and for us to stop doing the busywork, we need to make sure that we are doing a better job documenting incidents, documenting controls, and just keeping a corpus of information about our program, it doesn’t have to be well-organized, so that we can still provide answers and new information.

[David Spark] All right, Saket, same question to you. I want to know, just high-level, what should you stop doing with your GRC program?

[Saket Modi] I think the first piece is GRC has three letters: governance, risk, and compliance. Let’s start focusing on all three, if not equally, at least giving them that degree of priority, to understand that there is risk, not just compliance.
So to what Matt was mentioning very rightly: when you do questionnaires and you’re very, very compliance-led, saying, “Hey, is this compliant? Is it checking the box?” that is not really risk management. How do you actually get to the bottom of real data, real telemetry? There are enough sources today you can pull telemetry from and then compile together, not just to see how compliant you are, not just to see the control maturity in your environment, but so that compliance and controls can lead to what your business risks are, and then going ahead and tracking that. So as we say, start with the why. The “why” has to be that we have to do risk burn-down, not just achieve compliance. That is what we have to stop doing: being very compliance focused. We have to change the conversation to be very risk focused.

[David Spark] Have you had to make that shift? Because we’ve heard this line many times before, nobody throws stones at me: compliance does not equal security. But at the same time, most businesses know that if we don’t do this, we’re going to be automatically fined. I mean, can you make that shift, or is it just… I mean, it sounds like it’s just a cost of doing business.

[Matt Southworth] It is, but it’s also a little bit of an opportunity sometimes. If you’ve got a compliance program, you’ve got strategic goals. You can connect them, however loosely, and drive where you actually want the program to go with a little veneer of compliance on top of it.

[Saket Modi] Yep, and I think compliance is a subset. Nobody’s saying don’t do compliance, but think of it like this: 99.9% of financial services companies that have been hacked in the last 10 years have been compliant with PCI, with the popular stuff. They still get hacked. What does that show? It means that you need something beyond compliance. So we’re not saying don’t do compliance, but that’s the floor, not the ceiling, of how you manage risk.

Do you trust this LLM?
6:02.150

[David Spark] “Who is responsible when an autonomous agent makes a mistake?” I love this. Ritu Jyoti argued in a piece on CIO.com that because AI agents lack legal personhood, responsibility must fall on the humans who deploy them. So we need clear ownership, revocable credentials, and audit trails for AI. Now, we don’t always blame humans for making mistakes, we expect them, but agentic AI will make mistakes too, probably at scale. So should we treat AI errors the same way we treat human errors, or fundamentally differently? What does accountability even look like when an autonomous agent screws up? I’m going to start with you, Saket. The blame game could be endless here. We know this. And more practically, how are you building governance today that can actually trace responsibility when things inevitably go wrong?

[Saket Modi] I think the way you look at that is, firstly, David, it depends on how you define a mistake. There are different degrees of mistakes.

[David Spark] Yeah, but the bigger fear is the speed at which these mistakes could start happening.

[Saket Modi] That’s right. And that can be the difference between somebody very smart on your team doing things very quickly versus somebody who takes two weeks to do the same thing. What I’m trying to get towards is: think of Gen AI agents as autonomous human beings and treat them like identities. I know this is a Pandora’s box, but the reality is that, given the capabilities of these AI agents we’re talking about, they have to be treated in a similar way, if not exactly the same way, as how we look at human identity.
And the moment you look at that, you’re talking about the guardrails, the identity and access controls, you’re talking about the privileges they have from a data-access perspective: what can they see, what can they act upon. So again, the first principles remain the same, because whether it’s human intelligence or artificial intelligence, intelligence is still intelligence, and it needs to be guarded with the same degree of governance. It’ll be slightly different, but not entirely different, in my view.

Read rest in the link
|---|---|
| More info: | https://cisoseries.com/managing-risk-has-been-a-priority-ever-since-you-asked-about-it/ |
| Date added | Jan. 15, 2026, 1:21 a.m. |
| Source | CISO Series |
| Subjects | |
