Dive into the latest episode of The Edge of Show where hosts Josh Kriger and January Jones sit down with Claire Davey from Realm to discuss the urgent challenges of AI business risks. This episode unpacks the White House's new AI action plan, alarming findings from IBM’s AI data breach report, and explores how businesses can navigate the rapidly evolving AI landscape. From understanding AI vulnerabilities to implementing proper governance and insurance coverage, Claire shares expert insights tailored to industries like Web3, biotech, and the space economy. Whether you’re building cutting-edge AI products or just integrating them into your organization, this conversation offers a roadmap to safeguard innovation and build resilient systems in a world where AI business risks are rapidly emerging.
Key Topics Covered:
- White House AI Action Plan: Analysis of the Trump administration’s new strategy with 90+ policy recommendations aimed at deregulation, infrastructure growth, and positioning America as the global AI superpower.
- State vs Federal Regulation Conflict: Insights on how federal AI policies may override state laws, influencing heavily regulated states like California and Colorado.
- IBM AI Data Breach Report: Key findings show that 13% of organizations experienced AI model or application breaches; 97% of those lacked proper AI access controls, and 60% lost data as a result.
- AI Governance Challenges: Discussion on transparency in AI training data, legal issues surrounding intellectual property, and the EU’s upcoming AI Act.
- Role of Insurance in AI Risk Management: Claire Davey explains how Realm develops insurance solutions tailored to dynamic industries like Web3 and biotech to mitigate emerging AI risks.
Episode Highlights:
- “AI adoption is racing forward, but without proper governance, it’s a hacker’s dream.” – Josh Kriger
- “Incentivizing states to deregulate AI through federal funding policies is a significant shift.” – Claire Davey
- “The IBM report shows that 97% of breached companies lacked proper AI access controls.” – January Jones
- “Insurance is all about trust and providing clarity, especially when risks are unknown.” – Claire Davey
- “Human-first AI is about making technology serve people, not replace them.” – Josh Kriger
About Our Guest:
Claire Davey is a leading expert in risk management and emerging technologies. As a key team member at Realm, she helps businesses in industries such as Web3, AI, biotech, and alternative medicine secure innovative insurance solutions. Claire specializes in developing strategies to navigate the complex landscape of AI business risks, focusing on data security, compliance, and governance. With years of experience in insurtech and risk mitigation, Claire is at the forefront of helping organizations protect their assets and innovations in today’s rapidly evolving digital economy.
Transcript:
Josh Kriger: Welcome to Hot Topics on The Edge of Show. I'm Joshua Kriger here with my co-host January Jones, and today we have a guest joining us from Realm, Claire Davey. Great to have you back, Claire.
Claire Davey: Thanks for having me. Pleasure to be here.
January Jones: Coming up, the White House's AI plan, the IBM report on AI data breaches, and we'll talk to Claire about how to mitigate some of those AI business risks.
Josh Kriger: This episode is brought to you by Realm. Realm crafts insurance solutions to give businesses in dynamic industries like Web3, AI, alternative medicine, biotech, and the space economy the protection to innovate and thrive. Where emergent industries have previously battled uncertainty, Realm is making innovation resilient. And a quick note that the views expressed by the Edge of Company, of course, do not necessarily reflect those of Realm.
January Jones: This is another production of Edge of Company, a rapidly growing media ecosystem empowering the pioneers of Web3 technology, culture, and innovation. Let's get started. So kicking off our first story, we're going to talk about the AI plan from the White House. Last month, the Trump administration released its AI action plan with over 90 policy recommendations to make America the world's AI superpower through massive deregulation and infrastructure spending. The plan eliminates references to misinformation and climate change from federal AI guidelines while threatening to cut funding to states that regulate AI, all under the goal of preventing woke AI. Now, there are lots of other things in this plan that are pro-industry; these are some of the criticisms, and it sparked some debate. They're talking about building data centers on federal land and embedding location-tracking technology in AI chips. So it's interesting, you know, the White House comes up with plans and reports. It's not necessarily law, but it can be enacted through executive orders and federal agency rulemaking. So the idea is to make America the AI superpower. Josh, what's your take on this?
Josh Kriger: Well, I haven't gone through the full document, but I think at a high level, this is a very significant undertaking by the U.S. government to not just invest in AI, but try to own the narrative overall. And, you know, it's sort of like digital diplomacy 2.0. I think one of the more staggering adjustments to policy is this idea of sort of free speech in AI models. I think we need to hear from AI experts globally on what the potential impact of that is. Hopefully, the White House also consulted with various folks in this regard. I think it's going to impact how AI LLMs are built and sold, especially as it pertains to government contracts, but more broadly as well. At the end of the day, you know, there's a pendulum here, and if you do too much to sort of deregulate and open up AI, what are the implications on the other side in terms of, you know, AI gone wild, right? So I don't know what the exact formula should be, but I think the needle needs to be carefully threaded. And again, similar to other White House policy updates, I do wonder if this is one of those situations where they're intentionally swinging the pendulum a little farther than they actually anticipate the resulting legislation and policies will land. Claire, what are your thoughts?
Claire Davey: Yeah, I agree with those sentiments. I think one of the standout items to me was that although it's a drive for investment and to encourage growth, there was a point in there that said that they were probably going to divert investment away from those states that have heavy AI regulation. And those at the moment are sort of California and Colorado. So in a way, they're trying to incentivize a pause in regulation or almost a deregulation at the state level, because as we're aware, there's no overall federal regulation of AI at the moment. So it's a bit of a patchwork. So this is trying to use sort of business incentive, I guess, to try and shape the legislative regulatory agenda.
Josh Kriger: Yeah, ever since I left LA to travel this summer in Europe, there's been no shortage of the federal government beating up on California. I feel bad for all my friends and neighbors in California where they just can't seem to catch a break lately.
January Jones: Well, when it comes to the states, there are a lot of states that have pushed some regulation, and a lot of it is for consumer protection, right? We're seeing a lot of fraud. We're going to talk about that coming up in the next story. I live in Colorado, so I have that reference. And the issue in Colorado is they wanted to put an AI law in place about screening applicants for job applications. So AI shouldn't do that, right, because there's going to be bias in the models, and, you know, a person should do that. So there are lots of hooks, of course, that the federal government has in the states for funding. But this AI plan really puts out that anything the federal government decides is its policy should supersede anything that any of the states have. And if states do have those laws, then funding could be withheld.
Josh Kriger: Yes, so January, you make a really good point here. I think there's one thing to own the narrative, but this is a strong, strong policy statement by the federal government in terms of how they see the future of AI and governance. So it will be interesting to see to what extent, you know, this actually occurs. Historically speaking, there's been a lot of deference to states and local municipalities on a number of different issues. However, it's clear the White House has determined that they need to sort of hold the reins very carefully at a federal level. At the same time, I think there is some integration of considerations around other sort of policies and things that the government is trying to do, and they're using this as a wedge or a lever, if you will. So we'll have to keep coming back to this story as it unfolds.
January Jones: Yeah, definitely. Of course, there's the promise of AI, and the report lays out a lot of ideal scenarios. We are seeing great things happen. I mentioned research in medicine. That's one of the coolest things, I think. And it's like AI serving humans, not humans serving AI. That's always my thought. Don't replace me. Help me. Be my good robot so I don't have to do the things I don't want to do in my life. And also that ingenuity in science. There's enormous potential for that. But relatedly, there is the downside. That brings us to our next story. We're talking about data breaches. So IBM has just released its 2025 Cost of a Data Breach Report, and it's the first time ever that they've really looked at AI breaches. Now it is on the radar. The main headline is that 13% of organizations reported AI model or application breaches, and 97% of those had no proper AI access controls. And even more shocking, 60% lost data, 31% faced operations getting disrupted, and only half of the breached organizations even had a plan to invest in better security. And this is coming from numerous things, but a lot of it is employees using ChatGPT, right, doing AI stuff on their own and not realizing they're opening up their companies to these security risks. So as we're racing to adopt AI and kind of regulate it, as we were just talking about, the criticism in the report is that we're leaving the back door open a bit. Claire, what are your thoughts on the risks and what the IBM report had to say?
Claire Davey: Yeah, I think one of the sort of big improvements in the cybersecurity industry and discipline over the last 10 years has been the ability to segment networks, to see what's leaving those segmented areas, and to be able to apply additional security tools around them. But now with the implementation of AI agents, for instance, that are working on various projects across the business, they are moving and need to move laterally across different areas of the network in order to do what they do, whether that's data gathering, data analysis, etc. And so what we're seeing there is a massive vulnerability opening up, because if you've got tools and technologies that can move across those controls, then obviously bad actors can also use them as a way to enter data repositories that they shouldn't be in. So I think, in this rush for AI adoption, we've really got to be thinking about the governance of it and the technical security controls, and how AI agents may be undermining what's already been done in this area.
Josh Kriger: Claire, does that make AI insurance more complicated in terms of creating policies and tailoring those policies to the different nuances of different industries and organizations and the products and services they deliver?
Claire Davey: I think it does in the sense that, you know, if we had been looking at underwriting a technology company before, we would have been very attuned to the fact that they are developing these different solutions. They're going to be playing with new and innovative things that may need additional testing, etc., and have vulnerabilities that we weren't aware of. But I think the difference now is that we're seeing organizations that are not related to that. I'll give an example of manufacturing or, you know, the education sector, healthcare, that are implementing these technologies. And so the underwriting questions need to be different to understand how they're managing that risk internally, and deciding whether they, you know, do have the appropriate controls in place to get us comfortable with underwriting it. I think one of the challenges for the insurance industry is understanding what their total exposure is to these different AI tools and technologies, particularly certain models, a good example being ChatGPT, that are being used not only by employees, but at a corporate level, where organizations are using those as a basis to then build and adapt their own tools. And so a sort of aggregation risk is building up across insurers' portfolios that I think they at the moment are not taking into consideration.
Josh Kriger: Yeah, I can speak for our company that there were some new questions on our insurance application this year around AI that sort of made me have to think about, you know, are we doing everything we need to do on our end to prepare for this change? And the reality is I think everyone's kind of learning as we go and trying to figure this out to the best of our ability, similar to all the hacks of recent years in blockchain. I think we are seeing a darker side of AI adoption as we speed forward; with security lagging, it's sort of a hacker's dream, if you will. The average cost of a breach in the U.S. hit $10.2 million while the global average has dropped, a clear signal that high-value targets are not yet fully protected. At the same time, organizations using AI and automation in their security saved almost $2 million and shaved 80 days off a breach's life cycle. So it's not necessarily that AI is the problem; as you mentioned previously, it's the governance that's most important to support maximizing this unique, innovative technology that, for Edge of Company, we're using every day. It's changed our whole organization for the better. And to January's point, you know, Sandy Carter just wrote a book about sort of human-first AI. And I think, with that idea of human-first AI, humans can make mistakes. And that's why good governance is essential here.
Claire Davey: Yeah, just jumping in here, you know, focusing on media companies, which obviously Edge Of is one of, we're seeing a lot of jurisprudence and legal actions taking place at the moment where there's this kind of battleground, right, for intellectual property rights as a result of AI use. And I think that's something really to watch, both from an insured's perspective in terms of how they're using AI and what they can call their own IP when they're using AI, but also how they can avoid infringing the IP of others when they're using it. So definitely one to watch over the coming months and years.
Josh Kriger: Just what I need with my media company is more complications and things to keep me up at night. But yes, this is all true. January, what else do we have going on today?
January Jones: Well, let's talk a little bit more with Realm. We've been talking about the AI growth. And so Claire, you've brought up some risks that I think a lot of people may not be considering in their business. You've just got Josh's hackles up about it. So tell me a little bit more about some of these risks, how you're looking to let your clients and the rest of the community know about them, and what's happening with insurance. Because I don't think a lot of people consider the idea that insurance could help you with this.
Claire Davey: Yeah, totally. I think we've talked about some of the risks in our conversation earlier today, one of which being data security. Another is transparency. Can we be sure of where the training data came from that is being used by these AI models? And also regulatory compliance, right? So although we've talked about the US being a patchwork of regulation, in the EU they've been very clear about the AI Act and what they expect there, which is going to be rolled out in terms of its implementation over the coming months and years. So I think those have been the top concerns of insureds regarding their risks. And instead of just deploying AI into their business, what we would like to see as insurers is that they've got a multidisciplinary governance committee that has oversight and is project managing these use cases within organizations. And if we have a combination of IT specialists, risk managers, you know, legal experts, we're finding that it not only improves the implementation, but it reduces the risk. So that's something to be mindful of for anyone that's listening out there.
Josh Kriger: So Claire, Realm is sort of a different egg when it comes to how you guys are looking at insurance. What are some of the challenges overall in the insurance industry and how they're sort of tackling this new emerging technology?
Claire Davey: So the main approach that we're seeing at the moment in the insurance industry is to remain silent, which means that there's no affirmative coverage. And insurers are kind of hedging their bets as to whether they're going to pay on those claims or not. And that's not really a position that we want to take. We'd rather be affirmative about it, which is why we released our
Josh Kriger: I mean, that was the whole problem with the last 10 years of blockchain governance, right? They left it ambiguous and that was really difficult to run a company in that ambiguity.
Claire Davey: Correct. And we saw it even before that with just cyber coverage. So we had this whole debate around silent cyber exposure that the regulators in Europe and the UK cracked down on. So we're going to see that again with regards to AI and go through that journey as an insurance industry and insureds. And let's just hope that claims and insureds don't get caught up in that. I'd rather they gain affirmative coverage from their insurers.
Josh Kriger: And let me ask a question on that. I mean, historically, do the courts typically lean in favor of the insured or the insurance company when there's a lot of ambiguity?
Claire Davey: That really depends on the jurisdiction where the policy is governed. So in more insured-friendly states, particularly in the US, you'll find that any ambiguity will be found in favor of the insured. But other states which are more punitive and more business-friendly will then find, you know, in favor of the insurer. I think that ultimately, no one wants to get to that position. The insurers don't, insureds don't, because at that point, relationships have broken down and there's a large amount of cost involved in that dispute. Ultimately, insurers want to be there to pay claims. They want that partnership, and Realm certainly does. It doesn't help anyone if their name is in the press suggesting that they are not holding up their half of the intent of the contract.
Josh Kriger: Yeah, I mean, at the end of the day, insurance is all about trust, right? And that peace of mind that, you know, with a particular product, it provides the sort of clarity that is needed to just do your thing, just run your business and focus on, you know, maximizing profit and shareholder value. Right.
Claire Davey: But particularly when the risk is so unknown or still emerging as it is with AI technologies. So for instance, although the EU AI Act has been implemented, a lot of companies don't quite know yet what that's going to mean for their operations. Will there be audits? What happens if they're investigated? What would the fines and penalties be? How punitive are they going to be? And yes, we know the ballpark calculations. But I think offering some form of certainty and safety net and reassurance to the board or your balance sheet that in a worst-case scenario, in those times of uncertainty, there is something, a financial backstop to help, I think is invaluable. And so, why cloud that or muddy the waters with ambiguous language would be my challenge.
Josh Kriger: Yeah, that makes a lot of sense. And just from the last few weeks of conversations we've had with you and our other guests from Realm, it's really opened my eyes to the importance of thinking about risk proactively. So really appreciate you joining us here today, Claire, and being part of the show.
Claire Davey: Great. Thanks for having me. It's been a pleasure.
January Jones: Yeah, well, that's going to wrap up today's episode of The Edge of Show. Join the conversation on social and let us know what you think.
Josh Kriger: Thanks for joining us as we navigated the latest and greatest in Web3 and AI. I'm Josh here with January and our guest, Claire Davey of Realm. Stay curious, keep pushing boundaries, and don't forget to subscribe so you never miss what's next on The Edge of Show.