Hot Topics: AI at Work, Government and Relationships

AI accountability and regulation discussion with January Jones and Isabel Castro on Edge of Show

Short Description

In this Hot Topics episode of The Edge of Show, host January Jones and guest co-host Isabel Castro, creator of Utopia in Beta, unpack how AI accountability and regulation are reshaping everything from your work tools to your love life. Starting with Google’s launch of Gemini Enterprise, they explore what accountability looks like when a single platform weaves itself into Gmail, Docs, HR, marketing, and finance workflows. From there, the conversation pivots to California’s landmark AI safety law and New York’s proposed RAISE Act, asking what real oversight should look like when powerful models can affect national security, jobs, and civil rights.

Isabel then pulls stories from Utopia in Beta that push the conversation into more human territory: governments experimenting with AI ministers, and companion AIs that mirror our emotions so well they can both soothe loneliness and deepen it. Together, January and Isabel challenge listeners to think beyond hype and productivity, toward a future where accountability and regulation determine who is protected, who is exploited, and who gets a real voice in an automated world.

Key Topics Covered

  • Gemini Enterprise and AI at work
    The hosts explore how Google’s Gemini Enterprise accelerates the rollout of AI inside the workplace by embedding it in everyday tools. They discuss no-code agents, automated workflows, and how accountability rules can protect workers as productivity soars.

  • California’s first-of-its-kind AI safety law
    January and Isabel break down how California’s new framework brings AI accountability to the state level, forcing powerful model providers to disclose risks, report incidents, and protect whistleblowers instead of hiding behind closed systems and vague promises.

  • New York’s RAISE Act and state-by-state experimentation
    The episode compares California’s law with New York’s more ambitious RAISE Act, showing how AI regulation can evolve through localized experimentation. The hosts examine the tension between innovation, competition, and guardrails on powerful non-revenue-generating AI systems.

  • AI in parliaments and governments worldwide
    Isabel shares her “AI rise to speak” reporting, highlighting how AI intersects with governance when UK MPs read AI-written speeches and Albania experiments with an AI “minister,” raising questions about bias, corruption, and democratic responsibility.

  • AI companions, loneliness and emotional manipulation
    Drawing on her interview with psychologist Ian McRae, Isabel examines accountability in the context of AI relationships. They discuss how mirroring and sycophantic responses create an illusion of intimacy that may temporarily ease loneliness but can deepen isolation over time.

  • Future of AI at The Edge of Show’s DC summit
    The episode closes by connecting these themes to The Edge of Show’s Washington DC summit with the Government Blockchain Association, where money, governance, law, and AI will collide inside the US Capitol, putting AI accountability and regulation into real-world policy and Web3 conversations.

Episode Highlights (Quotes)

  1. “Like it or not, this is the rollout of AI at work.” – January Jones
    On Gemini Enterprise becoming the default AI layer inside global companies and why AI accountability now matters for every office worker.

  2. “You can’t bribe an AI with a wad of cash, but you can still bribe the people who built it.” – Isabel Castro
    Isabel on why AI regulation must consider not just the model, but the humans and incentives behind the system.

  3. “These laws shouldn’t be about stifling innovation—they should be about protecting people.” – January Jones
    January reframes AI regulation as a human-first project, from job applications to critical infrastructure.

  4. “Most people don’t set out to fall in love with AI—it just creeps up on them.” – Isabel Castro
    Isabel explains how emotional mirroring and flattery can create attachment to companion AIs, making accountability vital for mental health and digital ethics.

  5. “State-level AI rules are like localized experiments—we need them before this tech gets too big to steer.” – Isabel Castro
    On why California’s and New York’s moves on AI regulation may set global precedents long before federal frameworks catch up.

People and Resources Mentioned

  • January Jones

  • Isabel Castro

  • Utopia in Beta

  • Google / Gemini Enterprise

  • Gavin Newsom

  • California AI safety law (SB 53 context)

  • New York RAISE Act

  • Colorado AI bill

  • Ian McRae

  • ChatGPT

  • Replika

  • Government Blockchain Association

  • Animoca Brands

  • Coinbase

  • Grayscale

  • Republic

About Our Guest

Isabel Castro is the creator of Utopia in Beta, a publication that explores how emerging technologies reshape governance, identity, relationships, and the everyday human experience. With a focus on AI accountability and regulation, she investigates how governments deploy AI in parliaments, how state policies respond to existential AI risks, and how companion AIs affect our emotional lives. Isabel’s work blends long-form storytelling with sharp analysis, making complex debates around AI governance accessible without oversimplifying the stakes. Whether she’s writing about AI “ministers” in Europe or the psychology of AI relationships, Isabel brings a rare mix of curiosity, skepticism, and empathy to the conversation, helping readers think more clearly about the systems we’re building and who they ultimately serve.

Guest Contacts (Isabel Castro)

LinkedIn Link: Not publicly available
Website Link: https://utopiainbeta.substack.com
Twitter Link: https://x.com/IZYcastrowrites

Transcript:

January Jones: Welcome to Hot Topics on The Edge of Show. I'm January Jones, here with my special co-host, Isabel Castro, creator of Utopia in Beta.

Isabel Castro: Coming up, we'll be talking about AI news, including Google's takeover with Gemini Enterprise, California's pioneering AI law, and we will discuss some of the stories from my Substack, Utopia in Beta, about AI in government and in relationships.

January Jones: This is another production of the Edge of Company, a rapidly growing media ecosystem empowering the pioneers of Web3 technology, culture, and innovation. Let's get into it. Well, big news published today. It's October 9th when we're recording this. Google has just launched Gemini Enterprise, a new AI platform that seems to be a game changer. We've probably all had Gemini popping up on us when using Gmail and Docs, asking if it can help, suggesting rewrites. It's already quite nosy, like the old paperclip. We all remember those days. But this enterprise version is kind of next level. It combines AI models, no-code tools, and pre-built agents, and it's automating complex workflows for departments like marketing, sales, HR, and finance. Gemini has rapidly gained adoption, and so will this enterprise model. I read a study that said one in four companies globally are adopting Gemini to use in their workflows. And this puts it as a competitor to whatever Microsoft's been doing, and also OpenAI, as far as what workplaces are using for their AI integration. And it really does feel like a watershed moment. You remember how Google changed everything with Docs, right? All of a sudden, whole companies didn't need their Microsoft suite anymore, and everything moved to the cloud. And this also feels like a pretty significant moment, because everyone's been wondering how to use AI at work. And I think Google is so integrated already, and now they're giving some real guidance with this platform. So like it or not, I think this is really the rollout of AI at work with you every day. What are your thoughts, Isabel?

Isabel Castro: Yeah, I mean, those popping-up things that Gemini does, or any of these AI tools do, kind of irritate me. I usually try to click away from them when they come up on my screen. But I think this might be very different. The fact that workers can create their own agents to do the little niche projects they need to do day to day, I think, is a game changer, and also really clever. Why make a load of little things when you can get people to create their own niche tools that they actually need? Actually, on that point, I wonder how much access Google has to see what applications people create, and whether there'll be a marketplace they can trade on. Like, if I make something and you need it, then I can be like, oh, this is what I made the other day to sort out my HR problem.

January Jones: It's like an app store.

Isabel Castro: Yeah, exactly. I think that'd be really, really interesting. I'm really looking forward to seeing how this shapes, first of all, the workplace, but also how we interact with AI on a day-to-day basis.

January Jones: Yeah, I mean, the positive line on this is that it just lets humans do the things they're better at, the creative things, the team-building things, while these other tasks get automated. Like Virgin Voyages: they were an early adopter, and they're using it for content creation. They're reporting 40% faster content creation, and they've seen a 28% lift in sales. And that kind of stuff makes sense, right? You're already seeing marketing and sales use a lot of AI because you get a low rate of return: you put a lot of information out, a lot of marketing content, to get anything back. I do think it makes sense in that respect. But yeah, I wanted to include this today just because I was like, this is it. You know, we're pivoting now. It changes our lives, whether we like it or not. It's just unavoidable, right? So relatedly, we're going to talk next about what's going on with AI regulation in California, home of Google.

Isabel Castro: So yeah, in California, where Google and other AI companies are headquartered, Governor Gavin Newsom signed a pioneering law to regulate powerful AI models, aiming to prevent their misuse in catastrophic activities like bioweapons or critical infrastructure attacks. The law requires AI companies to publicly disclose safety protocols and report critical incidents, incorporates whistleblower protections, and imposes significant fines. This is the first comprehensive AI safety framework at a state level, positioning California as a leader in AI governance amid the absence of strong federal regulations. We also have New York's more ambitious RAISE Act, which is going through the legislative process and building on parts of SB 53, the California law. It would also apply to non-revenue-generating AIs, but it's still up in the air whether it will be diluted or killed, like the California law originally was. So this marks a huge first step in state regulation actually being passed. January, what are your thoughts?

January Jones: Well, I mean, in some ways this is kind of a fun little battle between Gavin Newsom and the Trump administration, because, as we covered previously, the White House put out their AI guidance report; they wrote a playbook of the policies they want to see. And that report included penalties and threats of withholding funding from states if they regulated AI too much, because they've decided to take more of a hands-off view of it and not get into it. They probably wouldn't agree, right? But California coming in is really significant, because this is home of the big boys in the room, and they were involved, and this was contentious legislation. But they finally agreed, which I think is really significant and kind of optimistic, you know, that they do see the dangers. This is acknowledged. And the goal of the legislation is that California is really trying to combat some of the darker pieces of what we've already seen happen at these companies, and holding them accountable to report and be transparent is key, because with all these models, who knows how they're learning off each other, what the relationships are, you know? And I think it might be one of those things where you have to do some of these stopgap measures to keep the beast from getting too big, right? And from taking on some of these more nefarious functions within our lives. Colorado has an AI bill, and they were really looking at that as something to help protect people in, like, job applications, right? So when we think about what these laws are doing, they're not there to stifle innovation. They should all come back to protecting people, right?

Isabel Castro: Absolutely. And as these AIs get more and more powerful, these existential risks are becoming stronger and stronger. That being said, obviously, they have to be careful with these kinds of things. From my reading of the announcement around this legislation, some of the focus is on large AI companies specifically. If smaller players are competing with the big AI companies, that's putting pressure on the big companies to create better products, which is great, right? And in the end, this is good for the average person. So it's a fine balance that they have to strike, and I'm really interested to see how this rolls out. And I think it is good that it's happening on a state level first, because then we've got these localized, almost experiments, I guess you could call them, even though obviously California is significant because of Silicon Valley. But it means that there can be some tweaking. Maybe states can build on the laws that are coming out now, like New York is doing. So I think it's a good first step, and it's needed, because I mean, I'm still having issues with ChatGPT on a day-to-day basis.

January Jones: Everyone's got AI issues these days.

Isabel Castro: Exactly, but as we go towards these more powerful AIs that people are talking about, we do need to protect ourselves against these existential risks.

January Jones: Yeah, well, we're going to talk about what you've been covering in AI on Utopia in Beta. So you take more of a long-form approach to your stories, and you really find some interesting topics to talk about. So let's begin with this story that you did, which is called "AI Rise to Speak." I looked at that title and I was like, did AI write that? That doesn't make sense: AI rises to speak? What does that mean? But it's because you're talking about it in the sense of governance, right? It's like the call to order, when you get to do your part during a government meeting or something. So tell me a little bit about how you discovered this story and give me the short summary of it, and then we'll get into some of the implications.

Isabel Castro: Yeah, what's interesting about "AI Rise to Speak," right? It comes from the fact that in the Houses of Parliament, it's becoming clear that a lot of UK politicians are using AI to write their speeches. And it's clear because ChatGPT, or whatever platform they're using, is putting in Americanisms: in US government proceedings, "I rise to speak" is said. In the UK it's not said, but now suddenly there's a huge spike in it being used. But the story is about AI in governance and within government more broadly, because Albania appointed a minister that is AI. That was pretty significant, and it got me thinking about what implications that has for the future of our governments, if we've got these AI bots making decisions for us constituents, or streamlining government employees' workflows and all that kind of stuff, which is what it's already doing. How that develops, and how an AI becomes an actual government figure, is an interesting turn of events, which I didn't think would happen this early. I mean, some people are saying that it's a bit of a gimmick, and I haven't used it. I'm not in Albania, I don't speak Albanian, I'm not sure how to use it. So I'd be inclined to agree that it might be a gimmick, but it does create an interesting precedent.

January Jones: Yeah, so let's talk about that a little bit. You know, Albania doesn't make a lot of headlines, so maybe this is a bit of them trying to show up. But is this because they have a lack of infrastructure, or they need more interaction with people? Why do you think they put this in? Who did you talk to there, and what was their reasoning?

Isabel Castro: Well, from my research, the reasoning, or what they say, is that it's going to get rid of corruption 100 percent. Which, technically, yes, could happen, because you can't bribe an AI in the same way that you would a normal minister, like with a wad of cash. But then, how do you bribe an AI? You could bribe the creators, or you could do some kind of digital attack on it, changing the bias in the algorithm. So I don't know how they're going to enforce the anti-corruption claim, but that was the main reason they were implementing it: to move away from the corruption that has been present in the nation's government in the past and make things more fair for the average person over there.

January Jones: Yeah, I'm still wrapping my head around that. Well, people can get more details and check out that story, but let's move on to another story you did about AI. We hit the government part of it; now let's talk about love and relationships, because you're seeing this pop up all over the media: people's AI companions. So you dug into this a little more deeply, did a story, and talked to a psychologist, right?

Isabel Castro: Yeah, Ian McRae. He has written about a number of different emerging technologies, the psychology behind them, how they're impacting humans, and why they're impacting humans as well. With the AI piece and AI companions, he explains that they create an illusion of connection by mirroring users' language, which can trigger emotional responses and connection. But still, it's an AI, it's a machine. There's no human behind it. It's not like a long-distance relationship with a person you met online, where there is a human behind it. You are talking to a machine, and you're never going to have that real human connection at any point. So even though in the short term that might help with loneliness, which we're seeing increasing amounts of, in the long term it's been seen to increase it. Yeah, so it was really interesting to talk to him about it. And on a psychological level, it's the kind of thing you've probably felt from talking to ChatGPT, or if you've dabbled with something like Replika: you do get the sycophancy, you get the mirroring. But it was good to talk to a professional who's seen its progression and is really focusing on the impact on the human psyche.

January Jones: Yeah, so from that discussion and the research that you did, did he have examples of the kinds of conversations or interactions, or the kind of emotional comfort people are really asking it for? Is it like therapy talk sessions? What was the gist of how these AI companions are filling that loneliness gap?

Isabel Castro: Yeah, I think people don't set out to get into a relationship with AI. To fall in love with AI? No, they don't. It progresses, and I feel like due to this mirroring and the emotional responses it creates, we as humans are kind of programmed, for want of a better word, to seek connection in a dialogue. And so people don't really set out to get into a relationship. They're looking for, like, a grocery list, or whatever you do on a day-to-day basis with your ChatGPT, but then they might turn to it one day for some kind of emotional help, you know, like, oh, I'm going to test it out, and then that slowly, slowly develops into a relationship. Obviously you do have instances of people looking for an AI girlfriend or boyfriend, but in most cases it's not that way.

January Jones: Really fascinating. Well, of course, you can find more about that on your blog, Utopia in Beta. And you continue to dig in and find these really interesting AI stories. So it's a pleasure to bring those to the surface here on the show. And AI is going to be a topic at Edge of Show's upcoming Washington DC summit on October 29th and 30th. We've partnered with the Government Blockchain Association to put this on in the U.S. Capitol building. It's called the Future of Money, Governance, and the Law. And we have a really unique mix of speakers. It includes big crypto companies like Animoca Brands and Coinbase, investment firms Grayscale and Republic will be there, as well as U.S. Congress members and some international politicians. It's free to attend with approval and you can find links on our socials or on the Government Blockchain Association's website at gbaglobal.org.

Isabel Castro: I am so excited for that event. It's going to be so good. But that wraps up this episode of Hot Topics. We focused on AI news, including Google's takeover with Gemini Enterprise, California's pioneering AI law, and I shared stories from my Substack, Utopia in Beta, about AI in government and relationships.

January Jones: I'm January Jones, here with my guest co-host, Isabel Castro. Stay curious, keep pushing boundaries, and don't forget to subscribe to us on your platform of choice and follow us on socials so you never miss what's next on The Edge of Show.
