
The multibillion-dollar AI security problem enterprises can’t ignore 

AI agents should make the job easier. But they also create a whole new category of security nightmares.

As companies deploy AI-powered chatbots, agents, and copilots into their operations, they face a new risk: how do you let employees and AI agents use powerful AI tools without accidentally leaking sensitive data, breaking compliance rules, or opening the door to prompt injection attacks? WitnessAI just raised $58 million to find a solution, building what it calls “the trust layer for business AI.”

Today on TechCrunch’s Equity podcast, Rebecca Bellan was joined by Barmak Meftah, co-founder and partner at Ballistic Ventures, and Rik Caccia, CEO of WitnessAI, to discuss what companies are actually worried about, why AI security could become an $800 billion to $1.2 trillion market by 2031, and what happens when AI agents start talking to other AI agents without human supervision.

Listen to the full episode and hear:

  • How companies accidentally leak sensitive data using ‘shadow AI’.
  • What CISOs are concerned about now, how the problem has evolved rapidly over the past 18 months, and what it will look like in the coming year.
  • Why they think traditional cybersecurity approaches won’t work for AI agents.
  • Real examples of AI agents going rogue, including one that threatened to blackmail an employee.

Subscribe to Equity on YouTube, Apple Podcasts, Overcast, Spotify, and all the casts. You can also follow Equity on X and Threads, at @EquityPod.
