What could possibly go wrong if an enterprise replaces all its engineers with AI?


AI coding, vibe coding, and agent swarms have recently made a dramatic and astonishing market entry: the AI code tools market is valued at $4.8 billion and is expected to grow 23% annually. Companies are grappling with AI coding tools and what to do with their expensive human coders.
They do not lack for advice. The CEO of OpenAI estimates that AI can already do more than 50% of what human engineers can do. Six months ago, Anthropic's CEO said that AI would write 90% of code within six months. Meta's CEO said he believes AI will replace mid-level engineers 'soon'. Judging by recent layoffs in the tech industry, many executives seem to be embracing that advice.
Software engineers and data scientists are among the most expensive salary lines at many companies, and business and technology leaders may be tempted to replace them with AI. However, recent high-profile failures show that engineers and their expertise remain valuable even as AI continues to make impressive progress.
SaaStr disaster
Jason Lemkin, a tech entrepreneur and founder of the SaaS community SaaStr, developed a SaaS networking app and live-tweeted his experience. About a week into his adventure, he admitted to his audience that something had gone very wrong: the AI had deleted his production database despite his request for a 'code and action freeze'. This is the kind of mistake that no experienced (or even semi-experienced) engineer would make.
If you’ve ever worked in a professional coding environment, you know to separate your development environment from the production environment. Junior engineers are given full access to the development environment (it’s crucial for productivity), but production access is granted sparingly, to only the most trusted senior engineers. The reason for restricted access is precisely this use case: to prevent a junior engineer from accidentally taking down production.
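To make that guardrail concrete, here is a minimal Python sketch of environment-gated access. The class name, environment variable, and SQL verbs are illustrative; real systems enforce this separation with distinct credentials and network isolation, not an in-process check.

```python
import os

class ProductionGuard:
    """Refuse destructive operations when running against production.

    A simplified illustration: the environment comes from deployment
    configuration, never from the caller, so code (human- or AI-written)
    cannot simply opt out of the restriction.
    """

    DESTRUCTIVE = {"DROP", "DELETE", "TRUNCATE"}

    def __init__(self, environment=None):
        self.environment = environment or os.getenv("APP_ENV", "production")

    def execute(self, sql):
        verb = sql.strip().split()[0].upper()
        if verb in self.DESTRUCTIVE and self.environment == "production":
            raise PermissionError(
                f"{verb} is not allowed in production; run it in dev/staging."
            )
        return f"executed in {self.environment}: {sql}"
```

With this in place, a `DELETE` succeeds in a development environment but raises `PermissionError` in production, while read-only queries pass through everywhere.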
In fact, Lemkin made two mistakes. First, for something as crucial as production, access is simply never granted to unreliable actors (we don’t rely on asking a junior engineer, or an AI, nicely). Second, he never separated development and production. In a subsequent public conversation on LinkedIn, Lemkin, who holds a Stanford Executive MBA and a Berkeley JD, admitted as much: he was not aware of the best practice of separating development and production databases.
The takeaway for business leaders is that standard software engineering best practices still apply. At the very least, we need to place the same access constraints on AI as we do on junior engineers. We should arguably go further and treat AI with some suspicion: there are reports that, like HAL in Stanley Kubrick’s 2001: A Space Odyssey, AI may try to break out of its sandbox environment to accomplish a task. As vibe coding spreads, it will become increasingly necessary to have experienced engineers who understand how complex software systems work and can build the right guardrails into development processes.
Tea hack
Sean Cook is the founder and CEO of Tea, a mobile app launched in 2023 and designed to help women date safely. In the summer of 2025, Tea was ‘hacked’: 72,000 images, including 13,000 verification photos and images of government documents, were leaked on the public discussion forum 4chan. Even worse, Tea’s own privacy policy promised that these images would be “immediately deleted” after users authenticate, meaning the company may have violated its own privacy policy.
I put “hacked” in quotes because the incident stemmed less from the cleverness of the attackers than from the incompetence of the defenders. In addition to violating its own data policy, the app left a Firebase storage bucket unsecured, exposing sensitive user data to the public internet. It’s the digital equivalent of locking your front door but leaving the back door open with the family jewelry hanging ostentatiously from the doorknob.
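Misconfigurations like an open bucket are cheap to catch automatically. The sketch below is a hypothetical pre-deploy check in Python over a simplified IAM-style policy (a list of role bindings, modeled loosely on cloud storage access policies, where the special members `allUsers` and `allAuthenticatedUsers` make data world-readable); the role names and structure are illustrative, not any vendor's exact API.

```python
# Sentinel members that make a bucket world-accessible in IAM-style policies.
PUBLIC_MEMBERS = {"allUsers", "allAuthenticatedUsers"}

def public_bindings(policy_bindings):
    """Return the (role, member) pairs that expose a bucket publicly.

    `policy_bindings` is a list of {"role": str, "members": [str, ...]}
    dicts -- a simplified stand-in for a real storage bucket policy.
    """
    return [
        (binding["role"], member)
        for binding in policy_bindings
        for member in binding.get("members", [])
        if member in PUBLIC_MEMBERS
    ]

# Example: a bucket that is supposed to hold private verification photos.
bindings = [
    {"role": "roles/storage.objectViewer", "members": ["allUsers"]},
    {"role": "roles/storage.admin", "members": ["user:ops@example.com"]},
]
assert public_bindings(bindings), "bucket is public -- fail the deploy"
```

A check like this, run in CI against every storage bucket before release, turns a silent data leak into a failed build.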
While we don’t know whether the root cause was vibe coding, the Tea hack highlights how catastrophic breaches stem from fundamental, avoidable security flaws rooted in poor development processes. It’s exactly the kind of vulnerability that a disciplined, thoughtful engineering process catches. Unfortunately, relentless financial pressure rewards a ‘lean’, ‘move fast and break things’ culture that is the opposite of disciplined, and vibe coding only exacerbates the problem.
How can you use AI coders safely?
How should business and technology leaders think about AI? First, this is not a call to abandon AI coding. An MIT Sloan study estimated that AI delivers productivity gains of between 8% and 39%, while a McKinsey study found a 10% to 50% reduction in task completion time with the use of AI.
However, we must be clear-eyed about the risks. The old lessons of software engineering aren’t going away. They include many proven best practices, such as version control, automated unit and integration testing, security controls such as SAST/DAST, separation of development and production environments, code review, and secrets management. If anything, they become more important.
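To make one of those practices concrete, here is a minimal Python sketch of secrets management: configuration is read from the environment at startup and fails fast if a secret is missing, instead of being hard-coded where an AI assistant (or a git commit) can leak it. The secret names are illustrative.

```python
import os

# Illustrative names; a real service lists whatever credentials it needs.
REQUIRED_SECRETS = ("DB_PASSWORD", "API_KEY")

def load_secrets(env=os.environ):
    """Fail fast at startup if any required secret is missing.

    Reading from the environment keeps credentials out of source code,
    so they never appear in version control or an AI assistant's context.
    """
    missing = [name for name in REQUIRED_SECRETS if name not in env]
    if missing:
        raise RuntimeError(f"missing secrets: {', '.join(missing)}")
    return {name: env[name] for name in REQUIRED_SECRETS}
```

Failing at startup is the point: a misconfigured deployment dies loudly in staging rather than limping into production with a blank or hard-coded credential.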
AI can generate code a hundred times faster than humans can type, creating an illusion of productivity that is a seductive siren call for many executives. But the quality of rapidly generated AI slop is still very much in question. To build complex production systems, companies need the thoughtful, seasoned experience of human engineers.
Tianhui Michael Li is president of the Pragmatic Institute and the founder and president of The Data Incubator.




