AI Layoffs and Security Concerns in the Workplace

You let an AI write your web app last week. Loved how fast it was. Shipped it in an afternoon. Then a Google search surfaced it, and you realized: thousands of people can see your customer database right now.

This isn’t theoretical. Security researcher Dor Zvi just found 5,000 AI-coded web apps sitting on the open internet with virtually no security. Around 2,000 of them were actively leaking sensitive data: medical records, financial statements, corporate strategy docs, customer conversation logs. Hospital work assignments with doctors’ names. Shipping manifests. Ad-buying strategies. One phishing site impersonating Bank of America had been built with an AI coding tool and hosted on Lovable’s public domain.

The machines didn’t fail here. The humans did.

The Convenience Trap Is Real

Here’s what happened: AI coding platforms like Lovable, Replit, Base44, and Netlify made building a web app so frictionless that security became optional. Click a few buttons, describe what you want, get a working application in minutes. Host it on their domain automatically. No servers to configure. No authentication to set up. No infrastructure decisions required.

The tradeoff seemed reasonable when the app was just internal. Then teams started loading real data into it.

Replit’s CEO responded to Zvi’s findings by pointing out that their platform does have privacy settings. You can make apps private with a single click. Base44 said the same thing: they provide security tools; users just have to use them. The implication is clear—if your data got exposed, that’s a configuration choice, not a platform problem.

Technically true. Practically irrelevant.

Here’s the surprise nobody wants to admit: security tools that require active decisions fail at scale. When a 26-year-old product manager at a mid-market SaaS company spins up a quick internal dashboard to track customer metrics, they’re not thinking about authentication schemas. They’re thinking about shipping. They reached for AI precisely because it removes friction, and security is friction. The platform made the insecure path the default path. Everything else is just liability-shifting.
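To make the default path concrete, here’s a sketch of the pattern Zvi’s scan kept finding: a generated endpoint that happily serves real data to anyone who requests it. This is illustrative, not any platform’s actual output; the framework choice, route, and records are invented.

```python
from flask import Flask, jsonify

app = Flask(__name__)

# Stand-in for the real customer table someone loaded into a "quick
# internal dashboard". All values are invented.
CUSTOMERS = [
    {"id": 1, "name": "Acme Corp", "arr": 120_000, "churn_risk": "high"},
    {"id": 2, "name": "Globex", "arr": 450_000, "churn_risk": "low"},
]

@app.route("/api/customers")
def customers():
    # No authentication, no authorization, no rate limit. Hosted on a
    # public platform domain, this URL serves the whole table to anyone
    # who finds it. Nothing is "broken"; it does exactly what was asked.
    return jsonify(CUSTOMERS)

if __name__ == "__main__":
    app.run()
```

Note what’s missing isn’t exotic. It’s the one step nobody asked for.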

The Layoff Acceleration Is the Real Story

Meanwhile, companies announced 83,387 job cuts in April alone—a 38 percent jump from March. For the second straight month, AI was listed as the leading reason. We’re talking 21,490 jobs directly attributed to AI and automation efforts. That’s more than a quarter of all cuts.

This matters because it reveals something broken about how organizations think about AI adoption. They’re not replacing bad processes with better ones. They’re not automating repetitive work so humans can do higher-value thinking. They’re cutting headcount first, asking questions later. And they’re using AI as the justification—sometimes legitimate, sometimes not.

Mark Cuban called it out: workers who learn to use AI effectively will win. Fair enough. But the real answer isn’t for every employee to suddenly become a prompt engineer. The real answer is that companies are using the AI boom as cover for decisions they’d have made anyway. Some analysts call it “AI-washing”—the practice of framing broad cost-cutting as innovation.

Here’s the thing though: both realities are true simultaneously. AI is displacing certain jobs. And companies are overstating AI’s role while quietly automating away roles they never wanted to fund in the first place.

Security and Speed Are Still Fundamentally in Conflict

Nobody wants to hear this, but the data exposure problem isn’t a bug. It’s a feature of the current setup.

AI coding tools are optimized for speed, not safety. They do what you ask them to do. If you ask them to build an app without authentication, they build an app without authentication. The tools don’t spontaneously add security measures because you didn’t request them. Why would they? Speed was the entire value proposition.

Joel Margolis, who recently found an AI chatbot that exposed 50,000 conversations with children on an unsecured website, said it plainly: “Somebody from a marketing team wants to create a website. They’re not an engineer and they probably have little to no security background or knowledge. AI coding tools do what you ask them to do. And unless you ask them to do it securely, they’re not going to go out of their way to do that.”
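For contrast, here’s roughly what “asking them to do it securely” buys you: a single shared-secret gate in front of every route. This is a minimal sketch, assuming a Flask app like the one above; the header name and environment variable are my own placeholders.

```python
import hmac
import os

from flask import Flask, abort, jsonify, request

app = Flask(__name__)
API_TOKEN = os.environ["DASHBOARD_API_TOKEN"]  # fail loudly if the secret is unset

@app.before_request
def require_token():
    # Reject any request that does not carry the shared secret.
    # hmac.compare_digest gives a timing-safe comparison.
    supplied = request.headers.get("X-Api-Token", "")
    if not hmac.compare_digest(supplied, API_TOKEN):
        abort(401)

@app.route("/api/customers")
def customers():
    # Same endpoint as before, now unreachable without the token.
    return jsonify([{"id": 1, "name": "Acme Corp"}])

if __name__ == "__main__":
    app.run()
```

A real deployment would want per-user auth rather than one shared token, but even this crude gate is the difference between “internal tool” and “public dataset.”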

This scales the problem infinitely. For every security researcher like Zvi actively searching for exposed apps, there are hundreds of teams that built something, shipped it, and forgot about it. The apps are still online. The data is still exposed. Nobody’s looking.
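If you’ve shipped one of these apps, the cheapest sanity check is to be your own Zvi: request your endpoints with no credentials and see what comes back. A rough self-audit sketch follows; the base URL and paths are placeholders for your own deployment, and you should only point this at apps you own.

```python
import requests

BASE_URL = "https://your-app.example.com"  # audit only apps you own
PATHS = ["/api/customers", "/api/orders", "/admin/export"]

for path in PATHS:
    # No auth header, no cookie: this is exactly what a stranger sees.
    resp = requests.get(BASE_URL + path, timeout=10, allow_redirects=False)
    if resp.status_code == 200 and resp.content:
        print(f"EXPOSED? {path} answered with {len(resp.content)} bytes anonymously")
    else:
        print(f"ok: {path} -> {resp.status_code}")
```

You want a 401, a 403, or a redirect to a login page. An anonymous 200 with a body is the red flag Zvi’s scan surfaced at scale.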

Zvi compared it to the S3 bucket crisis from earlier in the decade—when misconfigurations in Amazon’s cloud storage led companies from Verizon to WWE to accidentally expose massive datasets. The industry blamed the users, not Amazon. But Amazon also bore responsibility for confusing security settings that made it trivially easy to make the wrong choice.
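The S3 story also shows what the eventual fix looks like: in 2018, AWS shipped Block Public Access, a deny-by-default switch that overrides permissive ACLs and bucket policies. Here’s the per-bucket version via boto3’s put_public_access_block call, with a placeholder bucket name.

```python
import boto3

s3 = boto3.client("s3")

# Deny-by-default: these four flags override permissive ACLs and
# bucket policies. "your-bucket-name" is a placeholder.
s3.put_public_access_block(
    Bucket="your-bucket-name",
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)
```

It took years of breaches before the safe setting became the default one. That’s the lesson the AI coding platforms haven’t absorbed yet.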

This time, the AI coding platforms are in Amazon’s position. They’re saying “we provide the tools; how you configure them is your problem.” But they also designed those tools to reward speed over safety. They hosted the apps on their own domains, making them trivially searchable. They made the insecure configuration the path of least resistance.

The Workforce Angle Is Darker Than It Looks

Now zoom out. Companies are cutting jobs because they’re buying AI tools to do the work faster and cheaper. Those tools are being used by people with no security background because the barrier to entry is intentionally low. Those applications are leaking data at scale because security requires effort nobody budgeted for.

The feedback loop: speed-focused hiring → speed-focused tooling → insecure systems → data breaches → more job cuts to “invest in AI infrastructure.” Rinse. Repeat.

The federal government is loosening AI restrictions under the new administration. The White House wants to “empower innovators, not bureaucracy.” That sounds good if you’re building the AI tool. It sounds less good if you’re the person whose medical data just got exposed on an unsecured web app because some startup wanted to move fast and break things—and broke your privacy along the way.

What Actually Happens Now

The exposed apps Zvi found will eventually get secured or taken down. The four platforms will likely tweak their default settings. There will be a wave of blog posts about “AI security best practices.” Companies will continue laying people off and calling it “AI transformation.”

None of that fixes the underlying problem: we’ve built a system that rewards moving fast and punishes thinking carefully. We’ve created tools that make security a burden rather than a default. And we’ve optimized for convenience at the exact moment we should be optimizing for safety.

The irony is savage. We’re using AI to automate work faster than humans can secure it. We’re laying people off to pay for the tools that are supposed to replace them. And we’re exposing sensitive data at scale because security requires a decision, and decisions slow you down.

The people building these tools know this. They’re optimizing for adoption and speed because that’s what their metrics reward. The companies using these tools know this. They’re accepting the risk because the alternative—hiring security engineers and infrastructure teams—costs more upfront. And the workers being cut know this. They’re watching their roles disappear into the gap between what AI can do and what it should be allowed to do.

The real problem isn’t that AI is bad at building secure applications. It’s that we’ve collectively agreed that speed matters more than safety—and we’re willing to let other people’s data pay for it.
