You’ve likely heard of vibe coding and may well have conducted an experiment or two yourself, enlisting Claude or some other AI tool to create a simple website or an interactive game. OpenAI cofounder Andrej Karpathy coined the phrase in a February 2025 tweet. In its simplest terms, vibe coding involves telling an AI program what you want to accomplish and having the AI create the code: it uses the natural language you provide to generate the software.
Vibe coding is a truly revolutionary democratizer of software development. It allows anyone with a computer and a little imagination to come up with software that appears, at least on the surface, to do whatever you ask it to.
And therein lies the rub. Anyone in a company can now introduce software inside its cybersecurity perimeter without any knowledge of how that software works or what it may be designed to do beyond fulfilling a clever prompt.
If the code an employee conjures just happens to be algorithmically derived from vetted, publicly available sources, you are in luck. But the fundamental danger with AI-generated code is precisely that you have no idea where it came from, what the sources were or how they were assembled. Was the source a PhD student at a top university, a basement-dwelling hacker, a state-sponsored cyber terrorist? All of the above?
The AI program you are using doesn’t know or care—it’s loyally fulfilling its pattern-matching mission, blindingly fast and blithely oblivious.
Opening the door to disaster
That amazing program you just created without ever having learned to write a line of code may contain world-class spyware, viruses, or malware that can extract (i.e., exfiltrate) a company’s proprietary data, or harbor so-called SQL injection vulnerabilities that can wreak havoc on your databases. The beautiful part from the bad actor’s point of view is that they don’t need a back door: The blissfully ignorant employee importing the mystery code just swung the front doors wide open.
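To make the SQL injection risk concrete, here is a minimal, hypothetical sketch of the kind of flaw that can hide in generated code. The table and function names are invented for illustration; the vulnerable pattern, concatenating user input directly into a SQL string, is the real danger:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

def find_user_unsafe(name):
    # Vulnerable pattern: user input is pasted straight into the SQL
    # string, so the input can rewrite the query itself.
    query = f"SELECT name, role FROM users WHERE name = '{name}'"
    return conn.execute(query).fetchall()

def find_user_safe(name):
    # Parameterized query: the driver treats the input as a literal
    # value, so an injection payload matches nothing.
    return conn.execute(
        "SELECT name, role FROM users WHERE name = ?", (name,)
    ).fetchall()

payload = "' OR '1'='1"
print(find_user_unsafe(payload))  # dumps every row in the table
print(find_user_safe(payload))    # returns an empty list
```

The unsafe version turns the payload into `WHERE name = '' OR '1'='1'`, a condition that is always true, which is how a single input field can exfiltrate an entire table.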
But wait, there’s more.
The vibe code your employee magically generated with their new AI colleague could also violate copyright or patent law. How likely is a typical nontechnical employee to discover that? The odds approach zero. AI-generated IP liability could radically reshape your company’s litigation profile.
Code generated through an LLM, like any code humans develop, will have bugs. But unlike human-written code, nobody on staff fully understands how it was put together: whether it is structurally sound, whether it is coherent, or where the vulnerabilities may lie. Addressing this problem does not seem to be a major priority in the damn-the-torpedoes, full-speed-ahead mindset of the current AI-obsessed moment.
So what can organizational leaders do to manage this risk and mitigate potential catastrophe? Understanding the danger is the first step. Consider taking the following steps.
It’s a C-level problem, so treat it as such
AI security is not primarily an IT problem: It’s a company-wide strategic problem for senior management. With AI touching finance, HR, legal, sales and marketing, design, and engineering, the technical aspects of AI interaction are just the entry point. AI security needs to be treated as an enterprise issue. It cannot simply be delegated to IT, as is standard procedure with cybersecurity.
Build security into your process
Don’t wait to react after the fact. When it comes to AI risk, the old approach of creating a policy and having employees acknowledge it is not sufficient. Risk monitoring and remediation need to be part of the technical processes themselves, not separate static policies that you hope are being followed while they collect digital dust in some virtual folder. New software programs are designed to flag, assess, quantify, and address these types of risks before they become crises. Consider adopting them sooner rather than later to make sure your security is keeping pace with AI deployment.
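As a toy illustration of building checks into the process rather than into a policy document, here is a hypothetical sketch of an automated scan that flags dynamically built SQL in Python source before it ships. Real commercial scanners are far more sophisticated; the function name and keyword list below are invented for this example:

```python
import ast

# Crude heuristic: an f-string whose literal parts begin with a SQL
# keyword is probably a dynamically built query worth a human review.
SQL_KEYWORDS = ("select ", "insert ", "update ", "delete ")

def flag_risky_sql(source: str) -> list[int]:
    """Return line numbers of f-strings that look like SQL statements."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.JoinedStr):  # every f-string in the file
            literal = "".join(
                part.value
                for part in node.values
                if isinstance(part, ast.Constant) and isinstance(part.value, str)
            )
            if literal.lower().lstrip().startswith(SQL_KEYWORDS):
                findings.append(node.lineno)
    return findings

snippet = '''
name = input()
query = f"SELECT * FROM users WHERE name = '{name}'"
'''
print(flag_risky_sql(snippet))  # → [3]
```

A check like this can run in a continuous-integration pipeline on every code submission, which is the difference between a living control and a policy PDF nobody opens.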
Demand accountability from providers
Require your providers to expressly describe how AI is incorporated into their applications, what the risks are, and how those risks can be assessed and addressed in real time (seconds or minutes, not quarters) as they occur in the application itself. This is rapidly becoming a new requirement well beyond the standard check-the-box security questionnaire.
Consult the experts
A new industry is arising to address the gap between the explosion of AI use at all levels of organizations and the lack of response protocols for the largely unidentified risks being created at that same breakneck pace. It is worth seeking guidance from the experts.
AI’s ability to let nontechnical employees create code is truly revolutionary. But as history teaches, revolutions can go a few different ways. It is critical to be aware of, and address, the new risks inherent in these new capabilities. Vibes can only get you so far.
