The quiet rise of shadow AI

“Shadow AI” sounded like a concept straight out of the Batman franchise when I first heard the term in a BBC article, but I soon noticed it everywhere. From my friends at school to industry professionals, it seems that everyone is using LLMs to write emails, plan presentations and generate code. To me, these LLMs are so convenient that the thought of using AI secretly doesn’t even cross people’s minds - the trade-off simply isn’t considered.

I think shadow AI should be taken as a signal to employers that their employees are improvising - turning to LLMs as helpers, treating them no differently than Google or Grammarly. However, there is a key difference: LLMs don’t only assist the user. The information given to them is processed by the provider, and may be retained and used to train future models, whether it be innocuous data such as a WhatsApp message organising a get-together, or confidential data such as an internal codebase.

Of course, companies tried to adapt as quickly as possible, but shadow AI grew even faster, thanks to how quickly LLMs have been integrated into daily apps such as Docs, Word, Slack, and Notion. There is no need to download anything or install plugins to use LLMs conveniently - a good example would be VSCode, the editor I use to write this blog! When I was following a pairs trading tutorial from X, I kept noticing that VSCode was writing my code ahead of me, thanks to the free, integrated GitHub Copilot. This irritated me, as I wasn’t coding just to make a simple model, but rather to learn.

The risk of private data being mishandled by these LLM providers is pressing. In April, The Verge reported that OpenAI was quietly storing deleted chat logs - data that users thought was gone. Around the same time, a BBC investigation found Meta’s AI assistant embedded in apps like WhatsApp and Instagram, collecting content in ways that could mislead even well-informed users. In both cases, users may have believed they were operating in private. Of course, they weren’t. And when employees pass in sensitive information, it becomes a question of security, trust, and sometimes legal compliance.

One solution proposed to me by a friend was to blanket ban AI tools outright on all company networks, but this didn’t seem effective to me: a VPN bypasses the block easily, and people use these models precisely because they are extremely effective and time-saving.

Instead, I offered a simpler, more realistic fix: give people the tools they need, but in a safe fashion, through internal LLMs. Firms could even construct basic guardrails, flagging sensitive inputs and logging odd behaviour. However, the issue lies at a deeper level - if firms keep rewarding speed above all else, then it makes sense that people would just use whatever gets the job done the fastest.
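As a sketch of what such a guardrail could look like, here is a minimal, hypothetical pre-filter that checks prompts for sensitive-looking content before they reach an internal LLM. The pattern names and regexes are illustrative assumptions, not a real product - an actual deployment would use a proper data-loss-prevention library and rules tuned to the firm.

```python
import re
import logging

# Illustrative patterns only - a real guardrail would use org-specific
# DLP rules, not three hand-written regexes.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "api_key": re.compile(r"\b(?:sk|key|token)[-_][A-Za-z0-9]{16,}\b"),
    "uk_ni_number": re.compile(r"\b[A-Z]{2}\d{6}[A-Z]\b"),
}

def flag_sensitive(prompt: str) -> list[str]:
    """Return the names of any sensitive patterns found in a prompt."""
    hits = [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]
    if hits:
        # Log only the categories, never the raw text, so the audit
        # log doesn't itself become a second leak.
        logging.warning("Prompt flagged for: %s", ", ".join(hits))
    return hits
```

The point of the design is that flagging and logging happen before the data leaves the building, which is exactly what a public chatbot cannot guarantee.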

This is why shadow AI usage isn’t just about ‘bad employees’, but also about the disconnect between what’s available and what’s actually allowed - new tools are moving faster than the new rules. I am confident that shadow AI usage will never stop, but it can definitely be reduced from where it is right now.

The first course of action is to take note of how shadow AI is being used across both the workplace and academic settings, and to respond accordingly by putting the right guidelines in place, whether that be adoption of detection services like GPTZero, or the implementation of internal LLMs.
