Your company may be using AI you don’t know about. It could be putting you at risk

Nearly 9 out of 10 AI tools inside enterprises are invisible to IT. That’s the finding of a LayerX study that should send shivers down the spine of any executive: AI is shaping decisions, summarizing meetings, and analyzing data without the knowledge—or control—of the very teams meant to secure it. What sounds like a technical oversight has become a board-level crisis, worsened by new global regulations.

Last month, the EU’s AI Act entered its next enforcement stage, forcing enterprises to document how general-purpose AI tools process data and threatening penalties of up to €35 million or 7% of global turnover. Yet weeks later, many organizations remain unprepared, struggling even to inventory which AI features are active in their environments. As regulators demand transparency, most enterprises can’t meet the basic threshold of visibility.

That gap is where the real danger lies. AI isn’t only the domain of headline-grabbing tools like ChatGPT; it’s embedded in the everyday software stack. Zoom can transcribe and summarize meetings; Salesforce can auto-generate reports; Slack can analyze conversations. These features arrive through silent updates, slipping under IT’s radar while handling sensitive data.

The shadow AI crisis

Call it AI sprawl. Platforms ship “smart” features by default, leaving enterprises with dozens—sometimes hundreds—of parallel AI apps. IT teams often monitor only a fraction. A report from security platform Zluri found that four out of five AI tools inside enterprises are unmanaged, leaving leaders unsure what data they touch, whether they comply with retention rules, or if they’ve been activated at all.

The danger lies in how AI arrives. It doesn’t show up as new software IT can review. It slips in through automatic updates inside trusted apps. One day Slack is just a messaging platform; the next, it’s summarizing conversations and suggesting actions by default. Salesforce, Zoom, and Microsoft 365 are all adding similar capabilities, with little fanfare and no guarantee that compliance teams are aware.

Gal Nakash, cofounder and chief product officer at the SaaS security company Reco, warns that the real danger isn’t in sanctioned AI tools but in the hidden ones that slip into everyday workflows. He notes that vendors regularly roll out new features inside apps like Microsoft 365, Salesforce, and Slack, often without fanfare or IT oversight. “The real challenge isn’t governing AI you know about,” he says. “It’s discovering and securing the AI you don’t even realize is there.”

That discovery gap is what turns AI from productivity booster to liability. When features activate silently, they bypass procurement and security reviews. Sensitive data can be processed without oversight. “If you can’t see where AI lives in your stack, you can’t govern its behavior or its output,” Nakash says.

Why traditional governance is failing

Enterprise security tools weren’t built for this. They track software inventories and run quarterly reviews, but embedded AI arrives silently, as toggles and background features inside already-approved apps. The risk isn’t new software; it’s new capability: search that combs entire databases, copilots that draft messages or summarize private docs by default.

New Reco data underscores the scale: 91% of AI tools inside enterprises operate without IT oversight, and 8.5% of employee prompts involve sensitive business data. That includes personal identifiers, customer details, even financials, all of which are processed by features security teams may not even know are turned on. “Traditional security tools operate on static inventories and periodic assessments,” Nakash notes. “They were built for the pre-AI era where changes happened slowly and visibly.”

In other words, the very tools companies trust to protect them are ill-equipped for a world where SaaS vendors can transform the capabilities of an approved app overnight. By the time traditional reviews catch up, sensitive data may already have been exposed.

Governance-first AI

Some companies are responding by embedding AI inside governance controls from the outset. LeapXpert’s communications intelligence solution, Maxen, is one such example. Instead of layering an LLM onto consumer chat apps, Maxen functions within enterprise guardrails. That means access is enforced at the user level, outputs are explainable and retained, and data stays within compliance perimeters.

Dima Gutzeit, CEO of LeapXpert, argues that many AI assistants are rushed into products as afterthoughts, prioritizing ease of use over accountability. Gutzeit says his company took the opposite approach, building AI into its compliance framework from the very start, with controls for access, explainability, and retention. “We view AI as an integral part of the communications governance fabric, not an add-on,” he adds.

For highly regulated industries like finance or healthcare, the stakes are high. A vague query—“What’s the status of our largest deal?”—could cause an unsanctioned assistant to surface material nonpublic information to someone without clearance. Gutzeit says Maxen’s controls prevent that.

This governance-first model complements discovery tools. Enterprises still need visibility across SaaS platforms to spot hidden toggles and plug-ins. But assistants designed to respect audit and retention rules reduce the chance of sensitive data spilling into the wrong hands.

A transparency-filled future?

The EU’s enforcement cadence makes the risk unavoidable. The AI Act now requires transparency, documentation, and risk assessments for general-purpose AI, with even tougher obligations for models deemed systemic risks. Regulators have also introduced a voluntary code of practice, according to The Wall Street Journal, offering a preview of stricter enforcement ahead.

LeapXpert’s Gutzeit believes this will trigger a fundamental shift in how enterprises adopt AI. “Silent AI features will no longer be tolerated,” he says. “Enterprises will require vendors to disclose how AI is being used, what data it draws on, and how outputs are retained. Compliance-first strategies will replace AI-first adoption.”

For executives, the message is clear: Waiting for perfect standards or a finalized audit checklist is not a strategy. Discovery must be continuous, and controls have to be built in from the start, not patched on after rollout.

“The enterprises that will succeed with AI are those that treat governance as a competitive advantage, not a compliance burden,” says Reco’s Nakash. “When you build visibility and control into your AI strategy from day one, you’re not just managing risk; you’re creating the foundation for sustainable innovation at scale.”

The future depends on making AI transparent. If you can’t see where it’s running or what it’s touching, you can’t safeguard customers, comply with the law, or trust the insights it generates. The good news is that a path forward is emerging: real-time discovery across SaaS platforms combined with governance-first assistants that keep data contained. That’s how enterprises can embrace AI without losing control.
