Want to adopt AI in your workplace? Build the foundation first

When Microsoft announced that it would start evaluating employees on their AI usage, it sent a clear message: AI fluency is now part of the job.

Business Insider recently reported that Microsoft was asking managers to track employee engagement with internal AI tools like Copilot and consider that usage in performance reviews. This signals that AI skills are becoming as measurable as teamwork or communication.

AI is no longer optional in white-collar jobs. It now ranks alongside email, video calls, and spreadsheets as a workplace essential. However, unlike those tools, AI continues to occupy uncharted territory. There’s no standard for how to use it, no shared definition of what “good” looks like, and no clear line between acceptable and risky behavior.

The technology is also evolving faster than most companies can keep up with. Many employers are encouraging or requiring the use of AI without offering the training or oversight to match. That mismatch is creating more than confusion. It’s leading to misuse, uneven evaluations, and pressure on employees to guess their way through rules that don’t exist yet.

Microsoft’s decision carries weight. When one of the world’s most powerful tech companies sets a precedent, others are likely to follow. But if companies adopt similar policies without first putting guardrails in place, they invite misalignment and dysfunction.

The consequences of throwing employees into the AI deep end

Pushing AI adoption is smart, but leaving employees to figure it out alone isn’t. Companies are asking workers to operate fast-changing, complex tools with little support. The results are messy.

Many Gen Z professionals are already improvising in the absence of clear guidance. In a recent Resume Genius survey, 39% of Gen Z workers said they’ve automated tasks without their manager’s approval. Another 28% admitted to submitting AI-generated work without disclosure. Nearly a third used AI in ways that might violate company policy, and 23% reported that using AI at work negatively affected their mental health.

That pattern isn’t limited to Gen Z. A 2025 KPMG study of 48,000 workers across 47 countries found that 57% are hiding their AI use from managers. Most haven’t received formal training. Two-thirds don’t verify AI outputs for accuracy, and more than half have already made AI-related mistakes on the job.

Together, these findings point to workplaces where AI use is rising fast, but employees are making up the rules as they go, often under pressure and without a clear sense of what’s safe, ethical, or expected. That uncertainty isn’t sustainable, especially as AI use becomes a formal job expectation. Requiring AI without structure is like handing out calculators in math class without ever teaching equations. The tools are powerful, but if you want good outcomes from them, you have to teach people how to use them properly.

Productivity is improving, but at what cost?

AI can boost performance. That much is clear. A 2025 Harvard Business Review study found that generative AI improves both productivity and creativity. But it also uncovered a troubling side effect: employees felt less motivated and more disengaged when they used AI to complete tasks they once took pride in doing themselves.

And burnout is rising. According to a July 2025 Upwork survey, the employees who report the highest productivity gains from AI are also the most burned out: 88% of top AI users say they’re experiencing burnout, and they’re twice as likely to consider quitting as their less productive peers. Many also feel disconnected from their company’s AI strategy, with 62% reporting that they don’t understand how their use of AI aligns with broader goals. The more AI increases productivity, it seems, the more it drains the people who use it.

What happens when the rules are missing

Right now, AI use at work is both essential and undefined. This creates three major risks for companies:

  1. Compliance breakdowns: Without clear policies, employees may expose sensitive data, rely on flawed outputs, or use AI in ways that open the door to legal risks.
  2. Subjective reviews: When “using AI effectively” becomes part of evaluations but no standards exist, reviews can reflect a manager’s personal bias instead of actual performance.
  3. Erosion of trust: Workers are left wondering what’s okay and what isn’t, and managers don’t always know either. The result is second-guessing on all sides.

What businesses need to do about AI

The companies that thrive in this new era won’t be the ones that push AI the fastest. They’ll be the ones that do three things well:

  1. Set clear policies: Define what responsible AI use looks like in your workplace. Spell out what you encourage, what you restrict, and where employees can go for help.
  2. Offer practical, task-based training: Skip the generic webinars. Teach employees how to apply AI to their actual work, whether that’s drafting policy language, summarizing customer feedback, or automating reports.
  3. Build real-time feedback loops: Hold regular check-ins to ask: What’s working? What’s unclear? What needs to evolve? AI is moving fast. Policies need to move with it.

If AI is now part of the job, it needs its own set of guidelines. The companies that succeed won’t be the ones that adopt AI fastest, but the ones that build the right foundation and teach people to use it well. Microsoft started the conversation. Now the rest of us need to define what responsible AI adoption looks like and put it into practice.
