An increasing number of employees are using generative AI tools in secret, unsure whether their organisations support or prohibit such use. Some workplaces offer incentives for effective AI use, while others still lack basic guidelines, resulting in confusion, risk and covert behaviour that could carry serious consequences.
Many organisations are still adjusting to the rapid uptake of tools such as ChatGPT and Microsoft Copilot, creating communication gaps that leave workers operating without clear guidance. Some businesses have officially embraced AI, offering structured channels and rewards for usage. However, others were initially cautious due to concerns over data privacy, and that hesitation continues to affect AI adoption, even as strategies begin to change.
In a global survey of 48,000 professionals, 44% admitted to using AI against company policy and 61% said they hide their AI use entirely. In a Deloitte study, 19% of respondents said their employer had no policy on AI use, while a further 14% were unsure whether a policy existed at all. In the absence of clear rules, employees often turn to personal accounts or unapproved tools, increasing the likelihood of compliance issues or data protection breaches.
Legal experts warn that unregulated AI use may backfire, especially in fields where accuracy and source validation are essential. High-profile legal cases have already seen AI-generated content result in fines, reputational damage and disciplinary consequences. As more companies update their legal and HR frameworks to include specific AI guidance, experts anticipate a rise in disputes and challenges relating to misuse or regulatory gaps.
For now, organisations that manage to balance safety and innovation are better positioned to move forward. Clear policies that specify approved tools, required oversight and the essential role of human input are becoming more common. Some leading companies promote experimentation and link productivity incentives or team leaderboards to AI use, leading to greater transparency and engagement.
However, with AI evolving faster than HR departments can keep pace, fixed policy documents are not enough. Experts say organisations must update AI-related policies regularly and train staff continuously to ensure a clear understanding of acceptable use. Culture is equally important: when leadership communicates a positive and consistent message about AI, employees are more likely to use it responsibly, disclose their activities and work within the rules.

