The rapid adoption of generative AI tools in the workplace is transforming how we work—but it’s also opening the door to new data security risks. A recent TELUS Digital survey found that a significant number of enterprise employees are entering sensitive information into publicly available AI assistants. In this article, we explore the survey’s findings, examine the potential repercussions for enterprise security, and discuss best practices for mitigating these risks while harnessing AI’s productivity benefits.
Survey Overview: What the Data Tells Us
The survey, conducted by TELUS Digital in January 2025 via Pollfish and targeting over 1,000 professionals from companies with 5,000+ employees in the United States, sheds light on a growing phenomenon: the use of “shadow AI.” Here are the key takeaways:
- Widespread Use of Public AI Tools:
- 68% of employees reported accessing AI assistants (like ChatGPT, Microsoft Copilot, or Google Gemini) through personal accounts rather than company-approved channels.
- Sensitive Data Exposure:
- 57% admitted to inputting sensitive information into these AI systems, despite the inherent security risks.
- The types of sensitive information disclosed include:
- 31% – Personal data such as names, addresses, emails, and phone numbers.
- 29% – Product or project details, including unreleased information and prototypes.
- 21% – Customer data encompassing contact details, order histories, and recorded interactions.
- 11% – Confidential financial information like revenue figures, budgets, and forecasts.
- Policy and Training Gaps:
- Only 29% of employees are aware of company policies that prohibit the use of sensitive data with GenAI tools, and just 24% reported receiving mandatory training on these AI assistants.
- More than 44% stated that they either lack or are unaware of formal AI usage guidelines at work.
- A significant 50% are uncertain whether they comply with existing AI policies, and 42% noted that there are no consequences for not following such guidelines.
The survey clearly indicates that while AI assistants are powering up productivity, they are also introducing serious security challenges—not least the uncontrolled exposure of sensitive enterprise data through the use of public GenAI tools.
The Implications for Enterprise Security
The survey’s results are a wake-up call for organizations that aim to leverage AI without compromising their security posture. The practice of using personal accounts to access AI tools, commonly termed “shadow AI,” creates multiple challenges:
- Data Sovereignty and Compliance Risks:
Without proper enterprise controls, confidential information such as customer details and financial data can end up outside the company’s secure ecosystems. This not only violates internal policies but could also lead to compliance breaches under data protection regulations.
- Loss of Visibility:
When employees use personal accounts or non-sanctioned AI tools, IT and security teams lose critical visibility into how data is being shared and used. This “shadow AI” makes it difficult to monitor, audit, or enforce data security protocols.
- Potential Cybersecurity Vulnerabilities:
By exposing sensitive information to third-party platforms, enterprises increase the risk of data theft or cyberattacks. Public AI systems, which are not designed specifically for enterprise security, might not have the robust safeguards needed to protect sensitive data.
Summary of Security Implications:
Organizations must be vigilant. The benefits of increased productivity from AI assistants come with the risk of data leakage—especially when employees bypass designated, secure platforms. This demands a rethinking of how AI is governed in enterprise settings.
Balancing Productivity and Security: The Enterprise Challenge
The TELUS Digital survey reveals a paradox that many modern enterprises face: despite the security risks, employees overwhelmingly favor the use of AI assistants in their daily workflows.
- Productivity Boosts are Real:
- 60% of employees say that AI tools help them work faster.
- 57% feel that these tools simplify their daily tasks.
- 49% believe that AI assistance directly improves their performance.
- Overall, a staggering 84% want to continue using AI assistants at work.
- Supplementing Even When Company Tools Are Available:
Surprisingly, even when employees have access to officially sanctioned AI tools, about 22% still log into personal accounts for potentially more advanced functionalities. This behavior underscores a key issue: enterprise-provided solutions may not always meet the evolving needs of users.
Why Do Employees Turn to Shadow AI?
Several factors contribute to the persistent use of personal AI tools despite inherent risks:
- Perceived Superior Capabilities:
Public AI tools are often seen as the cutting edge in generative AI, offering the latest features that some enterprise versions may not immediately provide.
- Lack of Adequate Training:
With only a minority of employees receiving mandatory training on AI usage, many are simply unaware of the potential dangers or of the existence of safer alternatives.
- Insufficient Policy Enforcement:
The survey points out that nearly half of the workforce is unaware of any formal guidelines, while a significant portion reports no consequences for non-compliance, further encouraging risky behavior.
The drive for efficiency and improved productivity is pushing employees toward using accessible, but less secure, AI tools. To safeguard sensitive data, companies must align their internal tool offerings with employee needs and enforce comprehensive training and policy adherence.
Best Practices for Securing Enterprise AI Use
To strike a balance between leveraging AI-driven productivity and safeguarding enterprise data, organizations should consider the following best practices:
- Establish Clear Policies and Guidelines:
- Develop and disseminate concise AI usage policies that explain what data can and cannot be input into AI tools.
- Ensure policies are regularly updated to reflect emerging threats and technological changes.
- Mandatory Training Programs:
- Implement mandatory training on AI security best practices.
- Cover topics such as data classification, risk of shadow AI, and compliance requirements to empower employees with the necessary knowledge.
- Deploy Enterprise-Grade AI Solutions:
- Invest in AI platforms—like TELUS Digital’s Fuel iX—that are built with security, data sovereignty, and compliance in mind.
- These solutions are designed to protect sensitive information while still meeting the end-user’s need for powerful AI capabilities.
- Monitor and Audit AI Usage:
- Establish mechanisms for monitoring AI tool usage to detect and mitigate risk early.
- Regular audits can help identify trends of non-compliant behavior and enforce accountability.
- Implement Least Privilege Access:
- Align with strategies discussed in our earlier thread Navigating AI Co-Pilots: The Urgent Need for Least Privilege in Data Security.
- Restrict data access to only what is necessary for specific roles, reducing the risk of widespread data exposure.
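The policy and least-privilege recommendations above lend themselves to simple automated pre-checks. Below is a minimal, hypothetical sketch of a pre-submission filter that blocks prompts containing obvious sensitive data before they reach a public AI tool. The regex patterns and category names are illustrative assumptions, not a complete data loss prevention (DLP) solution:

```python
import re

# Hypothetical patterns for illustration only -- a real DLP policy would be
# far broader (named-entity recognition, document classifiers, keyword lists
# tuned to the organization's data classification scheme, etc.).
SENSITIVE_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "ssn":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def check_prompt(prompt: str) -> list[str]:
    """Return the names of sensitive-data categories detected in a prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

def is_allowed(prompt: str) -> bool:
    """Block the prompt if any sensitive category is detected."""
    return not check_prompt(prompt)
```

In practice a check like this would run inside a secure web gateway or an approved enterprise AI front end rather than on the employee's machine, so it cannot simply be bypassed by switching to a personal account.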
A quick checklist for security and IT teams:
- [ ] Define clear AI data handling protocols.
- [ ] Regularly update and enforce security policies.
- [ ] Provide comprehensive training on AI use.
- [ ] Adopt secure, enterprise-grade AI platforms.
- [ ] Monitor AI usage across the organization.
- [ ] Enforce the principle of least privilege.
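To make the monitoring item in the checklist concrete, here is one possible sketch of an audit script that flags traffic to public GenAI endpoints in web-proxy logs. The log format (one whitespace-separated `user domain` pair per line) and the domain watchlist are assumptions chosen for illustration; a real deployment would pull both from the organization's proxy and threat-intelligence tooling:

```python
from collections import Counter

# Hypothetical watchlist of public GenAI endpoints; a real deployment would
# maintain this via a secure web gateway category or threat-intel feed.
PUBLIC_AI_DOMAINS = {
    "chat.openai.com",
    "gemini.google.com",
    "copilot.microsoft.com",
}

def flag_shadow_ai(log_lines):
    """Count requests per user to public GenAI domains.

    Assumes each log line is a simple 'user domain' pair.
    """
    hits = Counter()
    for line in log_lines:
        try:
            user, domain = line.split()
        except ValueError:
            continue  # skip malformed lines rather than abort the audit
        if domain in PUBLIC_AI_DOMAINS:
            hits[user] += 1
    return hits
```

Aggregated counts like these give security teams the visibility the survey says is missing, without inspecting prompt contents, and can feed the regular audits recommended above.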
Broader Industry Trends and Future Outlook
The survey results reflect an industry in transition. Generative AI is proving to be a double-edged sword: on one side, it offers revolutionary productivity benefits; on the other, it introduces significant risks that need to be managed proactively.
The Role of AI in Digital Transformation
- Driving Efficiency:
AI assistants are no longer a luxury—they have become essential tools for speeding up work processes, creative brainstorming, and routine task automation.
- A Call for Integrated Security Solutions:
As enterprises deploy AI at scale, integrated security must become an intrinsic part of any new tool or platform. The TELUS Digital approach—with its Fuel iX platform—demonstrates how marrying advanced AI capabilities with strong data protection measures can create a more secure and productive workplace.
- Windows and Enterprise Security:
For many organizations, the Windows operating system remains the backbone of enterprise IT. As Windows users integrate AI tools into their daily workflows, it’s critical that these environments support and enforce robust data protection protocols. Software updates, Windows security patches, and dedicated support for AI tool integration are key to ensuring that the benefits are not overshadowed by vulnerabilities.
A Look Ahead at Industry Events
TELUS Digital is set to present its detailed insights and case studies at Mobile World Congress (MWC) 2025 in Barcelona. Their session, "Fueling Telecom’s Future: TELUS's Journey to GenAI Adoption, Contact Center Excellence, & CISO-Endorsed AI Safety," will delve into how enterprise-grade AI can be scaled securely. This is an essential discussion for IT and cybersecurity professionals, including those in Windows-dominated environments, who are keen to understand how to integrate advanced AI tools without sacrificing data security.
Perspective for Windows Users:
For professionals working within Windows ecosystems, the lessons from TELUS Digital’s survey are highly relevant. As our community has discussed in previous threads (see Navigating AI Co-Pilots: The Urgent Need for Least Privilege in Data Security), integrating AI safely requires deliberate planning, robust security protocols, and a culture of continuous learning.
Conclusion
The TELUS Digital survey paints a compelling picture of the growing risks associated with unregulated AI use in the enterprise. With over half of the respondents admitting to entering sensitive information into public AI assistants, the concept of shadow AI is no longer theoretical—it’s a pressing reality.
To summarize:
- High Adoption, High Risk: While generative AI tools are boosting productivity, they are also exposing sensitive data.
- Policy and Training Gaps: A significant proportion of employees either lack sufficient training or are unaware of formal guidelines, amplifying the risks.
- Need for Secure AI Platforms: Enterprises must adopt robust, secure AI solutions—backed by clear policies and continuous monitoring—to mitigate these emerging threats.
Stay tuned to WindowsForum.com for more insights and detailed discussions on cybersecurity, Windows enterprise security, and best practices for integrating advanced technology into your business workflows.
Source: Yahoo Finance https://finance.yahoo.com/news/telus-digital-survey-reveals-enterprise-114500468.html