Open-source artificial intelligence tools and cloud services are not just the darlings of digital transformation—they’re also, if we’re being blunt, a hotbed of risk just waiting to be exploited by anyone who knows where to look (and, according to the latest industry alarms, plenty of cybercriminals do). The pace of technological innovation is breakneck: organisations are snapping up the latest ML frameworks and provisioning new cloud AI services like shoppers at a Black Friday electronics sale—fast, frenetic, and sometimes a little too careless with the fine print. But now, as a new report from Tenable warns, this rapid adoption is creating cybersecurity headaches that are piling up faster than you can say “pip install”.
Chasing Innovation—And Catching Vulnerabilities
Organisational leaders have a love affair with AI and the cloud, and who can blame them? The promise of efficiency, scalability, and insight is irresistible. According to the Tenable Cloud AI Risk Report 2025, adoption of AI has surged so dramatically that 72% of companies globally now use AI in at least one business function, up from 50% just two years ago. This isn’t just keeping up with the Joneses; it’s a full-on arms race.

But as the survey’s stats climb, so do the risks. In their rush, many organisations are taking security shortcuts—integrating open-source libraries and spinning up managed cloud services without the scrutiny these powerful, and sometimes notoriously complex, systems demand. The result? A swelling attack surface, sprawling misconfigurations, and, frankly, a new set of nightmares for IT teams.
Open Source: The Double-Edged Sword
Open-source frameworks like Scikit-learn and Ollama are turning up everywhere—present in nearly 28% and 23% of AI workloads, respectively, from Tenable’s real-world analysis across AWS, Microsoft Azure, and Google Cloud Platform. Their appeal is obvious: open-source software accelerates AI project timelines, brings the latest community-driven features, and is almost always cost-effective.

But here’s the kicker: these libraries can harbour hidden vulnerabilities, often because of their tangled web of dependencies. Layer upon layer, one package calls another and so on, until you’ve got a spaghetti stack that even veteran sysadmins struggle to untangle. And if one of those dependencies goes unpatched or flies under the radar with a zero-day, it’s an open invitation for attackers. On Unix-based systems—where this modular, open-source approach is standard practice—such risks are amplified, with the danger of lingering vulnerabilities that nobody spots until it’s too late.
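To get a feel for how fast those layers pile up, here is a minimal sketch (standard-library Python only, using importlib.metadata) that walks the declared requirements of an installed package and prints the transitive tree a single "pip install" drags in. The root package name is an arbitrary example, and the requirement-string parsing is deliberately rough.

```python
# Sketch: walk the transitive dependency tree of an installed package using only
# the standard library. "pandas" is an arbitrary example root; substitute any
# package installed in your environment.
from importlib.metadata import requires, PackageNotFoundError
import re

def direct_deps(package: str) -> list[str]:
    """Return the names of a package's declared (direct) dependencies."""
    try:
        reqs = requires(package) or []
    except PackageNotFoundError:
        return []
    names = []
    for req in reqs:
        if "extra ==" in req:  # skip optional extras
            continue
        # keep only the distribution name, dropping version pins and markers
        names.append(re.split(r"[ ;<>=!~\[(]", req, maxsplit=1)[0])
    return names

def walk(package: str, depth: int = 0, seen: set[str] | None = None) -> None:
    """Print the dependency tree, avoiding cycles via the `seen` set."""
    seen = seen if seen is not None else set()
    print("  " * depth + package)
    for dep in direct_deps(package):
        if dep.lower() not in seen:
            seen.add(dep.lower())
            walk(dep, depth + 1, seen)

walk("pandas")  # everything below the root is code you didn't write yourself
```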
The Cloud Magnifies Everything
If open-source weaknesses are a problem, then the cloud is a magnifying glass. Platforms like Azure, AWS, and GCP are the new homes of AI workloads, and they’re jam-packed with powerful services—Azure Cognitive Services, Amazon SageMaker, GCP Vertex AI Workbench, and more. Tenable’s report found that on Azure alone, 60% of organisations had turned on Cognitive Services, 40% had plunged into Azure Machine Learning, and 28% had spun up AI Bot Service. AWS and GCP numbers, while a bit lower, still flagged widespread excitement.

The issue is complexity. Modern cloud environments are like sprawling digital cities, with data highways running between services, APIs opening doors left and right, and permissions layered messily on top. The report notes that cloud-based AI brings a bewildering array of default settings, many of them far too generous for comfort. Excessive permissions, broad access rights, and rapidly deployed infrastructure mean that small mistakes—like a misconfigured bucket or lax role assignment—can quickly, and silently, become gaping vulnerabilities.
Misconfiguration: The Silent Saboteur
Ask any cloud security analyst what keeps them up at night and here’s a safe bet: misconfiguration. As the Tenable report highlights, the openness and flexibility that make cloud AI irresistible are the same properties that make it dangerous. Many organisations rely on defaults or “let’s just get this running” quick setups, enabling permissions and features that are forgotten—until a breach is traced right back to them.

Identity and access are a particular minefield. Configuring permissions across a cloud stack—especially as AI teams experiment, iterate, and spin up resources—often leaves a trail of unused or over-privileged access. Attackers, meanwhile, are all too happy to exploit these and gain a foothold, sometimes without even needing to bother with vulnerabilities in code.
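One way to start clearing that minefield is to look for the bluntest over-grants first. The sketch below, assuming boto3 and AWS credentials with read-only IAM permissions, flags roles carrying broad managed policies such as AdministratorAccess; the set of policies treated as "too broad" is an illustrative threshold, not an AWS rule.

```python
# Sketch: flag IAM roles carrying overly broad managed policies.
# Assumes AWS credentials allowing iam:ListRoles and iam:ListAttachedRolePolicies.
import boto3

# Illustrative "too broad" policies to flag; tune to your own risk appetite.
OVERLY_BROAD = {"AdministratorAccess", "PowerUserAccess"}

iam = boto3.client("iam")

for page in iam.get_paginator("list_roles").paginate():
    for role in page["Roles"]:
        attached = iam.list_attached_role_policies(RoleName=role["RoleName"])
        broad = [
            p["PolicyName"]
            for p in attached["AttachedPolicies"]
            if p["PolicyName"] in OVERLY_BROAD
        ]
        if broad:
            print(f"{role['RoleName']}: attached {', '.join(broad)} -- review this role")
```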
Data Exposure: When Training Sets Turn Toxic
In our rush to build the next game-changing model, it’s easy to lose track of what’s actually powering our AI—the data itself. Many data scientists treat datasets as raw material: valuable, yes, but often unprotected, unclassified, and scattered across dozens of environments. The Tenable report warns that these datasets, alongside models and tools, are high-value targets for attackers.

Maybe it’s an unprotected S3 bucket holding sensitive health information used to train a medical AI. Or perhaps it’s a dataset of customer interactions lying open to the world in a forgotten cloud project. The point is the same: if organisations don’t treat these assets as sensitive, attackers will gleefully do so on their behalf.
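A first-pass check for exactly that scenario can be as simple as the sketch below: boto3 asking every S3 bucket in the account whether a public access block is configured at all. Treating a missing configuration as a finding is an assumption made for illustration; your own policy may differ.

```python
# Sketch: report S3 buckets with no public access block configured.
# Assumes AWS credentials allowing s3:ListAllMyBuckets and s3:GetBucketPublicAccessBlock.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        config = s3.get_public_access_block(Bucket=name)["PublicAccessBlockConfiguration"]
        status = "OK" if all(config.values()) else f"partially open: {config}"
    except ClientError as err:
        if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
            status = "no public access block at all -- investigate"
        else:
            raise
    print(f"{name}: {status}")
```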
Regulatory Rip Currents
As if the technical risks weren’t enough, the tidal wave of regulations governing privacy, data protection, and AI fairness is growing ever stronger. Frameworks like the NIST AI Risk Management Framework aren’t just suggestions anymore—they’re becoming de facto standards that organisations ignore at their peril.

Staying compliant means knowing where your data lives, who has access, and that your models aren’t spitting out biased or leaky outputs. With regulators sharpening their focus on AI, companies that fumble stewardship of sensitive data or can’t explain an algorithmic decision risk more than just a cyber incident: they face fines, loss of reputation, and existential legal challenges.
The Real-World Impact: Incidents in the Wild
One of the most chilling takeaways from the Tenable report is how closely the risks outlined map to actual breaches seen in the wild. In recent years, attackers have exploited everything from old Python library vulnerabilities to over-permissive IAM roles in cloud AI platforms.

There’s the case of the AI startup whose hastily constructed development environment let attackers slip in and poison training data, subtly sabotaging the resulting recommendation engine. Or the global firm that suffered a data leak when a junior data scientist misconfigured object permissions in a popular cloud storage service, exposing gigabytes of customer records.
While major headlines still gravitate toward ransomware or nation-state hacking, the reality is that breaches built on open-source or cloud misconfigurations are getting more common, and often far less detectable—at least, until the damage is done.
Mitigation: A Call for Holistic Vigilance
So, is it all doom and gloom? Not quite—Tenable’s report doesn’t just wring its hands. It offers practical, if sometimes sobering, advice for organisations determined to ride the AI wave without wiping out.

First and foremost: manage AI exposure holistically. That doesn’t mean just scanning a few code repos or locking down one particularly sensitive VM. It’s about continuous monitoring across everything—cloud infrastructure, AI workloads, data, identities, and third-party libraries. Know what you’ve deployed, where, and how it’s configured—context is your most powerful defence.
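As a taste of what “know what you’ve deployed” looks like in practice, the sketch below uses boto3 to inventory SageMaker endpoints and notebook instances in a single region. The region is a placeholder, and a real inventory would of course span every account, region, and cloud provider in use.

```python
# Sketch: a minimal AI-workload inventory for one AWS region via SageMaker APIs.
# Assumes credentials allowing sagemaker:ListEndpoints and sagemaker:ListNotebookInstances.
import boto3

REGION = "us-east-1"  # placeholder; iterate over every region you actually use
sm = boto3.client("sagemaker", region_name=REGION)

print("Endpoints:")
for page in sm.get_paginator("list_endpoints").paginate():
    for ep in page["Endpoints"]:
        print(f"  {ep['EndpointName']}  status={ep['EndpointStatus']}")

print("Notebook instances:")
for page in sm.get_paginator("list_notebook_instances").paginate():
    for nb in page["NotebookInstances"]:
        print(f"  {nb['NotebookInstanceName']}  status={nb['NotebookInstanceStatus']}")
```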
Classify those AI assets. Treat your models, data sources, and tools as crown jewels, not miscellaneous IT “stuff”. Regular scans, active threat monitoring, and robust backup are essential, but so is the basic discipline of knowing what’s out there and who has access.
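One lightweight way to build that discipline is to make classification a mandatory tag and then hunt for anything missing it. The sketch below uses the AWS Resource Groups Tagging API to list SageMaker resources lacking a “data-classification” tag; the tag key is a hypothetical convention of our own, not anything AWS enforces.

```python
# Sketch: find SageMaker resources missing a (hypothetical) "data-classification" tag.
# Assumes AWS credentials allowing tag:GetResources.
import boto3

TAG_KEY = "data-classification"  # hypothetical tagging convention for AI assets

tagging = boto3.client("resourcegroupstaggingapi")

for page in tagging.get_paginator("get_resources").paginate(
    ResourceTypeFilters=["sagemaker"]
):
    for resource in page["ResourceTagMappingList"]:
        tags = {t["Key"]: t["Value"] for t in resource["Tags"]}
        if TAG_KEY not in tags:
            print(f"UNCLASSIFIED: {resource['ResourceARN']}")
```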
Critically, enforce least-privilege access everywhere. Look, “everyone can access everything” is not a feature, it’s a risk. Review permissions ruthlessly, manage cloud identities tightly, and always verify that configurations match—not just provider best practices, but your own organisation’s real-world risk appetite.
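In concrete terms, least privilege means scoping policies to the specific actions and resources a workload genuinely needs. The sketch below creates a read-only IAM policy limited to a single training-data bucket via boto3; the bucket and policy names are placeholders for illustration.

```python
# Sketch: create a least-privilege, read-only IAM policy scoped to one training-data bucket.
# Bucket and policy names are placeholders; assumes credentials allowing iam:CreatePolicy.
import json
import boto3

BUCKET = "example-training-data"      # hypothetical bucket holding training data
POLICY_NAME = "TrainingDataReadOnly"  # hypothetical policy name

policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                f"arn:aws:s3:::{BUCKET}",
                f"arn:aws:s3:::{BUCKET}/*",
            ],
        }
    ],
}

iam = boto3.client("iam")
response = iam.create_policy(
    PolicyName=POLICY_NAME,
    PolicyDocument=json.dumps(policy_document),
    Description="Read-only access to a single training-data bucket",
)
print("Created:", response["Policy"]["Arn"])
```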
The Trouble with Defaults
Let’s take a moment to talk about default settings. Cloud service providers want you to start building quickly—they make it incredibly easy to “get going”, often by enabling broad permissions, auto-provisioning resources, or connecting bits of infrastructure seamlessly (sometimes too seamlessly).

These defaults are attackers’ best friends. What’s convenient for rapid deployment is often insecure for ongoing operation. Organisations must not only change default passwords (yes, that’s still a thing), but must actively audit the security stance of every new tool and service. It’s not just about what comes out-of-the-box, but how that box is actually locked.
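As one small, concrete example of auditing what the box ships with, the sketch below checks whether SageMaker notebook instances were created with direct internet access enabled, a default that is convenient for getting started but often worth disabling in production. Treat it as one illustrative check among the many a real audit would run.

```python
# Sketch: flag SageMaker notebook instances created with direct internet access enabled,
# one example of a convenient-but-permissive default worth auditing.
import boto3

sm = boto3.client("sagemaker")

for page in sm.get_paginator("list_notebook_instances").paginate():
    for nb in page["NotebookInstances"]:
        detail = sm.describe_notebook_instance(
            NotebookInstanceName=nb["NotebookInstanceName"]
        )
        if detail.get("DirectInternetAccess") == "Enabled":
            print(f"{nb['NotebookInstanceName']}: direct internet access is Enabled -- review")
```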
Vulnerability Management: Beyond the CVE Lists
Calling out vulnerabilities, patching, and updating dependencies is table stakes. But with the sprawling dependency chains characteristic of contemporary open-source AI projects, simple patching isn’t enough. Companies need tools that map out their dependency trees, highlight not just direct but transitive (indirect) vulnerabilities, and automate remediation wherever possible.

There’s a reason modern attackers target supply chains—they know the weakest link is often three or four libraries deep, maintained by a solo developer in a different time zone. Automated inspection, dependency scanning, and prioritised remediation are crucial to prevent your cutting-edge AI from inheriting someone else’s 2018 bug.
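For the Python side of that problem, one option is to lean on an existing scanner rather than reinventing it. The sketch below shells out to pip-audit (an open-source PyPA tool, installed separately) and summarises any known-vulnerable dependencies in the current environment; the JSON handling assumes pip-audit’s current report shape and may need adjusting across versions.

```python
# Sketch: run pip-audit (installed separately, e.g. `pip install pip-audit`) against
# the current environment and summarise vulnerable dependencies.
import json
import subprocess

# pip-audit exits non-zero when it finds vulnerabilities, so avoid check=True.
result = subprocess.run(
    ["pip-audit", "-f", "json"],
    capture_output=True,
    text=True,
)

data = json.loads(result.stdout)
# The report lists each dependency with any known vulnerabilities attached.
deps = data.get("dependencies", []) if isinstance(data, dict) else data

for dep in deps:
    for vuln in dep.get("vulns", []):
        fixes = ", ".join(vuln.get("fix_versions", [])) or "no fix listed"
        print(f"{dep['name']}=={dep['version']}  {vuln['id']}  fix: {fixes}")
```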
Alert Fatigue: Finding a Signal in the Noise
An unexpected challenge with all this monitoring? Alert fatigue. Security teams are drowning in warnings, error messages, and “potential risks”—and when every notification that pops up is flagged as urgent, it becomes all too easy to miss the one that’s a genuine, immediate threat.

Tenable recommends solutions that prioritise actionable threats and support focused remediation, not just never-ending “to-do” lists. This means smarter alerting, built-in context, and a workflow that lets teams triage and respond efficiently, not just react frantically.
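To make “built-in context” a little more concrete, the toy sketch below ranks findings by combining a base severity with exposure context such as internet reachability, sensitive data, and identity blast radius. The fields and weights are invented purely for illustration; real platforms use far richer models.

```python
# Sketch: a toy triage score mixing base severity with exposure context.
# Fields and weights are invented for illustration only.
from dataclasses import dataclass

@dataclass
class Finding:
    title: str
    severity: float          # CVSS-like base score, 0-10
    internet_exposed: bool   # is the affected asset reachable from the internet?
    sensitive_data: bool     # does it touch training data or customer records?
    admin_identity: bool     # could an attacker pivot to privileged identities?

def triage_score(f: Finding) -> float:
    """Higher means 'look at this first'."""
    score = f.severity
    if f.internet_exposed:
        score *= 1.5
    if f.sensitive_data:
        score *= 1.4
    if f.admin_identity:
        score *= 1.6
    return round(score, 1)

findings = [
    Finding("Outdated ML library on internal batch job", 7.5, False, False, False),
    Finding("Public bucket with training data", 5.0, True, True, False),
    Finding("Over-privileged service role on exposed endpoint", 6.0, True, True, True),
]

for f in sorted(findings, key=triage_score, reverse=True):
    print(f"{triage_score(f):>5}  {f.title}")
```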
Embracing Regulatory Best Practices
Regulation might feel like a brick wall, but it’s really a safety net. Aligning with standards like NIST’s AI Risk Management Framework or your cloud provider’s hardened security settings doesn’t just reduce legal risk—it’s a blueprint for resilience. Ensure your AI projects have mapped data flows, access controls, and compliance baked in from day one, not as a post-deployment scramble.

Keeping up with privacy requirements, conducting regular audits, and understanding the movement of data—especially when cloud and open-source blend and blur boundaries—is not just best practice. It’s survival.
Building a Culture of Secure Innovation
There’s no putting the AI genie back in the bottle, nor is there any chance of winning a global race to digital supremacy by playing it safe and slow. But as AI and cloud become embedded in every aspect of business—from customer service bots to financial predictions to medical diagnostics—the risk of “building fast and breaking everything” grows ever more tangible.

The organisations that come out on top will be those that build security into their AI lifecycle from the start. That means training developers and data scientists in security basics, fostering collaboration between DevOps and security (DevSecOps, anyone?), and teaching everyone from the C-suite to the help desk why it matters.
It’s about balancing speed with vigilance, and excitement with responsibility. Yes—embrace the best of open-source and cloud, but know where the trapdoors are. As Nigel Ng of Tenable wisely put it, “The very openness and flexibility that make these tools powerful also create pathways for attackers. Without proper oversight, these hidden exposures could erode trust in AI-driven outcomes and compromise the competitive advantage businesses are chasing.”
The Road Ahead: Security as a Strategic Advantage
A decade from now, nobody will remember which company built the flashiest AI chatbot or trained the biggest model on “funny cat” pictures—unless, of course, that company’s bot also leaked millions of customer records due to a glaring, unpatched bug. The winners in the AI gold rush will be those that not only innovate quickly, but do so securely, sustainably, and with real, practiced discipline.

Security is, and will remain, a foundational element of trust. Organisations looking to maintain their edge must treat it as a strategic asset—not an afterthought, nor a speed bump. Open-source tools and cloud services are here to stay, but it’s only with clear-eyed, ongoing attention to the full breadth of risk that companies can hope to harness their power without costly disaster.
So here’s the call to action: take inventory of your AI environments. Ask tough questions of your cloud configurations and permissions. Invest in tools and training. And above all, build a culture where secure AI is not just possible, but expected. The alternative? A future where AI doesn’t just power your business but opens the door wide for those who want to break it apart.
Because as everyone in cybersecurity knows, if you don’t know what’s running in your environment—or how it’s connected—rest assured: someone else will.
Source: SecurityBrief New Zealand, “Organisations face risk with open-source AI & cloud use”