Singapore AI Adoption Surges, but Cybersecurity Lags, CPA Survey Finds

Singapore’s business sector has moved from AI curiosity to widespread trial — and in many cases routine use — but CPA Australia’s latest Business Technology Survey reveals a stark mismatch between the pace of adoption and the depth of integration, especially on cybersecurity and governance fronts.

Background / Overview

CPA Australia’s fifth annual Business Technology Survey, conducted between July and September 2025, canvassed more than 1,000 accounting and finance professionals across Asia, including Singapore, and examined how organisations are using data analytics, AI and cybersecurity technologies. The Singapore results stand out for exceptionally high reported usage of analytics and AI tools, but materially lower levels of cybersecurity integration and strategic embedding. Key headline figures from the Singapore subset:
  • 95% of companies say they use data analytics and visualisation tools such as Python, Power BI and Excel.
  • 92% report some form of AI adoption, ahead of the global survey average.
  • Only 23% have cybersecurity embedded across strategy and operations.
  • 44% describe their AI use as ad-hoc or occasional; roughly 1 in 5 firms say AI is deeply integrated into core processes.
These numbers are useful directional indicators: they demonstrate broad uptake, but they do not, on their own, confirm how deeply models and ML pipelines are engineered into mission‑critical systems. Independent practitioner analyses consistently show the same pattern across the Asia‑Pacific region — wide tool adoption but a much smaller share of companies that have deployed AI with production-grade governance, MLOps and security controls.

What the survey actually measures — and where to be cautious

What “use” means in survey terms

Surveys that ask whether an organisation “uses AI” or “uses data analytics” capture a spectrum. For many respondents, use includes:
  • Prompting ChatGPT or similar LLMs for drafting and research tasks.
  • Using built-in AI assistants embedded in productivity suites (Microsoft Copilot, Google Workspace assistants).
  • Running Excel or Power BI analytics and lightweight automation.
By contrast, deep integration (illustrated in the sketch after this list) implies:
  • Custom models trained on company data and deployed via monitored inference pipelines.
  • Integration of AI outputs into ERP/CRM workflows with observability, rollback, and audit trails.
  • Clear decision‑rights, KPIs and MLOps for model lifecycle management.
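To make the distinction concrete, the sketch below shows what even a basic governed inference call can look like: a hypothetical invoice-risk model whose every prediction is pinned to a model version and written to an audit trail. The model interface, names and logging approach are illustrative assumptions, not something the survey prescribes.

```python
import json
import logging
import uuid
from datetime import datetime, timezone

# Hypothetical identifiers for illustration only.
MODEL_NAME = "invoice-risk-classifier"
MODEL_VERSION = "2025.09.1"

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai_audit")

def predict_with_audit(features: dict, model) -> dict:
    """Run inference and record an auditable event for the call.

    `model` is any object exposing a predict() method; the point is the
    surrounding governance, not the model itself.
    """
    request_id = str(uuid.uuid4())
    prediction = model.predict(features)
    event = {
        "request_id": request_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model": MODEL_NAME,
        "model_version": MODEL_VERSION,
        "input_keys": sorted(features.keys()),  # log the schema, not raw values
        "prediction": prediction,
    }
    audit_log.info(json.dumps(event))
    return {"request_id": request_id, "prediction": prediction}
```

With records like these, rollback and observability become a matter of querying the audit trail for a given model version rather than guesswork.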
The CPA Australia release distinguishes between occasional/off‑the‑shelf adoption (the most common pattern) and the minority who have embedded AI across multiple functions — a vital distinction for CIOs and boardrooms to understand.

Sampling, timing and interpretation

The survey sample comprises accounting and finance professionals, which is valuable for understanding finance‑function adoption patterns, but it can skew the lens toward productivity tools (Excel + Copilot) and finance use cases. Treat headline percentages as directional — they signal a high prevalence of tools, not necessarily production AI architectures. Independent analyses and practitioner reporting echo this reading and caution against conflating “tool usage” with enterprise‑grade integration.

Deep dive: AI and analytics uptake in Singapore

Analytics: near-ubiquitous but varied in depth

Singapore’s reported 95% analytics usage demonstrates that companies broadly rely on data for operational and financial tasks. Tools listed in the survey (Python, Power BI, Excel) cover both exploratory analytics and production reporting. This mirrors global patterns: analytics has a low barrier to entry and is frequently the first step on the journey to AI. Benefits reported in the survey over the past 12 months include:
  • Automation of repetitive tasks.
  • Streamlined workflows across finance and ops.
  • Faster decision cycles through dashboards and visualisation.
These are realistic, near-term gains, but extracting long‑term competitive advantage requires robust data engineering, single sources of truth, and productised analytics pipelines — the less visible investments that enable true model-ready infrastructure. Practitioner briefings repeatedly identify data readiness as the silent blocker to scaling AI beyond point solutions.
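As a minimal illustration of the kind of repetitive-task automation respondents describe, the Python sketch below rolls a hypothetical monthly expense extract into a tidy summary that a Power BI or Excel dashboard could consume. The column names and output file are assumptions made for the example.

```python
import pandas as pd

# Hypothetical monthly expense extract; columns are illustrative only.
expenses = pd.DataFrame({
    "department": ["Finance", "Ops", "Finance", "Ops"],
    "category":   ["Travel", "Software", "Software", "Travel"],
    "amount_sgd": [1200.0, 4500.0, 800.0, 300.0],
})

# Automate a repetitive reporting step: aggregate spend by department and
# category, then export a table a dashboard or spreadsheet can consume.
summary = (
    expenses.groupby(["department", "category"], as_index=False)["amount_sgd"]
    .sum()
    .sort_values("amount_sgd", ascending=False)
)
summary.to_csv("monthly_spend_summary.csv", index=False)
print(summary)
```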

AI: widespread but often shallow

The 92% AI adoption rate is impressive until you parse usage patterns: 44% of firms are using AI occasionally or ad hoc, largely via public LLMs and embedded assistants. Popular entry points include:
  • ChatGPT and similar LLM chat interfaces.
  • Microsoft Copilot integrated in Microsoft 365.
  • Google Gemini and workspace assistants.
These tools deliver immediate productivity gains — drafting emails, summarising documents, and generating first‑pass analyses — but they are not equivalent to deploying custom, governed AI for core business decisions. Only about 20% of organisations claim deep operational embedding — the “frontier” adopters who align people, data and processes to AI at scale.

The cybersecurity gap — why it matters

The survey picture

Despite rapid adoption, only 23% of Singapore respondents have cybersecurity integrated into strategy and operations — significantly below the survey average of 28%. Meanwhile 69% report using cybersecurity software (also below the 81% survey average). The survey also found that over 17% take a purely reactive approach to cyber threats, and 11% don’t know who is responsible for cybersecurity in their organisation. These are worrying governance signals given the expanding AI attack surface.

Why cybersecurity must be embedded, not bolted on

AI expands an organisation’s attack surface in several ways:
  • Sensitive data used in prompts and training can leak when consumer tools are used without controls.
  • Agentic systems or autonomous agents that can act on behalf of users introduce operational risk (erroneous executions, privilege escalation).
  • AI‑enabled social engineering (deepfakes, targeted prompt attacks) increases the sophistication of phishing and impersonation.
Embedding cybersecurity means making security a design constraint across model development, integration and user‑facing assistants — from private endpoints and data loss prevention (DLP) to model governance, access controls, and observability. The survey’s cybersecurity shortfall is therefore not an incidental weakness; it’s a structural risk that can convert AI gains into reputational and regulatory losses.
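A small sketch of one such control follows: a regex-based redaction pass applied before confidential text leaves the organisation for an external LLM. The patterns are illustrative only; a production DLP layer would rely on a vetted rule set and classification of structured data rather than a handful of regexes.

```python
import re

# Illustrative-only patterns; a real DLP tool would be far more comprehensive.
REDACTION_RULES = {
    "EMAIL":   re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "NRIC":    re.compile(r"\b[STFG]\d{7}[A-Z]\b"),   # simplified Singapore NRIC pattern
    "ACCOUNT": re.compile(r"\b\d{10,16}\b"),           # crude account-number catch
}

def redact(text: str) -> str:
    """Replace likely-sensitive tokens before the text leaves the tenant."""
    for label, pattern in REDACTION_RULES.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

prompt = "Summarise the dispute raised by tan.wei@example.com (NRIC S1234567D)."
print(redact(prompt))
# -> "Summarise the dispute raised by [EMAIL REDACTED] (NRIC [NRIC REDACTED])."
```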

Benefits versus risks — a balanced assessment

Compelling near-term benefits

  • Productivity uplift: Automation of routine accounting and administrative tasks frees skilled staff for higher‑value advisory work.
  • Faster decision-making: AI‑driven synthesis and visual analytics reduce latency in financial and operational decisions.
  • Scaled expertise: Role‑based copilots and domain models can encode best practices and accelerate onboarding.

Material risks to manage

  • Governance gap: Without MLOps, monitoring and audit trails, models can “drift” and produce unreliable outputs. Vendor ROI claims are often directional and require independent verification.
  • Data leakage: Shadow usage of public LLMs with confidential inputs is common and dangerous. Implement contractual and technical controls (no‑train clauses, private endpoints, tenant isolation).
  • Workforce disruption: Automation can displace routine roles; the transition requires deliberate reskilling pipelines and job redesign. National and company‑level policy must plan for this distributional impact.
  • False confidence / hallucinations: LLM hallucinations remain a real operational hazard when AI outputs are taken at face value without human verification.
Where claims in media stories extend beyond the survey’s remit — for example attributing systemic job losses or precise ROI percentage points to AI — treat them as anecdotal unless backed by audited financial measures. Flag such claims for further company-level verification.

Practical roadmap: moving Singapore firms from experiment to integration

Organisations that want to get beyond productivity hacks and capture durable, auditable value from AI should follow a staged, risk‑aware program:
  1. Data foundation first
     • Conduct a data readiness audit (lineage, quality, access). AI succeeds where there is clean, well-documented data.
     • Consolidate key data sources and create a governed data catalogue.
  2. Define measurable business outcomes
     • Prioritise 1–3 workflows (e.g., invoice processing, revenue forecasting) with clear KPIs (time saved, error reduction, cash flow improvement).
  3. Harden security and procurement
     • Require non-training and data-deletion clauses in contracts where confidentiality matters.
     • Deploy private model endpoints, tenant-scoped deployments and DLP controls before allowing confidential data to be processed.
  4. Build MLOps and observability
     • Implement model versioning, drift detection and incident runbooks (a minimal drift-check sketch follows this list).
     • Instrument telemetry so outputs can be traced back to source data and model versions.
  5. Governance, policy and human oversight
     • Create an AI steering forum combining legal, HR, security and product owners.
     • Publish acceptable-use and “shadow AI” policies with approved sandboxes for experimentation.
  6. Reskill and redesign roles
     • Invest in role-based microcredentials for AI oversight (prompt engineering, model auditing, AgentOps).
     • Reprice services that become commoditised by automation to protect margins while redeploying staff to advisory roles.
  7. Pilot with production intent
     • Design pilots with deployment plans, monitoring, rollback criteria and ongoing budgets for maintenance. Treat pilots as investments in a scalable product, not temporary demos.
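As a minimal sketch of the observability step above, the function below computes a population stability index (PSI) between a training-time baseline and live feature values; a PSI above roughly 0.2 is a common, though not universal, drift alarm. The data, threshold and binning are illustrative assumptions.

```python
import numpy as np

def population_stability_index(baseline: np.ndarray, live: np.ndarray, bins: int = 10) -> float:
    """Compare two samples of one feature; larger PSI means more drift."""
    edges = np.histogram_bin_edges(baseline, bins=bins)
    base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    live_pct = np.histogram(live, bins=edges)[0] / len(live)
    # Clip to avoid division by zero / log(0) in sparse bins.
    base_pct = np.clip(base_pct, 1e-6, None)
    live_pct = np.clip(live_pct, 1e-6, None)
    return float(np.sum((live_pct - base_pct) * np.log(live_pct / base_pct)))

rng = np.random.default_rng(42)
training_feature = rng.normal(loc=0.0, scale=1.0, size=5_000)    # baseline at training time
production_feature = rng.normal(loc=0.4, scale=1.2, size=5_000)  # shifted live data

psi = population_stability_index(training_feature, production_feature)
print(f"PSI = {psi:.3f}", "-> investigate drift" if psi > 0.2 else "-> stable")
```

Checks like this, run on a schedule against live inference data, are what turn "monitoring" from a slide-deck promise into an operational control.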
This sequence reflects lessons from multiple regional studies: the difference between pilots and production is governance, not only better models. Firms that skip the data and governance steps risk brittle, non‑reproducible outcomes.

Procurement and vendor management — what to demand

When buying AI capabilities, procurement teams should require:
  • Clear data handling guarantees: residency, deletion, and non‑training clauses.
  • Security attestations: SOC 2, ISO 27001, and independent pen tests.
  • Traceability: outputs must be linkable to the underlying dataset and model version (see the sketch after this list).
  • Service-level and cost predictability: caps on inference costs and transparent billing.
  • Local support and references: vendors who can demonstrate regional deployments and compliance experience.
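The traceability requirement above can be as simple as attaching provenance metadata to every AI output. The sketch below shows one hypothetical way to record it; the field names and identifiers are assumptions for illustration, not a standard.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass(frozen=True)
class TraceableOutput:
    """Wraps an AI output with the provenance an auditor would ask for."""
    output: str
    model_name: str
    model_version: str
    dataset_snapshot_id: str   # e.g. a data-catalogue or snapshot reference
    generated_at: str

record = TraceableOutput(
    output="Forecast: Q3 revenue up 4.2%",
    model_name="revenue-forecaster",   # hypothetical internal or vendor model
    model_version="1.8.0",
    dataset_snapshot_id="catalogue://finance/revenue@2025-09-30",
    generated_at=datetime.now(timezone.utc).isoformat(),
)
print(json.dumps(asdict(record), indent=2))
```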
Avoid one-sided reliance on vendor case studies as proof of generalisable ROI; instead, insist on pilot KPIs and repeatable measurement frameworks before wide rollout. Independent verification and staged procurement reduce lock-in risk and budget surprises.

National context: Singapore’s strategy and ecosystem

Singapore’s public policy stance — focusing on applied AI for high‑impact sectors and building assurance mechanisms rather than competing at hyperscale model training — aligns well with the survey findings. The city‑state is cultivating a pipeline of industry pilots, standards, and reskilling programmes to capture AI’s benefits while managing social risks. That national focus gives companies a favourable environment for experimentation, but it does not obviate the need for company-level governance and security investments.
Regional case studies illustrate what deeper integration looks like: bank copilots for customer service and compliance, agentic scheduling and diagnostics in manufacturing, and hardware-model co-design in edge deployments. These examples highlight the payoff from investment in integration and governance — but also the sectoral variation in maturity.

What to watch next — indicators of progress (and failure)

Track these indicators over the next 12–24 months to assess whether firms are moving from surface-level adoption to strategic embedding:
  • Percentage of AI initiatives with measurable, audited KPIs reported to the board.
  • Share of AI deployments using private endpoints and contractual non‑training clauses.
  • Growth in internal MLOps headcount and budgets for model observability.
  • Reduction in reactive cyber incident responses and increased proactive threat hunting.
  • Number of industry‑specific, auditable AI deployments (finance, healthcare, logistics).
If adoption grows but these indicators fail to follow, the market risks a proliferation of brittle, ungoverned systems that amplify operational and regulatory exposures rather than creating enduring value.

Conclusion

The CPA Australia survey is a clear signal: Singaporean firms are enthusiastic and fast-moving adopters of AI and analytics tools, and many are already seeing productivity benefits. At the same time, the country’s low rate of cybersecurity embedding and the prevalence of ad‑hoc AI use are warning signs. Turning experimentation into strategic advantage requires deliberate investments in data foundations, MLOps, security, procurement discipline and workforce reskilling.
For CIOs, CFOs and boards in Singapore, the imperative is simple and urgent: treat AI as a production-grade capability — not just a desktop productivity hack. Embed security and governance early, measure outcomes rigorously, and plan for the human and organisational changes that durable AI adoption requires. The choice is between ephemeral wins today and sustainable, auditable value tomorrow.
Source: theaccountant-online.com, “Singapore companies increase use of AI: CPA Australia survey”
 
