There has been a sharp and measurable shift in how Irish mid‑market executives view artificial intelligence: the proportion who described AI as “over‑rated” or mostly hype has collapsed; firms are moving rapidly to formalise generative‑AI rules for staff; yet anxiety about data privacy has never been higher. That paradox will shape which Irish SMEs win and which stumble as AI moves from experiment to steady state.

Background and overview

Grant Thornton’s International Business Report (IBR) surveys roughly 10,000 mid‑market firms across dozens of economies and is a leading quarterly barometer of how business leaders see risk, investment and technology. Its Ireland slice has, in recent quarters, shown technology — and particularly AI — moving from a boardroom talking point to a line‑of‑business tool that management teams are trying to operationalise. Grant Thornton’s own Ireland pages and IBR materials note rising technology investment and a growing focus on AI projects. (prod-emea.gtil-dxc.com)
At the same time, independent surveys and sector studies — from PwC’s GenAI and Digital Trust work to research produced with Trinity College and Microsoft — report similar dynamics in Irish companies: dramatically higher AI experimentation and adoption, more firms creating governance structures, and persistent, elevated concern about security and privacy when employees interact with generative AI. These outside datapoints confirm that the shift Grant Thornton flags is a national pattern, not a reporting anomaly. (news.microsoft.com)

What the latest figures say — and what they mean

A swift collapse in scepticism

  • The Irish Examiner reported that the share of Irish executives calling AI “mostly hype” dropped from 45% to 23% in a six‑month window, based on the Grant Thornton IBR. That headline number captures a rapid move from caution and scepticism toward active experimentation and deployment.

This magnitude of change matters: a halving of dismissive attitudes in half a year signals not just more positive press, but visible, hands‑on experience inside Irish firms. Executives are no longer debating whether AI is valuable — they are asking how to make it safe, useful and measurable. Grant Thornton’s commentary and IBR materials generally support the picture of growing AI investment, even if detailed country‑level tables sometimes sit behind report gates. (prod-emea.gtil-dxc.com)

Policies, training and governance rising fast

  • The same reporting shows more firms now require staff to follow an AI‑usage policy for generative AI (examples include ChatGPT and similar tools). The Irish Examiner cites an increase from 37% to “more than half” in six months. That is consistent with other surveys showing governance activity accelerating as firms try to get ahead of the risks. (irishexaminer.com, news.microsoft.com, pwc.ie)

Behind that governance push sit several concrete risk areas, which help explain why privacy anxiety remains high even as scepticism falls.

Compliance complexity

The EU AI Act and sector privacy rules (GDPR) mean that careless use of third‑party models can create both reputational and legal exposure. SMEs that do not map data flows, maintain records of model use, or treat AI outputs as potentially personal data risk non‑compliance fines and contractual breaches. (news.microsoft.com)

Model reliability and “hallucinations”

Generative models can produce plausible‑sounding but incorrect outputs. When firms use AI for customer communications, legal drafting, or decisions that affect customers, hallucinations are a real business risk unless human review is baked into workflows. This is highlighted in multiple vendor and advisory studies. (pwc.ie)
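
One lightweight way to bake review into a workflow is a release gate that refuses to ship model output in sensitive categories without a named approver. The sketch below is illustrative Python only; the `Draft` type and the category names are hypothetical, not any vendor's API.

```python
from dataclasses import dataclass

# Categories of output that must never ship without human sign-off
# (hypothetical labels; a firm would define its own).
REVIEW_REQUIRED = {"legal", "financial", "medical", "customer_facing"}

@dataclass
class Draft:
    category: str                   # e.g. "legal", "internal_note"
    text: str
    approved_by: str | None = None  # name of the human reviewer, if any

def release(draft: Draft) -> str:
    """Return the draft text only if the review rule is satisfied."""
    if draft.category in REVIEW_REQUIRED and draft.approved_by is None:
        raise PermissionError(
            f"'{draft.category}' output needs a named human reviewer before release"
        )
    return draft.text

# Usage: an unreviewed legal draft is blocked; an approved one passes.
contract = Draft(category="legal", text="Model-generated clause ...")
try:
    release(contract)
except PermissionError as err:
    print(err)
contract.approved_by = "j.murphy"
print(release(contract))
```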

Skills and change management gaps

Many SMEs lack in‑house AI expertise. Even when leaders endorse AI, frontline staff need training on how to prompt, what data to share, and how to verify outputs; surveys repeatedly show training lagging behind governance. PwC and EY surveys document this skills shortfall in Irish firms. (ey.com)

Shadow AI and inconsistent policy enforcement

Where policy exists on paper but is not enforced, “shadow AI” use continues, increasing the chance of leakage. Tech teams must combine policy with technical controls (DLP, sanctioned agent proxies) or trust gaps will persist. Community and technical forums likewise show practitioners debating controls, DLP and shadow AI mitigation, a signal that the issue is front‑line, not only boardroom.

Practical, pragmatic guidance for Irish SMEs (ten actionable steps)

  • Create a clear, short AI‑usage policy and publish it to staff. The policy must state approved tools, forbidden data types, and escalation paths. Start with a 1‑page summary and a 2,000‑word operating guide.
  • Identify and whitelist sanctioned AI tools (enterprise instances with contractual data protections). Block the rest at the network perimeter or with endpoint controls.
  • Deploy Data Loss Prevention (DLP) rules that detect common sensitive tokens (IBAN, personal IDs, client codes) and block or quarantine uploads to cloud LLM services.
  • Use data minimisation and redaction: never paste raw client data; instead, redact or summarise before prompting (see the redaction sketch after this list). Where possible, use synthetic or de‑identified examples for model‑training workflows.
  • Require human‑in‑the‑loop sign‑off for outputs used externally or in regulated processes (legal text, financial disclosures, medical advice); the review‑gate sketch earlier shows one pattern.
  • Record and monitor AI use: maintain a register of AI models used, their vendors, data flows, and update cadence to meet audit needs under incoming regulation.
  • Negotiate vendor SLAs that specify data handling, retention, model‑training opt‑out, and breach‑notification clauses. Prefer vendors who offer contractual commitments not to use customer data for model training.
  • Train employees on when to use AI and how to test outputs: short training modules, role‑specific cheat sheets, and internal “AI champions” who can advise peers.
  • Run short, measurable pilots with defined KPIs (time saved, error reduction). Use a staged rollout before enterprise‑wide adoption. Grant Thornton and PwC both stress pilots and measurement as necessary to convert pilots into scaled wins. (pwc.ie)
  • Prepare an incident‑response playbook for model‑related breaches or mis‑outputs: include communication templates for customers, regulators and employees.

These steps emphasise speed and safety: rapid pilots to capture value, coupled with controls that reduce the chance of catastrophic data leaks.
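
To make the DLP and redaction steps concrete, here is a minimal Python sketch of pre‑prompt redaction, assuming a small set of illustrative regex patterns (IBAN, email, a PPS‑number shape). A production rule set would be far broader, tuned to the firm's own client codes, and enforced at the network gateway rather than in application code.

```python
import re

# Illustrative patterns only; not an exhaustive or production rule set.
PATTERNS = {
    "iban":  re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ppsn":  re.compile(r"\b\d{7}[A-Z]{1,2}\b"),  # Irish PPS number shape
}

def redact(text: str) -> tuple[str, list[str]]:
    """Replace sensitive tokens with placeholders before prompting."""
    hits = []
    for name, pattern in PATTERNS.items():
        if pattern.search(text):
            hits.append(name)
            text = pattern.sub(f"[{name.upper()} REDACTED]", text)
    return text, hits

prompt = "Refund IE29AIBK93115212345678 and email mary@client.ie today."
clean, found = redact(prompt)
print(clean)   # placeholders instead of raw identifiers
print(found)   # ['iban', 'email'] -> log, block or quarantine per policy
```

The returned hit list is what blocking or quarantine logic would key off: a non‑empty list means the original text should never reach a cloud LLM unredacted.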

Vendor selection checklist (what to ask suppliers)

  • Does the vendor offer a contractual non‑use clause (no customer data used to further train public models)?
  • What encryption and key‑management options are available (bring‑your‑own‑key preferred)?
  • Where is data stored (data residency), and can the vendor guarantee EU/GDPR‑aligned processing?
  • Are there audit logs and transparency tools showing prompts, inputs and outputs? (A minimal logging sketch follows this checklist.)
  • Has the vendor completed independent security certifications (SOC 2, ISO 27001)?
  • How are model updates handled, and how will changes be communicated?

Prefer suppliers who support enterprise deployment patterns (on‑prem, VPC, private cloud) where sensitive workflows are involved. Trinity/Microsoft and major consulting surveys highlight the importance of vendor due diligence, especially for SMEs without deep in‑house ML teams. (pwc.ie)
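
Audit logging need not wait for vendor tooling: a thin wrapper around every sanctioned model call can capture who asked what, when, and what came back. The sketch below is illustrative; `call_model` is a hypothetical stand‑in for whichever enterprise client the firm has sanctioned, not a real API.

```python
import json
import logging
import uuid
from datetime import datetime, timezone

# Append-only audit trail as JSON lines; ship to a SIEM in production.
logging.basicConfig(filename="ai_audit.log", level=logging.INFO)

def call_model(prompt: str) -> str:
    # Hypothetical stand-in for the sanctioned vendor client.
    return "model response"

def audited_call(user: str, tool: str, prompt: str) -> str:
    """Wrap every model call so prompts and outputs land in the audit trail."""
    record = {
        "id": str(uuid.uuid4()),
        "when": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "tool": tool,
        "prompt": prompt,
    }
    record["output"] = call_model(prompt)
    logging.info(json.dumps(record))
    return record["output"]

answer = audited_call("j.murphy", "enterprise-llm", "Summarise Q3 complaints")
```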

Piloting, measuring and scaling — a staged approach

  • Select a bounded use case with measurable KPIs (e.g., customer support first‑response time, legal contract redaction time).
  • Run a 6–8 week pilot with control cohorts, measuring time saved, accuracy delta and user satisfaction (see the measurement sketch after this list).
  • Assess security exposures: simulate potential data leakage and conduct a privacy impact assessment.
  • Draft standard operating procedures that define human‑review thresholds and escalation rules.
  • Scale iteratively: expand to adjacent teams only after governance, training and DLP controls are proven.

Grant Thornton’s advisory work stresses aligning tech pilots to business objectives; pilots without measurable outcomes create noise, not value.
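
Measurement can stay simple at pilot stage. The sketch below, using illustrative numbers, compares a pilot cohort against a control on one KPI (ticket handling time); in a real pilot the figures would come from the ticketing system for matched cohorts.

```python
from statistics import mean, stdev

# Illustrative handling times in minutes, not real data.
control = [38, 41, 35, 44, 40, 39, 42, 37]   # no AI assistance
pilot   = [29, 31, 27, 33, 30, 28, 34, 26]   # AI-assisted cohort

saved = mean(control) - mean(pilot)
pct = 100 * saved / mean(control)
print(f"Mean time saved: {saved:.1f} min per ticket ({pct:.0f}%)")
print(f"Control spread: ±{stdev(control):.1f}  Pilot spread: ±{stdev(pilot):.1f}")
```

Reporting the spread alongside the mean keeps the KPI honest: a large saving with huge variance is a weaker scaling case than a modest, consistent one.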

Policy essentials — what to include in your AI usage policy

  • Purpose and scope: which tools/processes the policy covers.
  • Approved and prohibited uses: concrete examples.
  • Data handling rules: prohibited data classes (personal identifiers, client confidential data, source code, payroll).
  • Vendor and procurement controls: who can sign vendor contracts and the clauses they must include.
  • Audit and logging expectations: record keeping for auditors/regulators.
  • Training and certification: minimum training for staff who use AI.
  • Enforcement and sanctions: what happens if rules are breached.

A short policy that is enforced is better than a long policy nobody reads. Keep it practical and job‑specific; a machine‑readable version, sketched below, lets tooling enforce the same rules the handbook states.
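
One possible shape for that machine‑readable policy is sketched here; the tool names are hypothetical and the data classes are taken from the list above. The point is a single source of truth that both the handbook and enforcement checks are generated from.

```python
# Minimal machine-readable policy (hypothetical tool names).
POLICY = {
    "approved_tools": {"enterprise-llm", "internal-copilot"},
    "prohibited_data": {"personal identifiers", "client confidential",
                        "source code", "payroll"},
    "min_training": "ai-basics-2025",
}

def check_request(tool: str, data_classes: set[str]) -> list[str]:
    """Return the policy violations a proposed AI use would trigger."""
    problems = []
    if tool not in POLICY["approved_tools"]:
        problems.append(f"tool '{tool}' is not on the approved list")
    leaked = data_classes & POLICY["prohibited_data"]
    if leaked:
        problems.append(f"prohibited data classes: {sorted(leaked)}")
    return problems

print(check_request("chatgpt-free", {"payroll"}))
# ["tool 'chatgpt-free' is not on the approved list",
#  "prohibited data classes: ['payroll']"]
```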

Where Irish SMEs should invest next (priorities)

  • Governance foundation: policy, risk register, DPO alignment.
  • Secure tooling: enterprise LLMs or private deployments with contractual data protections.
  • Training and change management: role‑based training with clear playbooks for daily use.
  • Measurement capability: dashboards that show business outcomes, not vanity metrics.

These priorities echo the IBR and multiple consultancy reports: governance unlocks adoption; adoption without governance invites severe downside. (irishexaminer.com, ey.com, news.microsoft.com)

Source: Irish Examiner, “Number of Irish executives believing AI to be overrated is falling rapidly”
 
