Pennsylvania Expands AI Strategy with CMU Partnership and Dual Vendor Tools

The Shapiro administration has moved from pilot to partnership. The Commonwealth of Pennsylvania announced a newly expanded Cooperative Agreement with Carnegie Mellon University to provide ongoing AI advising, and at the same time the state is scaling enterprise-grade generative AI tools — continuing ChatGPT Enterprise access and adding Microsoft Copilot Chat for qualified employees — alongside formalized governance, mandatory training, and regional research investments designed to anchor an AI innovation cluster in Pennsylvania.

Background

Pennsylvania’s AI push traces back to an executive order signed in September 2023 that established core principles for state use of generative AI and created a Generative AI Governing Board to oversee policy and deployment. The administration ran a year‑long pilot with OpenAI and Carnegie Mellon University that involved roughly 175 employees across multiple agencies; participants reported substantial time savings and broadly positive experiences, a finding the state has repeatedly cited as it scales the program. More recently — at a conference billed as “Unlocking AI for Public Good” and tied to the AI Horizons programming in Pittsburgh — the administration announced a Cooperative Agreement for Artificial Intelligence Advising Services with Carnegie Mellon University that formalizes ongoing CMU advisory support for strategy, governance and operational adoption across the commonwealth. The press material and university accounts place the new agreement announcement in early November 2025 and describe the arrangement as the next step in a multi‑year engagement that began during the pilot phase.

What the announcement actually says​

Key components of the expansion​

  • Continued or expanded access to ChatGPT Enterprise for qualifying Commonwealth employees.
  • Addition of Microsoft Copilot Chat to the state’s toolkit, creating a dual‑vendor approach intended to cover conversational RAG (retrieval‑augmented generation) workflows as well as deep integration inside Microsoft 365 apps.
  • A new Cooperative Agreement for Artificial Intelligence Advising Services with Carnegie Mellon University to provide ongoing advisory, research partnerships, and implementation support.
  • Formal governance and workforce commitments: continuation of the Generative AI Governing Board, creation of a Generative AI Labor and Management Collaboration Group, and mandatory InnovateUS training as a prerequisite for tool access.
  • Economic and research investments announced at related events, notably a multi‑year BNY–CMU research partnership (reported at $10 million) and industry accelerators aimed at small business skilling.

How the administration frames it​

Officials portray the move as a three‑pronged strategy: (1) increase government productivity, (2) protect citizen data and ensure responsible use, and (3) grow Pennsylvania’s AI economy by anchoring private investment, research funding and workforce training in the region. The administration has repeatedly described the combined offering as “the most advanced suite of generative AI tools offered by any state,” a promotional characterization that the state itself has framed as evidence of national leadership.

Why Carnegie Mellon matters here​

Carnegie Mellon University is not an incidental partner; it is one of the world’s leading AI research institutions and has played a hands‑on role throughout Pennsylvania’s pilot and policy development. CMU faculty and research centers contributed to pilot design, evaluation and the creation of tooling to help agencies scope appropriate AI use cases. The new cooperative agreement institutionalizes that advisory channel and positions CMU as the Commonwealth’s primary academic partner for governance, risk analysis, and applied research. That academic tie offers two practical advantages:
  • Access to applied research and graduate student support for operational challenges (e.g., scoping tools, building evaluation frameworks, constructing red‑team protocols).
  • Credibility and an evidence‑generation pathway: CMU’s Block Center and allied research teams can design independent validation exercises, fairness audits and technical controls that the state will need as the rollout expands.

The headline claims — what to believe and what to verify​

The 95‑minute figure: promising, but pilot‑level and self‑reported​

The most widely cited pilot metric is that participating employees reported an average of 95 minutes saved per workday when using ChatGPT during pilot activities like drafting, summarizing, research and basic coding help. That figure appears in official briefings and university accounts, and it was a central piece of evidence used to justify expansion. However, the metric is self‑reported, derived from exit surveys and structured feedback in a limited cohort (roughly 175 employees), and therefore should be treated as an encouraging pilot result rather than an audited, system‑wide productivity delta. Independent, instrumented measurement — baseline handling times, error and rework rates, and longitudinal verification — remains essential to understanding the true net effect on service quality and workload.
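To make that distinction concrete, the sketch below shows, in illustrative Python with entirely hypothetical timing data, how an instrumented measurement might estimate per-task time savings with a confidence interval rather than relying on survey recall. Nothing here reflects the Commonwealth's actual evaluation design.

```python
# Illustrative only: estimates per-task time savings from instrumented timing
# data (baseline vs AI-assisted), with a bootstrap confidence interval, instead
# of relying on self-reported survey figures. All numbers are hypothetical.
import random
import statistics

random.seed(7)

# Hypothetical instrumented handling times (minutes per comparable task).
baseline_minutes = [random.gauss(60, 12) for _ in range(200)]
assisted_minutes = [random.gauss(48, 14) for _ in range(200)]

def mean_saving(baseline, assisted):
    return statistics.mean(baseline) - statistics.mean(assisted)

def bootstrap_ci(baseline, assisted, iterations=2000, alpha=0.05):
    """Percentile bootstrap confidence interval for the mean per-task saving."""
    diffs = []
    for _ in range(iterations):
        b = [random.choice(baseline) for _ in baseline]
        a = [random.choice(assisted) for _ in assisted]
        diffs.append(mean_saving(b, a))
    diffs.sort()
    return diffs[int(alpha / 2 * iterations)], diffs[int((1 - alpha / 2) * iterations) - 1]

saving = mean_saving(baseline_minutes, assisted_minutes)
low, high = bootstrap_ci(baseline_minutes, assisted_minutes)
print(f"Estimated saving per task: {saving:.1f} min (95% CI {low:.1f}-{high:.1f})")
```

The same instrumentation would also need to capture error and rework rates, since time saved on a draft that later requires correction is not a net gain.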

“Most advanced suite” — promotional, not a neutral ranking​

The administration’s claim that Pennsylvania now offers the “most advanced suite of generative AI tools” among U.S. states is a defensible framing given the dual‑vendor strategy and governance scaffolding, but it is not the kind of objective ranking produced by an independent authority. The phrasing should be read as aspirational marketing that compresses tool coverage, governance posture and training reach into a single headline. Verifying the claim would require a systematic state‑by‑state comparison across tenancy models, contractual safeguards, audit rights, training penetration and role‑based governance — a nontrivial exercise.

Economic totals and investment claims​

State materials tie the AI initiative to broader economic claims — private commitments running into the billions and transformative infrastructure investments in the region. Some of these totals (for example, multi‑billion corporate investments announced at related summits) come from a mix of corporate announcements and state aggregation, and the exact numbers vary across briefings. Treat headline totals as rolling aggregates that merit project‑level verification against official economic development databases or contract announcements when precision matters.

Technical and procurement realities — what this deployment entails​

Moving from a small, controlled pilot to thousands of employees with access to enterprise generative AI is a heavy operational lift. The technical checklist the Commonwealth highlights — and that enterprise implementers should insist on — includes:
  • Secure tenancy configuration (Azure Government / GCC or equivalent) for any workflows that handle sensitive or regulated data.
  • Data classification and label‑based routing so that PII, Controlled Unclassified Information (CUI) and other sensitive records are blocked from permissive AI flows.
  • Extended Data Loss Prevention (DLP), Microsoft Purview integration, and retention policies to preserve auditability and to meet FOIA/eDiscovery obligations.
  • Least‑privilege access models, conditional access policies and phishing‑resistant multi‑factor authentication (MFA) for accounts authorized to prompt AI systems.
  • Prompt provenance and immutable logging to support incident investigations and transparency reporting.
These are not optional niceties; they are operational requirements if the state intends to scale without creating FOIA exposure, uncontrolled data leakage or opaque decision trails. Implementation will require close coordination between procurement, legal, cybersecurity, and agency service owners.
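As a rough illustration of the label-based routing item in the checklist above, the Python sketch below gates prompts by sensitivity label before they reach a given AI tenancy. The label names, tenancy names and policy table are hypothetical; in practice this enforcement would live in the organisation's DLP and Purview policies, not in application code alone.

```python
# Illustrative sketch of label-based routing: block prompts whose attached
# sensitivity label is not cleared for the requested AI tenancy. Label names,
# tenancy names and the policy table are hypothetical.
from dataclasses import dataclass

# Hypothetical mapping of sensitivity labels to permitted tenancies.
ROUTING_POLICY = {
    "Public":       {"commercial", "government"},
    "Internal":     {"commercial", "government"},
    "Confidential": {"government"},   # e.g. Azure Government / GCC-equivalent only
    "PII":          {"government"},
    "CUI":          set(),            # blocked from all generative AI flows
}

@dataclass
class Prompt:
    text: str
    sensitivity_label: str  # assumed to be stamped upstream by classification tooling

def route(prompt: Prompt, tenancy: str) -> bool:
    """Return True if the prompt may be sent to the requested tenancy."""
    allowed = ROUTING_POLICY.get(prompt.sensitivity_label, set())
    if tenancy not in allowed:
        # In practice this event would also be logged for audit and DLP reporting.
        print(f"BLOCKED: label '{prompt.sensitivity_label}' not cleared for '{tenancy}'")
        return False
    return True

route(Prompt("Summarise this licensing backlog report", "Internal"), "commercial")
route(Prompt("Draft a letter containing a claimant's SSN", "PII"), "commercial")
```

The design point is the default-deny posture: any label not explicitly cleared for a tenancy is refused.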

Governance, labor and workforce strategy​

A notable strength of the administration’s posture is its explicit inclusion of labor and worker representation in implementation design. Pennsylvania created a Generative AI Labor and Management Collaboration Group intended to give unions and front‑line staff a formal voice in where and how AI is used across roles. This is an important mitigation against two common public‑sector failures: (1) technological decisions made without worker buy‑in, and (2) one‑size‑fits‑all automation that disregards job complexity and human oversight needs.

Mandatory training is another central plank: the administration reports that more than 1,300 employees have completed InnovateUS training, with thousands more enrolled, and that training completion is a prerequisite for tool access. This points to a competency‑gated access model — a best practice that reduces the chance of naïve tool misuse and helps ensure that AI becomes an augmentation, not a liability. That said, training alone does not eliminate the need for process redesign, role re‑scoping and measurable reskilling plans tied to workforce outcomes.

The CMU / BNY / industry ecosystem: building a regional AI cluster​

Beyond operational rollout, the administration pairs tool deployment with ecosystem investments meant to anchor talent and research locally. Announcements at the AI‑focused summits included a $10 million, five‑year BNY–CMU collaboration to establish an applied lab focused on governance and accountability, and industry accelerators to offer free training and tooling to small businesses. Those commitments are designed to create a feedback loop: practical government adoption informs academic research, which in turn builds commercial capabilities and workforce pipelines for local employers. This cluster approach has two practical payoffs:
  • It provides the administration with ongoing access to independent technical expertise, graduate research resources and evaluation capacity.
  • It signals to private investors and vendors that Pennsylvania is serious about AI infrastructure, skills and governance — a message that can attract follow‑on commitments and local hiring.

Major risks and unresolved questions​

Even well‑designed public‑sector AI programs face well‑documented hazards. The Pennsylvania plan acknowledges many of these but the real test will be in execution and transparency.
  • Accuracy and hallucination risk: generative models can produce plausible but incorrect outputs; human verification thresholds must be enforced for legal, benefits, licensing, and safety‑critical cases. The administration emphasizes human‑in‑the‑loop controls, but enforcement and auditing will be essential.
  • Data governance and FOIA exposure: public records obligations create complex retention and retrieval requirements for prompts, outputs and draft artifacts. Contracts must give the state the ability to extract logs and preserve eDiscovery evidence.
  • Vendor lock‑in and portability: the dual‑vendor approach reduces single‑vendor dependence but does not eliminate lock‑in risks tied to tenancy, connectors and proprietary APIs. Procurement should require egress rights, non‑training clauses, and clear audit capabilities.
  • Workforce and reskilling outcomes: the administration frames AI as a job enhancer, not a replacer, but measurable plans to redeploy saved labor into higher‑value tasks, reskill affected roles and track job‑quality outcomes are necessary to make that promise credible. The labor‑management collaboration group is an important step, but outcome metrics will be required.
  • Transparency and independent evaluation: to sustain public trust the state must publish red‑team results, independent audits, and annual transparency reports showing both successes and incidents. Without third‑party validation, pilot metrics risk appearing promotional.
Where specifics are not fully public — for example, exact contractual language around vendor model‑training, telemetry access, deletion guarantees, or specific tenancy choices for each agency — those remain procurement details to watch closely. Any claim about model versions, retention windows, or telemetry rights should be treated as contingent until verified in contract text.

What Windows‑centric IT teams and enterprise admins must prepare for​

For system administrators, desktop engineers, and security teams operating Windows environments across state agencies, the practical implications are immediate and specific.

Priority technical tasks​

  • Identity and access hardening: integrate Entra ID / SSO, apply RBAC, enforce phishing‑resistant MFA for accounts permitted to use Copilot or ChatGPT, and build conditional access policies to gate high‑sensitivity prompts.
  • Data classification & Purview integration: map agency data flows, apply sensitivity labels, and ensure DLP rules prevent high‑risk content from reaching non‑cleared AI tenancies.
  • Audit, logging, and provenance: extend endpoint management to capture prompt provenance, immutable logs and output retention policies that align with FOIA and eDiscovery needs. Plan for scalable storage and indexed search for records retrieval.
  • Endpoint provisioning and ringed deployment: validate Copilot features and AI execution providers in a staged release ring, and designate set‑aside device groups (e.g., high‑security workstations) that do not receive Copilot functionality. Treat AI features as a separate release track requiring validation and rollback plans.
  • Training and competency gating: coordinate InnovateUS completion reports with provisioning systems so that tool access is automatically granted only after verified training completion and role‑based competency checks.
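A minimal sketch of what that competency gating could look like follows, assuming hypothetical exports from the training platform and HR system; the record formats, role rules and group names are illustrative, not the Commonwealth's actual provisioning pipeline.

```python
# Illustrative sketch of competency-gated provisioning: grant AI tool access
# only when verified training completion and a role-based policy check both
# pass. All data, roles and group names are hypothetical.
from datetime import date

# Hypothetical exports from the training platform and the HR system.
training_completions = {"jdoe": date(2025, 10, 14), "asmith": None}
employee_roles = {"jdoe": "caseworker", "asmith": "licensing-analyst"}

# Hypothetical role policy: which roles may be provisioned, and into which group.
ROLE_TO_ACCESS_GROUP = {
    "caseworker": "AI-Users-Standard",
    "licensing-analyst": "AI-Users-Standard",
    "benefits-adjudicator": None,  # excluded pending human-in-the-loop process design
}

def provision(user: str):
    """Return the access group to assign, or None if the user is not eligible."""
    if training_completions.get(user) is None:
        return None  # training not verified; access stays blocked
    return ROLE_TO_ACCESS_GROUP.get(employee_roles.get(user, ""), None)

for user in ("jdoe", "asmith"):
    group = provision(user)
    print(f"{user}: {'assign to ' + group if group else 'no AI access granted'}")
```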

Operational and policy tasks​

  • Negotiate procurement clauses with vendors that include portability/egress rights, audit transparency, and explicit non‑training guarantees where state data cannot be used to refine vendor models.
  • Define human‑in‑the‑loop thresholds for outputs (what must be verified, and by whom), and instrument decision trails to show who accepted, edited or published AI‑generated content.
  • Create incident response playbooks that include AI‑specific vectors (hallucination remediation, data leakage investigations, prompt provenance requests) and test them with red‑team exercises.
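To illustrate the decision‑trail instrumentation described in the second item above, the sketch below records who accepted, edited or published an AI‑generated output and hash‑chains the entries so later tampering is detectable. Field names and the in‑memory storage are hypothetical; a production system would write to append‑only or WORM storage with indexed retrieval for FOIA and eDiscovery.

```python
# Illustrative sketch of a tamper-evident decision trail: each record captures
# who accepted, edited or published an AI-generated output, and records are
# hash-chained so retroactive edits are detectable. Fields and storage are
# hypothetical placeholders.
import hashlib
import json
from datetime import datetime, timezone

chain: list[dict] = []

def record_decision(actor: str, action: str, prompt_id: str, note: str = "") -> dict:
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "actor": actor,          # who reviewed the output
        "action": action,        # e.g. "accepted", "edited", "rejected", "published"
        "prompt_id": prompt_id,  # link back to stored prompt/output provenance
        "note": note,
        "prev_hash": prev_hash,
    }
    entry["hash"] = hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()
    chain.append(entry)
    return entry

def verify_chain() -> bool:
    """Recompute hashes to confirm no record has been altered after the fact."""
    prev = "0" * 64
    for entry in chain:
        body = {k: v for k, v in entry.items() if k != "hash"}
        if body["prev_hash"] != prev:
            return False
        if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != entry["hash"]:
            return False
        prev = entry["hash"]
    return True

record_decision("jdoe", "edited", "prompt-0042", "corrected statutory citation")
record_decision("supervisor1", "published", "prompt-0042")
print("chain intact:", verify_chain())
```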

Recommendations for making the expansion credible and durable​

The administration’s design choices are promising; to turn them into durable public‑sector capability, the following execution steps are recommended:
  • Publish an independent third‑party audit plan that will validate pilot claims and run longitudinal measurements across agencies. Make at least the high‑level KPIs public: baseline handling times, rework rates, error remediation time, and citizen outcome metrics.
  • Require concrete governance outputs: publish governing board meeting minutes, red‑team results and annual transparency reports detailing deployments, incidents and mitigations.
  • Tie further procurement phases to measurable milestones: training completion rates, audit log availability, FOIA responsiveness, and documented human‑in‑the‑loop enforcement.
  • Use the BNY–CMU lab to operationalize governance research into applied auditing tools (bias detection, provenance logging) that can be integrated into real deployments. Fund reproducible tooling, not only white papers.
These steps reduce the risk that an ambitious pilot narrative becomes a cautionary tale when scaled to thousands of employees and millions of citizen interactions.

How to read this as a national case study​

Pennsylvania’s strategy embodies a pragmatic public‑sector playbook for generative AI adoption: instrumented pilot, academic partnership for independent expertise, role‑based training and a layered governance model that includes labor. That combination is an emerging best practice and is already influencing other states and universities designing their own deployments. The crucial differentiator will be whether the Commonwealth converts pilot‑level enthusiasm into verifiable evidence: longitudinal audits, published red‑team results, and procurement language that preserves audit rights and data portability. If executed well, Pennsylvania may become a replicable model. If not, the rollout risks the common failure modes: overpromising, insufficient oversight, and hidden operational costs tied to error correction and FOIA burdens.

Conclusion​

Pennsylvania’s expanded collaboration with Carnegie Mellon University and the scaling of generative AI tools to more state employees is a consequential, well‑scaffolded step from pilot to enterprise adoption. The dual‑vendor approach (ChatGPT Enterprise plus Microsoft Copilot Chat), explicit labor engagement, mandatory training and local research investments create a robust framework for responsible adoption.
That framework — however promising — rests on execution: rigorous procurement safeguards, enforceable technical controls (tenancy, DLP, provenance logging), independent audits of pilot claims (including the headline 95‑minute savings metric), and transparent reporting that keeps the public informed. For Windows‑centric IT teams, the immediate work is practical and operational: lock down identity, classify and route data correctly, instrument logging and provenance, and gate access behind competency checks.
Pennsylvania is now a high‑visibility case study in how state governments can pair technology adoption with governance and workforce development. The next chapters will show whether that design converts into durable public benefit, replicable governance practices, and a sustainable, equitable local AI ecosystem — or whether the unaddressed technical, legal and labor risks will outweigh the early productivity gains.
Source: The Business Journals Shapiro administration announces expanded AI collaboration with Carnegie Mellon University - Pittsburgh Business Times
 

Most Australian firms have stopped at the doorway of generative AI — using ChatGPT or Microsoft Copilot as productivity crutches rather than building the deeper, enterprise-grade systems that rewire business processes and lift productivity at scale, according to a new Reserve Bank of Australia liaison survey that finds adoption to date has been “shallow”, with around two‑thirds of firms reporting some AI use but nearly 40% describing that use as minimal.

Background: the RBA survey in plain terms

The RBA’s November bulletin presents a small but revealing picture: between June and August 2025 the bank’s liaison team conducted guided interviews with 105 medium‑to‑large firms across most industries to understand recent and planned technology investments. The exercise focused on how firms are using information technology — and, specifically, AI and machine learning — and what that means for productivity, hiring and risk. The central finding is that while investment in technology broadly has accelerated over the past decade, the shift to meaningful, embedded enterprise AI is still in its infancy for many Australian firms.
  • Two-thirds of liaison firms reported they had adopted AI in some form.
  • For most of those firms, adoption was limited: nearly 40% said their use was “minimal”, largely restricted to off‑the‑shelf digital assistants such as Microsoft Copilot or ChatGPT.
  • Only roughly 30% reported moderate or deeper integration across core processes (e.g., forecasting, inventory management, fraud detection).
This is not a trivial observation: the RBA sample skews to larger, established firms — the organisations most likely to have the budget and talent to deploy AI at scale. Their hesitation therefore raises questions about how quickly broader Australian industry can move from experimentation to systemic change.

Why “Copilot and ChatGPT” are not the same as enterprise AI​

The difference between consumer‑grade AI and operational AI​

Using ChatGPT to draft emails or Copilot to summarise meetings is useful. It saves time, reduces friction and introduces staff to what AI can do. But there’s a critical distinction between employee‑led, point‑use AI and firm‑led, embedded AI:
  • Employee‑led tools are typically off‑the‑shelf, deployed without integration into data pipelines, security frameworks, or compliance processes.
  • Embedded AI requires data engineering, model governance, secure deployment, monitoring, and process redesign so outputs meaningfully change decisions or operations.
The RBA’s survey shows many Australian firms remain at the former stage — the “digital assistant” stage — rather than investing in the end‑to‑end capabilities needed for transformational gains.

Why surface use can be misleading​

Surface use creates the appearance of adoption while delivering limited productivity dividends at scale. Employee productivity gains from a digital assistant can be real but often do not translate into firm‑level output improvements unless organisations make complementary investments in data, processes and skills. The RBA warns that firms expecting rapid productivity payoffs from plug‑and‑play tools are likely to be disappointed unless digital adoption is accompanied by managerial change, retraining and systems integration.

Australia in the global context: catching up or falling behind?​

Global AI reports and indexes show dramatic increases in corporate AI investment and integration over recent years. International datasets indicate that enterprise AI adoption jumped sharply in several large economies during 2023–24, with many firms moving from pilot projects to organisation‑wide programmes. Australia, however, sits at the lower end of adoption and trust metrics compared with peer advanced economies. The RBA notes that Australia ranks relatively low across sentiment, investment and AI talent concentration indicators in comparative international surveys. Key global trends to keep in mind:
  • Private sector investment in AI soared in major markets, driving both capability and adoption.
  • In many countries, large firms have moved faster — those with deep data, cloud infrastructure and specialised talent capture disproportionate gains.
  • Trust, data governance and national policy frameworks are proving to be decisive factors in how fast businesses scale AI.
For Australian CIOs and boards, the implication is clear: international rivals are already building the cloud, data and model pipelines that make enterprise AI durable. Stopping at the “Copilot” stage risks losing competitive ground on process optimisation and customer experience.

What the RBA and Jobs & Skills Australia agree on: adoption is early, varied and often hidden​

The RBA’s liaison findings dovetail with the federal Jobs and Skills Australia (JSA) Gen AI Capacity Study: both conclude that AI adoption in Australia is early, uneven across industries, and frequently shadowy — driven by employees rather than by coordinated business strategy. JSA’s work finds that a notable fraction of workers use generative AI tools without formal managerial approval, creating a “shadow AI” layer that organisations may not fully understand or control. Those shadow practices can deliver productivity in pockets but introduce governance, IP and privacy risk when unmanaged.
  • JSA’s national study emphasises that Gen AI more often augments roles than replaces them, but the labour market impact will vary by occupation and demographic group.
  • Both the RBA and JSA flagged skills shortages — particularly for data engineers, machine‑learning specialists and platform engineers — as a material constraint on scaling AI projects.
These independent analyses reinforce the same conclusion: a lot of the “adoption” is at the desk level, not the boardroom level.

The technical and organisational barriers slowing deeper adoption​

1. Data and cloud foundations are still incomplete​

Many firms have upgraded CRM and ERP systems in recent years, but RBA respondents reported that these projects often serve resilience rather than productivity goals. For durable AI, firms need clean data pipelines, cloud platforms, and model‑ready infrastructure — an investment that can be costly and time‑consuming. Without these foundations, copying and pasting prompts into ChatGPT remains the simplest path.

2. Skills and talent shortages​

Companies routinely name finding skilled workers — data engineers, ML specialists, MLOps practitioners — as a key blocker. The RBA and JSA both highlight the squeeze on AI talent, which is competing globally. Recruitment and retention pressures slow projects and raise costs, particularly for mid‑market firms that lack the pay or prestige of global tech names.

3. Risk appetite and governance​

Off‑the‑shelf tools are easy to deploy, but they also create privacy, compliance and IP hazards. Firms are rightfully cautious about exposing customer data to third‑party models or relying on AI outputs without human oversight. The RBA found uncertainty around regulatory frameworks and governance simmering across industries, dampening appetite for autonomous, agentic systems.

4. Integration costs and legacy systems​

Legacy platforms resist plug‑and‑play modernisation. Integrating AI into core workflows (for example, connecting a forecasting model to procurement systems) often requires months of engineering and process change — a friction that tends to favour marginal incremental uses over transformational ones.

Business impacts observed — and those yet to materialise​

Where firms see early wins​

  • Time savings on routine drafting, summarisation and information retrieval.
  • Improved decision support in specialist pockets such as fraud detection, demand forecasting and inventory optimisation (in the minority of firms that have moved beyond pilot).
  • Productivity gains at the individual level, helping staff do more in less time.

Where the evidence is thin​

  • Broad, measurable firm‑level productivity lifts remain scarce in the RBA sample; few firms reported material output improvements directly attributable to AI.
  • Systemic operational redesign — the kind that yields sustained productivity jumps — is still rare outside of larger enterprises and select high‑capability sectors.

Labour market signals​

Both the RBA and JSA signal a nuanced labour story: AI’s immediate displacement effects appear limited in aggregate, with more widespread augmentation of roles. Yet the distributional consequences matter — certain occupations and demographic groups are more exposed to displacement risk, and reskilling needs are substantial. JSA’s research suggests that while full‑scale job elimination is not yet evident, the composition of tasks within jobs is changing rapidly.

Strategic implications for Australian businesses and IT leaders​

For CIOs, CTOs and boards, the RBA survey should be a wake‑up call: surface adoption is insufficient if the ambition is competitive advantage. The following priorities are pragmatic and sequential.
  • Consolidate your data foundations: audit data quality, accessibility and lineage, and prioritise cloud migration where it unlocks scale, security and model deployment.
  • Formalise AI governance: create a cross‑functional AI steering committee including legal, security and operations, and define clear rules for shadow AI with a pathway to integrate safe, approved tools.
  • Invest in MLOps and lifecycle management: move from ad hoc model trials to repeatable deployment pipelines and monitoring.
  • Build talent through targeted hiring and internal reskilling: blend hiring with internal training programmes that teach data literacy to non‑technical staff.
  • Choose high‑impact pilot projects with clear KPIs: prioritise processes where AI can measurably change outcomes (e.g., reducing time to resolution, lowering error rates, increasing sales conversion).
  • Measure economics, not hype: track not just time savings but end‑to‑end process improvements and bottom‑line contribution.
These steps are not glamorous, but they’re the enablers that transform “Copilot at the desk” into “AI at the core” of the business.
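As one concrete example of the first priority above (auditing data quality, accessibility and lineage), the sketch below profiles a hypothetical table for duplicates, missing required values and staleness, producing a report that a steering committee could track alongside AI use cases. The dataset, field names and thresholds are invented for illustration.

```python
# Illustrative data-foundation audit: profile a dataset for completeness,
# duplicate keys and freshness before building AI workloads on top of it.
# The records, fields and freshness threshold are hypothetical.
from datetime import date

# Hypothetical extract from a customer-orders table.
records = [
    {"order_id": 1, "customer_id": "C001", "amount": 120.0, "updated": date(2025, 10, 30)},
    {"order_id": 2, "customer_id": None,   "amount": 85.5,  "updated": date(2025, 6, 2)},
    {"order_id": 2, "customer_id": "C002", "amount": 85.5,  "updated": date(2025, 6, 2)},  # duplicate key
]

def audit(rows, key, required, freshness_days=90):
    keys = [r[key] for r in rows]
    missing = sum(1 for r in rows for field in required if r.get(field) is None)
    stale = sum(1 for r in rows if (date.today() - r["updated"]).days > freshness_days)
    return {
        "rows": len(rows),
        "duplicate_keys": len(keys) - len(set(keys)),
        "missing_required_values": missing,
        "stale_rows": stale,
    }

report = audit(records, key="order_id", required=["customer_id", "amount"])
print(report)  # feed into a data-quality register reviewed alongside AI use cases
```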

Risks — technical, operational and societal​

No strategy is without trade‑offs. The RBA’s findings — and corroborating analyses — raise several risks firms and policymakers must manage.
  • Operational risk: Poorly validated models may propagate errors at scale, producing reputational harm or regulatory breaches.
  • Security and privacy risk: Sending sensitive data to third‑party models without strict controls creates exposure.
  • Concentration risk: A small number of cloud and model vendors control infrastructure and model access, potentially increasing vendor lock‑in.
  • Skills mismatch and inequality: Rapid change can displace workers in lower‑skilled roles and widen inequality unless retraining is effective.
  • Regulatory and compliance risk: Unclear or evolving regulation raises uncertainty for firms wanting to deploy AI in regulated domains such as finance or health.
Policymakers have a role to play in smoothing the transition: clarifying acceptable data use, supporting skills programmes, and ensuring competition in critical cloud and AI supply chains.

Pockets of Australian leadership — where progress is visible​

Not all Australian firms are standing still. The RBA notes examples of companies embedding AI across multiple lines of business, especially in finance, telecommunications and some service sectors where data maturity is higher. Those leaders show common traits:
  • Strong cloud infrastructure and modern data platforms.
  • Executive sponsorship and cross‑functional teams.
  • A willingness to invest in talent and governance.
  • A measured approach to risk that balances productivity with control.
These exemplars demonstrate that the transition is possible in Australia — but it requires intent, capital and patient execution.

What government, industry bodies and educators should do​

The RBA and JSA outputs point toward a shared policy agenda:
  • Expand publicly funded reskilling and micro‑credential programmes focused on data engineering, MLOps and AI governance.
  • Encourage cloud and data infrastructure competition to reduce vendor concentration risks.
  • Support SMEs with subsidised access to secure sandbox environments to experiment safely with AI.
  • Develop clear, industry‑specific guidance on acceptable use of generative AI in regulated sectors.
Well‑designed public interventions can lower adoption friction and reduce the “shadow AI” phenomenon by giving firms governed pathways to industrialise internal AI use.

A practical checklist for executives moving from experiment to scale​

  • Establish an AI governance framework with a formal risk register and escalation paths.
  • Create a 12‑month roadmap with prioritized workloads (top‑3 use cases) and measurable KPIs.
  • Commit to a single, cloud‑first data platform and standardise APIs for model access.
  • Run pilot projects with production intent — plan for deployment, monitoring and rollback before model training completes.
  • Allocate budget for continuous retraining and model monitoring; treat models as products.
  • Engage staff early, communicate the augmentation benefits, and link reskilling to career pathways.
These steps convert ad hoc efficiency improvements into sustainable competitive advantage.
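The "treat models as products" item above implies recurring monitoring. The sketch below shows one minimal form of it: comparing live performance and input drift against agreed thresholds and raising alerts when either degrades. The metrics, thresholds and alerting hook are hypothetical placeholders for whatever MLOps stack a firm adopts.

```python
# Illustrative sketch of "models as products": a recurring monitoring check
# that compares live performance and input drift against agreed thresholds
# and flags when either degrades. All metrics and thresholds are hypothetical.

# Hypothetical weekly metrics pulled from the serving platform.
live_accuracy = 0.87          # e.g. accuracy measured on a labelled sample of outputs
baseline_accuracy = 0.92      # accuracy recorded at deployment time
recent_feature_mean = 41.3    # mean of a key input feature over the last week
training_feature_mean = 36.8  # same feature's mean in the training data
training_feature_std = 4.0

ACCURACY_DROP_THRESHOLD = 0.03   # maximum tolerated absolute accuracy drop
DRIFT_Z_THRESHOLD = 1.0          # maximum tolerated shift, in standard deviations

def check_model_health():
    alerts = []
    if baseline_accuracy - live_accuracy > ACCURACY_DROP_THRESHOLD:
        alerts.append("performance: accuracy below agreed floor; schedule retraining review")
    drift_z = abs(recent_feature_mean - training_feature_mean) / training_feature_std
    if drift_z > DRIFT_Z_THRESHOLD:
        alerts.append(f"drift: key input feature shifted {drift_z:.1f} sd from training distribution")
    return alerts

for message in check_model_health() or ["model within agreed thresholds"]:
    print(message)
```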

Conclusion: the window remains open — but so does the gap​

The RBA’s liaison survey provides a sober, evidence‑based snapshot: Australia’s corporate AI journey is underway but shallow. Many firms have tasted generative assistants and stopped there; a smaller group has built the data and platform foundations needed to embed AI across operations. The practical consequence is a mid‑transition economy — one with rising technology investment but only partial adoption of the practices that deliver systemic productivity gains. Global indicators show rapid acceleration elsewhere, and independent national research underlines the urgency of skills, governance and infrastructure investments. The path forward for Australian businesses is straightforward in concept and demanding in execution: build the plumbing, professionalise deployment, and couple AI investments with people and process change. Those who do the hard work will capture the gains; those who stop at Copilot risk being overtaken by competitors who treated AI as an operational imperative rather than a desktop convenience.
Source: AFR RBA survey reveals ‘shallow’ AI adoption as businesses stop at ChatGPT
 
