More than a third of New Zealand retail investors now say they use generative AI tools such as ChatGPT and Microsoft Copilot to inform their investment decisions — and a large majority report being satisfied with the outcomes — a shift that is simultaneously pragmatic and precarious for markets, advisers and regulators. (rnz.co.nz)

Background

The findings were reported after Chartered Accountants Australia & New Zealand (CA ANZ) released its annual investor confidence research, which surveyed retail investors across Australia and New Zealand and highlighted rising domestic confidence alongside growing use of AI in retail decision-making. The RNZ summary of the CA ANZ survey states that “more than a third” of New Zealand retail investors reported using AI tools to make investment decisions and that 76 percent of those users were satisfied with the results. The RNZ report also noted that the CA ANZ study found 79 percent of respondents had increasing confidence in New Zealand capital markets and listed companies — even as worries about political unrest and trade wars grow. (rnz.co.nz)
CA ANZ’s public overview of the 2024 investor confidence project places the survey in context: it is a multi-year programme that canvassed more than 1,500 retail investors across Australia and New Zealand, with country-level downloads and datasets available for deeper inspection. The organisation continues to emphasise the importance of audited financial information and the role of auditors in preserving market trust — themes that now intersect directly with the rise of AI-sourced investor intelligence. (charteredaccountantsanz.com)
At the same time, global investor research from other professional services firms shows a broader appetite among investors for AI to deliver tangible productivity and financial improvements. PwC’s Global Investor Survey, for example, finds that a large share of investors expect generative AI to raise productivity and that investors want firms to pair AI investment with workforce upskilling — a signal that institutional sentiment is converging on both opportunity and the need for governance. (pwc.com)

What the CA ANZ survey and RNZ reporting actually say

Key numeric takeaways (as reported)

  • More than one-third of New Zealand retail investors reported using AI tools such as ChatGPT and Microsoft Copilot for investment decisions; 76 percent of those users said they were satisfied with the results. (rnz.co.nz)
  • The CA ANZ work reported 79 percent of respondents had increasing confidence in New Zealand capital markets and listed companies; confidence in capital markets rose by 6 percentage points year-on-year, while confidence in overseas markets reportedly fell by 5 percentage points. (rnz.co.nz)
  • Younger investors are leading AI adoption: nearly two-thirds (64 percent) of respondents aged 18–29 were using AI in their investment decision-making. Geographic differences were reported too — Auckland had the highest reported AI usage (51 percent), followed by Canterbury (33 percent) and Wellington (27 percent). (rnz.co.nz)
  • Trust in audited financial statements remained high: 88 percent of New Zealand investors reportedly expressed confidence in audited financial statements, and auditors were ranked the most trusted group to advance investor protection and market integrity. (rnz.co.nz, charteredaccountantsanz.com)

Caveats on interpretation

  • The headline AI-adoption numbers come via RNZ’s reporting of CA ANZ results. The CA ANZ country page confirms the study’s existence and provides downloads for the New Zealand-specific report, but the RNZ article is the immediate source for the detailed breakdown cited above. Readers should consult the CA ANZ New Zealand report to verify sample sizes, question phrasing and exact breakdowns before making firm claims based on marginal percentage-point changes. (rnz.co.nz, charteredaccountantsanz.com)
  • Survey definitions matter: “use” of AI can range from occasional prompts to ChatGPT for a quick data check, through regular Copilot-driven portfolio analysis, to integrated robo-advisers and agentic systems that can take transactional actions. The distinction between light trial use and operational reliance is material but not always made explicit in headline summaries; independent surveys of investor attitudes to AI show trust and acceptance vary substantially with familiarity and described use-cases. (pwc.com, rnz.co.nz)

Why investors are turning to AI — practical drivers

AI is attractive to retail investors for several practical reasons:
  • Speed of research: generative models and copilot tools can summarise earnings transcripts, pull together newsflow and produce comparative line-item summaries far faster than manual searches. This can shorten the research cycle from hours to minutes.
  • Accessibility of analysis: retail investors can prompt models for valuations, scenario modelling and plain-language explanations of financial statements, lowering the barrier to building a hypothesis-driven investment case.
  • Behavioural factors: younger investors are more digitally native and more comfortable iterating with algorithmic assistants; the data in the CA ANZ reporting point to significantly higher AI uptake in the 18–29 cohort. (rnz.co.nz)
  • Cost: compared with paid analyst research or advisory fees, many AI tools offer low-friction, low-cost alternatives that satisfy casual or self-directed investors’ needs.
These drivers are consistent with broader surveys that show investors and corporate leaders expect AI to deliver productivity and analytical improvements, provided implementation is disciplined. (pwc.com)

Strengths and immediate benefits

  • Democratisation of basic analysis: AI lowers the time and skills barrier for everyday tasks such as screening stocks, summarising company announcements and comparing metrics across peers.
  • Enhanced information synthesis: models can combine structured data (financials) with unstructured data (news, transcripts) to produce narrative explanations or risk lists that can be useful starting points for deeper diligence.
  • Faster idea validation: investors can use AI to quickly surface counterarguments, complementary evidence and historical analogues — speeding iterative research and allowing retail investors to manage more ideas with less effort.
  • Scale and personalisation: Copilot-style integrations in productivity apps can deliver tailored dashboards, watchlists and alerts that reflect individual preferences and risk tolerances.
These strengths are already visible within asset management and advisory operations, where AI prototypes accelerate tasks such as reconciliation, research summarisation and workflow automation. Independent operational accounts and industry case studies describe measurable productivity gains from using copilots and cloud AI stacks — though the magnitude varies by context and governance quality.

Material risks and limitations

While the immediate benefits are compelling, the adoption of AI by retail investors — and by the broader investor ecosystem — introduces specific risks that require attention.

1. Garbage in, garbage out: data quality and model training

AI advice is only as good as the data and models that underpin it. If models are trained on inaccurate, stale or biased financial data, they will amplify those errors. CA ANZ and market commentators emphasise the continuing primacy of audited financial statements as the reliable source layer beneath any AI-driven analysis. Retail investors relying on unsourced AI outputs risk making decisions on partial or incorrect information. (rnz.co.nz, charteredaccountantsanz.com)

2. Hallucinations and unverifiable outputs

Generative models can produce confident-sounding but incorrect assertions (hallucinations). For an investor, a plausible but false narrative (e.g., an invented partnership, misstated revenue figure or wrongly attributed quote) can be costly if acted upon without verification.

3. Overconfidence and fragile trust

The CA ANZ analysis highlighted that although many investors use AI, trust is fragile: significant proportions of non-users told the survey they do not trust AI outputs, and nearly half of non-users preferred other information sources. This bifurcation suggests that heavy reliance without human verification could create fragile behavioural regimes where investors accept false positives from AI until adverse outcomes force rapid re-pricing. (rnz.co.nz)

4. Lack of audit trails and explainability

Many consumer-facing AI tools lack transparent provenance and auditable decision logs. This makes it difficult for an investor to demonstrate the basis of a decision, or for a regulator or adviser to reconstruct events after a loss or market disruption. Institutional-grade adoption demands immutable logs, model factsheets, and reproducible test cases — capabilities that are still maturing.

5. Concentration and crowding risk

If a popular AI prompt or model produces similar trade ideas for a large cohort of retail users, the result can be crowded trades and amplified volatility in small-cap or illiquid securities. AI-driven consensus narratives can become self-reinforcing until a liquidity shock reverses them.

6. Regulatory and ethical exposures

Automated or semi-automated investment advice that crosses into regulated financial advice territory raises legal and compliance issues. The regulatory environment for AI-driven financial advice is nascent in many jurisdictions; the boundaries between “information” and “advice” will be litigated and regulated in coming years.

The auditor and accountant as “data guardian”: what it means

CA ANZ and broader chartered-accountant networks are reframing auditors and accountants as data guardians — the stewards of high-quality financial information that should underpin both human and machine decision-making. The CA ANZ commentary stresses that trust in audited financial statements remains robust (88 percent expressed confidence), and that audit will remain the bedrock of investor intelligence even as AI proliferates. (charteredaccountantsanz.com, rnz.co.nz)
A recent Chartered Accountants Worldwide / Ipsos study further supports this view: the profession sees a growing role in governing and validating the information flows that feed AI systems, and younger accountants are already heavy users of AI in their workflows. These developments point to a future where audited datasets, model factsheets and independent assurance become mandatory components of any reliable AI-driven investment process. (charteredaccountantsanz.com)

Practical guidance for retail investors using AI

Retail investors who choose to use AI should adopt disciplined habits to limit downside and improve outcomes:
  • Verify critical facts. Cross-check any AI-sourced numerical claim (revenues, margins, contract terms) against primary filings or audited statements before acting.
  • Use AI for scoping, not execution. Treat AI as a research assistant that surfaces questions, not as an automated trader that executes on its own. Maintain human oversight for any buy/sell decision.
  • Demand provenance. Prefer tools that show source links, timestamps and model confidence scores for every output.
  • Maintain record-keeping. Save prompts, model outputs and the data sources referenced as part of your decision log.
  • Understand limitations. Know whether the AI uses cached data, real-time feeds, or proprietary datasets; adopt conservative position sizing if any output cannot be fully verified.
  • Seek audited information. Lean on independent audited financial statements as the foundation for any financial modelling or valuation work. CA ANZ’s survey shows investors still rank auditors highly in trustworthiness for market integrity. (charteredaccountantsanz.com, rnz.co.nz)
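The record-keeping habit above can be as lightweight as one structured log entry per research step. A minimal sketch in Python (the field names and example values are illustrative, not any standard):

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionLogEntry:
    """One research step: what was asked, what the AI said, how it was checked."""
    prompt: str                    # the question put to the AI tool
    ai_output: str                 # the model's answer, recorded verbatim
    sources_checked: list = field(default_factory=list)  # primary documents consulted
    verified: bool = False         # did a primary source confirm the claim?
    action: str = "none"           # e.g. "none", "watchlist", "buy", "sell"
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

# Example: logging a single AI-assisted research step (hypothetical company)
entry = DecisionLogEntry(
    prompt="Summarise FY24 revenue growth for example company XYZ",
    ai_output="Revenue grew 12% year-on-year",
    sources_checked=["FY24 audited annual report"],
    verified=True,
    action="watchlist",
)
print(entry.verified, entry.action)  # → True watchlist
```

A spreadsheet with the same columns achieves the same purpose; the point is that verification happens, and is recorded, before any action is taken.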

What firms, advisers and regulators should do next

  • Insist on verifiable data pipelines: platforms and advisers should integrate authenticated data sources (e.g., exchange feeds, audited filings) and publish model factsheets that summarise training data, limitations and known failure modes. Independent providers and auditors can play a role validating those pipelines.
  • Create minimum disclosure standards: if a firm or platform uses AI to produce investment recommendations, it should disclose the model class, data vintage, confidence levels and human review processes.
  • Support AI literacy for retail investors: regulators, professional bodies and consumer groups should fund clear, accessible guides that explain strengths and hallucination risks of generative AI in financial contexts.
  • Expand audit and assurance frameworks: as CA ANZ notes, audited financial statements remain central; auditors should develop assurance protocols and controls specific to AI-fed analytics and to the datasets used to train investment-grade models. (charteredaccountantsanz.com)
  • Consider phased policy for automated execution: regulators might differentiate between “information-only” outputs and outputs that trigger trade execution or portfolio rebalancing, with progressively stricter rules and accountability for higher-impact automation.
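To make the disclosure idea concrete, a minimum "model factsheet" could be expressed as structured data that a platform publishes and a reviewer or script checks for completeness. This is a hypothetical sketch; none of these field names is an existing standard:

```python
# Hypothetical minimal model factsheet a platform might publish alongside
# any AI-generated recommendation; fields are illustrative, not a standard.
model_factsheet = {
    "model_class": "general-purpose LLM with retrieval",
    "data_vintage": "filings to 2024-06-30; news feed with ~15 min delay",
    "primary_sources": ["exchange announcements", "audited annual reports"],
    "known_failure_modes": [
        "may conflate companies with similar names",
        "numeric summaries can be stale between filing dates",
    ],
    "human_review": "required before any output triggers execution",
}

def disclosure_complete(factsheet: dict) -> bool:
    """Check that a factsheet covers the minimum disclosure fields above."""
    required = {"model_class", "data_vintage", "primary_sources",
                "known_failure_modes", "human_review"}
    return required.issubset(factsheet)  # issubset checks the dict's keys

print(disclosure_complete(model_factsheet))  # → True
```

The check is trivial by design: the hard part is agreeing on the required fields, which is exactly where standard-setters and auditors could add value.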

How to read the signals: a synthesis

  • The rise in AI usage among retail investors is real and meaningful — particularly among younger cohorts who treat AI as a routine research tool. The CA ANZ reporting, as presented in the RNZ coverage, captures this behavioural shift while simultaneously showing that core trust in audited financial statements and auditors remains strong. (rnz.co.nz, charteredaccountantsanz.com)
  • AI is lowering the cost of generating investment ideas, but it is not yet a substitute for primary, audited information and human judgment. The immediate market-level risk is operational (hallucinations, crowded trades, provenance gaps), not a mysterious new financial black box. That makes remedial action — stronger data provenance, audit-level assurance, and clear disclosure — both possible and effective.
  • Institutional and professional bodies are already converging on the “data guardian” concept: accountants and auditors will likely play a central role in certifying the inputs and outputs that feed AI investment workflows. That role will be pivotal for restoring and maintaining investor trust in an AI-enabled ecosystem. (charteredaccountantsanz.com)

A practical checklist for investors and market participants

  • For individual investors:
    • Verify all AI-generated numeric claims against audited filings.
    • Keep conservative position sizes when acting on AI-sourced signals.
    • Maintain a decision log (prompt → output → verification steps → action).
    • Prefer platforms that expose data provenance and model explainability features.
  • For advisory firms and fintechs:
    • Build immutable audit trails for data, prompts and model outputs.
    • Publish model factsheets and data lineage for any recommendation engine used externally.
    • Incorporate human-in-the-loop approvals for high-impact decisions.
    • Engage auditors to establish assurance over critical training and reference datasets.
  • For regulators and standard-setters:
    • Draft guidance that differentiates information tools from regulated financial advice.
    • Require minimum provenance and disclosure standards for AI-driven recommendation services.
    • Support upskilling programmes and consumer education to reduce asymmetric comprehension.
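The first investor checklist item, verifying numeric claims against audited filings, can be partly mechanised. A minimal sketch of a relative-tolerance check; the 1 percent threshold and the figures are arbitrary illustrations:

```python
def verify_claim(ai_value: float, audited_value: float,
                 tolerance: float = 0.01) -> bool:
    """Return True when an AI-quoted figure matches the audited figure
    within a relative tolerance (default 1%)."""
    if audited_value == 0:
        return ai_value == 0
    return abs(ai_value - audited_value) / abs(audited_value) <= tolerance

# An AI tool claims NZ$142m revenue; the audited statement shows NZ$141.6m.
print(verify_claim(142.0, 141.6))  # → True  (within 1%: treat as verified)
print(verify_claim(142.0, 128.0))  # → False (~11% off: flag for manual review)
```

A failed check does not prove the AI is wrong, only that the claim and the audited figure disagree enough to warrant going back to the primary source.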

Final assessment — opportunity with clear boundaries

The CA ANZ survey signals an important inflection: retail investors are increasingly comfortable using AI tools as part of their decision-making, and many report positive outcomes. That is an important market development: AI is democratising access to synthesis and analysis that was once the purview of paid research desks. But that democratisation does not eliminate the need for reliable, auditable source data or the safeguards that make markets resilient.
Auditors and accountants have an opening — and arguably a duty — to step into the “data guardian” role by certifying the inputs that feed investor-facing AI systems. At the same time, platforms and tool vendors must improve provenance, transparency and explainability to convert fragile trust into durable confidence.
The practical path forward for investors and markets is straightforward: embrace AI as an augmentative tool, not as a final decision-maker; insist on verifiable sources and auditable trails; and use human judgment as the final arbiter for actions that materially affect wealth. When these guardrails are in place, AI can be a powerful productivity enhancer for investors — but without them, the technology risks amplifying errors and producing concentrated, fragile market behaviours. (rnz.co.nz, charteredaccountantsanz.com, pwc.com)

Quick reference actions

  • If you use AI for investing: verify, document, and size conservatively.
  • If you build AI tools for investors: publish provenance, implement audit trails, and design human-in-the-loop controls.
  • If you regulate or audit markets: prioritise standards for data quality, transparency and assurance that explicitly cover AI-fed analytics.
These practical steps will determine whether AI becomes a durable, trust-preserving productivity tool in retail investing — or a source of fragile, high-frequency errors that undermine markets and investor outcomes.

Source: RNZ Investors turn to AI to make decisions
 
