More than half of Dutch asset managers are already using artificial intelligence or plan to do so within the next year, but the AFM’s latest survey makes one thing unmistakably clear: adoption is running ahead of governance. In a sector where models now shape research, portfolio analysis, trading decisions, and operational workflows, the regulator sees a widening gap between enthusiasm for AI and the controls needed to keep it safe, explainable, and accountable. That tension is now a supervisory issue, not just a technology story.
Overview
The Dutch Authority for the Financial Markets (AFM) has put a sharp spotlight on the way asset managers are embracing AI, and why the current pace of change is creating fresh supervisory challenges. Its 2026 report, based on a survey of 323 asset managers, shows a sector that is experimenting broadly, investing unevenly, and still underbuilding the policy frameworks needed for responsible deployment. The message is not that AI is unwelcome. It is that unmanaged AI is likely to become a problem.
The AFM uses “asset managers” as an umbrella term. That matters because the survey spans a broader mix than many readers might expect, including investment firms, fund managers, depositaries, proprietary traders, and trading venues. The adoption patterns are therefore not uniform, and the regulator is careful to distinguish between firms that use AI mainly for support functions and those that apply it directly to trading and price formation.
What stands out most is the split between usage and institutional readiness. The AFM says 53% of respondents are already using AI or expect to do so within a year, but 71% have no dedicated AI budget. Many firms rely on freely available or broadly licensed tools such as ChatGPT or Microsoft Copilot, which makes deployment easier but also blurs internal accountability. In supervisory terms, that is a classic capabilities-without-controls problem.
The AFM’s concern is not theoretical. It ties the sector’s AI expansion to known risks such as data-quality failures, algorithmic bias, limited explainability, and dependence on a concentrated group of technology suppliers. It also warns that policy development and employee training are lagging behind usage, especially for generative AI and AI agents. That is why the report reads less like a celebration of innovation and more like a prompt for the industry to catch up before the regulatory bar rises further.
Why this report matters now
The timing is important because the EU AI Act has already begun to shape expectations. Since 2 February 2025, the Act’s AI literacy obligations have applied to providers and deployers of AI systems, meaning firms must ensure their staff have sufficient knowledge and competence to use AI responsibly. Against that backdrop, the AFM is not merely describing a trend; it is testing whether Dutch firms are actually prepared for the obligations already in force.
The report also aligns with a broader supervisory pattern across the Netherlands. DNB has issued parallel findings in the insurance sector, concluding that larger firms are ahead in AI adoption while governance maturity remains uneven. Together, the two regulators are drawing a line in the sand: AI can be deployed, but the sector must be able to justify, monitor, and explain what it is doing with it.
- AI is no longer niche in Dutch financial services.
- The governance gap is becoming the real supervisory issue.
- Smaller firms are at greater risk of falling behind.
- Generative AI increases the need for policy and training.
- Supervisors are moving from observation to expectation.
Background
AI in asset management did not arrive all at once. It entered through the familiar back door of productivity: research support, data extraction, compliance drafting, and workflow automation. Once firms realized that large language models and machine-learning tools could reduce time spent on repetitive tasks, the technology quickly moved from pilot projects into day-to-day operations. The AFM’s survey confirms that the most common use cases remain relatively conservative, but they are already widespread enough to matter.
That pattern mirrors what has happened in other parts of financial services. Early adoption typically begins with low-risk functions, especially where AI can assist a human rather than replace one. But as confidence grows, firms start expanding into areas that influence investment judgment, trading strategy, and risk management. That is when the regulatory stakes rise sharply, because the line between support and decision-making becomes harder to police.
The AFM’s report is especially relevant because it treats AI governance as a live market-conduct issue rather than a distant technology concern. In the Dutch asset-management context, market conduct supervision is about how firms behave toward investors, counterparties, and the market itself. If a model helps generate research, frames portfolio decisions, or influences execution strategy, then explainability, accountability, and disclosure all become part of the supervisory picture.
Another reason the report matters is that the industry is changing at different speeds. Larger firms have the resources to build internal controls, buy specialist advice, and establish formal training. Smaller firms may rely more heavily on cloud-based AI tools and generic vendor contracts, which can make implementation easier but governance harder. That asymmetry is one of the report’s most important findings, because it suggests AI adoption could widen the capability gap across the sector.
A supervisory trend, not a one-off survey
The AFM is not coming to this topic cold. It has been building a clearer agenda around digital resilience, model risk, and the responsible use of AI. Its 2026 agenda explicitly says it will intensify supervision of AI and asks institutions to map their AI applications, strengthen model-risk management and data quality, and record decision logic. That is a significant shift from broad concern to active supervisory expectation; a minimal sketch of what such an application inventory could look like appears after the list below.
At the same time, the AFM’s framing connects AI to other structural vulnerabilities in financial markets. Cloud dependence, IT concentration, cyber exposure, and outsourcing are all part of the same operational-risk conversation. In that sense, AI is not a standalone challenge. It is becoming one more layer in a broader stack of digital dependencies.
- The AFM sees AI as part of digital resilience.
- Governance is now a market-conduct concern.
- Technology dependence is central to the risk analysis.
- Larger firms have more room to institutionalize controls.
- Smaller firms may rely on informal or generic safeguards.
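To make that expectation concrete, the sketch below shows one way a firm might structure the application inventory the AFM is asking for. Everything in it, from the field names to the risk tiers and the sign-off rule, is an illustrative assumption rather than anything the regulator prescribes.

```python
# A minimal sketch of an AI application inventory, in the spirit of the
# AFM's ask to map AI applications and record decision logic. Field names,
# risk tiers, and the sign-off rule are illustrative assumptions only.
from dataclasses import dataclass
from datetime import date
from enum import Enum


class RiskTier(Enum):
    SUPPORT = "support"          # drafting, summarization, information gathering
    DECISION_INPUT = "decision"  # feeds investment or trading judgment
    AUTONOMOUS = "autonomous"    # takes structured actions (AI agents)


@dataclass
class AIApplication:
    name: str            # e.g. a hypothetical "research-summarizer"
    owner: str           # the accountable human, not a team alias
    vendor_model: str    # which supplier and model version is in use
    use_case: str
    risk_tier: RiskTier
    decision_logic: str  # where and how outputs enter decisions
    last_reviewed: date
    approved: bool = False


registry: list[AIApplication] = []


def register(app: AIApplication) -> None:
    """Add an application; anything beyond support use needs prior sign-off."""
    if not app.approved and app.risk_tier is not RiskTier.SUPPORT:
        raise ValueError(f"{app.name}: {app.risk_tier.value} use requires sign-off")
    registry.append(app)
```

The point of such a registry is not the code itself but the discipline it encodes: every tool has a named owner, a documented role in decision-making, and a review date.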
The policy context in Europe
European policy is sharpening the debate. The AI Act’s Article 4 requires staff to have sufficient AI literacy, and the European Commission has stressed that deployers and providers must consider technical knowledge, experience, training, and the context of use. That means a generic internal memo is unlikely to be enough if a firm is using AI in high-impact workflows.
This also helps explain why regulators keep returning to the same themes: documentation, explainability, vendor oversight, and human accountability. The law is not asking every firm to build an in-house AI lab. It is asking them to understand what the tool does, where it is used, and who is responsible when it goes wrong. That expectation is especially important in finance, where errors can quickly move from operational nuisance to investor harm.
Adoption Patterns Across the Sector
The AFM’s most visible finding is that AI usage is growing, but not evenly. Over half of respondents are already using AI or intend to within twelve months, and the highest adoption rates appear among proprietary traders and Organised Trading Facilities (OTFs). That is intuitive: where trading speed, pattern recognition, and parameter tuning matter, firms are more likely to treat AI as a competitive necessity rather than a novelty.
Size also matters. Larger firms, measured by assets under management, are generally further ahead in adoption than smaller firms. That aligns with what one would expect from resource-intensive technology transitions. Bigger firms can absorb experimentation, build controls, and sustain staff training. Smaller ones often face a harder trade-off between ambition and operational simplicity.
The report also shows that AI is being used most often for information gathering and data analysis, followed by research writing and the analysis of unstructured data. In other words, the sector is still using AI primarily as a multiplier for human judgment rather than as a full substitute for it. That is reassuring in one sense, but it also means supervisors must watch not just model outputs, but the internal processes that turn those outputs into decisions.
Trading firms are pushing the frontier
Proprietary traders stand out because their use cases are more direct and more market-sensitive. The AFM says they are especially active in optimizing trading algorithm parameters, improving trading strategies, and predicting price movements or market trends. Those are not auxiliary functions. They are close to the core of market behavior, and they raise the stakes for oversight.
That makes proprietary trading a useful canary in the coal mine. If AI proves reliable in a high-frequency, high-pressure environment, other asset managers will likely follow. If it proves unstable or opaque, the problems will also travel quickly across the sector.
- Proprietary traders are among the most advanced adopters.
- Trading use cases carry higher conduct and market-integrity risk.
- Larger firms are better positioned to absorb AI investment.
- Smaller firms may adopt tools faster than they can govern them.
- Information gathering remains the most common entry point.
Why adoption curves matter
Adoption curves matter because risk does not grow linearly. A firm that uses AI to help draft a research note is not exposed in the same way as a firm that uses machine learning to alter strategy parameters or automate decision support. The AFM’s survey indicates that the sector is moving from low-friction use cases toward more operationally meaningful ones, and that transition is exactly where governance can lag.
This is why the report is not just about technology uptake. It is about escalation. Each new use case widens the surface area for model error, overreliance, and undocumented decision-making.
How Firms Are Using AI Today
The practical value of AI in asset management is clear from the survey. Firms use it to gather information, summarize material, perform data analysis, support compliance tasks, and draft research. Those functions are attractive because they save time without immediately replacing the investment professional. They also scale well: once a workflow is documented, the same tool can be applied across teams.
The AFM also notes that nearly all respondents rely on general-purpose models such as ChatGPT, Claude, or Google Gemini. That is a crucial point. General-purpose tools are widely accessible and easy to trial, but they often sit outside the tight governance structures firms use for core systems. That makes them useful and dangerous, especially when employees use them informally.
A smaller group uses custom-built internal models or external specialized providers. That distinction matters because custom models can be better aligned with a firm’s data, objectives, and controls, while external providers introduce additional dependency and procurement risk. The report suggests that many firms are still in the early stages of deciding whether AI is a strategic capability or simply a productivity tool.
The rise of generative AI
Generative AI is especially important because it lowers the barrier to adoption. Employees can use it without writing code, building infrastructure, or waiting for a formal IT rollout. That speed is part of the appeal, but it also means deployment can outpace governance almost by default.
The AFM’s concern here is straightforward: if a tool is easy to use, it is easy to misuse. Staff may paste confidential data into public interfaces, trust outputs too readily, or treat fluent language as a proxy for accuracy. Those are not exotic risks; they are everyday operational risks made sharper by generative systems.
- General-purpose models dominate current usage.
- Generative AI accelerates adoption through simplicity.
- Ease of use can invite accidental policy breaches.
- Custom models may improve fit but increase implementation complexity.
- External providers add procurement and concentration risk.
The sector’s expected benefits
The benefits firms expect are telling. They are not primarily looking for immediate cost cuts. Instead, they expect AI to improve speed, efficiency, data processing, and internal workflows. Longer term, the promise is better portfolio allocation and deeper market analysis.
That distinction matters because it suggests a more realistic narrative than the usual hype cycle. Most asset managers are not yet claiming that AI will replace the investment process. They are saying it will sharpen parts of it. That is a more credible claim, but it also means the industry may underestimate the governance effort required to keep “assistant” tools from becoming hidden decision-makers.
Governance Gaps and Compliance Readiness
This is where the AFM’s report becomes most pointed. A quarter of respondents have no policy governing employee use of AI, and the figure is even worse for generative AI. More than half do not have an ethics handbook or code of conduct that specifically addresses AI. Only a tiny minority have formal policies for AI agents. That is a startling mismatch with the pace of usage.
The issue is not merely paperwork. A policy is the mechanism through which a firm defines acceptable use, escalation paths, approval thresholds, and accountability. Without it, AI adoption tends to become opportunistic: employees use tools because they are available, not because the firm has assessed the risk. That can create a compliance culture in which technology is normalized before it is understood.
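To see what that mechanism can look like in practice, here is a minimal sketch that expresses acceptable use as machine-checkable configuration rather than a standalone document. The tool names, data tags, and escalation path are hypothetical, not drawn from the AFM report.

```python
# A hedged sketch of an AI acceptable-use policy expressed as machine-checkable
# configuration rather than a standalone document. Tool names, data tags, and
# the escalation path are hypothetical assumptions, not taken from the report.
AI_USE_POLICY = {
    "approved_tools": {"copilot-enterprise", "internal-rag"},  # placeholder names
    "prohibited_inputs": {"client_pii", "portfolio_positions", "mnpi"},
    "requires_signoff": {"decision_input", "autonomous"},      # tiers needing approval
    "escalation_path": ["desk_head", "compliance", "risk_committee"],
}


def is_permitted(tool: str, input_tags: set[str]) -> bool:
    """Allow use only of approved tools, and only without prohibited data."""
    return (
        tool in AI_USE_POLICY["approved_tools"]
        and not (input_tags & AI_USE_POLICY["prohibited_inputs"])
    )


# Example: an unapproved public chatbot fed client data is rejected outright.
assert not is_permitted("public-chatbot", {"client_pii"})
```

Encoding the policy this way forces the firm to answer the questions a PDF can dodge: which tools are actually approved, which data may never leave, and who signs off on higher-risk use.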
Training is also lagging. Fewer than half of asset managers provide general AI awareness training for all employees, and only a small number offer advanced training for AI developers. This matters because AI literacy is no longer optional under the EU AI Act. It is now part of the baseline expectation for responsible deployment.
What “AI literacy” really means
AI literacy is often misunderstood as a vague educational ideal, but the regulatory meaning is more concrete. Firms must ensure the people dealing with AI systems have sufficient knowledge, competence, and context-specific understanding to use them responsibly. In practice, that means training staff not only on how to prompt a system, but on how to verify its outputs and when to stop using it.
That is a much higher bar than casual familiarity. It implies documented responsibilities, role-specific training, and continuous refreshers as tools change. For financial firms, that is especially important because model behavior can shift when data, prompts, or vendor settings change.
- Policy absence is the clearest governance red flag.
- Ethics handbooks remain uncommon.
- AI agents are barely covered by formal rules.
- General awareness training is still incomplete.
- Advanced technical training is reserved for a small group.
Why compliance teams should care
Compliance teams should care because AI use creates recordkeeping and oversight issues even when the underlying task seems harmless. A draft research note generated by AI may still influence an investment thesis. A data analysis script generated by AI may still contain errors that no one has reviewed. If the output becomes part of a regulated process, then the burden shifts to the firm to show how it was checked.
That is why explainability is such a recurring theme in the AFM’s message. If a firm cannot trace why a tool produced a result, it may still use the result, but it should not pretend the risk is small.
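One way to carry that burden is to record, for every AI-assisted artifact, which model produced it and which human verified it before it entered a regulated process. The following is a sketch of such an audit trail under stated assumptions; the record fields and file format are illustrative only.

```python
# A minimal sketch of an audit trail for AI-assisted outputs: what was produced,
# by which model, and which human verified it. The record fields and file format
# are assumptions for illustration, not a regulatory standard.
import hashlib
import json
from datetime import datetime, timezone


def log_ai_output(artifact: str, model: str, prompt: str, reviewer: str,
                  verified: bool, path: str = "ai_audit.jsonl") -> None:
    """Append one verification record per AI-assisted artifact."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "artifact_sha256": hashlib.sha256(artifact.encode()).hexdigest(),
        "model": model,
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "reviewer": reviewer,   # the human who checked the output
        "verified": verified,   # False entries should block downstream use
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
```

Hashing the artifact and prompt, rather than storing them verbatim, keeps the log useful for reconstruction without turning it into a second copy of sensitive material.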
Risks: Data, Bias, Explainability, and Dependence
The AFM identifies a familiar but serious risk set: poor data quality, algorithmic bias, limited explainability, and dependence on a small number of technology providers. These are the standard vulnerabilities of AI in finance, but they become more concerning when adoption is broad and governance remains uneven. A small error rate can still matter a great deal if the tool is embedded across many workflows.
Data quality is particularly sensitive in asset management because model outputs are only as useful as the inputs behind them. If market data is incomplete, stale, distorted, or badly structured, AI can produce elegant but misleading conclusions. That risk is amplified when firms use AI to process unstructured information, because the system may infer patterns that are not actually there.
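As a concrete illustration, a simple quality gate can catch stale, incomplete, or implausible inputs before they ever reach a model. The checks and thresholds below are assumptions chosen for clarity, not supervisory requirements.

```python
# A sketch of a simple data-quality gate run before market data reaches a model:
# staleness, completeness, and basic plausibility checks. The thresholds are
# illustrative assumptions, not supervisory requirements.
from datetime import datetime, timedelta, timezone


def quality_gate(prices: list[float | None], as_of: datetime,
                 max_age: timedelta = timedelta(minutes=15),
                 max_gap_ratio: float = 0.05) -> list[str]:
    """Return the failed checks; an empty list means the feed passes.
    `as_of` must be a timezone-aware timestamp for the staleness check."""
    failures = []
    if datetime.now(timezone.utc) - as_of > max_age:
        failures.append("stale: data older than max_age")
    missing = sum(1 for p in prices if p is None)
    if prices and missing / len(prices) > max_gap_ratio:
        failures.append("incomplete: too many missing observations")
    observed = [p for p in prices if p is not None]
    if observed and min(observed) <= 0:
        failures.append("implausible: non-positive prices in feed")
    return failures
```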
Explainability is another core issue. Financial decisions often need to be justified to clients, boards, auditors, and regulators. If a model’s output cannot be explained in human terms, then the firm may have a hard time demonstrating that it acted prudently, especially if the output influenced a material allocation or trade.
Vendor concentration and operational dependency
The AFM is also right to highlight vendor dependence. Most firms are drawing heavily on commercial cloud services and a narrow range of AI suppliers. That can create cost efficiency, but it also creates leverage risk. If a vendor changes pricing, limits functionality, changes model behavior, or suffers an outage, the firm may have little room to maneuver quickly.
This is not a hypothetical concern. Financial firms already know what concentration risk looks like in cloud and payments. AI could recreate the same vulnerability, only this time with systems that are harder to inspect and easier to trust blindly. One mitigation is sketched after the list below.
- Data quality failures can corrupt model outputs.
- Bias can influence investment or trading decisions.
- Limited explainability complicates supervision and accountability.
- Vendor concentration increases operational dependency.
- Cloud-based workflows can obscure data-handling risks.
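One mitigation, sketched below under stated assumptions, is to route every model call through a thin internal interface so that a supplier can be swapped without rewriting workflows. The vendor classes here are placeholders, not real APIs.

```python
# A hedged sketch of one way to limit vendor lock-in: route every model call
# through a thin internal interface so a supplier can be swapped without
# rewriting workflows. The vendor classes are stubs, not real client libraries.
from typing import Protocol


class TextModel(Protocol):
    def complete(self, prompt: str) -> str: ...


class VendorA:
    def complete(self, prompt: str) -> str:
        return f"[vendor-a] {prompt[:40]}..."  # a real API client would go here


class VendorB:
    def complete(self, prompt: str) -> str:
        return f"[vendor-b] {prompt[:40]}..."


def summarize(model: TextModel, document: str) -> str:
    """Workflows depend only on the interface, never on a concrete vendor."""
    return model.complete(f"Summarize for an investment memo:\n{document}")


# Swapping suppliers is then a one-line change at the call site.
print(summarize(VendorA(), "Quarterly flows into euro credit..."))
```

An abstraction layer does not remove concentration risk, but it converts a rewrite into a configuration change, which is exactly the maneuvering room the AFM worries firms currently lack.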
Cyber and privacy implications
The report also points to data privacy and operational dependency as recurring concerns, and that is a wise focus. Public or semi-public AI tools can expose sensitive information if employees input client data, portfolio details, or proprietary research. Even where no breach occurs, the mere possibility can create reputational and legal exposure.
Cybersecurity is part of the same equation. AI tools expand the number of connected systems and the volume of data flowing through them. That makes them attractive not only for productivity, but for attackers seeking weak points in workflows, permissions, or vendor integrations.
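A basic control that addresses both concerns is to redact sensitive tokens before any prompt leaves the firm's perimeter. The sketch below covers only two obvious patterns and is a starting point, not a complete defense.

```python
# A minimal sketch of redacting obviously sensitive tokens before a prompt
# leaves the firm's perimeter. The two patterns are illustrative; a real
# control would need far broader coverage (client names, ISINs, positions).
import re

REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "IBAN": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{10,30}\b"),
}


def redact(prompt: str) -> str:
    """Replace matches with typed placeholders so context survives review."""
    for label, pattern in REDACTION_PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt


print(redact("Client j.doe@example.com holds account NL91ABNA0417164300."))
# -> "Client [EMAIL] holds account [IBAN]."
```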
Enterprise and Consumer Impact
For enterprise users, the report is a warning that AI adoption must be accompanied by institutional design. Governance cannot be bolted on after the fact. Firms need clear rules about approved use cases, data handling, vendor vetting, logging, and escalation. Without those safeguards, AI becomes a decentralized shadow capability spread across desks and teams.
For consumers and investors, the implications are more subtle but just as important. They may benefit from faster research, better portfolio construction, and more efficient services, but they also depend on the firm’s ability to keep the process transparent and fair. If the role of AI is hidden, clients cannot easily judge whether the firm’s recommendations are truly human-led, machine-assisted, or somewhere in between.
That is why transparency is not a cosmetic issue. It affects trust. Investors do not need every algorithmic detail, but they do need a credible account of how AI influences investment policy, portfolio composition, or service delivery. The AFM is clearly moving toward that expectation.
Different risk profiles, different controls
A large proprietary trading operation will need much tighter model oversight than a small firm using AI for compliance drafting. But even the smaller use case requires guardrails. The core principle is proportionality: controls should match the use case, but no regulated firm can reasonably treat AI as a casual office tool once it touches client data or regulated decision-making.
- Enterprise users need governance, logging, and approval workflows.
- Consumers need transparency and fair treatment.
- High-impact uses demand stronger validation.
- Low-risk uses still require data-handling controls.
- Proportionality should guide the control framework.
Trust as a competitive asset
There is also a market implication here. Firms that can show mature AI governance may gain a competitive advantage with institutional clients, consultants, and counterparties. In a sector where trust is a selling point, being able to explain how AI is used may become as important as the tools themselves.
That does not mean every firm needs a flashy AI strategy. It means the market may increasingly reward operational maturity over raw experimentation. Quiet competence could become the differentiator.
Competitive and Market Implications
The AFM’s findings have broader implications for competition in the Dutch asset management market. Larger firms are more likely to adopt AI, more likely to budget for it, and more likely to have the organizational capacity to govern it. That creates a possible productivity advantage, especially if AI improves research throughput, trading parameter optimization, and market analysis.
But that advantage is not guaranteed to be durable. If smaller firms can access broadly available tools without building the same level of infrastructure, they may close some of the gap on day-to-day productivity. The real differentiator will then be governance quality, not just access to tools. In other words, AI may democratize capability at the surface while concentrating advantage in firms that can manage risk well.
The report also hints at a possible market-structure effect. If a small number of technology providers dominate the AI stack, then asset managers could become similarly dependent on a few vendors. That would not only create operational risk. It could also shape competitive dynamics if pricing, model access, or feature updates are controlled by external players.
Regulation as a competitive filter
Regulation may become a competitive filter in its own right. Firms that can document AI literacy, policy coverage, and model oversight will be better positioned to satisfy supervisors and institutional clients. Firms that cannot may still use AI, but they may be forced into more defensive, reactive, and constrained deployment.
That is the real strategic lesson in the AFM’s report. AI is not merely a tool for productivity. It is becoming a test of institutional maturity.
- Larger firms are likely to consolidate their advantage.
- Smaller firms may adopt faster but control less.
- Vendor concentration could affect market bargaining power.
- Governance quality may become a selling point.
- Supervisory scrutiny may reward better-documented firms.
The role of European scale
There is also a European competitiveness angle. The AFM’s trend monitor notes that upcoming European regulation and the development of the Savings and Investments Union will affect the sector’s playing field. That means Dutch firms are not only competing domestically; they are also adapting to an increasingly harmonized European framework in which responsible AI use may become a baseline requirement.
That can be a burden, but it can also be a levelling mechanism. Strong governance may help European firms compete on trust, transparency, and robustness rather than purely on scale.
Strengths and Opportunities
The AFM report is not anti-AI. It is a realistic account of where the Dutch asset-management sector stands today, and it identifies several clear benefits if firms can deploy the technology responsibly. The opportunity is substantial, but it will accrue mainly to firms that invest in the boring foundations: policy, training, data quality, and oversight.
- Efficiency gains through faster information gathering and internal workflows.
- Better data analysis across large and unstructured datasets.
- Improved research support for investment teams.
- Potential portfolio enhancement through more informed decision-making.
- Stronger market analysis if outputs are validated and explainable.
- Competitive differentiation for firms with mature AI governance.
- Scalable productivity from general-purpose tools when properly controlled.
Risks and Concerns
The report’s cautionary notes are just as important as its upside. The main danger is not that AI exists in the sector, but that it is becoming normal faster than firms can govern it. That creates an environment where hidden dependencies, unchecked outputs, and weak employee practices can persist for too long.
- No policy coverage in a significant minority of firms.
- Insufficient generative AI rules despite widespread use.
- Low budget allocation relative to the pace of deployment.
- Weak employee training across the sector.
- Data privacy and confidentiality risks from cloud-based tools.
- Dependence on a narrow vendor base for core AI capabilities.
- Limited explainability when outputs influence financial decisions.
What to Watch Next
The next phase will be about supervision, not just survey results. The AFM has already signaled that it will continue observing how asset managers deploy and control AI, and that means firms should expect deeper questions about policies, training, vendor oversight, and the role of AI in decision-making. The sector is moving from curiosity to accountability.
The big question is whether firms will treat this as a compliance exercise or a strategic redesign task. The former produces policies on paper. The latter creates systems that are actually fit for use. In a market where regulators are already talking about digital resilience, AI literacy, and model risk management, the distinction will matter.
Another thing to watch is whether Dutch firms begin formalizing their approach to AI agents and autonomous workflows. Those tools are still rare, but they are likely to be the next frontier. Once systems start taking structured actions rather than merely generating text or analysis, the governance burden becomes materially higher.
- The AFM’s follow-up supervision will likely get more granular.
- AI literacy training should become more common under the EU AI Act.
- Generative AI policies are likely to expand quickly.
- Vendor concentration and cloud dependence will stay in focus.
- AI agents may become the next regulatory pressure point.
- Transparency to clients will become more important.
- Smaller firms may need proportionate but formalized controls.
The broader lesson is that AI will not reward the fastest adopters alone. It will reward the firms that can combine speed with discipline, experimentation with traceability, and innovation with restraint. In asset management, as in the rest of financial services, the winners may be those who learn to be responsibly ambitious before the market and the regulator force the issue.
Source: Stibbe Asset managers and the use of AI: the AFM identifies opportunities and risks in its recent report