IBM has spent the better part of two years arguing that it can use AI to make legacy systems less mysterious, less brittle, and easier to modernize. Fujitsu’s new Application Transform push raises the stakes by making that promise feel more concrete: turn COBOL into understandable design documents in minutes, not hours, and do it without demanding deep programming expertise. The real competitive threat is not just speed but access: if non-specialists can safely interpret old code, vendors are no longer merely selling tooling; they are selling leverage over decades of accumulated enterprise knowledge. Fujitsu has also framed the system around knowledge graph–enhanced retrieval, a move meant to reduce hallucinations and improve completeness, which is exactly where generic AI tools tend to wobble.
Overview
Legacy modernization has always been a difficult business because the problem is never only code. It is documentation, operating history, implicit business rules, old dependencies, and the human memory of engineers who may already be retiring. That is why COBOL remains such a stubbornly valuable language: it sits underneath payment systems, insurance workflows, and government platforms that cannot simply be switched off and rewritten in a weekend. IBM has long marketed mainframe modernization as a strategic discipline, and its current watsonx Code Assistant for Z messaging explicitly emphasizes code explanation, documentation, refactoring, and modernization across COBOL and JCL.

Fujitsu is approaching the same problem from a slightly different angle. Rather than centering the developer as the primary consumer of AI output, it is centering the modernization workflow itself: analyze source code, produce design documents, and make the system understandable enough for broader teams to act on it. Fujitsu’s February 2025 software analysis and visualization launch described exactly this kind of reverse-engineering service, noting that it generates user-friendly design documents from large datasets and uses Fujitsu Knowledge Graph Enhanced Retrieval Augmented Generation to improve the quality of both asset comprehension and design document generation.
The broader significance is that the modernization market is shifting from translation to interpretation. Traditional tools could help convert COBOL into Java or surface dependencies for specialists, but the newer pitch is that AI can create the missing explanatory layer between old code and modern engineering teams. IBM has been making that argument for several product cycles, including earlier announcements in 2023 and more recent updates to watsonx Code Assistant for Z and COBOL Upgrade Advisor for z/OS. Fujitsu is now pushing a comparable narrative, but with a stronger emphasis on automated documentation and reduced dependence on expert reviewers.
This matters because the market is no longer just about preserving mainframes. It is about owning the modernization workflow. Whoever can generate trustworthy explanations, reconstruction documents, and transformation paths fastest gains a deeper relationship with the enterprise, the system integrator, and the budget holder. That is why the IBM-versus-Fujitsu framing is useful: both companies understand that modernization is a long game, but Fujitsu’s pitch suggests a more aggressive attempt to lower the barrier for organizations that have delayed work because the systems are simply too opaque.
Why COBOL Still Matters
COBOL is often treated as a relic, but that view misses the reason it keeps showing up in modernization stories. The language is old, yes, but so are many of the systems that carry the world’s daily financial and administrative traffic. Fujitsu’s own framing in the new service announcement ties the value of better documentation directly to modernization planning, because the first hurdle is often understanding what exists before anyone can decide what to replace or refactor.

There is also a workforce reality that never fully went away. Specialist mainframe and COBOL talent remains scarce, expensive, and unevenly distributed, and that scarcity pushes enterprises toward automation even when they would prefer to keep the human review layer intact. IBM’s product copy repeatedly points to reducing reliance on senior system programmers and accelerating onboarding, which shows that the vendor side knows the bottleneck is not just code volume but institutional knowledge. Fujitsu’s new service is trying to attack the same bottleneck by producing usable documents for teams that do not already speak the language fluently.
The pandemic-era scrutiny of COBOL talent shortages only reinforced the point: old code can become a public issue when critical services depend on it and the people who understand it become harder to find. That is why modern AI-assisted documentation has such a strong business case. It promises not only lower labor costs, but also resilience—the ability to preserve system knowledge even when veteran engineers move on.
The real problem is not code syntax
What makes COBOL modernization so hard is not the syntax alone. It is the accumulation of business logic, interprogram dependencies, batch jobs, and operational assumptions that live outside the source files. Once a system has been in production for decades, the line between code and institutional memory disappears, and that is where general-purpose AI tends to get flaky. Fujitsu’s knowledge graph approach is meant to preserve those relationships, while IBM’s approach leans on purpose-trained models and assistant workflows to explain them.

The implication is straightforward: the market no longer rewards simple code readers. It rewards systems that can explain why the code exists, how it connects, and what breaks if it changes. That is a much harder product problem, but it is also a much more valuable one. Enterprises do not just want speed; they want confidence.
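The interprogram dependencies described above are where analysis tooling usually starts. As a rough, hypothetical illustration of the idea (not Fujitsu’s or IBM’s actual implementation), a minimal first pass might scan COBOL sources for static CALL statements and build a call graph. The program names and regex below are invented for the sketch; real tools must also resolve dynamic calls, copybooks, and JCL job steps.

```python
import re
from collections import defaultdict

# Hypothetical sketch: extract static CALL dependencies from COBOL sources.
# Matches literal calls like: CALL 'TAXCALC' USING WS-REC.
CALL_PATTERN = re.compile(r"\bCALL\s+'([A-Z0-9-]+)'", re.IGNORECASE)

def build_call_graph(sources: dict[str, str]) -> dict[str, set[str]]:
    """Map each program name to the set of programs it statically CALLs."""
    graph = defaultdict(set)
    for program, code in sources.items():
        for callee in CALL_PATTERN.findall(code):
            graph[program].add(callee.upper())
    return dict(graph)

# Invented example programs, purely for illustration.
sources = {
    "PAYROLL": "PROCEDURE DIVISION.\n    CALL 'TAXCALC' USING WS-REC.\n    CALL 'PRINTRPT'.",
    "TAXCALC": "PROCEDURE DIVISION.\n    CALL 'RATETBL'.",
}
graph = build_call_graph(sources)
print({prog: sorted(callees) for prog, callees in graph.items()})
# {'PAYROLL': ['PRINTRPT', 'TAXCALC'], 'TAXCALC': ['RATETBL']}
```

Even a toy pass like this makes the article’s point concrete: the code alone yields only part of the picture, because batch schedules and data files never appear in the CALL statements.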
What Fujitsu Is Actually Shipping
The Fujitsu announcement is more interesting than a generic “AI for legacy code” headline because it is centered on documentation generation, not just code summarization. The company says the service can analyze COBOL and other legacy source code and automatically produce design documents, with Fujitsu claiming the workflow can cut analysis time by about 97% compared with manual review. That is the sort of claim that should always be treated carefully, but it clearly signals where Fujitsu thinks the value lies: not in helping experts work a little faster, but in helping organizations work without waiting on scarce experts.

Fujitsu also claims the Knowledge Graph–Enhanced RAG layer reduces omissions and hallucinations and improves comprehensiveness and readability. That aligns with the company’s broader enterprise AI messaging, which has repeatedly emphasized knowledge graph extended RAG and monitoring as the safeguard layer for trustworthy generation. In other words, Fujitsu is not pitching AI as a magical black box; it is pitching a structured retrieval system that can justify its output.
That distinction matters because documentation is unforgiving. If a model misses a dependency in a normal chatbot answer, the error is annoying. If it misses a dependency in a system design document for a core banking platform, the consequences are expensive or even dangerous. Fujitsu appears to understand that generative AI becomes commercially credible only when it is constrained by retrieval, provenance, and structured relationship mapping.
Why design documents matter more than summaries
A summary is useful for orientation. A design document is useful for action. It gives teams a shared artifact they can validate, review, annotate, and use as the basis for refactoring, migration, or operational planning. Fujitsu’s service is therefore not just a productivity tool; it is a knowledge-transfer tool that turns legacy software into something that can be discussed by modern teams.

That is a strategic move because modern IT organizations are built around collaboration across roles. Architects, security teams, program managers, and developers need a common reference point. If AI can create a tolerable first draft of that reference point, the rest of the modernization process becomes easier to structure and, crucially, easier to budget.
- It lowers the entry barrier for modernization projects.
- It speeds up the creation of baseline documentation.
- It reduces dependence on a handful of legacy experts.
- It creates a cleaner starting point for refactoring.
- It may improve cross-team communication around old systems.
IBM’s Response Is Already Underway
IBM is not standing still, and that is why Fujitsu’s move is best read as competitive pressure rather than a surprise ambush. IBM’s watsonx Code Assistant for Z already positions itself as a modernization assistant that can explain code, document applications, and help teams understand COBOL and JCL more quickly. The newest IBM updates add support for JCL explanation, optimization advice, and developer productivity workflows, all under a broader modernization story that includes the IBM Z platform itself.

IBM’s challenge is that it has to sell both confidence and continuity. The company’s platform story depends on the idea that the mainframe remains the most secure and dependable place for business-critical workloads, while AI can make it easier to extract value from those systems. That is a powerful message, but Fujitsu’s new service attacks the same pain point from a less platform-specific angle. If the document generation layer becomes vendor-agnostic enough, customers may care less about the underlying mainframe ecosystem and more about whichever tool helps them modernize fastest.
IBM has also been expanding modernization tooling beyond pure explanation. The company’s COBOL Upgrade Advisor for z/OS and recent AI-driven enhancement announcements show that it wants to own the upgrade path, the refactoring logic, and the operational guidance around compiler changes and application modernization. That breadth is a strength, but it can also look like a larger and more complex sales motion than Fujitsu’s tighter “analyze, explain, document” pitch.
The strategic difference
IBM is building a modernization platform. Fujitsu is building a modernization accelerator. Those are related, but not identical, propositions. A platform asks customers to commit to a broader lifecycle; an accelerator promises immediate relief from the first and hardest stage of the journey.

That distinction could matter in procurement, especially where enterprises have already invested in multiple tools and want to bolt on a capability rather than rebuild their strategy. In practice, Fujitsu may win some deals not because it is broader, but because it is more surgically focused on the documentation bottleneck.
- IBM is emphasizing end-to-end modernization.
- Fujitsu is emphasizing rapid comprehension and reverse engineering.
- Both are using AI to reduce expert dependency.
- Both are trying to improve confidence in legacy transformation.
- The winner may depend on how much a customer values breadth versus speed.
Knowledge Graphs Versus Generic LLMs
Fujitsu’s biggest technical selling point is not simply that it uses generative AI. It is that the AI is anchored by a Knowledge Graph–Enhanced RAG layer designed to connect large volumes of source code and related artifacts. That is important because legacy systems are relational by nature: a COBOL program does not exist in isolation, but as part of a mesh of batch jobs, data files, downstream consumers, and rules accumulated over time.

General LLMs are good at plausible language and weak at guaranteed completeness. That weakness is particularly dangerous in modernization tasks, where a missing paragraph may hide a missing control path, and a confident hallucination may disguise an actual dependency. Fujitsu’s own prior enterprise AI messaging has already framed its knowledge graph technology as a way to verify relationships across very large graphs, which is exactly the kind of control structure this use case demands.
IBM’s answer has been to train purpose-built models and design assistants around enterprise workflows. That can work well when the task is code explanation or refactoring advice, and IBM explicitly says its tooling can help developers gain knowledge faster and streamline documentation. The difference is that Fujitsu appears to be using retrieval structure as the primary trust mechanism, while IBM leans more on productized workflow and model tuning.
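The contrast can be made concrete with a toy sketch of graph-grounded retrieval: before generating documentation for one program, walk its neighborhood in a dependency graph so that related jobs and files land in the prompt context rather than being left to the model’s imagination. Everything here (the edge map, the node names, the hop limit) is a hypothetical simplification of what a knowledge-graph-enhanced RAG layer would do.

```python
from collections import deque

# Hypothetical dependency graph: program -> related jobs, files, callees.
# A real knowledge graph would carry typed edges and far richer metadata.
EDGES = {
    "PAYROLL": ["TAXCALC", "JOB-NIGHTLY"],
    "TAXCALC": ["RATE-FILE"],
    "JOB-NIGHTLY": ["GL-EXTRACT"],
}

def retrieve_context(node: str, hops: int = 2) -> set[str]:
    """Breadth-first walk: collect everything reachable within `hops` edges,
    i.e. the material a grounded generator should see before it writes."""
    seen, frontier = {node}, deque([(node, 0)])
    while frontier:
        current, depth = frontier.popleft()
        if depth == hops:
            continue  # stop expanding past the hop limit
        for neighbor in EDGES.get(current, []):
            if neighbor not in seen:
                seen.add(neighbor)
                frontier.append((neighbor, depth + 1))
    return seen

print(sorted(retrieve_context("PAYROLL")))
# ['GL-EXTRACT', 'JOB-NIGHTLY', 'PAYROLL', 'RATE-FILE', 'TAXCALC']
```

The design point is the one the article makes: a plain LLM asked about PAYROLL has no reason to mention GL-EXTRACT, while a retrieval layer that walks the graph cannot easily omit it.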
Why hallucination control is a business issue
In enterprise documentation, an AI mistake is not just a text error. It can distort architecture diagrams, misstate dependencies, or lead teams to underestimate migration risk. That is why accuracy and completeness matter more than fluency, and why the Knowledge Graph angle is likely to resonate with IT buyers who have already been burned by generic AI pilots.

This is also where procurement teams may become more sophisticated. They will increasingly ask whether a system can prove where its output came from, whether it can preserve traceability, and whether it can support auditability across long-lived codebases. Fujitsu seems to be leaning into exactly that conversation.
- Knowledge graphs help preserve relationships between artifacts.
- RAG improves grounding and reduces free-floating generation.
- Generic LLMs are still risky for completeness-heavy tasks.
- Traceability matters as much as raw speed.
- Enterprise buyers will demand evidence, not just demos.
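One way to picture the traceability demand above is a document model in which every generated section carries pointers back to the source lines it was derived from, so reviewers can audit a claim instead of trusting it. This is an illustrative data structure only; the class and field names are invented and no vendor’s actual schema is implied.

```python
from dataclasses import dataclass, field

@dataclass
class DocSection:
    """A generated design-document section with provenance references."""
    title: str
    body: str
    # Each reference is (source file, start line, end line) in the legacy code.
    sources: list[tuple[str, int, int]] = field(default_factory=list)

    def is_traceable(self) -> bool:
        """A section with no source references cannot be audited."""
        return len(self.sources) > 0

section = DocSection(
    title="Nightly payroll batch flow",
    body="PAYROLL calls TAXCALC before writing the GL extract.",
    sources=[("PAYROLL.cbl", 120, 145), ("JOB-NIGHTLY.jcl", 10, 32)],
)
print(section.is_traceable())  # True
```

A review workflow built on a shape like this can reject untraceable sections automatically, which is exactly the kind of control an auditor in a regulated sector would expect.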
Why the 97% Claim Matters
The headline number is eye-catching: Fujitsu says the service can reduce the time required to understand complex source code by approximately 97%. On its face, that sounds almost too good to be true, and readers should treat it as a vendor-reported benchmark rather than a universal law. Still, the claim is valuable because it highlights the real cost structure of legacy work: understanding the system often takes far longer than writing new code.

Even if the number is context-specific, the direction of travel is believable. Enterprises do lose huge amounts of time on code archaeology, cross-team coordination, and validating whether a system’s documentation matches reality. A tool that can generate a plausible first draft in minutes can meaningfully compress the front end of a modernization project, even if human experts still need to review the output.
This is where the political economy of enterprise AI starts to show. If AI can create most of the baseline documentation, the scarce humans are no longer doing rote explanation work; they are validating exceptions, reviewing edge cases, and making architectural decisions. That changes staffing models, consulting contracts, and project schedules. It also changes what customers expect to pay for.
Speed is not the same as safety
There is a temptation to read a 97% time reduction as proof that humans are becoming unnecessary. That would be a mistake. In high-stakes environments, faster output is only useful if the organization has the controls to catch errors, annotate assumptions, and validate the generated design against operational reality. Acceleration without governance is just a faster way to make expensive mistakes.

The most likely outcome is that AI compresses the discovery phase while human experts remain central to the approval phase. That is still a major improvement, because it reduces the drag that has historically made legacy transformation feel impossible. But it does not remove the need for experienced engineers; it simply reassigns them to higher-value checks.
- Faster analysis shortens project kickoff times.
- Human review remains essential for validation.
- Vendor claims should be seen as context-dependent.
- The biggest gain is likely in baseline documentation.
- The real value is in reducing the “unknown unknowns.”
Enterprise Impact Versus Consumer Impact
For consumers, this story is mostly invisible. Nobody at home is going to care whether a bank’s COBOL codebase was reverse engineered in ten minutes or ten hours, except indirectly through better service reliability and fewer outages. The real audience is enterprise buyers, systems integrators, and public-sector IT leaders who need modernization tools that reduce cost and risk without demanding a full rewrite.

For enterprises, however, the implications are immediate. A service that can generate design documents from legacy code without specialist expertise could change how IT departments prioritize modernization portfolios. It may make it easier to evaluate which applications should be refactored, which should be wrapped, and which should be retired. That in turn affects budgeting, staffing, and migration timelines.
There is also a change-management angle. In many organizations, modernization stalls because only a few people understand the old system well enough to bless a change. If AI can create a strong first draft of the design documentation, the organization can begin distributed review earlier. That reduces dependency on bottleneck experts and may help de-risk the eventual migration.
Procurement, consulting, and services
The biggest winners may not be the software vendors alone. System integrators, consulting firms, and modernization services partners stand to benefit if AI tools make the discovery phase faster and more repeatable. That is especially true in regulated industries, where every migration still needs governance, audit trails, and human sign-off.

This may also change how modernization projects are sold. Instead of buying a large, open-ended assessment engagement, customers may start with an AI-generated artifact and then pay for targeted engineering services around the gaps. That is a more modular market, and probably a more competitive one.
- Enterprises gain faster discovery and planning.
- Consumers benefit only indirectly through better systems.
- Consulting demand may shift toward validation and remediation.
- Procurement could favor modular, lower-risk engagements.
- Regulatory sectors will still need human approval.
Competitive Implications for IBM, Fujitsu, and the Market
The immediate competitive question is whether Fujitsu’s move forces IBM to sharpen its own modernization story. The answer is probably yes, but not in a dramatic way. IBM already has a mature modernization narrative, a strong mainframe base, and increasingly rich AI capabilities around code explanation, refactoring, and compiler upgrades. What Fujitsu introduces is a more explicit challenge to the idea that the best way to monetize AI in legacy systems is through a deeply integrated platform.

For Fujitsu, the opportunity is to be seen as the company that makes legacy systems legible. That can be a powerful brand position, especially if enterprises are looking for a pragmatic first step rather than a complete transformation suite. The more Fujitsu can show real-world cases where design documents are accurate, complete, and easy to validate, the more it can occupy the space between raw AI and expensive consulting labor.
For IBM, the risk is not that its mainframe business disappears. The risk is that customers start to view IBM’s AI modernization stack as one option among many rather than the default path. In a market where legacy modernization is increasingly software-driven and AI-assisted, ease of adoption can matter as much as technical breadth. That is the real pressure point.
A market moving toward explainability
The broader market is clearly moving toward explainability-first modernization. IBM’s own materials emphasize natural language explanations and automated discovery, while Fujitsu’s latest service leans on graph-backed retrieval and document generation. The common thread is that buyers want AI to do more than write code; they want it to explain systems in a way humans can safely use.

That trend should continue because it aligns with how enterprises actually modernize. They do not jump straight from old code to new architecture; they move through assessment, documentation, risk analysis, and phased refactoring. AI that helps at the front of that pipeline may be more valuable than AI that only helps at the end.
- IBM still has the stronger platform ecosystem.
- Fujitsu may have the sharper documentation pitch.
- Buyers are prioritizing explainability and trust.
- The market is shifting from code generation to code interpretation.
- Modular modernization tools are gaining appeal.
Strengths and Opportunities
Fujitsu’s announcement is strong because it addresses the hardest part of modernization: turning opaque legacy code into something people can reason about. It combines a practical workflow, a believable enterprise use case, and a technical trust story around knowledge graphs and retrieval, which is exactly the mix customers want when they are trying to reduce risk rather than chase novelty.

- Shorter discovery cycles can accelerate modernization planning.
- Knowledge graph grounding may improve trust and completeness.
- Broader team access reduces dependence on scarce COBOL specialists.
- Design document generation creates a tangible artifact for governance.
- Enterprise consulting opportunities may expand around validation and remediation.
- Legacy portfolio visibility can help prioritize which systems to modernize first.
- Competitive differentiation could make Fujitsu a more visible modernization partner.
Risks and Concerns
The main risk is that vendor claims about dramatic time savings can outpace real-world validation. COBOL environments are messy, idiosyncratic, and deeply interconnected, so any system that promises near-complete automation must still prove itself in production-grade scenarios where omissions are costly and false confidence is dangerous.

- Benchmark inflation may make results look more universal than they are.
- Hallucinations can still slip through if retrieval coverage is incomplete.
- Human expertise remains necessary for approvals and edge cases.
- Integration complexity may slow deployment in older environments.
- Security and compliance scrutiny will be intense in regulated sectors.
- Overreliance on generated docs could create a false sense of certainty.
- Vendor lock-in concerns may grow if workflows become tool-specific.
Looking Ahead
The next phase of this story is likely to be less about announcement language and more about field evidence. Enterprises will want to know how well the Fujitsu service performs on real production codebases, how much human review it still requires, and whether the generated documents hold up when audited against live system behavior. IBM will almost certainly answer with its own modernization updates, because the company cannot afford to let the narrative shift toward Fujitsu owning the “understand legacy code fast” category.

The deeper industry trend is that legacy modernization is becoming an AI trust problem, not just an engineering problem. The vendors that win will be the ones that can make AI outputs traceable, reviewable, and operationally useful at scale. In that sense, Fujitsu’s move is not just about COBOL; it is about redefining the standards by which enterprises decide whether AI is ready to touch mission-critical systems.
- Watch for customer references from banks, insurers, and public-sector agencies.
- Watch for IBM counter-messaging around watsonx Code Assistant for Z and upgrade tooling.
- Watch for proof of accuracy beyond vendor-run benchmarks.
- Watch for integration with refactoring and rewrite workflows later this year.
- Watch for consulting partnerships that package AI documentation with migration services.
Source: TechRadar IBM faces pressure as Fujitsu introduces AI system that simplifies COBOL