The transition into the artificial intelligence (AI) era is rapidly redefining business landscapes worldwide, according to Dr. Ndubuisi Ekekwe, whose insights illuminate the trajectory most companies take on their AI journey. As revealed in his June 2025 commentary on Tekedia, three pivotal levels shape the modern AI ecosystem: foundation models (FMs), large language models (LLMs), and generative AI (GenAI). Drawing compelling parallels to how cloud computing underpinned the digital transformation of the last decade, Ekekwe frames foundation models and LLMs as the infrastructural bedrock for innovation in the age of AI, comparable to the significance of AWS, Google Cloud, and Microsoft Azure for cloud-centric organizations.
Understanding the Three Core Levels of Corporate AI Integration
Innovative enterprises must now navigate these critical layers:
- Foundation Models—These are vast machine learning models (such as GPT, BERT, or Stable Diffusion) trained on massive datasets. They serve as adaptable, multipurpose engines that can be fine-tuned for bespoke solutions.
- Large Language Models (LLMs)—A specialized subset of foundation models, LLMs power conversational AI, contextual analysis, and advanced automation. Their versatility enables businesses to deploy intelligent agents, automate content generation, and enhance decision-making processes.
- Generative AI (GenAI)—The application layer, GenAI leverages the outputs of FMs and LLMs to produce new text, images, code, or even complex data artifacts. Its impact is tangible in industries ranging from marketing and media to software engineering and healthcare.
The Strengths of Focusing on Generative AI
Accelerated Time-to-Market
One of the driving forces behind the corporate tilt toward GenAI is the dramatic acceleration of time-to-market for new products, content, and services. With APIs and platforms now offering plug-and-play access to advanced models, businesses can launch AI-enhanced offerings in a fraction of the time required five years ago. For example, Microsoft's Copilot, Google's Gemini, and third-party generators like Jasper or Synthesia empower companies to build new customer experiences atop reliable, pretrained intelligence.
Cost Efficiency and Democratization
The open availability of foundation models via cloud APIs reduces the barrier to entry for AI adoption. Firms no longer need to field multidisciplinary data science teams or maintain vast GPU clusters—they can rent intelligence much as they previously rented compute from AWS or Azure. This “AI-as-a-service” model democratizes advanced capabilities, making sophisticated tools available to both startups and legacy incumbents.
Enhanced Customization and Personalization
With access to advanced LLMs, companies can quickly fine-tune and deploy chatbots, search engines, recommendation systems, and creative tools that learn and adapt to individual user behavior. This real-time personalization is opening up new revenue streams in e-commerce, media, and finance, where relevance equals competitive advantage.
Rapid Prototyping and Experimentation
The generative AI layer is the nexus of corporate experimentation. Firms can iterate on product ideas, automate internal processes, generate synthetic data for simulations, and even architect new business models without incurring the traditional costs of developing AI from scratch. This flexibility empowers “fail fast, learn fast” cultures—crucial for innovation in volatile markets.
Potential Risks and Hidden Costs of a GenAI-First Approach
While concentrating on the GenAI layer unlocks immediate business value, this focus also exposes organizations to several strategic and operational hazards.
Supplier Lock-In and Model Dependencies
The analogy with the cloud infrastructure boom is instructive: by relying heavily on external foundation models or LLM APIs (often owned by a handful of cloud giants), companies risk being locked into proprietary ecosystems. Price changes, API limits, or abrupt licensing revisions—as seen with recent API cost hikes by OpenAI and Google—can disrupt business operations overnight. A lack of internal expertise in building or fine-tuning foundation models also leaves firms at the mercy of vendors for updates, bug fixes, and new features.
Data Governance and Regulatory Risk
GenAI systems, particularly those handling sensitive data or producing regulatory-compliant content, raise thorny issues of compliance, data privacy, and ethical use. Enterprises relying on third-party LLM providers must ensure that customer data is handled securely and processed in line with local regulations such as GDPR or emerging AI-specific laws like the EU AI Act. Mishandling personal data, inadvertently producing biased or defamatory content, or failing to provide transparent explanations for automated decisions can result in costly legal ramifications and reputational damage.
Erosion of Competitive Moats
If every competitor in a given sector has equal access to the same generative AI platforms, the technology itself ceases to be a differentiator. In such environments, success depends on proprietary data, domain expertise, and unique workflows layered atop commoditized AI. Companies that simply reskin public models risk commoditization; those that strategically enrich models with exclusive datasets and business logic can carve out more enduring advantages.
Quality and Reliability Concerns
Despite the progress in AI safety and performance, generative models can hallucinate, produce inappropriate outputs, or propagate biases from their training data. For industries under strict regulatory scrutiny—such as healthcare, finance, or law—these imperfections pose material business risks. Overreliance on GenAI in mission-critical workflows, absent robust human oversight and continuous validation, remains a significant concern.
Critical Analysis: Striking the Right Balance on the AI Journey
While Ekekwe’s assessment of industry tendencies is accurate, a closer reading of market patterns and expert commentary suggests that leadership in the AI era demands more than surface integration at the generative layer. Rather, it requires a nuanced engagement across all three levels—foundation models, LLMs, and GenAI—tailored to sector, scale, and ambition.
Sectoral Nuance: Not One-Size-Fits-All
The most successful AI transformations are sector-specific. In banking, for instance, automation of customer service and fraud detection has been revolutionized by LLMs, but only when coupled with proprietary financial data sets and rigorous oversight frameworks. In manufacturing, generative design and predictive maintenance rely as much on foundation model tuning as on the front-end interfaces. The pharma industry, for example, is investing heavily in custom FMs and LLMs to accelerate drug discovery, leveraging unique biological datasets to generate molecules unlikely to arise from commodity models alone.
Investing in Internal Capabilities
A critical risk flagged by domain analysts is the "capability hollowing" that occurs when firms over-depend on external AI solutions. While leveraging third-party models can deliver speed, leading enterprises are quietly building hybrid approaches: combining prebuilt LLMs with custom architectures, proprietary embeddings, and fine-tuning on their historical data. Such “modular” AI strategies preserve flexibility while insulating against vendor risk. Open-source models like Meta’s Llama 3 and Mistral, for instance, are empowering companies to run AI workloads on-premises or in sovereign cloud environments, granting greater control and privacy.
The Role of Data Flywheels
Another key differentiator is the establishment of data flywheels—virtuous cycles where user interactions with GenAI systems feed proprietary datasets, further training and refining future models. Amazon, Google, and Microsoft all operate such flywheels, constantly harvesting customer usage patterns to enhance their tools. For smaller firms, deploying GenAI is merely an entry point; true defensibility lies in amassing unique, high-fidelity data through user engagement.
Talent, Culture, and Governance
The AI era is as much a cultural transformation as a technical one. Even with the world’s best models at their disposal, organizations flounder if they lack the in-house talent to experiment, evaluate, and manage AI systems. Building multidisciplinary teams that bridge data science, UX, compliance, and domain expertise is essential. Additionally, engaging critically with evolving AI governance frameworks—and fostering transparency about how AI decisions are made—builds customer trust and organizational resilience.
Real-World Successes—and Cautionary Tales
Several companies exemplify the spectrum of AI maturity highlighted above.
- IBM Watson Health: Initially, IBM concentrated on providing AI-powered insights directly to medical practitioners using cloud-hosted models. However, the service struggled with reliability and regulatory fit, ultimately pivoting to partnerships where Watson AI underpinned proprietary healthcare workflows with greater controls over data and updates.
- Goldman Sachs: The banking giant has openly discussed the limitations of out-of-the-box LLMs for regulatory compliance and risk modelling. It pairs open-source LLM frameworks with its own financial datasets, maintaining tight control over both model behavior and auditability.
- Duolingo: By layering LLM-powered lesson generation on top of decades of proprietary learner data, the company delivers personalized language instruction, exploiting a closed data loop to stay ahead of competitors relying solely on public models.
- Stability AI and Runway: Smaller firms in creative industries use foundation models like Stable Diffusion not just for image generation, but as flexible platforms for building entirely new creative workflows, from video editing to music composition.
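The data flywheel that runs through these examples—user interactions feeding a proprietary dataset that in turn sharpens future outputs—can be sketched as a simple loop. This is a toy illustration of the pattern, not any vendor's actual pipeline:

```python
# Toy data flywheel: log user interactions, then use the accumulated,
# proprietary log to bias future outputs. Illustrative only.

from collections import Counter

class Flywheel:
    def __init__(self):
        self.interaction_log: list[tuple[str, bool]] = []  # (output, user_accepted)

    def serve(self, candidates: list[str]) -> str:
        # Prefer candidates that users have historically accepted.
        accepted = Counter(out for out, ok in self.interaction_log if ok)
        return max(candidates, key=lambda c: accepted[c])

    def record(self, output: str, accepted: bool) -> None:
        # Every interaction grows the proprietary dataset.
        self.interaction_log.append((output, accepted))

fw = Flywheel()
# Early interactions: users accept "short answer" more often than "long answer".
for _ in range(3):
    fw.record("short answer", True)
fw.record("long answer", False)

# Future serving now reflects the accumulated usage data.
print(fw.serve(["long answer", "short answer"]))  # -> "short answer"
```

The defensibility comes from the log itself: two firms can rent the same model, but only one of them owns this interaction history.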
Foresight: Positioning for Value in the AI Era
To maximize returns from the AI journey, firms must assess their positioning across all three layers:
- Adopt selectively at the GenAI layer: Use commercial LLMs and APIs to quickly prototype and validate business cases, but avoid overcommitting core operations to these platforms alone.
- Develop internal expertise: Invest in data engineering, AI architecture, and MLOps to build, fine-tune, and evaluate models in-house, at least for mission-critical applications.
- Own your data: Engineer processes to collect, annotate, and protect unique datasets, transforming front-end AI deployments into data-gathering engines for future advantage.
- Actively manage regulatory exposure: Engage legal and compliance teams early and often to ensure safe, fair, and transparent use of AI system outputs.
- Cultivate an agile, experiment-friendly culture: Encourage cross-functional teams to explore AI use cases, iterate quickly, and learn from both failures and successes.
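In practice, the first two recommendations above—adopt hosted models selectively while building internal capability—are often implemented behind a thin abstraction layer, so a commercial API can later be swapped for an in-house or open-source model without rewriting product code. A minimal sketch with stub backends (the interface and class names are hypothetical, and both backends are stand-ins rather than real API calls):

```python
# Sketch of a provider-agnostic model interface to limit vendor lock-in.
# Both backends are stubs; a real system would call a hosted API or a
# locally served open-source model behind the same interface.

from abc import ABC, abstractmethod

class TextModel(ABC):
    @abstractmethod
    def generate(self, prompt: str) -> str: ...

class HostedAPIModel(TextModel):
    """Stand-in for a commercial LLM API (fast to adopt, vendor-controlled)."""
    def generate(self, prompt: str) -> str:
        return f"[hosted] {prompt}"

class InHouseModel(TextModel):
    """Stand-in for a self-hosted open-source model (more control, more ops work)."""
    def generate(self, prompt: str) -> str:
        return f"[in-house] {prompt}"

def summarize(model: TextModel, text: str) -> str:
    # Product code depends only on the interface, not on any one vendor.
    return model.generate(f"summarize: {text}")

print(summarize(HostedAPIModel(), "Q3 report"))  # -> "[hosted] summarize: Q3 report"
print(summarize(InHouseModel(), "Q3 report"))    # -> "[in-house] summarize: Q3 report"
```

The design choice here is the narrow interface: switching vendors, or bringing a model on-premises, becomes a one-line change at the point where the backend is constructed.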
Conclusion: Beyond the Generative AI Gold Rush
While the current wave of corporate AI adoption rightly focuses on generative applications for their immediate, visible value, sustainable success hinges on treating foundation models, LLMs, and GenAI as interconnected levers—not isolated tools. As Dr. Ekekwe’s Tekedia commentary underscores, companies should approach AI as a layered ecosystem: infrastructure, intelligence, and interface.
Those who limit themselves to the surface level risk being left behind in the next phase of value creation, when bespoke models, proprietary data pipelines, and creative problem-solving will set the pace. At the same time, the accessibility and flexibility of today’s GenAI platforms are lowering the barrier to experimentation, allowing visionaries in every sector to reimagine products, processes, and services in ways previously unimaginable.
Navigating this journey requires clarity of purpose, sober risk assessment, and a willingness to invest in both people and platforms. As with the cloud revolution, the ultimate winners of the AI era will be those who not only use the new tools, but master their underlying mechanics—building organizations that learn, adapt, and create lasting value amid the rapid churn of technological change.
Source: Tekedia Most Companies Focus Here On AI Journey [video]