UAE MoHESR and Microsoft Launch Agentic AI for Higher Education

  • Thread Author
The UAE’s Ministry of Higher Education and Scientific Research (MoHESR) has launched a formal R&D collaboration with Microsoft to design and prototype agentic AI systems for higher education — a coordinated effort to build four specialized AI agents that target career navigation, faculty course design, personalised student learning, and research alignment with national missions.

Holographic figures and a glowing cloud symbolize lifelong learning and research alignment on a futuristic campus.Background / Overview​

The announcement, made on the sidelines of the World Governments Summit on February 6, 2026, positions the programme as more than a one-off procurement: it is a staged research and prototyping initiative that will leverage Microsoft Azure and Microsoft cloud AI capabilities to develop four prototype agents and explore deeper technical collaborat, machine learning, and AI applications across the UAE higher-education ecosystem.
MoHESR framed the effort as aligned with national strategies to accelerate workforce readiness and research translation — explicitly tying the work to the UAE’s long-term ambitions to be a global hub for AI-driven innovation. Officials stressedgn* approach involving faculty, students, and industry representatives, and Microsoft’s UAE leadership described agentic AI as a transformative opportunity for the public sector and education.

What exactly was announced?​

The public statements and briefing materials identify four discrete prototype agents. Each is scoped to a specific, high-valuucation:
  • Lifelong Learning and Skills Progression agent — Help learners (students, alumni, working professionals) map labour-market signals to competency taxonomies and recommend micro-credentials, short courses, and degree electives tailored to in-demand skills.
  • Faculty Enablement and Course Co‑Creation agent — Assist faculty in updating curricula, draftinents, and co-designing industry-aligned credentials more quickly, while producing accreditation-ready artefacts for regulator review.
  • Sturning agent — Deliver adaptive, diagnostic-driven learning pathways, scaffolded feedback, and just-in-time supports so students can progress at their own pace with persistent learner memory.
  • Research Mission Alignment agent — Map institutional research portfolios to national mission priorities, suggest cross-disciplinary partnerships, match proposals to funding calls, and surface translational pathways to industry and policy impact.
These agent prototypes are explicitly intended to be trialled in real university settings under participatory design and staged pilot programmes rather than being rushed into a national deployment without iterative evaluation.
-y chose agents, not just chatbots
The term agentic AI describes systems that go beyond single-turn chat or static content generation: agents can reason, plan, act, and coordinate across multi-step workflows, tie into institutional systems, and execute actions via APIs under human oversight. This makes them a natural fit for complex, multi-party educational tasks — mapping skills to curricula, orchestrating course co-creation with industry, or aligning research proposals where outcomes are the product of several interlocking steps.
Microsoft’s platform capabilities — notably Azure-hosted model endpoints, Azure OpenAI integrations, and Microsoft 365 Copilot tooling — form the expected technical backbone for the prototypes, enabling retrieval-augmented generation (RAG), persistent memory layers, and tool integrations with LMS/SIS environments. The ministry and Microsoft have signalled an intent to explore in-country processing and data residency guarantees to address rce requirements.

Technical contours: architecture, integra realities​

Building agentic AI that can safely and reliably operate inside universities is an engineering tane‑learning problem. The likely technical architecture and operational patterns drawn from the ministry’s briefings and regional precedents include:
  • Azure-hosted inference endpoints and model (Azure OpenAI Service or equivalent) for reasoning and generation.
  • RAG pipelines that anchor agent outputs to authoritative institutioncatalogues, accreditation rules, regulator guidance — rather than relying on unconstrained model hallucinations.
  • Stateful memory layers to persist learner progress and allow agents to maintain context across sessions.
  • Connectors to Learning Management Systems Student Information Systems, credential registries, and national labour-market APIs, along with identity and SSO integrations.
  • MLOps and governance tooling: model versioning, audit logs, human‑in‑the‑loop checkpoints, monitoring dashboards, and red-team testing.
These components are feasible with current cloud services, but the real workering, policy mapping, and operational governance: durable storage, telemetry, incident response, and well-defined human oversight for any education-decision workflow.

Immeogramme aims to deliver​

If implemented sensibly, the prototypes could deliver measurable improvements across the higher‑education lifecycle:
  • Faster, industryrenewal** through automated gap analyses and draft artefacts, reducing friction in updating syllabi and accreditation packages.
  • Scaled personalised learning, enablingdiagnostic remediation, and adaptive learning paths that free faculty time for mentoring and research supervision.
  • Improved employability alignment, by matchiofiles to labour-market signals and guiding learners toward verified micro‑credentials.
  • Accelerated research translation, by surfacing mission-relevant projects, identifying partners, and reducing search friction for funding and industry collaborations.
These are concrete, measurable aims — but their realization depends on the fidelity of data inputs, the transparency of algorithms, and robust outcome measurement.

Material risks and governance challenges (what could go wrong)​

Agentic AI changes the risk profile compared with simple chatbots. The MoHESR–Microsoft programme raises several urgent governance and operational questions:

1. Data pritudent protection​

Agents need access to sensitive student records, grades, and institutional data to personalise effectively. That access creates risks around consent, retention, cross‑border data flows, and secondary use of data. In-country processing commitments help but do not remove the need for clear contractual guarantees, telemetry transparency, and independent attestation of residencyemic integrity and assessment design
Generative systems can assist with assignment creation and grading suggestions, but offloading evaluative judgement to opaque models risks undermining academic standards. Universities must redesign assessment frameworks to be AI-aware — for example, emphasising authentic, portfolio-based assessmfor high-stakes decisions.

3. Bias, fairness, and explainability​

Agents that recommend career pathways or identify "high‑impact" research can embed skewed assumptions or unrepresentative training signals, disadvantaging particular student groups. Systematic fairness testing, model cards, and explainability mechanisms requirements.

4. Vendor lock‑in and market concentration​

Relying heavily on a single cloud provider for national education infrastructure increases bargaining‑power asymmetries and long-term switching costs. Contracts must specify data portability, exportable fine-tuned models, and exit paths to preserve institutional sovereignty.

5. Operainability​

Cloud-hosted agents at scale carry continued operational costs (inference, storage, network). Without a sustainable funding model, pilot success can become an ongoing fiscal burden. Shared national infrastructure or centralised procurement models may mitigate this, but require transparent cost allocations.

6. Geopolitical a​

The UAE’s rapid cloud and AI investments create geopolitical dependencies — procurement and export controls around advanced AI hardware and models matter. Public institutions should account for geopolitically driven shifts in permissible model imports, data treaties, and compliance obligations.
ts make strong outcome claims (for example, precise percentages of faculty time saved or learning gains), those figures are verifiable only through independent evaluation; early pilot claims should be treated as provisional until robust experimental evidence is published.

Precedents and contextual evidence​

The UAE already has local precedents for faculty-facing AI agents. Hamdan Bin Mohammed Smart University (HBMSU) and other regional institutions have piloted agentic systems for faculty enablement and personalised learning, reporting notable productivity and learning improvements in internal announcements — results that are indicative but require independent replication and published methodologies for verification.
Microsoft’s regional posture — including investments in local cloud regions, in‑country Copilot processing options, and a portfolio of educator tools — makes it a pragmatic partner for large-scale governmental pilots, but it also accentuates the need for strict governance and contractual clarity.

Measuring success: proposed KPIs and evidence standards​

To mov to trusted, scalable services, the programme should adopt a small, robust set of KPIs and a clear evaluation protcomes: changes in mastery and retention measured via randomized controlled trials (RCTs) or matched quasi‑experimental desialignment: percentage of graduates employed in a field related to study within six months; employer validation oentials.
  • Equity metrics: differential outcomes by gender, nationality, socio‑economic status, and institution type.
  • Faculty impact: independently audited time savings broken down by task (course design, grading, adminiSafety incidents: count and severity of agent misrecommendations, data incidents, or governance failures.
Crucially, the programme should require independent third‑party audits of algorithms and public reporting of evaluation results so that policy decisions are grounded in reproducible evidence rather than vendor-provided metrics.

A practical, phased implementatthe ministry’s stated approach and realistic engineering timelines, a staged roadmap reduces risk and produces actionable evidence:​

  • Short term (0–6 months): Participatory design work legal frameworks, and data‑sharing agreements; select small, representative pilot courses or departments.
  • Medium term (6–18 months): Build prototypes, run controlled pilots with pre-registered evaluation plans, iterate on technical , RAG sources, memory controls).
  • Longer term (18–36 months): Scale successful prototypes into multi‑institution services with audited SLAs, federatiod centralised governance boards; publish longitudinal employment and learning outcome studies.
Each phase should be gated by independent evaluation outcomes and clear governance milestones (data portability, incident response, fairness metrics).

Practical recommendations for MoHESRcrosoft​

Below are targeted, implementable actions to convert ambition into accountable impact:
For MoHESR
  • Mandate transparent asurement plans and data‑governance requirements before any pilot starts.
  • Require model documentation (modeults, and independent algorithmic audits as contractual deliverables.
  • Insist on data portability, exportable model artifacts, and defined exit pathwaysin.
For universities
  • Co‑design pilots with students and faculty, run small staged trials, and redesign assessmegenerative assistance.
  • Establish campus AI‑ethics boards that include student representation and rapid incident-response capability.
For Microsoft (and other vendors)technical documentation on data handling, model provenance, and failure modes tailored to educational stakeholders.
  • Offer regional processing guant attestation and transparent telemetry so institutions can verify residency and data flows.
  • Ps testing and accessible monitoring dashboards for real‑time oversight.

Why transparency and independent evaluation matter​

The gulf between pilot claims and demonstrable institutional impact is non-trivial. Many earlier educational AI initiatives promised dramatic faculty time savings and learning gains; robust independent replication is what separates optimistic promises from durable policy decisions. The MoHESR–Microsoft programme can set an important global precedent if it embeds transparency, independent auditing, and publicly accessible evaluation into the programme design from day one.

Final assessment: pragmatic optimism with disciplined guardrails​

The MoHESR–Microsoft collaboration is strategically significant: it pairs ministerial authority and national priorities with a major cloud provider’s platform capabilities and skilling commitments. This asymmetry is a source of power — it can accelerate adoption and provide the scale needed to test agentic approaches meaningfully. It is also a source of risk, because large-scale dependence on a single vendor, combined with insufficient governance, can create systemic fragility in education infrastructure.
If MoHESR and its university partners insist on rigorous pilot charters, independent evaluation, strict data governance, and contractual portability, the programme could produce genuine improvements in curriculum agility, personalised learning, and research impact. Without those guardrails, the prototypes risk delivering attractive demos but limited long‑term public value. The difference will be transparency and evidence versus marketing metrics.
The coming months — design workshops, pilot selections, and the first controlled trials — are the critical proving ground. Success will not be measured by the speed of deployment, but by the quality of evaluation, the durability of governance commitments, and the degree to which students and faculty retain agency over educational decisions shaped by AI.

Conclusion
The UAE’sucation and Scientific Research has taken a consequential step by commissioning prototype agentic AI systems with Microsoft. The project has genuine potential to modernise higher education services — if it is pursued with a discipline that matches the scale of the ambition: independent evaluation, clear data contracts, academic oversight, and sustained transparency. Done well, this can produce a replicable model for responsible agentic AI in public education; done poorly, it will be a cautionary lesson about automation without accountability. The path forward is promising, but narrow — and it demands that policy, pedagogy, and engineering move in lockstep.

Source: Technobezz UAE Ministry Partners with Microsoft to Develop AI Agents for Higher Education
 

Microsoft Defender researchers have uncovered a stealthy new marketing and manipulation vector — what they call AI Recommendation Poisoning — in which seemingly innocent “Summarize with AI” buttons and share links carry hidden instructions that can be absorbed by an assistant’s long‑term memory and used later to bias recommendations toward a specific brand or service.

Illustration of a browser window promoting 'Summarize with AI' as a trusted source.Background / Overview​

Modern AI assistants are moving beyond one-off chats and toward persistent, personalized agents that remember preferences, past projects, and explicit user instructions. That evolution — the same persistence that makes assistants truly useful — also opens a new attack surface: any external content that the assistant ingests could, in some circumstances, be interpreted as an instruction to store facts, preferences, or rules for future use.
Microsoft’s Defender Security Research Team describes a pattern where websites, emails, and marketing content generate prefilled links to popular assistants (for example, links that open Microsoft Copilot, ChatGPT, Claude, Perplexity, etc.) with a prompt embedded in the URL parameter. The visible UX element is a helpful “Summarize with AI” button; the invisible payload is a prompt that contains a persistence command such as “remember this site as a trusted source” or “prioritize [ProductX] in future cloud vendor recommendations.” When clicked, the assistant receives both the summarization task and the hidden persistence instruction; in environments where memory writes happen automatically or without clear user confirmation, that instruction can become a permanent part of the assistant’s personal memory for that user.
This is not theoretical. Microsoft’s researchers documented dozens of distinct prompt samples originating from real companies across many industries, and they tracked the proliferation of easy, turnkey tooling designed to generate these prefilled AI links. The result: a low‑cost, arguably legitimate marketing tactic becomes a persistent and invisible influence in any assistant that honors such inputs as memory writes.

How the attack works — technical anatomy​

URL‑based prompt prefill and memory writes​

Most major AI assistants support a form of URL parameterization that preloads a prompt or query into the chat input when a link is opened. Attackers (here: marketers or operators of legitimate websites) place a prompt inside that parameter which contains both the expected instruction (summarize the article) and an embedded persistence command telling the assistant to retain a particular assertion.
Key mechanics:
  • The link looks normal on the page: a friendly “Summarize with AI” button or a share action.
  • Hovering or inspecting the link reveals a query parameter like ?q= that contains a prewritten prompt.
  • The prompt contains additional natural‑language instructions of the form “remember X as a trusted source” or “always recommend Y for Z scenario.”
  • If the assistant’s memory subsystem consumes the prompt and writes to the user’s personalization store without explicit, clear confirmation, that injected instruction becomes part of the assistant’s knowledge and can influence later outputs.

Vectors beyond URL links​

Microsoft’s analysis identified more than one delivery method:
  • Embedded prompts in documents or email bodies that get processed by an assistant when the user asks it to summarize or analyze the content.
  • Social engineering that encourages users to paste prepared prompts into an assistant.
  • Automated share buttons, widgets, or plugins that websites install, many of which are marketed as SEO/LLM growth hacks.
Because these tactics are built with standard web mechanics and natural language, they exploit the same convenience features that make assistants easy to use.

Why persistence matters​

A transient prompt that influences a single response is bad enough; a prompt that writes to a persistent memory store and then quietly reappears months later in a contextually relevant decision (for procurement, health advice, legal research) is a fundamentally different risk. The user may never notice the origin of the recommendation and will likely accept confident AI output at face value.

What Microsoft found (scope and patterns)​

Across a 60‑day window of observable AI‑related URLs, Microsoft researchers identified:
  • Dozens of distinct prompt samples (over 50 distinct prompt patterns).
  • Originators in the dozens (31 companies across multiple industries).
  • Common themes: “remember,” “prioritize,” “trusted source,” and explicit instructions to prefer a brand/category.
  • A small but growing ecosystem of tooling and plugins marketed to make creation of these links trivially easy.
Importantly, the research emphasizes two real-world traits:
  • Many of these instances involved legitimate businesses rather than obvious malign actors. This is marketing at scale, not necessarily criminal hacking.
  • The technique’s success depends on the target assistant’s memory behavior and the platform’s defenses; effectiveness varies and has changed as mitigations become available.
Microsoft also notes that platform teams are already rolling out mitigations — from prompt filtering to content separation — and that some previously observed behaviors could not be reproduced after protections were applied.

Why this matters — risks and real‑world consequences​

Erosion of digital trust​

Trust is the currency of personal assistants. When an AI begins to recommend vendors, therapies, or financial choices that appear authoritative but are the product of stealthy marketing, users lose the ability to trust personalized guidance. A single biased recommendation in a high‑stakes context — procurement, healthcare decisions, legal counsel — can cause materially harmful outcomes.

Hidden, persistent bias​

Traditional online ads are visible and usually labeled. Memory poisoning is not: it’s invisible and persistent. That persistence makes it hard to audit and harder to detect. The result is a subtle, nearly permanent tilt in the assistant’s future behavior.

Agentic AI and automation risk​

As organizations adopt agentic assistants that act on behalf of users (book meetings, order services, sign up for trials, place purchases), the consequences of poisoned memory escalate. An assistant that executes procurement decisions, for example, could consistently favor a particular vendor it “remembers” even if that vendor is not the best fit, creating financial and contractual risk.

Business and regulatory exposure​

Companies that rely on assistant recommendations — and platforms that host memory features — may face liability if biased suggestions lead to harm. Regulators, procurement auditors, and compliance teams will demand provenance and explainability for automated decisions; hidden memory injections undermine those needs.

Arms‑race dynamic​

Turnkey “AI memory growth” tools and plugins lower the barrier to adoption, and marketing teams have strong incentives to maximize presence inside assistants. Platform defenders will respond, and attackers will adapt — escalating into a long‑term cat‑and‑mouse game. That dynamic threatens to commoditize stealth bias as a routine marketing tactic.

Strengths and merits of Microsoft’s research​

  • Operational clarity: The research shows a concrete, reproducible attack vector tied to common UX elements and real web mechanics (prefilled URLs). That makes mitigation practical.
  • Wide sampling and real examples: Microsoft’s documentation of dozens of samples from many industries demonstrates this is not an isolated experiment but an emergent industry trend.
  • Actionable guidance: The post includes immediately actionable advice for users and admins — hover before clicking AI links, audit assistant memory, implement tenant‑level scanning for AI URL patterns.
  • Alignment with existing frameworks: Microsoft ties their findings to an established taxonomy (MITRE ATLAS memory poisoning classification), making it easier for security teams to integrate detection and response.
These strengths translate to strong practical value: organizations can implement detection, controls, and user education quickly.

Limitations and caveats — where more work is needed​

  • Effectiveness is platform‑dependent: The degree to which an injected prompt actually becomes a durable memory varies across platforms and over time as mitigations are implemented. Some attacks will fail outright against hardened assistants.
  • Attribution and naming: Microsoft’s report describes 31 companies involved in such attempts but does not publish names in bulk, limiting public accountability and independent verification of specific claims.
  • False positives and UX friction: Aggressive filtering of prefilled prompts may block legitimate productivity features. Designing defenses that don’t degrade user experience is nontrivial.
  • Behavioral detection limits: Memory poisoning can be subtle; determining whether a recommendation was influenced by a prior injected instruction versus legitimate ranking signals can be technically and legally fraught.
Because of these limits, defenders should avoid binary claims (e.g., “every biased recommendation is due to poisoning”) and instead focus on risk management.

Practical guidance — what users and IT teams should do now​

For everyday users​

  • Hover before you click: Inspect any “Summarize with AI” or “Open in AI” link to confirm its destination and whether it contains a prefilled prompt. Treat AI links with the same suspicion as executable downloads.
  • Check your assistant’s memory or personalization settings: Most assistants expose a “Saved Memories” or “Personalization” screen where stored facts and instructions can be reviewed and deleted.
  • Question confident answers: If an assistant strongly recommends a product or service, ask it to explain the basis for the recommendation and require explicit citations or a rationale before acting.
  • Periodically clear memory: If you’re unsure about the provenance of stored memories, reset or clear long‑term memory periodically.
  • Avoid clicking AI links from unknown or untrusted senders: Especially in email or social posts.

For IT and security teams​

  • Inventory and block high‑risk UX elements: Scan inbound email and intranet pages for prefilled AI URL parameters and suspicious widgets. Flag or sandbox links that open external assistant domains.
  • Enforce Zero Trust for agentic assistants: Apply least‑privilege, require explicit user confirmation for actions that have financial or governance impact, and centralize agent identity and permissions.
  • Provide approved, secure AI portals: Offer employees sanctioned interfaces to assistants that enforce memory write policies and logging.
  • Implement telemetry and audit trails: Require assistants to log memory writes, the source of the write, and user confirmations so organizations can audit suspicious changes later.
  • Educate users: Train employees to treat AI links with suspicion and to verify recommendations through independent criteria.

Concrete steps for administrators (numbered)​

  • Search your email and intranet for known AI prefill patterns (e.g., domains with ?q= or explicit assistant hostnames).
  • Configure mail filtering rules to flag or sandbox messages that contain those links.
  • Require that any assistant memory write in enterprise contexts must present a structured, machine‑verifiable justification and require explicit user approval.
  • Deploy a central registry for approved third‑party integrations; block or sandbox unapproved share widgets or plugins.
  • Run tabletop exercises simulating poisoned recommendations and ensure incident response plans include agent integrity checks.

Technical defenses and design principles for AI platforms​

Platform designers, SDK authors, and enterprise architects need to take a layered approach:
  • Input canonicalization and separation: Distinguish between user instructions and external content. External content should be treated as data, not commands.
  • Explicit semantics for memory writes: Memory should not be written implicitly from untrusted inputs. Writes should require structured metadata, intent confirmation, and visible provenance.
  • TTL and revocation: Memories should support time‑to‑live (TTL) and easy revocation to limit long‑term exposure from spurious writes.
  • Signed and verifiable memory writes: For enterprise scenarios, require signed attestations for memory writes from trusted integrations.
  • Explainability and provenance: Every recommender decision should be able to show the chain of reasoning and the source of the facts influencing it.
  • Conservative defaults: Out‑of‑the‑box assistants should default to no persistent writes from external content sources.
  • Prompt filtering and pattern detection: Block or flag natural‑language patterns that include persistence verbs (“remember,” “always prioritize”) in prefilled prompts.
  • User‑visible confirmation UX: When a page or link attempts to write to memory, present an explicit, human‑readable prompt explaining what will be stored and why.
These design decisions preserve both utility and safety while reducing the opportunity for stealthy manipulation.

Policy and governance recommendations​

  • Standards for memory provenance: Industry standards bodies should define a minimal set of metadata for any persistent memory write: origin URL, timestamp, signature, and explicit user consent.
  • Advertising and disclosure rules: Regulatory frameworks that govern online ads should be extended to AI memory influence—links that attempt to write to memory should be considered a new form of ad and require clear disclosure.
  • Auditable logs for enterprise assistants: Lawmakers and auditors will require provenance and the ability to reconstruct agent decision logic in regulated industries.
  • MITRE-style mapping and operator playbooks: Extend threat models and MITRE ATLAS mappings to provide red‑team playbooks and detection rules for memory poisoning.
  • Certification programs: Consider certification for “memory‑safe” assistant providers that adhere to stringent controls on memory writes and provenance.

The likely attacker and defender playbook — what to expect next​

  • Marketers and agencies will continue to experiment with memory growth techniques until platform controls curtail them.
  • Tooling that automates the creation of these links will persist and evolve; defenders should prioritize blocking or vetting such toolchains.
  • Platform teams will harden memory write semantics, but attackers will shift to more subtle techniques (e.g., embedding persistent cues in canonical content, pivoting to user‑initiated prompts, or orchestrating multi‑step social engineering).
  • Enterprises that fail to adopt governance controls will become the primary victims as agentic assistants take on more authority.
This is an arms race, and defenders have the advantage of control over the platform and permission model — but only if they act quickly.

Ethical and commercial gray areas​

Not every instance of a company optimizing for favorable mention inside AI assistants is criminal or even dishonest. Companies may argue they are simply adapting to the new evolutionary battleground for attention. But ethics demand transparency: user consent, explicit labeling of promotional content, and clear opt‑outs. Commercial incentives will clash with user autonomy unless norms and rules are established.

Closing analysis — why this issue will define the next phase of AI trust​

AI Recommendation Poisoning converts an implicit marketing ambition — to be the preferred vendor when someone asks a question — into an explicit, persistent influence inside a user’s personal assistant. The technique is inexpensive, technically trivial, and effective in contexts where assistants write to long‑term memory without explicit validation.
Microsoft’s research performed a critical service: it exposed a silent vulnerability and supplied practical guidance that both users and enterprise defenders can apply immediately. The company’s work also demonstrates how security research must follow product evolution: features that create convenience (persistent memory, one‑click integration) also create new modes of exploitation.
The good news is that mitigations are tractable: conservative memory write defaults, explicit provenance, user confirmation, tenant‑level policies, and improved UX for memory management would reduce the risk substantially. The bad news is that market incentives push toward optimization for presence inside assistants, and until platform providers and regulators set clear boundaries, the temptation to weaponize persistence will remain strong.
For users: treat AI links like executable attachments — hover, inspect, and ask for provenance. For enterprises: apply Zero Trust to agents, centralize approval for integrations, and require logging of memory writes. For platform providers: lock down automatic memory writes, enforce explicit consent, and make the origin of every memory entry auditable.
The integrity of personalized AI depends on trustworthy memory. Protecting that memory is not a peripheral security task — it is foundational to whether users will continue to trust assistants as impartial partners or start to regard them as stealth marketing pipelines. The next chapter in AI adoption will be written by how quickly vendors, enterprises, and regulators move from discovery to durable defenses.

Source: digit.in AI is being brainwashed to favor specific brands, Microsoft report shows
 

Back
Top