Starting this fall, the U.S. House of Representatives will begin a controlled pilot giving thousands of House staffers access to Microsoft Copilot — a marked reversal from a 2024 prohibition — as leadership frames the move as a pragmatic modernization push that must be matched by strict technical, legal, and audit controls. (axios.com)

Background​

The announcement, unveiled at the bipartisan Congressional Hackathon and presented by Speaker Mike Johnson, signals a transition from a one‑year‑old restriction to a staged, auditable experiment intended to evaluate how generative AI can support legislative workflows. Leadership described the deployment as accompanied by “heightened legal and data protections,” while the operational rollout is being presented as a one‑year pilot making up to 6,000 licenses available to staffers across offices. (axios.com)
This move arrives after a high‑profile enforcement decision in March 2024 when the House’s Office of Cybersecurity declared commercial Microsoft Copilot “unauthorized for House use” and removed the software from House Windows devices amid concerns that user inputs could be processed in non‑House cloud services. That ban became a baseline example of how public institutions approached commercial generative AI before government‑grade vendor offerings and procurement pathways matured. (reuters.com)

Overview: What was announced and why it matters​

  • The House will provide access to Microsoft Copilot as a managed pilot beginning this fall, with leadership framing the step as part of a modernization effort for legislative offices.
  • The pilot is framed to include “heightened legal and data protections,” though the public announcement so far does not publish the granular technical architecture, tenancy, or contractual non‑training guarantees necessary to independently verify that claim.
  • The decision is enabled in part by changes in the federal procurement and product landscape: Microsoft and other AI vendors have expanded government‑oriented offerings (e.g., Copilot in government clouds), and the General Services Administration’s OneGov procurement strategy makes enterprise AI licenses easier and cheaper for federal entities. (techcommunity.microsoft.com)
Why this matters: the legislative branch writes rules about technology while simultaneously adopting it. Hands‑on use can improve lawmakers’ understanding of AI trade‑offs, but it also raises questions of parity, accountability, and public trust: will Congress apply to itself the same safeguards it expects from private companies?

Timeline: From ban to pilot​

March 2024 — Restriction and removal​

In March 2024 the House Office of Cybersecurity and the Chief Administrative Officer ordered Copilot removed and blocked from House Windows machines amid data leakage concerns. That decision reflected real operational risks around model telemetry and off‑tenant processing. (reuters.com)

2024–2025 — Product and procurement evolution​

Over the following 12–18 months, vendors accelerated development of government‑targeted variants and sought FedRAMP and DoD‑level authorizations. Microsoft publicly targeted Copilot and Azure OpenAI components for government cloud environments (GCC High / Azure Government / DoD) and announced FedRAMP High authorizations for certain services — materially changing the technical options available to cautious government IT teams. Meanwhile, GSA’s OneGov strategy consolidated procurement options for cloud and AI services, including a major OneGov agreement with Microsoft that dramatically reduces short‑term costs for M365 and Copilot offerings. (techcommunity.microsoft.com)

September 2025 — Converting caution into a governed pilot​

Speaker Mike Johnson introduced the pilot at the Congressional Hackathon, announcing the one‑year staged rollout and the initial licensing scope (up to 6,000 staffers). Officials emphasized that the program will aim to “better serve constituents and streamline workflows,” while continuing to evaluate other AI vendors. The House framed the effort as part of a broader push to “win the AI race.” (axios.com)

What is Microsoft Copilot — technical primer​

Microsoft Copilot is an umbrella name for AI assistants integrated across Windows and Microsoft 365 apps (Word, Excel, PowerPoint, Outlook, and Teams). Under the hood, Copilot uses large language models (LLMs) and multimodal routing to perform tasks such as drafting email, summarizing documents or hearings, extracting structured data from spreadsheets, and automating repetitive administrative functions.
Crucially for government use, Microsoft now offers variants and deployment options aimed at isolated government tenancy:
  • Azure Government / GCC High / DoD environments are designed to keep data and inference processing within approved cloud boundaries.
  • FedRAMP High authorizations and other compliance pathways are being pursued to make enterprise Copilot viable for regulated customers.
  • Management and governance features (role‑based access, telemetry controls, data grounding) are part of the enterprise product roadmap that Microsoft highlights for public sector adopters. (techcommunity.microsoft.com)
These architectural choices — where inferences run, where telemetry is stored, and whether user inputs are used for vendor model training — determine whether a Copilot deployment can be made compatible with legislative confidentiality, constituent privacy, and records‑management obligations.

The House plan: available details and immediate gaps​

What is public so far:
  • The pilot will begin this fall as a one‑year phased deployment and will make licenses available to a sizable portion of staff in each office — reported figures put the initial scale at up to 6,000 staffers.
  • The Chief Administrative Officer has communicated that the deal brings Microsoft’s M365 product suite, which includes Outlook and OneDrive, into the chamber under negotiated terms.
  • Leadership claims the pilot will operate with “heightened legal and data protections,” and that the House will continue discussions with other AI vendors.
What is not yet public (critical details the House should publish before expansion):
  • Cloud tenancy and data residency: Is the pilot running in Azure Government/GCC High, a dedicated government tenant, or on commercial Microsoft cloud infrastructure? The difference matters for compliance and verification of protections.
  • Non‑training contractual guarantees: Will Microsoft be contractually prohibited from using House inputs to train vendor models? Independent verification of non‑training clauses is necessary to restore the trust that sparked the 2024 ban.
  • Auditability and immutable logs: Will every Copilot interaction be recorded, exportable, and auditable by House oversight bodies and the Inspector General? Without immutable provenance, post‑hoc accountability is weakened.
  • Records, FOIA, and retention rules: How will AI‑generated drafts be treated under congressional records law and Freedom of Information rules? Will outputs that draw on third‑party datasets be archived or restricted?
Until these artifacts are published — technical white papers, contract excerpts, tenancy configurations, and an audit plan — claims of “heightened protections” remain directional rather than independently verifiable.

Expected benefits for House workflows​

If implemented with tight governance, Copilot can deliver measurable efficiency gains on routine tasks that consume disproportionate staff time:
  • Rapid drafting and iteration of constituent responses, memos, and press materials.
  • Summarizing long transcripts, committee testimony, and reports into digestible briefings for members.
  • Extracting and reshaping data from spreadsheets and producing tables or charts for hearings and briefings.
  • Triage and categorization of inbound email to prioritize constituent cases and flag urgent items (a simple illustrative sketch follows this list).
For smaller congressional offices that operate with thin staffing, these productivity improvements could translate into better constituent service and more time for substantive policy work — provided outputs are treated as drafts requiring human review and attribution.
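To make the triage idea concrete, the sketch below scores and routes an inbound message using simple keyword cues; an AI assistant would replace this crude matching with actual language understanding. Everything here (the categories, cue lists, and the `triage` helper) is a hypothetical illustration for this article, not part of any House or Microsoft system.

```python
# Hypothetical routing categories and keyword cues; an LLM-backed triage
# would infer urgency and topic from context rather than keyword lists.
URGENT_CUES = ("eviction", "deadline", "medical emergency", "passport", "deployment")
CASEWORK_CUES = ("va claim", "social security", "irs", "immigration", "benefits")

def triage(subject: str, body: str) -> str:
    """Return a coarse routing label for an inbound constituent email."""
    text = f"{subject} {body}".lower()
    if any(cue in text for cue in URGENT_CUES):
        return "urgent-casework"       # flag for same-day human attention
    if any(cue in text for cue in CASEWORK_CUES):
        return "casework"              # route to the caseworker queue
    return "general-correspondence"    # standard reply workflow

print(triage("Help with VA claim", "My father's VA claim has a deadline next week."))
# -> "urgent-casework"
```

Even in this toy form, the output is only a suggested routing; the staff member who owns the case still decides how to respond.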

Technical and governance checklist the House must enforce​

To meaningfully reduce the risks that led to the 2024 prohibition, the pilot must be accompanied by binding technical and procedural controls. Key non‑negotiables include:
  • Dedicated government tenancy and data residency: host Copilot processing and telemetry in an isolated government cloud (Azure Government / GCC High / DoD) with FedRAMP High or equivalent certification. (techcommunity.microsoft.com)
  • Contractual non‑training and usage limits: explicit, auditable contract clauses that prevent House inputs from being used to train vendor models without express consent and oversight.
  • Role‑based access and least‑privilege provisioning: provision licenses only to staff with defined use cases and access justifications; use granular RBAC and session controls.
  • Immutable logging and external auditability: generate time‑stamped, tamper‑resistant logs of prompts, sources accessed, and outputs, and give the Inspector General or third‑party auditors access to them (a minimal logging sketch follows this checklist).
  • Human‑in‑the‑loop mandates and record rules: require human sign‑off on any AI‑assisted material released publicly or used in legislative drafting, and update records retention policies and FOIA guidance to explicitly cover AI‑assisted documents.
  • Ongoing red‑team testing and monitoring: conduct adversarial testing for data exfiltration, model hallucination, and template abuse; run periodic compliance and privacy assessments.
Implementing these controls will not make Copilot risk‑free; it will, however, transform the deployment from an opaque tool into a measurable, auditable service aligned with public sector obligations.
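To illustrate what "tamper‑resistant" logging can mean in practice, the sketch below chains each Copilot interaction record to the hash of the previous record, so any after‑the‑fact alteration breaks the chain and is detectable on audit. This is a minimal Python sketch under assumed names (`append_interaction`, `verify_chain`, and the record layout are hypothetical), not a description of how the House or Microsoft actually implements Copilot telemetry.

```python
import hashlib
import json
import time

def _hash(record: dict) -> str:
    # Canonical JSON so the same record always hashes to the same value.
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

def append_interaction(log: list[dict], user: str, prompt: str,
                       sources: list[str], output: str) -> dict:
    """Append one Copilot interaction, chained to the previous record's hash."""
    record = {
        "timestamp": time.time(),
        "user": user,                                 # who asked
        "prompt": prompt,                             # what was asked
        "sources": sources,                           # documents the assistant touched
        "output_digest": _hash({"output": output}),   # digest of the response, not raw text
        "prev_hash": log[-1]["hash"] if log else None,
    }
    record["hash"] = _hash(record)
    log.append(record)
    return record

def verify_chain(log: list[dict]) -> bool:
    """Auditor check: recompute every hash and confirm the chain is unbroken."""
    prev = None
    for rec in log:
        body = {k: v for k, v in rec.items() if k != "hash"}
        if rec["prev_hash"] != prev or _hash(body) != rec["hash"]:
            return False
        prev = rec["hash"]
    return True

# Example: two interactions, then an auditor verification pass.
log: list[dict] = []
append_interaction(log, "staffer@house.example", "Summarize HR 1234", ["HR1234.pdf"], "Draft summary...")
append_interaction(log, "staffer@house.example", "Draft constituent reply", [], "Dear constituent...")
assert verify_chain(log)
```

In a real deployment the chain head would be anchored in a system the vendor cannot modify, and raw prompt text would be retained or redacted according to the House's records and classification rules rather than stored wholesale.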

Risks and failure modes — why the 2024 caution still matters​

Even with enterprise controls, generative AI introduces new operational and legal risks:
  • Data exfiltration: Incorrect tenancy or misconfiguration could permit House inputs or metadata to leave approved cloud boundaries. The 2024 ban was primarily motivated by this risk.
  • Hallucination and legal exposure: LLMs can produce plausible but incorrect language, which is especially dangerous in legal text, legislative language, or constituent advice. AI‑generated inaccuracies might create reputational or legal liabilities if not caught.
  • Accountability gap: If staff increasingly rely on AI drafts, tracing responsibility for erroneous or defamatory content becomes harder without clear attribution and sign‑off policies.
  • Vendor lock‑in and downstream costs: Promotional pricing (e.g., initial free or $1 offers) can accelerate adoption but may entrench a vendor’s platform and increase long‑term costs and migration friction. The GSA OneGov pricing window reduces near‑term procurement barriers, but offices should assess total cost of ownership beyond the pilot term. (gsa.gov)
  • Transparency and public trust: The House is uniquely vulnerable to criticism if protections are perceived as weaker than the standards lawmakers demand externally. Deploying Copilot without transparent contractual and technical artifacts would heighten political backlash.
Where possible, the pilot should evaluate risk metrics quantitatively: incident counts, false‑positive/false‑negative rates for red‑team tests, audit completeness, and human review failure rates.

Procurement and cost dynamics: why OneGov matters​

Procurement realities shaped this pivot. The GSA’s OneGov agreements have created centralized pathways for agencies — including the legislative branch — to access vendor products under standardized terms and steep discounts. Microsoft’s OneGov arrangement with GSA makes Microsoft 365 Copilot broadly available on favorable terms, including limited free access for qualifying government customers during initial opt‑in windows. That economic backdrop reduces the short‑term financial friction of piloting Copilot at scale. (gsa.gov)
However, procurement incentives should not be the sole driver of adoption. Pilot decisions must weigh:
  • Long‑term vendor dependency and migration complexity.
  • Contractual commitments and the ability to enforce non‑training and audit clauses beyond promotional windows.
  • Whether the GSA vehicle binds the House to renewal terms that complicate future competition or create single‑vendor lock‑in.

Operational rollout: recommended staged approach​

A principled, conservative rollout will balance learning with safety. Recommended stages:
  • Narrow initial pilot (3 months): limit access to a few offices and non‑sensitive workflows (e.g., public outreach templates, non‑privileged summaries), and collect baseline metrics and error reports.
  • Expanded pilot with audit hooks (3–6 months): increase to additional offices (up to the reported 6,000 licenses) while enforcing immutable logging and Inspector General review.
  • Independent evaluation and transparency: publish a technical white paper describing tenancy, logging, contractual non‑training clauses, and audit results before broader adoption.
  • Conditional broadening or rollback: use measurable thresholds (incident counts, compliance pass rates, red‑team results) to decide whether to expand, pause, or roll back (a simple gating sketch follows this list).
This staged approach allows the House to realize productivity gains while keeping the institution accountable to its own standards.
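To show how such thresholds make expansion decisions mechanical rather than discretionary, here is a minimal gating sketch. The metric names and cut‑off values are hypothetical placeholders; the real numbers are exactly what the House should publish in advance.

```python
from dataclasses import dataclass

@dataclass
class PilotMetrics:
    security_incidents: int        # confirmed data-handling incidents in the review period
    compliance_pass_rate: float    # share of sampled interactions that met policy (0-1)
    red_team_pass_rate: float      # share of adversarial tests the controls withstood (0-1)
    audit_log_completeness: float  # share of interactions with a verifiable log entry (0-1)

def expansion_decision(m: PilotMetrics) -> str:
    """Return 'expand', 'pause', or 'rollback' using illustrative, hypothetical thresholds."""
    if m.security_incidents > 0 or m.audit_log_completeness < 0.99:
        return "rollback"   # any confirmed incident or missing logs is disqualifying
    if m.compliance_pass_rate < 0.95 or m.red_team_pass_rate < 0.90:
        return "pause"      # hold scale steady while gaps are remediated
    return "expand"

# Example evaluation at the end of a pilot phase.
print(expansion_decision(PilotMetrics(0, 0.97, 0.93, 0.995)))  # -> "expand"
```

The value of writing the gate down is less the code than the commitment: once thresholds are public, a decision to expand despite a failed gate is itself visible and reviewable.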

Political and ethical considerations​

Bringing Copilot into the chamber has broad symbolic implications:
  • Lawmakers will now be materially affected by the capabilities and limitations of tools they debate and regulate. That may improve legislative sophistication but also creates a conflict of interest if internal protections are not at least as stringent as external regulatory expectations.
  • Use of AI in constituent interactions raises equity and ethics questions: standardized templates and AI‑augmented responses can speed service delivery but might also homogenize communications and hide human decision‑making in sensitive cases. Ethical guidance and disclosure rules should be part of the pilot charter.

What to watch next​

  • Publication of the House CAO’s technical guidance, tenancy details, and contract excerpts that specify non‑training commitments and telemetry handling. The presence — or absence — of these artifacts will determine whether the “heightened protections” claim is verifiable.
  • Inspector General or independent third‑party audit results demonstrating that logs and controls match public claims.
  • Whether the House uses the GSA OneGov vehicle or a different contracting route, and the specific terms that will apply beyond the initial pilot window. (gsa.gov)
  • Congressional oversight activity: hearings from relevant committees that examine the deployment, recommend guardrails, and clarify records and FOIA implications.

Final assessment​

The House’s decision to pilot Microsoft Copilot for staff this fall is consequential and, in many ways, overdue: hands‑on experience inside the institution is essential for informed policymaking. The decision also reflects real changes in the product and procurement ecosystem — Microsoft’s push to certify Copilot for government clouds and the GSA’s OneGov pricing strategy materially change the options available to congressional IT teams. (techcommunity.microsoft.com)
But this announcement is only the opening act. The difference between a responsible pilot and an opaque experiment will be determined by published technical details, enforceable contractual non‑training language, immutable audit trails, independent verification, and clear records and FOIA guidance. Until those elements are released and validated, claims of “heightened legal and data protections” should be treated as pledges requiring proof.
If the House couples its Copilot rollout with transparent documentation, rigorous oversight, and staged expansion tied to objective safety metrics, the pilot has the potential to become a replicable model for responsible institutional AI adoption. If it proceeds without those safeguards, the deployment risks becoming a cautionary example that accelerates regulatory backlash and erodes public trust. The coming weeks and months will reveal which path the institution chooses.

Quick takeaways (for IT teams and staff)​

  • Short term: Expect limited access under a one‑year pilot; treat all AI outputs as draft material requiring human sign‑off.
  • Security: Confirm which cloud tenancy (Azure Government / GCC High) hosts Copilot and insist on exportable, immutable logs.
  • Procurement: Be aware the GSA OneGov deal reduces upfront cost pressure but review longer‑term contractual commitments. (gsa.gov)
  • Governance: Demand published CAO/CIO guidance, independent audits, and rules for records/FOIA before widespread adoption.
This is a consequential institutional experiment in the interplay between AI, governance, and public service. Its success will be judged not by the novelty of having Copilot in the chamber, but by the transparency, auditability, and enforceability of the safeguards that accompany it.

Source: Newsmax https://www.newsmax.com/newsfront/house-mike-johnson-microsoft-copilot/2025/09/17/id/1226800/
 

The U.S. House of Representatives is moving from prohibition to pilot: beginning this fall, a limited rollout will make Microsoft Copilot available to Members of Congress and a subset of House staffers under a one‑year pilot that promises “heightened legal and data protections,” expands access to productivity tools across the chamber, and mirrors a broader federal push to bring enterprise AI into government workflows.

Overview​

This shift marks a striking reversal from the institution’s posture a year and a half ago, when the House barred staffers from using the commercial Copilot offering over data‑security concerns. The new approach is deliberately incremental: only a portion of staff in each office will be eligible, and as many as 6,000 licenses are expected to be made available for roughly a year. The rollout will begin this month and continue through November, with senior staff and leadership also included in the initial wave. The pilot pairs the Copilot chatbot with the chamber’s Microsoft 365 footprint, and House leadership has emphasized that the deployment will include enhanced legal review and data protections designed to meet congressional security needs.

Background​

From ban to blueprint​

The House’s earlier ban on Copilot reflected acute worries about data exfiltration from legislative systems to non‑approved cloud environments. That decision came after the Office of Cybersecurity identified the commercial Copilot offering as a potential risk for leaking House data. Since then, Microsoft and other AI vendors have marketed versions of enterprise and government‑targeted tools with contractual and technical commitments intended to limit data usage and increase controls.

Why now​

Two forces converged to make a pilot politically and operationally viable. First, major AI vendors have structured government‑friendly procurement offers and price points designed to accelerate adoption across federal entities. Second, congressional leaders framed the move as necessary to modernize workflows, reduce routine burdens, and unlock cost efficiencies — positing that AI can increase legislative capacity while trimming time spent on paperwork and constituent casework.

What the pilot will look like​

Scope and schedule​

  • The pilot will run for roughly one year, with licensing available to up to 6,000 House staffers.
  • Deployment begins immediately this fall and is staggered, continuing through November to allow offices time to train users and integrate protections.
  • Access is limited within offices: only selected staffers in each office — often those whose roles focus on research, drafting, or constituent services — will receive licenses during the pilot.

Product and protections​

  • The deployment links Copilot capabilities to the House’s Microsoft 365 ecosystem, giving staffers integrated assistance inside Word, Excel, Outlook, and other productivity apps.
  • Officials assert the pilot includes heightened legal protections and data controls compared with commercial consumer offerings. These protections are intended to address the previous security objections that led to the ban.

Change management and governance​

  • The rollout is framed as a managed pilot: offices must follow use policies set by the Chief Administrative Officer and the Office of Cybersecurity.
  • Training and oversight are central parts of the program, and the pilot will likely produce guidance on permitted workflows, redaction practices, and audit logging for subsequent expansion decisions.

Where this fits in the federal AI landscape​

The House pilot is not happening in isolation. The executive branch and major AI vendors have pursued large‑scale arrangements that make enterprise AI accessible to public sector organizations at nominal cost and with FedRAMP‑level assurances. Several major AI providers have structured government programs that lower the price barrier for agencies and emphasize compliance features, and procurement authorities have been working to make those offerings broadly available to federal users.
At the same time, federal IT‑asset reviews and watchdog reports have exposed heavy concentration of software licensing among a small set of vendors. In particular, Microsoft represents a substantial share of software licensing spend across major agencies, a contextual factor that shapes both procurement leverage and political debate when the government enacts broad technology decisions.

The practical upside: productivity, speed, and savings​

AI assistants like Copilot can deliver immediate and measurable gains across routine congressional tasks:
  • Faster drafting and research — summarizing bills, preparing briefings, and producing constituent correspondence in less time.
  • Improved constituent service — triaging casework, drafting reply templates, and extracting key facts from long messages.
  • Data‑driven support — generating roll call analyses, collating research citations, and helping with fact‑checking when properly constrained.
  • Potential cost savings — automating repetitive tasks can reduce staff hours spent on administrative work and enable reallocation to higher‑value legislative activities.
These benefits are most likely to be realized in offices that pair Copilot access with clear workflows, staff training, and auditing to ensure outputs are validated before public use.

Risks and unresolved technical challenges​

Data protection and leakage​

The central reason for last year’s ban — risk of sensitive data exposure — remains the most consequential worry. Even enterprise AI products that promise not to use customer inputs to retrain models still require careful architecture to prevent inadvertent transfer of classified, constituent, or privileged information into systems that could be mirrored elsewhere.
  • Operational risk: staffers might paste or upload non‑public text into prompts despite policies.
  • Technical gaps: telemetry, logging, and clear separation between experimental outputs and authoritative records are necessary but not always uniformly implemented.
  • Legal exposure: inadvertent disclosure of privileged communications could create legal and reputational liabilities for offices and for the institution.

Model hallucination and factual reliability​

Large language models sometimes produce plausible‑sounding but incorrect responses. In a congressional context, that risk translates into:
  • Erroneous legislative language or inaccurate summaries that could mislead votes or constituent communications.
  • Overreliance on model outputs instead of primary sources, particularly for legal or regulatory drafting.
Strong editorial workflows and human‑in‑the‑loop verification will be essential to mitigate these hazards.

Bias, fairness, and political risk​

Models can reflect biases present in training data. For Members and staff operating in a politically charged environment, even subtle framing errors or asymmetric outputs may be weaponized by opponents or amplified by media scrutiny. Controls over training data provenance and guardrails for politically sensitive content are needed but not foolproof.

Vendor lock‑in and procurement concentration​

The House’s alignment with Microsoft 365 and Copilot raises questions about vendor concentration and long‑term flexibility. Given Microsoft’s sizable share of federal software licensing, choices that lean heavily on a single vendor can:
  • Reduce competitive leverage in future procurements.
  • Make migration costly if the House later decides to adopt a different AI provider or a multi‑vendor strategy.
  • Amplify dependency on one vendor’s approach to security, compliance, and model governance.

Governance, auditability, and legal review: what must happen​

To move beyond a pilot and scale responsibly, the House will need to close several governance gaps:
  • Implement detailed usage policies that specify permissible prompt content, redaction requirements, and recordkeeping mandates.
  • Ensure audit logs capture who asked what, when, and how outputs were used in decision‑making — logs should be immutable and regularly reviewed by cybersecurity staff.
  • Mandate training and certification for users, with refresher courses and simulated misuse scenarios.
  • Define data classification boundaries to ensure Copilot prompts cannot include high‑sensitivity material (an illustrative pre‑screen sketch follows this list).
  • Establish third‑party assessments and penetration testing of the integration to validate controls and verify contractual commitments.
  • Create a transparency process that documents the pilot’s outcomes and publicizes redlines, while protecting legitimately sensitive information.
These steps, taken together, will help close the gap between promising technology and safe, accountable use in a public institution.
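One of those gaps, the data‑classification boundary, lends itself to a concrete technical control: screen prompts before they reach the assistant and block anything that matches high‑sensitivity patterns. The patterns and policy below are illustrative assumptions, not the House's actual classification rules, and a real deployment would pair this client‑side check with server‑side enforcement and logging.

```python
import re

# Hypothetical high-sensitivity patterns; a real policy would be far more complete.
BLOCKED_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "payment card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "classification marking": re.compile(r"\b(TOP SECRET|SECRET//|CONFIDENTIAL//)", re.I),
    "privilege marker": re.compile(r"\battorney[- ]client privileged?\b", re.I),
}

def screen_prompt(prompt: str) -> tuple[bool, list[str]]:
    """Return (allowed, reasons); block the prompt if any sensitive pattern matches."""
    hits = [label for label, pattern in BLOCKED_PATTERNS.items() if pattern.search(prompt)]
    return (len(hits) == 0, hits)

# Example: this prompt would be blocked before ever reaching the assistant.
allowed, reasons = screen_prompt("Draft a reply to Jane Doe, SSN 123-45-6789, about her case.")
print(allowed, reasons)  # -> False ['SSN']
```

Pattern matching of this kind catches only the obvious cases; it complements, rather than replaces, user training and human review.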

Political and institutional implications​

The choice to pilot Copilot is also a political signal: it demonstrates the House’s willingness to experiment with AI while balancing oversight and modernization narratives. That posture carries downstream impacts:
  • Bipartisan optics: framing the pilot as an efficiency and constituent‑service tool makes it easier to secure cross‑bench support, but political rancor over data security or vendor favoritism could still escalate.
  • Legislative workflow changes: sustained use could reshape staff roles, with fewer time‑consuming drafting tasks and a greater focus on policy analysis and stakeholder engagement.
  • Precedent for other branches: if the pilot succeeds, it may encourage broader legislative branch adoption and influence judicial or executive branch policies on AI procurement and governance.

How this pilot compares to other government AI programs​

Across the federal landscape, several vendors have offered government‑tailored AI products at nominal prices and with compliance features geared to the public sector. Recent government procurement vehicles and announcements have aimed to widen access while addressing security and FedRAMP requirements.
  • Multiple major AI vendors have publicized programs offering reduced pricing for government customers alongside options that meet higher security standards.
  • The federal procurement apparatus has sought to make enterprise AI available through centralized agreements and schedules, accelerating agency access while preserving oversight channels.
That context means the House pilot not only tests Copilot’s utility in legislative workflows, but also tests the mechanics of acquiring and governing modern AI inside a public‑sector environment rife with unique constraints.

Short‑term checklist for offices receiving Copilot access​

  • Require mandatory training for licensed users within the first two weeks of rollout.
  • Define a short list of permitted use cases (e.g., drafting constituent responses, summarizing public documents, generating meeting notes) and forbidden use cases (e.g., drafting privileged legal strategy, uploading classified or non‑public constituent PHI).
  • Implement pre‑publish review for any document created with Copilot that will be made public or used for official decision‑making.
  • Ensure audit logging is enabled and integrated with the House’s cybersecurity monitoring systems.
  • Appoint an office‑level AI steward responsible for compliance and user support.

Measured expectations: what success looks like​

Success for this pilot should be defined by concrete metrics that capture both productivity gains and risk containment. Recommended indicators include:
  • Reduction in average time to draft routine constituent replies.
  • Number of incidents or policy violations tied to Copilot usage (target: zero).
  • User satisfaction scores combined with accuracy assessments of AI‑produced drafts.
  • Completeness of audit logs and the speed with which suspicious activities can be detected and remediated.
  • Cost comparisons demonstrating whether AI‑enabled workflows deliver measurable efficiencies versus baseline staffing and time expenditures.
If these indicators show net gains without compromising data security or institutional integrity, the pilot can be elevated or expanded with confidence.

Critical analysis: strengths and blind spots​

Strengths​

  • Modernization potential: The pilot acknowledges that carefully configured AI can relieve staff of repetitive tasks, allowing human expertise to focus on higher‑value legislative work.
  • Controlled rollout design: Limiting access initially and framing the deployment as a pilot with audits and legal review demonstrates a cautious, accountable approach.
  • Alignment with broader federal moves: Bringing Copilot into the House fits within a wider wave of government‑oriented AI procurements, which will facilitate interagency interoperability and shared governance models.

Blind spots and concerns​

  • Operational enforcement is difficult: Policies are only as effective as enforcement. The greatest risk is informal or inadvertent misuse by well‑intentioned staff that erodes security protections.
  • Overreliance without verification: There is a temptation to accept polished AI outputs at face value, especially under time pressure; without strict human verification, the institution may absorb errors into official records.
  • Procurement and competition effects: Deepening integration with a single vendor risks crowding out competition and reduces the government’s flexibility to pivot as technology or threat models evolve.
  • Transparency versus confidentiality: The House must balance the need for transparency about AI use with legitimate confidentiality and national security considerations. That tension will be difficult to reconcile publicly.

Conclusion​

The House’s decision to pilot Microsoft Copilot for up to 6,000 staffers is an important case study in government adoption of generative AI: it combines ambition with restraint, promising productivity gains while attempting to address the data‑security fears that prompted the earlier ban. The experiment will be judged not just by whether staff find the tool useful, but by whether the institution can operationalize robust governance — enforceable usage policies, immutable audit trails, mandatory training, and a culture of human verification.
If those safeguards are implemented effectively, the pilot could become a model for how legislatures modernize without surrendering control over sensitive information. If they are not, the move risks reintroducing the very vulnerabilities that prompted the ban. The coming months will reveal whether the House’s cautious iteration yields the practical benefits its leaders promise, or whether it becomes another lesson in the complexities of bringing frontier AI into the public square.

Source: PYMNTS.com US House to Offer Microsoft Copilot to Members and Staffers | PYMNTS.com
 
