Here’s a summary of the key points from Microsoft’s 2025 Responsible AI Transparency Report, as shared on their official blog:
Overview
This is Microsoft’s second annual Responsible AI Transparency Report, building on their inaugural report from 2024.
The report covers new developments in building and deploying AI responsibly, how Microsoft supports customers and the broader ecosystem, and the lessons the company continues to learn as its practices evolve.
Microsoft highlights its commitment to creating AI technologies that people trust, emphasizing that trust and good governance benefit users and the business alike.
Key Themes & Takeaways
1. Investments in Tools and Practices
In 2024, Microsoft focused on expanding risk measurement tools and mitigation strategies for diverse modalities (text, images, audio, video) and new agentic/semi-autonomous systems.
They launched internal workflow tools to centralize responsible AI requirements for teams.
2. Regulatory Compliance & Customer Support
Microsoft took a proactive approach to comply with new regulations (e.g., EU AI Act), providing compliance resources to customers.
Early investments positioned the company to meet new regulatory requirements quickly as they took effect in 2024.
3. Risk Management
Microsoft applies consistent risk management across its AI portfolio, including oversight of high-impact AI uses and generative models.
Every flagship model released on Azure OpenAI Service and all “Phi” models undergo rigorous review and red teaming.
4. Hands-On Guidance for Sensitive Uses
Microsoft’s Sensitive Uses and Emerging Technologies team provides case-by-case guidance on high-impact and novel AI uses, especially in healthcare and sciences.
This early guidance informs the development and piloting of internal policies and guidelines.
5. Research and Collaboration
The AI Frontiers Lab was established to push the boundaries of AI in safety, efficiency, and capability.
Microsoft collaborates with stakeholders around the world on an ongoing basis to promote coherent AI governance and standards.
6. Focus for 2025 and Beyond
Microsoft will prioritize:
Developing more agile, adaptable risk management tools and fostering skills to anticipate rapid AI advances.
Supporting effective AI governance across the supply chain, clarifying the roles and expectations for developers, app builders, and users.
Advancing shared norms, standards, and tools for AI risk measurement and evaluation, contributing to the broader ecosystem and research.
Microsoft’s Position
Trust is central: Microsoft aims to foster broad, beneficial AI adoption by building technologies users can trust.
Microsoft is committed to sharing advances and insights publicly and welcomes stakeholder feedback to further improve AI governance.