Azure’s Storage Mover now supports direct cloud-to-cloud migrations from AWS S3 to Azure Blob Storage, delivering a fully managed, agentless path for moving large object stores into Azure with built-in orchestration, incremental sync, and observability—an offering that could reshape how organizations approach multicloud consolidation and large-scale data migrations.
Background
Cloud strategies continue to diverge: some organizations embrace multicloud for resilience and best-of-breed services, while others consolidate to a single cloud to simplify operations and reduce costs. Data mobility sits at the heart of those choices. Large object repositories—backup archives, telemetry dumps, AI training sets, and content libraries—are often measured in terabytes or petabytes, and moving them across clouds has traditionally required custom tooling, temporary compute, or third-party services.
Microsoft’s Azure Storage Mover, first introduced for on-premises file and NAS migrations, has evolved into a broader migration control plane. In October 2025, Microsoft announced the General Availability of cloud-to-cloud migration from AWS S3 to Azure Blob Storage using Azure Storage Mover. The feature enables server-to-server transfers without deploying self-hosted agents in the source cloud, leverages Azure Arc multicloud connectors for authentication, and integrates with Azure monitoring and governance controls.
This release is significant because it addresses several recurring migration pain points: removing custom scripts and ephemeral compute, preserving metadata during transfer, providing incremental sync to minimize cutover downtime, and giving teams centralized job tracking inside the Azure portal or CLI. The service is offered as a fully managed capability inside Azure Storage Mover and is positioned by Microsoft as a no-cost migration path (the service itself is listed as free; destination storage and egress charges may still apply under normal cloud billing models).
Overview: what Azure Storage Mover brings to cloud-to-cloud migration
Azure Storage Mover’s cloud-to-cloud capability is built around a few clear design goals:
- Remove the need for a client or bridge VM that reads from the source and writes to the destination (agentless, server-to-server transfers).
- Provide centralized orchestration, job tracking, logging, and observability from inside Azure.
- Preserve file/object metadata and support both one-time bulk migrations and incremental syncs for cutover scenarios.
- Enforce security and governance through Azure-native controls: role-based access control (RBAC), integration with Azure Active Directory (Microsoft Entra ID), and use of Azure Arc multicloud connectors for AWS authentication.
- Scale to very large datasets while imposing explicit limits to protect reliability and predictability.
How it works — architecture and operational flow
Azure Storage Mover’s cloud-to-cloud workflow is intentionally simple for operators while hiding a complex set of server-to-server interactions behind the portal and APIs.
High-level architecture
- Azure Storage Mover acts as the orchestrator: you create a Storage Mover resource in your Azure subscription and define migration projects and jobs.
- Authentication to AWS is handled via Azure Arc multicloud connectors (for AWS). The multicloud connector registers your AWS account/buckets to Azure and allows Storage Mover to orchestrate secure access to S3.
- Transfers are executed server-to-server: Azure’s backend services read from S3 and write to Azure Blob Storage using parallel transfer pipelines, avoiding the need to route data through customer-managed VMs or client endpoints.
- Observability and logging surface in the Azure portal and route into Azure Monitor and Log Analytics for telemetry and troubleshooting.
Typical migration steps
- Deploy a Storage Mover resource in your Azure subscription.
- Create an Azure Arc multicloud connector for AWS and grant it access to the S3 buckets you intend to migrate.
- Register source (S3) and target (Azure Blob) endpoints in Storage Mover.
- Run discovery to inventory objects and estimate job sizes.
- Create and execute a migration job (one-time copy or incremental sync).
- Use Azure Monitor / Log Analytics and job logs to validate progress and troubleshoot.
- Perform post-migration validation and finalize cutover (enable clients to point at Azure, decommission S3 buckets when appropriate).
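For teams that prefer to script these steps rather than run them from the portal, the same flow can be driven through Azure's management APIs. The sketch below is a minimal illustration using the azure-mgmt-storagemover Python package; it assumes the S3 source and Blob target endpoints have already been registered, and all resource names, model shapes, and copy modes shown here should be verified against the current SDK reference before use.

```python
# Minimal sketch of scripting a Storage Mover job with the Azure management SDK.
# Assumes endpoints for the S3 source and Blob target are already registered;
# names are placeholders and model/operation shapes should be checked against
# the current azure-mgmt-storagemover documentation.
from azure.identity import DefaultAzureCredential
from azure.mgmt.storagemover import StorageMoverMgmtClient
from azure.mgmt.storagemover.models import JobDefinition, Project, StorageMover

subscription_id = "<subscription-id>"
rg, mover, project, job = "migration-rg", "s3-to-blob-mover", "phase1", "bulk-copy"

client = StorageMoverMgmtClient(DefaultAzureCredential(), subscription_id)

# 1. Create (or update) the Storage Mover resource that orchestrates migrations.
client.storage_movers.create_or_update(
    rg, mover, StorageMover(location="eastus", description="S3 -> Blob migration")
)

# 2. Group related jobs into a project.
client.projects.create_or_update(rg, mover, project, Project(description="Phase 1"))

# 3. Define the job: which registered source/target endpoints to use and the copy mode.
client.job_definitions.create_or_update(
    rg, mover, project, job,
    JobDefinition(
        copy_mode="Additive",                # assumption: "Mirror" for sync-style runs
        source_name="s3-source-endpoint",    # pre-registered S3 endpoint (placeholder)
        target_name="blob-target-endpoint",  # pre-registered Blob endpoint (placeholder)
    ),
)

# 4. Start a run; progress then surfaces in the portal and job-run logs.
run = client.job_definitions.start_job(rg, mover, project, job)
print(run)
```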
Key features in practice
Azure Storage Mover’s cloud-to-cloud feature exposes several practical capabilities that matter for real migrations:
- Direct parallel transfers: The system performs server-to-server parallel reads and writes to maximize throughput for large datasets. Parallelism and chunking reduce wall-clock time compared to single-client transfers.
- Agentless operation: No self-hosted agent is required in the source AWS environment, which eliminates the need to provision, secure, and manage temporary compute resources during migration.
- Incremental sync: After the initial bulk copy, incremental syncs can move only changed objects to reduce the final cutover window and limit data drift between source and destination.
- Metadata preservation: Storage Mover maintains important object metadata during transfer to help preserve application semantics (timestamps, permissions where possible, and object properties).
- Integrated automation and APIs: Jobs are orchestrated via the Azure portal, CLI, or REST APIs, enabling embedding into CI/CD or migration automation pipelines without third-party orchestration.
- Observability and logging: Migration progress, speed, and estimated completion appear in the portal; job run and copy logs can be ingested into Azure Monitor and Log Analytics for deeper analysis.
Verified limits and operational constraints (what to watch for)
Microsoft’s product documentation lists operational limits for the cloud-to-cloud feature that every migration plan should account for:
- Each migration job supports up to 500 million objects.
- A subscription can run a maximum of 10 concurrent jobs by default; additional concurrency requires a support request.
- Archived objects stored in AWS Glacier or Deep Archive must be rehydrated before migration; Storage Mover does not automatically rehydrate archived objects.
- Private networking (direct private connectivity between AWS and Azure for these transfers) is not currently supported; Storage Mover uses restricted public IP ranges and Azure’s control plane to secure transfers.
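These caps translate directly into planning arithmetic. The snippet below is simple illustrative math (no Azure APIs involved): given an estimated object count, it shows how many jobs a bucket would need under the 500-million-object bound and how many waves those jobs would take under the default concurrency of 10 per subscription.

```python
import math

MAX_OBJECTS_PER_JOB = 500_000_000   # documented per-job object limit
DEFAULT_CONCURRENT_JOBS = 10        # default per-subscription concurrency

def plan_jobs(object_count: int) -> tuple[int, int]:
    """Return (jobs_needed, waves_needed) for a bucket of object_count objects."""
    jobs = math.ceil(object_count / MAX_OBJECTS_PER_JOB)
    waves = math.ceil(jobs / DEFAULT_CONCURRENT_JOBS)
    return jobs, waves

# Example: a 1.2-billion-object bucket needs 3 jobs and fits in a single wave.
print(plan_jobs(1_200_000_000))   # (3, 1)
```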
Security, compliance, and governance
Security is a central selling point for Storage Mover’s cloud-to-cloud capability:
- Encryption in transit is enforced for all transfers, ensuring data traversing the public internet is protected.
- Azure-native identity and access controls are available: Microsoft Entra ID (Azure Active Directory) and RBAC determine who can configure migration resources and run jobs.
- Azure Arc multicloud connectors are used for cross-cloud authentication and resource mapping; this approach centralizes permissions and reduces the need to distribute long-lived access keys.
- Integration with Azure Key Vault (where used) keeps credentials and keys out of scripts and reduces secret sprawl (a retrieval sketch follows this list).
- Logging into Azure Monitor and Log Analytics provides audit trails and operational telemetry suitable for forensic analysis or compliance reporting.
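Where Key Vault is part of the workflow, any supporting scripts can pull credentials at runtime instead of hard-coding them. A minimal sketch, assuming a vault named contoso-migration-kv and a secret named aws-access-key (both placeholders):

```python
# Fetch a migration credential from Azure Key Vault at runtime rather than
# embedding it in scripts or pipeline variables. Vault and secret names are
# placeholders for illustration.
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

vault_url = "https://contoso-migration-kv.vault.azure.net"
client = SecretClient(vault_url=vault_url, credential=DefaultAzureCredential())

aws_access_key = client.get_secret("aws-access-key").value
# Pass the value to whatever tooling needs it (e.g. an inventory script);
# avoid writing it to disk or to logs.
```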
Real-world impact: case study analysis and vendor claims
Microsoft’s announcement includes customer examples from the public preview. One cited example describes a phased migration of hundreds of terabytes carried out by a customer (Syncro) in partnership with an implementation partner (SOUTHWORKS). The quoted pilot details a 60 TB first phase and plans for an additional 120 TB in ongoing phases.
Those case details illustrate the service’s intended use: phased migrations with low downtime and immediate access to Azure analytics/AI services once objects land in Blob Storage. They also reinforce that Storage Mover targets real production workloads (backup stores, analytics datasets, AI training data) rather than only small-scale testing.
A prudent reader should treat vendor-published customer figures as useful proof points but not as comprehensive evidence of universal performance. Vendor case studies typically focus on successful pilots; independent, third‑party benchmarks—when available—are the best way to validate throughput at scale for your specific dataset, object size mix, and network conditions.
How Storage Mover compares with alternative approaches
Several established methods also move data from S3 to Azure Blob Storage. Understanding comparative trade-offs is crucial.
- AzCopy server-to-server copy: AzCopy supports copying data directly from S3 using the Put Block From URL API pattern, enabling high-speed transfers and fine-grained control for script-driven jobs. It’s useful for teams that want full command-line control and are comfortable with scripting and credential management (a minimal invocation sketch appears after this list).
- Third-party migration tools and managed services: Commercial migration platforms provide additional protocol support, custom transformations, or advanced scheduling but usually carry license costs and operational complexity.
- AWS DataSync: AWS provides DataSync with support for writing to Azure Blob Storage; DataSync can be convenient for customers who prefer an AWS-native transfer utility and integration with AWS monitoring.
- DIY pipelines with temporary compute: Historically, many large migrations used fleets of temporary VMs or containers to parallelize transfers. This approach offers control but introduces operational burden—managing VM fleets, security, scaling, and cost.
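For comparison, the AzCopy path from the first bullet looks roughly like the sketch below, wrapped in Python only to keep the examples in one language. AzCopy v10 reads AWS credentials from environment variables for an S3 source; the bucket, storage account, container, and SAS token are placeholders, and the exact flags should be confirmed against the AzCopy documentation.

```python
# Rough sketch of an AzCopy server-to-server copy from S3 to Blob Storage,
# invoked from Python. Placeholders: bucket, storage account, container, SAS.
import os
import subprocess

env = os.environ.copy()
env["AWS_ACCESS_KEY_ID"] = "<aws-access-key-id>"        # ideally fetched from Key Vault
env["AWS_SECRET_ACCESS_KEY"] = "<aws-secret-access-key>"

source = "https://s3.amazonaws.com/<bucket>"
target = "https://<account>.blob.core.windows.net/<container>?<sas-token>"

subprocess.run(
    ["azcopy", "copy", source, target, "--recursive"],
    env=env,
    check=True,
)
```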
Planning a migration with Azure Storage Mover — recommended checklist
Successful migrations are always the result of careful planning. The following checklist condenses best practices for using Storage Mover in cloud-to-cloud scenarios:
- Inventory and classify:
- Catalog buckets, object counts, average object size, and access patterns.
- Flag archived data (Glacier / Deep Archive) and plan rehydration ahead of any migration job.
- Estimate volume and throughput:
- Use discovery scans to obtain object counts and size distribution to better estimate job durations and concurrency needs (a quick inventory sketch follows this checklist).
- Validate limits:
- Confirm object counts fit within the documented limit of 500 million objects per job, or plan to split the migration into multiple jobs.
- If you need more than 10 concurrent jobs per subscription, open a support request in advance.
- Secure access:
- Configure Azure Arc multicloud connectors and limit AWS IAM permissions to minimize blast radius.
- Store secrets in Azure Key Vault where possible and follow least-privilege RBAC.
- Plan phased cutover:
- Use incremental syncs to keep source and target aligned. Schedule final cutover during low-traffic windows to reduce end-user impact.
- Audit and verify:
- Enable copy and job logs and integrate with Azure Monitor and Log Analytics.
- Perform hash-based validation or sample checks post-migration to confirm integrity.
- Cost modeling:
- Factor in destination storage costs, egress pricing (if any), and any data transformation or rehydration costs on AWS.
- Plan for temporary storage or versioning if you need rollback capability.
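The inventory and estimation items above lend themselves to a quick scripted pass before any tooling decision. The sketch below assumes boto3 is configured with read access to the source bucket; it counts objects, sums sizes, and flags Glacier and Deep Archive objects that would need rehydration. The bucket name is a placeholder.

```python
# Quick S3 inventory: object count, total size, and archive-tier objects that
# would need rehydration before a Storage Mover job. Bucket name is a placeholder.
from collections import Counter

import boto3

ARCHIVE_TIERS = {"GLACIER", "DEEP_ARCHIVE"}  # must be restored before migration

def inventory(bucket: str) -> None:
    s3 = boto3.client("s3")
    count = total_bytes = 0
    by_class = Counter()
    for page in s3.get_paginator("list_objects_v2").paginate(Bucket=bucket):
        for obj in page.get("Contents", []):
            count += 1
            total_bytes += obj["Size"]
            by_class[obj.get("StorageClass", "STANDARD")] += 1
    archived = sum(by_class[c] for c in ARCHIVE_TIERS if c in by_class)
    print(f"{count} objects, {total_bytes / 1024**4:.2f} TiB total")
    print(f"{archived} objects in archive tiers need rehydration first")
    print(dict(by_class))

inventory("<source-bucket>")
```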
Risks, limitations, and caveats
No migration service is without trade-offs. Storage Mover’s cloud-to-cloud offering has several important limitations and risk factors:
- No private networking: Transfers rely on controlled public IP ranges; organizations that require VPN-only or private connectivity between clouds may find this unacceptable.
- Archived data: Objects in deep archive tiers must be restored before migration, which can add cost and time—this is an operational detail that often surprises teams during planning.
- Concurrency caps: The default limit of 10 concurrent jobs per subscription may throttle aggressive, multi-team migrations unless you coordinate with support.
- Vendor-reported performance: Published case studies and vendor statements describe “petabyte-scale” migrations and high throughput, but the real throughput you’ll achieve depends on object size distribution, AWS-side request limits, Azure region performance variability, and transient network conditions.
- Feature parity vs. custom tooling: While Storage Mover provides a managed, low-code path, it may not cover every niche requirement—custom metadata transforms, object re-keying, or application payload reformatting may still demand bespoke pipelines.
- Costs beyond the service: The Storage Mover service can be free, but storage, retrieval, rehydration, and networking charges may still apply from the cloud providers. Don’t treat “free migration service” as “no-cost migration.”
Security and compliance deep dive
For security-conscious organizations, here are deeper considerations:
- Secrets management: Use Azure Key Vault to store any AWS credentials or Azure storage keys used during migration. Avoid embedding secrets in scripts or copy jobs.
- Least privilege: Implement narrow IAM roles on AWS specifically for the migration, scoped to list and read operations on the source buckets, rather than broad administrative privileges (a policy sketch follows this list).
- Audit trails: Ensure job logs are retained in Azure Monitor/Log Analytics for the duration your compliance policies require. Combine these logs with account-level audit trails from AWS for full-chain evidence.
- Data residency: Verify the destination Azure region’s compliance posture and data residency requirements. Migration moves data across geopolitical boundaries; legal counsel should sign off where regulated data is involved.
- Encryption and integrity: Although transfers are encrypted in transit, enforce post-migration verification (checksums or hash comparisons) to confirm integrity. If data must remain encrypted at-rest with customer-managed keys, plan key access and rotation in Azure Key Vault.
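The least-privilege point can be made concrete. The sketch below builds a read-only, single-bucket IAM policy with boto3; the bucket name is a placeholder, and the exact set of actions the Azure Arc multicloud connector requires should be taken from Microsoft's documentation rather than from this illustration.

```python
# Create a narrowly scoped, read-only IAM policy for a single source bucket.
# Bucket name is a placeholder; confirm the exact actions required by the
# Azure Arc multicloud connector against Microsoft's documentation.
import json

import boto3

bucket = "<source-bucket>"
policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:ListBucket", "s3:GetBucketLocation"],
            "Resource": f"arn:aws:s3:::{bucket}",
        },
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": f"arn:aws:s3:::{bucket}/*",
        },
    ],
}

iam = boto3.client("iam")
iam.create_policy(
    PolicyName="storage-mover-s3-readonly",
    PolicyDocument=json.dumps(policy_document),
)
```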
When Storage Mover is the right tool — and when it isn’t
Storage Mover is compelling for many scenarios:
- You need a low‑operational-overhead solution to move large S3 object stores into Azure Blob Storage.
- You want centralized job orchestration inside Azure, integrated monitoring, and RBAC-driven access control.
- You’re migrating data to leverage Azure-native analytics or AI services as soon as objects are available in Azure.
- You want incremental sync capability to shorten final cutover windows.
Conversely, it is less well suited when:
- Your security policy requires private-only direct links between clouds.
- You must run custom transformations on objects that Storage Mover cannot perform.
- You require guaranteed, sustained transfer rates beyond what a managed service commits to; in that case, benchmarking with AzCopy or your own parallel pipeline can provide a more predictable baseline.
Practical recommendations and next actions for Windows and Azure administrators
- Start with discovery: run bucket inventories and estimate object counts and size distributions before any tool selection.
- Perform a small pilot: migrate a representative subset (small, medium, and large objects) to measure throughput, error rates, and metadata fidelity.
- Evaluate cutover options: test incremental syncs and simulate failover to confirm application behavior after migration.
- Coordinate with stakeholders: align security, compliance, and finance teams early—rehydration and storage costs can be substantial if overlooked.
- Use automation: script job creation and monitoring through the CLI or REST APIs to make repeated, phased migrations repeatable and auditable.
- Validate end-to-end: use checksum comparisons, application smoke tests, and user acceptance testing as part of your closure criteria (a simple listing-comparison sketch follows this list).
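End-to-end validation can start with something as simple as comparing object names and sizes on both sides before moving on to hash or application-level checks. A minimal sketch, assuming boto3 and azure-storage-blob with read access; the bucket, storage account, and container names are placeholders.

```python
# Post-migration spot check: compare object names and sizes between the S3
# source and the Blob target. Names and connection details are placeholders.
import boto3
from azure.identity import DefaultAzureCredential
from azure.storage.blob import ContainerClient

def s3_listing(bucket: str) -> dict[str, int]:
    s3 = boto3.client("s3")
    out = {}
    for page in s3.get_paginator("list_objects_v2").paginate(Bucket=bucket):
        for obj in page.get("Contents", []):
            out[obj["Key"]] = obj["Size"]
    return out

def blob_listing(account_url: str, container: str) -> dict[str, int]:
    client = ContainerClient(account_url, container, credential=DefaultAzureCredential())
    return {blob.name: blob.size for blob in client.list_blobs()}

source = s3_listing("<source-bucket>")
target = blob_listing("https://<account>.blob.core.windows.net", "<container>")

missing = source.keys() - target.keys()
size_mismatch = {k for k in source.keys() & target.keys() if source[k] != target[k]}
print(f"missing in target: {len(missing)}, size mismatches: {len(size_mismatch)}")
```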
Final analysis — strengths, blind spots, and strategic implications
Azure Storage Mover’s cloud-to-cloud GA is a strategic expansion of Azure’s migration portfolio. Its strengths are clear:
- Simplicity and orchestration: Removing the need for customer-managed agents and providing a portal-driven experience lowers the barrier to entry for migrations.
- Azure-native governance: Integration with Azure RBAC, Key Vault, and Azure Monitor aligns migration operations with existing cloud governance models.
- Incremental sync: Built-in sync capability shortens the risk window for production cutover.
Its blind spots deserve equal attention:
- Private connectivity omission limits adoption in high-regulation contexts.
- Operational caps (object counts, concurrency) can force architectural workarounds for very large or highly parallel programs.
- Reliance on vendor data for performance claims means teams should still perform their own benchmarks.
Conclusion
Azure Storage Mover’s General Availability of cloud-to-cloud transfers from AWS S3 to Azure Blob Storage represents a meaningful step toward simplifying large-scale, multicloud migrations. It offers a low‑operations, orchestrated path for moving terabytes to petabytes of objects into Azure, integrated with Azure Arc for authentication, Azure Monitor for observability, and Azure-native governance controls.
For migration leaders, the tool offers an efficient first option: treat it as an enterprise-grade, managed transfer mechanism that should be validated with pilot projects and thorough cost and compliance checks. For organizations with strict private-networking requirements or specialized transformation needs, the service is an important addition to the migration toolbox but not an automatic replacement for bespoke pipelines.
Plan carefully, validate with your own tests, and use the incremental sync and observability features to stage cutovers. When executed with operational rigor, Storage Mover can dramatically shorten migration timelines and unlock the value of data within Azure’s analytics and AI ecosystem.
Source: Microsoft Azure Fully managed cloud-to-cloud transfers with Azure Storage Mover | Microsoft Azure Blog