Introduction

On-prem to AWS migrations fail less because of compute and more because of planning gaps: unclear cutover objectives, overlooked dependencies, and rushed testing. This guide keeps the process easy to follow while staying operational. You will define what success looks like, prepare a minimal landing zone, run AWS MGN test launches, cut over confidently, and then optimise the workload after it is live.

What Should You Decide Before Migrating Anything?

Which migration strategy fits this server (the AWS “7 Rs”)?

The fastest way to lose time is to migrate the wrong thing. Before you install any agent, decide which migration strategy the server deserves so you do not lift-and-shift something that should be retired or replaced. In practice, many teams start with rehosting for speed, then optimise later once the workload is stable in AWS.

However, that only works when the server is a good “as-is” candidate and won’t create expensive technical debt immediately after cutover. Practical decision shortcuts:

  • Rehost: move fast with minimal change when time is tight.
  • Replatform: keep the app but make small adjustments for AWS fit.
  • Refactor: reserve effort for business-critical differentiators.
  • Repurchase: replace with SaaS instead of migrating the server.
  • Retire/Retain: remove unused systems or keep constrained workloads on-prem.

A useful internal checkpoint is to ask whether the workload has a “cloud future.” If the server will later be decomposed into managed services or containerised, document that now and treat rehosting as a temporary step rather than a permanent design.

What Are The RTO/RPO, Downtime Window, and Rollback Triggers?

Cutovers succeed when success is measurable. Define the acceptable downtime and data-loss tolerance, then write down the conditions that force rollback. This keeps the migration objective and prevents teams from improvising during the cutover window. It also helps business stakeholders sign off because they can see exactly what risk is being accepted.

Define and document:

  • RTO: maximum acceptable downtime.
  • RPO: maximum acceptable data loss.
  • Downtime window: when you are allowed to switch production traffic.
  • Rollback triggers: specific failure conditions (auth outage, failed transactions, data mismatch).
  • Cutover mechanism: DNS flip, load balancer switch, routing/firewall changes.

To keep the rollback plan realistic, specify who owns each action during cutover. For example, one person owns DNS changes, one owns application validation, and one owns the “rollback decision” based on the triggers above.
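
To make those thresholds and owners easy to review and sign off, some teams capture them as a small structured record. The following is a minimal Python sketch; the server name, thresholds, triggers, and owner roles are all illustrative placeholders.

  # Minimal sketch of a per-server cutover record; every value is illustrative.
  CUTOVER_PLAN = {
      "server": "app01.example.local",           # hypothetical source server
      "rto_minutes": 120,                        # maximum acceptable downtime
      "rpo_minutes": 15,                         # maximum acceptable data loss
      "downtime_window": "Sat 22:00-02:00 UTC",  # approved switch window
      "cutover_mechanism": "DNS flip",           # DNS / load balancer / routing change
      "rollback_triggers": [
          "authentication outage longer than 10 minutes",
          "failed transactions above agreed threshold",
          "data mismatch found during validation",
      ],
      "owners": {
          "dns_change": "netops",
          "app_validation": "app team",
          "rollback_decision": "migration lead",
      },
  }

  if __name__ == "__main__":
      for key, value in CUTOVER_PLAN.items():
          print(f"{key}: {value}")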

What Do You Need Ready in AWS and On-Prem First?

Connectivity and firewall basics for replication

Replication only works if the source environment can reach AWS consistently. The most common blockers are strict egress controls, proxies, and TLS inspection that interfere with outbound HTTPS traffic. Validate connectivity early and keep the network path stable during initial replication and test launches. In many environments, replication is not “blocked” outright; instead, intermittent drops or packet inspection cause unstable behaviour that is hard to diagnose later.

Common connectivity patterns:

  • Public internet egress (simplest when allowed)
  • Site-to-site VPN (common for private connectivity)
  • Direct Connect (more predictable for larger environments)

Pre-flight checks:

  • Outbound HTTPS works reliably from the source network
  • Proxy behaviour is understood and tested with the migration flow
  • Security teams approve the required egress for the migration window

If your environment is highly locked down, add a short “network proving” step to your wave plan: validate endpoints from one pilot server, then replicate that exact rule set for the rest of the wave.
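
As a starting point for that network-proving step, a short probe can confirm that outbound HTTPS and the TLS handshake succeed from the pilot server. This is a minimal Python sketch using only the standard library; the regional endpoint names are examples, so verify the exact MGN, S3, and EC2 endpoints against the current AWS MGN network requirements for your region.

  # Minimal outbound-HTTPS probe for the pre-flight checks above.
  # Endpoint names are examples only; confirm the exact endpoints for your
  # region against the current AWS MGN network requirements.
  import socket
  import ssl

  REGION = "eu-west-1"  # assumption: adjust to your target region
  ENDPOINTS = [
      f"mgn.{REGION}.amazonaws.com",
      f"s3.{REGION}.amazonaws.com",
      f"ec2.{REGION}.amazonaws.com",
  ]

  def check_https(host: str, port: int = 443, timeout: float = 5.0) -> str:
      """Open a TCP connection and complete a TLS handshake; return the result."""
      try:
          with socket.create_connection((host, port), timeout=timeout) as sock:
              context = ssl.create_default_context()
              with context.wrap_socket(sock, server_hostname=host) as tls:
                  return f"OK ({tls.version()})"
      except Exception as exc:  # DNS failure, timeout, TLS interception errors, etc.
          return f"FAILED: {exc}"

  if __name__ == "__main__":
      for host in ENDPOINTS:
          print(f"{host}:443 -> {check_https(host)}")
      # Note: MGN replication also uses TCP 1500 to the replication servers in
      # your staging subnet; test that path once those servers exist.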

Minimal AWS landing zone checklist

You do not need a perfect landing zone to begin, but you do need a consistent target that won’t change mid-wave. Keep the build minimal, but deliberate, so testing reflects what cutover will look like. Many migration issues come from “temporary” network shortcuts that become permanent because no one has time to rebuild them after launch.

Minimum landing zone elements:

  • A VPC and subnets where instances will launch (often separate test vs production)
  • Security groups aligned to real application flows (avoid “open now, fix later”)
  • IAM roles and permissions ready for migration operations and day-two access/tooling
  • Basic tagging so ownership and cost tracking are clear after cutover

It also helps to decide early how admins will access instances (bastion, VPN, SSM) and how outbound internet access will be provided (NAT gateway, proxy). These choices affect patching, monitoring agents, and troubleshooting on day one.
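
As one example of a deliberate (rather than "open now, fix later") landing-zone element, the sketch below creates a security group scoped to a single real flow and tags it for ownership. It assumes boto3 with valid AWS credentials; the region, VPC ID, CIDR range, and tag values are placeholders.

  # Sketch of a minimal, deliberate landing-zone element: a security group
  # scoped to one real application flow, with ownership tags.
  import boto3

  ec2 = boto3.client("ec2", region_name="eu-west-1")  # assumption: target region

  response = ec2.create_security_group(
      GroupName="app01-web",
      Description="HTTPS from the corporate range only",
      VpcId="vpc-0123456789abcdef0",  # placeholder VPC ID
      TagSpecifications=[{
          "ResourceType": "security-group",
          "Tags": [
              {"Key": "Owner", "Value": "app-team"},
              {"Key": "CostCenter", "Value": "migration-wave-1"},
              {"Key": "Environment", "Value": "test"},
          ],
      }],
  )

  ec2.authorize_security_group_ingress(
      GroupId=response["GroupId"],
      IpPermissions=[{
          "IpProtocol": "tcp",
          "FromPort": 443,
          "ToPort": 443,
          "IpRanges": [{"CidrIp": "10.0.0.0/8", "Description": "corporate range"}],
      }],
  )
  print("Created", response["GroupId"])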

Source server readiness checklist

A clean migration depends on a clean source. Confirm the workload is compatible with the method you chose, then identify anything that depends on local assumptions that will change in AWS. This is also where you flag “special case” servers that may require a different sequence. For example, a file server with heavy write activity may need a tighter cutover window and stricter validation for open files and shares.

Readiness checks that prevent surprises:

  • OS/workload compatibility with the migration approach
  • Sufficient disk and steady I/O for replication overhead
  • Dependencies mapped: DNS, AD/LDAP, internal PKI/certificates, databases, shares
  • Hidden brittleness: hard-coded IPs, legacy TLS, uncommon scheduled tasks
  • Special cases flagged early: domain controllers, clusters, appliances, dongle licensing

Before leaving this step, capture “must stay the same” items such as hostname, IP address requirements, or certificate bindings, because these directly affect your AWS launch settings and your cutover sequence.
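
A lightweight way to capture those "must stay the same" items is a snapshot run on the source server before replication starts. The sketch below uses only the Python standard library; the certificate and dependency fields are left empty on purpose for you (or your CMDB) to fill in.

  # Minimal sketch that snapshots the "must stay the same" items before migration.
  # Purely illustrative; extend with certificate bindings and share paths by hand
  # or from your own tooling.
  import json
  import socket
  from datetime import datetime, timezone

  hostname = socket.gethostname()
  try:
      addresses = sorted({info[4][0] for info in socket.getaddrinfo(hostname, None)})
  except socket.gaierror:
      addresses = []

  snapshot = {
      "captured_at": datetime.now(timezone.utc).isoformat(),
      "hostname": hostname,
      "fqdn": socket.getfqdn(),
      "ip_addresses": addresses,
      # Fill these in manually or from your CMDB; they are placeholders here.
      "certificate_bindings": [],
      "hardcoded_dependencies": [],
  }

  print(json.dumps(snapshot, indent=2))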

How Do You Migrate a Server to AWS with AWS MGN?

Initialize MGN and set replication defaults

Initialize AWS MGN in the region where the server will run, then define replication defaults so wave execution stays consistent. A stable template reduces per-server variance and makes troubleshooting repeatable. Think of this as your standard operating procedure for replication, similar to a gold image in a virtualised environment.

Set replication defaults up front:

  • Target subnet strategy and network placement
  • Security group baseline for launched instances
  • Storage behaviour (volume mapping, encryption expectations)
  • Replication throttling to protect production traffic

If you already know that production will require different settings than testing, define those differences explicitly. That way, test launches remain representative without exposing production networks prematurely.
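
If you prefer to set those defaults programmatically rather than in the console, the sketch below uses the boto3 "mgn" client. The parameter names follow the MGN CreateReplicationConfigurationTemplate API, but treat them as assumptions to verify against current documentation; the subnet and security group IDs, throttling value, and tags are placeholders.

  # Sketch of setting replication defaults via the boto3 "mgn" client.
  # Verify parameter names against the current AWS documentation and replace
  # the subnet/security-group IDs with your own.
  import boto3

  mgn = boto3.client("mgn", region_name="eu-west-1")  # region where servers will run

  mgn.initialize_service()  # one-time setup in this region before creating templates

  template = mgn.create_replication_configuration_template(
      stagingAreaSubnetId="subnet-0123456789abcdef0",                # placeholder staging subnet
      replicationServersSecurityGroupsIDs=["sg-0123456789abcdef0"],  # placeholder SG
      associateDefaultSecurityGroup=False,
      replicationServerInstanceType="t3.small",
      useDedicatedReplicationServer=False,
      defaultLargeStagingDiskType="GP3",
      ebsEncryption="DEFAULT",
      dataPlaneRouting="PRIVATE_IP",   # keep replication off the public internet
      createPublicIP=False,
      bandwidthThrottling=100,         # Mbps cap to protect production traffic
      stagingAreaTags={"Owner": "migration-team", "Wave": "1"},
  )
  print("Template:", template["replicationConfigurationTemplateID"])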

Install the agent and complete initial sync

Install the replication agent on the source server and confirm it registers successfully. Initial sync is where instability costs you the most, so avoid unnecessary changes and monitor replication health closely. This is also where teams benefit from documenting the “known good” install flow so they don’t troubleshoot the same issues in each wave.

Operational guidance:

  • Keep the server stable during initial replication (avoid reboots if possible)
  • Monitor replication status and address errors immediately
  • Document the install method so future waves are consistent

During initial sync, monitor not only the migration console but also server performance. Replication overhead can reveal storage bottlenecks or disk errors that were previously masked in the on-prem environment.
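
One way to watch replication health alongside the console is a short boto3 check against the MGN source-server list. The field names below follow the DescribeSourceServers response shape; treat them as assumptions and confirm against current documentation before relying on the output.

  # Sketch of a replication health check using the boto3 "mgn" client.
  import boto3

  mgn = boto3.client("mgn", region_name="eu-west-1")

  response = mgn.describe_source_servers(filters={})
  for server in response.get("items", []):
      hints = server.get("sourceProperties", {}).get("identificationHints", {})
      replication = server.get("dataReplicationInfo", {})
      print(
          hints.get("hostname", server.get("sourceServerID")),
          replication.get("dataReplicationState"),
          "lag:", replication.get("lagDuration"),
          "ETA:", replication.get("etaDateTime"),
      )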

Launch a test instance and validate

A test launch turns assumptions into evidence. Launch the test instance, then validate application health end-to-end, not just boot success. Use a checklist so testing is repeatable across servers and waves. If end users will connect through TSplus Remote Access, include an access-path check in the validation. Consistency matters because it allows you to compare results between workloads and spot patterns, such as DNS resolution issues affecting multiple servers.

Minimum validation checklist:

  • Boot completes and services start cleanly
  • Application smoke tests pass for key workflows
  • Authentication works (AD/LDAP/local)
  • Data paths work (DB connections, file shares, integrations)
  • Scheduled jobs and background services run as expected
  • Logs and monitoring signals appear where your ops team expects them

Add one more step that teams often skip: validate how users will actually access the application, including internal routing, firewall rules, and any upstream systems. A server can be “healthy” but unreachable in practice.
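
A small smoke-test script helps make that validation repeatable across servers and waves. The sketch below uses only the Python standard library; the hostname, ports, and health URL are placeholders to replace with the flows that matter for your application.

  # Minimal smoke-test sketch for a test launch. Hostnames, ports and the health
  # URL are placeholders; align them with your validation checklist.
  import socket
  import urllib.request

  TARGET = "app01-test.example.internal"  # hypothetical test-instance DNS name
  TCP_CHECKS = [("app port", 443), ("database path", 1433), ("file share", 445)]
  HEALTH_URL = f"https://{TARGET}/health"  # hypothetical smoke-test endpoint

  def tcp_open(host: str, port: int, timeout: float = 5.0) -> bool:
      try:
          with socket.create_connection((host, port), timeout=timeout):
              return True
      except OSError:
          return False

  try:
      print("DNS:", socket.gethostbyname(TARGET))
  except socket.gaierror as exc:
      print("DNS FAILED:", exc)

  for label, port in TCP_CHECKS:
      print(f"{label} ({port}):", "open" if tcp_open(TARGET, port) else "CLOSED")

  try:
      with urllib.request.urlopen(HEALTH_URL, timeout=10) as resp:
          print("health endpoint:", resp.status)
  except Exception as exc:
      print("health endpoint FAILED:", exc)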

Launch cutover and finalize

Cutover is a controlled switch, not a leap of faith. Freeze changes where possible, execute the traffic move using the planned mechanism, then validate using the same checklist as testing. Keep rollback ownership explicit so decisions are fast. Treat this as a repeatable playbook: the less you improvise, the lower the risk.

Cutover execution essentials:

  • Confirm change freeze and communications plan
  • Launch cutover instance and switch traffic (DNS/LB/routing)
  • Re-run validation checklist with extra focus on data integrity
  • Apply rollback triggers if required and revert traffic cleanly
  • Finalize cutover and remove or terminate test resources

Immediately after cutover, capture what changed in production (new IPs, new routes, new security group rules) and document it. This is the information the ops team needs when something breaks weeks later.
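
Where teams script the launch steps, the MGN side of this playbook can be driven through boto3, as sketched below. The StartCutover and FinalizeCutover calls mirror the MGN API; the source server ID is a placeholder, and the traffic switch and validation still happen in your own tooling between the two calls.

  # Sketch of driving the cutover launch and finalization through the
  # boto3 "mgn" client; the server ID is a placeholder.
  import boto3

  mgn = boto3.client("mgn", region_name="eu-west-1")
  SOURCE_SERVER_ID = "s-0123456789abcdef0"  # placeholder MGN source server ID

  # 1. Launch the cutover instance from the latest replicated state.
  mgn.start_cutover(sourceServerIDs=[SOURCE_SERVER_ID])

  # 2. Switch traffic (DNS/LB/routing) and re-run the validation checklist
  #    outside this script. Only proceed once the rollback owner signs off.

  # 3. Finalize, which marks the migration complete and stops data replication
  #    for this server.
  mgn.finalize_cutover(sourceServerID=SOURCE_SERVER_ID)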

What Usually Breaks, and What Should You Do Right After Cutover?

Network egress, DNS/AD dependencies, and “lift-and-shift isn’t done”

Most failures are dependency failures. Replication tends to break on egress and proxy constraints, while application behaviour tends to break on identity, name resolution, and certificates. Even when cutover succeeds, rehosting is only the first milestone, not the final state. Without a second phase, you often end up with “cloud-hosted legacy” that costs more and is harder to operate.

Most common breakpoints:

  • Outbound HTTPS blocked or altered by proxy TLS inspection
  • DNS resolution changes (split-horizon issues, missing resolver rules)
  • AD/LDAP reachability gaps from the VPC
  • Internal PKI chains missing or not trusted in the new environment
  • Hard-coded endpoints and legacy assumptions about local network paths

A simple mitigation is to test identity and DNS early with a pilot launch. If those fundamentals work, the rest of the application validation becomes far more predictable.
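
That pilot check can be as simple as resolving the AD domain and probing the usual identity ports from the new VPC. The sketch below uses only the Python standard library; the domain name is a placeholder, and the port list assumes default AD/LDAP/Kerberos settings.

  # Pilot check sketch for the identity and DNS fundamentals mentioned above.
  import socket

  DOMAIN = "corp.example.local"  # hypothetical AD domain
  PORTS = {"LDAP": 389, "LDAPS": 636, "Kerberos": 88, "DNS": 53}

  try:
      controllers = sorted({info[4][0] for info in socket.getaddrinfo(DOMAIN, None)})
  except socket.gaierror as exc:
      controllers = []
      print(f"DNS resolution for {DOMAIN} FAILED: {exc}")

  for ip in controllers:
      for name, port in PORTS.items():
          try:
              with socket.create_connection((ip, port), timeout=5):
                  print(f"{ip} {name} ({port}): reachable")
          except OSError:
              print(f"{ip} {name} ({port}): NOT reachable")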

Post-cutover stabilization: security, backups, monitoring, cost

The first 48 hours after cutover should prioritise stability and control. Make sure the workload is observable, recoverable, and securely managed before you spend time on deeper optimisation. This is also where your migration succeeds long-term, because good day-two operations prevent “we moved it, but nobody wants to own it” outcomes.

Immediate post-cutover actions:

  • Confirm monitoring/alerting is live and owned
  • Ensure backups are enabled and complete a restore validation
  • Tighten security groups and apply least-privilege IAM
  • Standardise patching approach and administrative access (auditable paths)
  • Start rightsizing after you collect real utilisation data
  • Enforce tagging to prevent “unknown owner” cost drift

Once stability is proven, schedule a short optimisation review for each migrated server. Even a light pass on storage types, instance family choice, and reserved capacity strategy can materially reduce cost.
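
As one example of the tagging hygiene above, the sketch below lists running instances that are missing an Owner tag. It assumes boto3 with read access to EC2 and that "Owner" is your required tag key; adjust both to your environment.

  # Day-two hygiene check: find running instances without an Owner tag so
  # ownership and cost drift is caught early.
  import boto3

  ec2 = boto3.client("ec2", region_name="eu-west-1")
  REQUIRED_TAG = "Owner"  # assumption: your tagging standard

  paginator = ec2.get_paginator("describe_instances")
  for page in paginator.paginate(
      Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
  ):
      for reservation in page["Reservations"]:
          for instance in reservation["Instances"]:
              tags = {t["Key"]: t["Value"] for t in instance.get("Tags", [])}
              if REQUIRED_TAG not in tags:
                  print("Missing Owner tag:", instance["InstanceId"])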

Where Does TSplus Fit After You Move Servers to AWS?

After Windows workloads run on AWS, many teams still need a simple way to publish Windows applications and desktops to users without building a heavy VDI stack. TSplus Remote Access delivers application publishing and remote desktop access for Windows servers in AWS, on-prem, or hybrid environments, with straightforward administration and predictable licensing that fits SMB and mid-market operations.

Conclusion

Migrating an on-premises server to AWS is most successful when it follows a repeatable runbook: choose the right migration strategy, validate dependencies, replicate safely, test realistically, and cut over with clear rollback triggers. Once production is stable, shift focus to day-two operations: security hardening, backup validation, monitoring, and rightsizing. This turns a “move” into a reliable, cost-controlled platform.
