Introduction
VMware Cloud on AWS provides a reliable, elastic, and highly scalable solution for customers who want to extend their workloads into the cloud.
However, when it comes to migration or bi-directional workload mobility, software and network incompatibilities between on-premises and cloud environments can complicate your migration process.
VMware Hybrid Cloud Extension (HCX) helps overcome those challenges by building an abstraction layer on top of existing site-specific implementations, allowing you to extend your networks and environments to the cloud seamlessly without the need for extensive reconfiguration or upgrades.
Here are some key benefits of HCX:
- Ability to migrate workloads across different versions of vSphere (6.0 or later).
- WAN optimization, compression, and de-duplication enable high throughput for faster migrations.
- Network extension enables stretching layer 2 networks between on-premises and VMware Cloud on AWS without the need for complex network reconfiguration. Virtual machines (VMs) can be moved between on-premises and cloud environments with no need to change or reassign IP addresses.
HCX is a software-as-a-service (SaaS) offering, available at no extra cost for VMware Cloud on AWS customers.
The HCX solution is built out of several component services, each supporting a specific function within the overall solution.
- HCX Enterprise Manager: System management component on the on-premises side, which is always deployed as a “source.”
- HCX Cloud Manager: System management component on the cloud side, which is always deployed as a “destination.”
- HCX-IX Interconnect Appliance: Provides replication and vMotion-based migration capabilities.
- HCX WAN Optimization Service: Provides improved network performance by using techniques such as de-duplication and compression to help speed up migrations.
- HCX Network Extension Service: Provides layer 2 extension capabilities, enabling VMs to migrate between on-premises and cloud without the need to re-IP.
HCX Use Cases
Older vSphere Versions
HCX allows migrating VMs from older versions of vSphere (6.0 or later) to VMware Cloud on AWS. Hosts in VMware Cloud on AWS are automatically patched and updated, and are thus likely to be running the latest (or near-latest) version of the vSphere software. This eliminates the need for customers to perform time-consuming system upgrades in order to prepare for migrations.
Bulk Migrations
In certain situations, customers may want to migrate workloads out of their current data centers in a “lift-and-shift” manner. An example of this is an upcoming lease expiration on the hardware or data center facility. In this situation, where there is not enough time for extensive migration planning and execution, HCX can help customers migrate thousands of VMs simultaneously with no downtime. HCX, with its WAN optimization services, can provide a high-throughput connection over which on-premises networks can be extended into the cloud.
Heterogeneous Network Environments
Typically, your current on-premises network environment is one of the most important considerations in the migration planning process. Whether you have VXLANs, NSX for vSphere, NSX-T, or no NSX at all, each of these factors can complicate your migration plan. The good news is that HCX works by abstracting the underlying network implementation, extending your networks from on-premises to the cloud seamlessly without the need for complex and time-consuming network re-architecture.
Slow or Suboptimal Network Connectivity
A live vMotion across the WAN with vSphere is sensitive to network bandwidth. Typically, a connection speed greater than 250 Mbps is required, but with its advanced WAN optimization capabilities, HCX can migrate live VMs over much slower connections of around 100 Mbps per migration.
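For a rough sense of what these link speeds mean in practice, the sketch below estimates raw transfer time for a single 100 GB VM (an assumed size) at a few connection speeds. It deliberately ignores WAN optimization, compression, de-duplication, and protocol overhead, so real HCX migrations over optimized links will typically complete faster.

```python
# Rough transfer-time estimate for a single VM migration (illustrative only).
# Ignores WAN optimization, compression, de-duplication, and protocol overhead.

def transfer_hours(vm_size_gb: float, link_mbps: float) -> float:
    """Return the hours needed to move vm_size_gb over a link_mbps connection."""
    bits_to_move = vm_size_gb * 8 * 1000**3          # GB -> bits (decimal units)
    seconds = bits_to_move / (link_mbps * 1000**2)   # Mbps -> bits per second
    return seconds / 3600

for mbps in (100, 250, 1000):
    print(f"100 GB VM over {mbps} Mbps: ~{transfer_hours(100, mbps):.1f} hours")
```

At 100 Mbps the raw copy alone takes a little over two hours per 100 GB, which is why WAN optimization and de-duplication matter so much for bulk migrations.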
In this lab, we will walk through the deployment of HCX, site pairing, Service Mesh configuration, migration, and network extension.
TASKS
HCX has been successfully deployed in the cloud; now let's configure access to it via the SDDC Management Gateway. HCX, like all management components (such as vCenter), is protected by the Management Gateway and will be inaccessible until an allow rule is put in place.
- From the VDI desktop, access your SDDC using your SDDC student account
- Log in as:
- vmcexpert{1|2|3}-##@vmware-hol.com (where ## is your student number), e.g., [email protected]
- VMware1!
- Click View Details
- Select the Integrated Services tab
- Click Open HCX
- In the new browser tab, click Open HCX in your SDDC tile
NOTE: We expect this to fail because we have not yet defined a firewall rule on the Management Gateway to allow access to HCX.
- Back at the SDDC console, select the Settings tab
- Expand the HCX FQDN Section
- Confirm that the Resolution Address is set to Public; if not, click Edit
- In the Resolution Address section select Public IP <x.x.x.x> from the drop-down
- Click Save
- Record the Public IP address of HCX
If needed, you will update the on-premises HCX Manager hosts file with these values.
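If you want to confirm name resolution before continuing, the short sketch below resolves the cloud HCX Manager FQDN and compares the result with the public IP you just recorded. The FQDN and IP shown are placeholders for your own values.

```python
# Quick check that the cloud HCX Manager FQDN resolves to the public IP you recorded.
# Replace both placeholders with the values from your own SDDC.
import socket

HCX_FQDN = "hcx.sddc-xx-xx-xx-xx.vmwarevmc.com"   # placeholder: your SDDC's HCX FQDN
EXPECTED_IP = "203.0.113.10"                      # placeholder: the public IP you recorded

resolved = socket.gethostbyname(HCX_FQDN)
print(f"{HCX_FQDN} resolves to {resolved}")
if resolved != EXPECTED_IP:
    print("Resolution does not match - plan to update the on-premises HCX Manager hosts file.")
```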
- In the VMware Cloud on AWS portal click the OPEN NSX MANAGER button
- Click ACCESS VIA THE INTERNET to connect to the NSX Manager UI
- Choose the Security tab
- Click Gateway Firewall
- Click Management Gateway
- Click Add Rule twice to add two new rules
- Configure the rules as follows (a scripted alternative using the NSX Policy API is sketched after these steps):
- RULE 1
- NAME: HCX Outbound
- Sources: HCX
- Destinations: On-Prem Mgmt-Net
- Services: Any
- Action: Allow
- RULE 2
- NAME: HCX Inbound
- Sources: On-Prem Mgmt-Net
- Destinations: HCX
- Services: Appliance Management, SSH, HTTPS
- Action: Allow
- Click Publish
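If you prefer to script these two Management Gateway rules rather than click through the UI, a rough sketch using the NSX Policy API is shown below. The NSX URL, token, policy path, group paths, service paths, and rule IDs are placeholders and assumptions; verify them against your own SDDC, and treat the UI steps above as the supported path for this lab.

```python
# Hedged sketch: creating the two Management Gateway rules via the NSX Policy API
# instead of the UI. The NSX proxy URL, token, group paths, and service paths are
# placeholders/assumptions - verify them against your own SDDC before use, and note
# that extra fields (e.g. sequence_number, scope/"Applied To") may be required.
import requests

NSX_URL = "https://<nsx-reverse-proxy-url>"   # placeholder: from your SDDC's NSX information
TOKEN = "<csp-access-token>"                  # placeholder: VMware Cloud Services API token
HEADERS = {"Authorization": f"Bearer {TOKEN}", "Content-Type": "application/json"}
RULES_PATH = "/policy/api/v1/infra/domains/mgw/gateway-policies/default/rules"  # assumed path

rules = {
    "HCX-Outbound": {
        "display_name": "HCX Outbound",
        "source_groups": ["/infra/domains/mgw/groups/HCX"],                    # assumed path of the system-defined HCX group
        "destination_groups": ["/infra/domains/mgw/groups/On-Prem-Mgmt-Net"],  # assumed ID of your user-defined group
        "services": ["ANY"],
        "action": "ALLOW",
    },
    "HCX-Inbound": {
        "display_name": "HCX Inbound",
        "source_groups": ["/infra/domains/mgw/groups/On-Prem-Mgmt-Net"],
        "destination_groups": ["/infra/domains/mgw/groups/HCX"],
        # SSH and HTTPS are predefined services; also add the 'Appliance Management'
        # service path from your own SDDC (not shown here).
        "services": ["/infra/services/SSH", "/infra/services/HTTPS"],
        "action": "ALLOW",
    },
}

for rule_id, body in rules.items():
    resp = requests.put(f"{NSX_URL}{RULES_PATH}/{rule_id}", headers=HEADERS, json=body)
    resp.raise_for_status()
    print(f"Created/updated rule {rule_id}")
```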
Now, let’s try accessing HCX again. From the On-Prem Control Center, access your SDDC Console
- Click the Integrated Services tab in your SDDC VMC on AWS console
- Click Open HCX
- In the new Browser tab, click Open HCX in your SDDC Tile
NOTE: If opening HCX still fails, first close the Chrome browser and reopen it to clear any cache. Then verify that you did not forget to add 192.168.110.0/24 and your VDI desktop public IP to the On-prem MGMT NET group back in Lab #4, Task 2.3.
- Back in your SDDC console, select the Settings tab
- Copy the vCenter Username and Password (We will use this to log into HCX)
- Back in the HCX Login Tab
- Log in with the username and password recorded earlier
Now that HCX has been deployed in the cloud, the next step is to download the HCX appliance on-premises, import it onto an ESXi host, and configure it.
NOTE: In this lab environment the Appliance has already been imported, so we’ll move to configure it.
The first thing we will do is ensure the on-premises HCX Manager can resolve the IP of the cloud HCX Manager. To do this, we will edit the hosts file of the on-premises HCX Manager.
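A minimal sketch of that hosts-file change is shown below, assuming it is run (or the equivalent edit is made by hand) as root on the on-premises HCX Manager appliance; the IP and FQDN are placeholders for the public IP you recorded and your cloud HCX Manager's FQDN.

```python
# Hedged sketch: add a hosts entry mapping the cloud HCX Manager FQDN to its public IP.
# Run as root on the on-premises HCX Manager, or simply add the same line to
# /etc/hosts with a text editor. Both values below are placeholders.
HCX_PUBLIC_IP = "203.0.113.10"                          # placeholder: public IP recorded earlier
HCX_CLOUD_FQDN = "hcx.sddc-xx-xx-xx-xx.vmwarevmc.com"   # placeholder: cloud HCX Manager FQDN

with open("/etc/hosts", "a") as hosts:
    hosts.write(f"{HCX_PUBLIC_IP}\t{HCX_CLOUD_FQDN}\n")
```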
Task 3.1 - Activate HCX Manager
- In the browser, click the HCX browser tab (if you closed it, you can reopen it from your VMC SDDC > Add Ons > Open HCX)
- Click the Activation Keys tab at the top
- Click the blue Create Activation Key button in the top-right corner
- On the pop-up, click Confirm
- Copy the activation key
- Click Close
- In a new browser tab, access the VI Management > HCX Manager bookmark or browse to https://hcxmgr-l-01a.vcn.ninja.local:9443 (you may need to proceed through the certificate warning)
- Log in as:
- admin
- VMwareNinja1! (Note: You can also use Ctrl+M to paste in the password)
- Paste the activation key you copied earlier into the HCX Activation Key field
- Click Activate
- In the Location field Choose Tampa
- Click Continue
- Prefix the system name with <vmcexpert#-xx->, e.g., vmcexpert3-01-hcxmgr-l-01a.ninja.local-enterprise
- Click Continue
- When asked if you want to continue setting up HCX, click Yes, Continue to Configure HCX
- Enter the following values to connect to the on-premises vCenter:
- vCenter Name: https://vc-l-01a.vcn.ninja.local
- Username: [email protected]
- Password: VMwareNinja1!
- Connect your NSX: [check]
- NOTE: This checkbox is only required if workloads to be migrated are connected to NSX segments
- NSX Manager: https://nsxtmgr-l-01a.vcn.ninja.local
- Username: admin
- Password: VMwareNinja1!
- Click Continue
- On the popup Click Import Certificate
- Type https://vc-l-01a.vcn.ninja.local for SSO Identity source
- Click Continue
- Click the green Restart button
NOTE: The restart could take up to 5 mins
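If you would rather not refresh the page repeatedly, the small sketch below polls the HCX Manager appliance-management URL (the same one used in the steps above) until it responds after the restart; certificate verification is disabled because the lab appliance uses a self-signed certificate.

```python
# Poll the HCX Manager appliance-management UI until it answers after the restart.
# Uses the lab URL from the steps above; the self-signed certificate is not verified.
import time
import requests
import urllib3

urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)

URL = "https://hcxmgr-l-01a.vcn.ninja.local:9443"

for attempt in range(1, 61):                       # up to roughly 10 minutes
    try:
        resp = requests.get(URL, verify=False, timeout=5)
        print(f"HCX Manager responded with HTTP {resp.status_code}")
        break
    except requests.exceptions.RequestException:
        print(f"Attempt {attempt}: not ready yet, retrying in 10 seconds")
        time.sleep(10)
```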
Task 3.2 - Configure HCX Site Pairing
- From the VDI desktop open the bookmark for VI Management > vSphere Client (your on-prem vSphere) in a Chrome browser tab
Note: If you are currently logged in to your on-premises vCenter, you need to log off first; otherwise, the HCX icon will not be visible.
- Log in as:
- Username: [email protected]
- Password: VMwareNinja1! (Note: You can also use Ctrl+M to paste in the password)
- Click the vSphere Client dropdown then HCX
- In the left pane under Infrastructure click Site Pairing
- Click the blue Connect to Remote Site button
- Enter the following values
- Remote HCX URL: https://<Your_Cloud_HCX_Manager_FQDN>
- Username: [email protected]
- Password: <Cloudadmin_Password>
- Click Connect
NOTE: This information can be retrieved from the Settings tab of your VMC SDDC or from your Excel workbook.
- You should now see a connection between the student on-premises site and the HCX Cloud site
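For reference, the same remote HCX URL and cloudadmin credentials can also be used programmatically, for example to script later status checks. The sketch below opens an API session against the cloud HCX Manager; the /hybridity/api/sessions endpoint and the x-hm-authorization header are assumptions based on the HCX REST API and may differ between HCX versions, and the placeholders must be replaced with your own values.

```python
# Hedged sketch: authenticate an API session against the cloud HCX Manager using the
# same credentials as the site pairing. Endpoint and header names are assumptions
# based on the HCX REST API - verify them against your HCX version before relying on this.
import requests
import urllib3

urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)

HCX_URL = "https://<Your_Cloud_HCX_Manager_FQDN>"   # same placeholder URL used for the site pairing
CREDS = {"username": "[email protected]", "password": "<Cloudadmin_Password>"}  # placeholders

resp = requests.post(f"{HCX_URL}/hybridity/api/sessions", json=CREDS, verify=False)
resp.raise_for_status()
token = resp.headers.get("x-hm-authorization")
print("Authenticated:", token is not None)
```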
Task 3.2.1 - Configure Interconnect
- In the left menu click Infrastructure > Interconnect
- Under the Compute Profile tab, click the Create Compute Profile button
- In the top left of the pop-up, name the compute profile Shinobi-Com-Prof
- Click Continue
- On the Select Services page, click Continue (leave the settings at their defaults, but confirm that the Hybrid Interconnect, WAN Optimization, Cross-Cloud Migration, Bulk Migration, RAV, Network Extension, and DR bubbles are all green)
- In the top left of the Select Service Resources page, select Shotoku Compute01 from the drop-down and click OK
- Click Continue
- On the Select Deployment Resources and Reservations page, select the following values from the drop-downs in the top left (additional drop-downs appear as you make selections):
- Resource: Shotoku Compute01 then close
- Datastore: Shinobi-NFS-DS01 then close
- Folder: expand vm --> select HCX VMs radio button then close
- Click Continue
- On the Select Management Network Profile page click the Select Management Network Profile Drop-Down
- Click Create Network Profile
- On the pop-up, select the following options and enter the following values (a sketch after this task sanity-checks the IP ranges entered here):
- vCenter: leave the default vc-l-01a.vcn.ninja.local
- Network: Distributed Portgroup
- Network: Shinobi_vDS Mgmt
- Prefix Length: 24
- Gateway: 192.168.110.1
- IP Ranges (enter in the large text box with no spaces): 192.168.110.151-192.168.110.160
- Primary DNS: 192.168.110.10
- DNS Suffix: vcn.ninja.local
- HCX Traffic Type: (Check) Management, (Check) vSphere Replication
- Click Create
- You will see your Mgmt network checked in the drop down
- Click Close
- Click Continue
- On the Select Uplink Network Profile page click the Select Uplink Network Profile Drop-Down
- Click Create Network Profile
- Select the following options and enter the following values:
- Network: Distributed Portgroup
- Network: Shinobi_vDS - HCX Uplink
- Prefix Length: 24
- Gateway: 192.168.10.1
- IP Ranges: 192.168.10.151-192.168.10.160
- Primary DNS: 192.168.110.10
- DNS Suffix: vcn.ninja.local
- HCX Traffic Type: (Check) HCX Uplink
- Click Create
- You will now see your HCX Uplink profile in the drop-down, as well as the Mgmt profile you created previously
- Click Close
- Click Continue
- On the Select vMotion Network Profile page select the dropdown next to vMotion Network Profile
- Click Create Network Profile
- Select the following options and enter the following values:
- Network: Distributed Portgroup
- Network: Shinobi_vDS vMotion
- Prefix Length: 24
- Gateway: 192.168.111.1
- IP Ranges: 192.168.111.151-192.168.111.160
- Primary DNS: 192.168.110.10
- DNS Suffix: vcn.ninja.local
- HCX Traffic Type: (Check) vMotion
- Click Create
- You will now see 3 profiles
- Click Close
- Click Continue
- On the vSphere replication Network Profile page just click Continue
- On the Select Network Containers Eligible for Network Extension page select the Select Network Containers drop-down and select Ninja-Overlay-TZ
- Click Close
- Click Continue
- On the Review Connection Rules page, review the rules pop-up and click Continue
- On the Ready to Complete page Click Finish
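Before moving on to the Service Mesh, you can optionally sanity-check the three network profiles entered above (as referenced in the network profile steps). The sketch below confirms that each IP range falls inside its subnet and does not include the gateway address, using the exact values from this task.

```python
# Sanity check for the three HCX network profile ranges entered above: confirm
# each range falls within its subnet and does not include the gateway address.
import ipaddress

profiles = {
    "Management": ("192.168.110.1", 24, "192.168.110.151", "192.168.110.160"),
    "HCX Uplink": ("192.168.10.1",  24, "192.168.10.151",  "192.168.10.160"),
    "vMotion":    ("192.168.111.1", 24, "192.168.111.151", "192.168.111.160"),
}

for name, (gateway, prefix, first, last) in profiles.items():
    network = ipaddress.ip_network(f"{gateway}/{prefix}", strict=False)
    start, end = ipaddress.ip_address(first), ipaddress.ip_address(last)
    in_subnet = start in network and end in network
    gateway_outside_range = not (start <= ipaddress.ip_address(gateway) <= end)
    print(f"{name}: range in subnet={in_subnet}, gateway outside range={gateway_outside_range}")
```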
Task 3.2.2 - Create and Configure Service Mesh
- Click Service Mesh Tab
- Click the Create Service Mesh button
Now we will create a Service Mesh between the on-premises environment and the VMC SDDC.
An HCX Service Mesh is the effective HCX services configuration for a paired source and destination site. A Service Mesh can be added to a connected site pair that has a valid Compute Profile created at both sites.
Adding the Service Mesh initiates the deployment of HCX Interconnect virtual appliances at both sites. An Interconnect Service Mesh is always created at the source site.
- On the Select Sites page, ensure the on-premises site is set as the Source and the SDDC is set as the Destination, then click Continue
- On the Select Compute Profiles page select the following for each drop-down:
- Source Compute Profile: Shinobi-Com-Prof, then click Close
- Remote Compute Profile: ComputeProfile(vcenter), then click Close
- Click Continue
- On the Select Services to be Activated page, ensure all services (Hybrid Interconnect, WAN Optimization, Cross-Cloud Migration, Bulk Migration, RAV, Network Extension, DR) are checked
- Click Continue
- On the Advanced Configuration - Override Uplink Network Profiles (Optional) page Click Continue
- On the Advanced Configuration - Network Extension Appliance Scale Out page Click Continue
- On the Advanced Configuration - Traffic Engineering page:
- Check TCP Flow Conditioning
- Click Continue
- On the Review Topology Preview page Click Continue
- On the Ready to Complete page, set the friendly name for the Service Mesh to On-Prem-to-VMC
- Click Finish
With the Service Mesh defined, HCX will begin the deployment and configuration of its service appliances in the On-Premises environment and VMC on AWS.
- While on the Interconnect page, in the right pane under the Service Mesh tab click Tasks to view the Progress.
NOTE: This process takes around 15 minutes. Take this time to read ahead and review your configuration. You should check in and refresh every few minutes.
NOTE: If your Service Mesh fails or the Mobility Agent deployment fails, perform a RESYNC. If that step fails, delete and recreate the Service Mesh.
Conclusion
HCX is included with a VMware Cloud on AWS subscription. It is an application mobility platform designed to simplify application migration, workload rebalancing, and business continuity across data centers and clouds.
VMware HCX enables:
- Application migration to VMC on AWS
- You can schedule and migrate thousands of vSphere virtual machines from your data center(s) to VMC on AWS without requiring a reboot.
- Change platforms or upgrade vSphere versions
- Workload rebalancing
- Workload rebalancing provides a mobility platform across cloud regions and cloud providers, allowing customers to move applications and workloads at any time to meet scale, cost-management, compliance, and vendor-neutrality goals.