vCNS to NSX Upgrades – vShield Endpoint to NSX Guest Introspection

vShield Endpoint to NSX Guest Introspection

  1. In the Installation tab, click Service Deployments. The Installation Status column says Upgrade Available.
  2. Select the Guest Introspection deployment that you want to upgrade. The Upgrade icon in the toolbar above the services table is enabled.
  3. Click the Upgrade icon and follow the UI prompts.

After Guest Introspection is upgraded, the installation status is Succeeded and service status is Up. Guest Introspection service virtual machines are visible in the vCenter Server inventory.

For more information in this series, please continue to the next part.

vCNS to NSX Upgrades – vShield Edges to NSX Edges

vShield Edge to NSX Edge Upgrade Steps

  1. In the vSphere Web Client, select Networking & Security > NSX Edges.
  2. For each NSX Edge instance, double-click the edge and check the following configuration settings before upgrading:
    1. Click Manage > VPN > L2 VPN and check whether L2 VPN is enabled. If it is, take note of the configuration details and then delete all L2 VPN configuration.
    2. Click Manage > Routing > Static Routes and check whether any static routes are missing a next hop setting. If they are, add the next hop before upgrading the NSX Edge.
  3. For each NSX Edge instance, select Upgrade Version from the Actions menu.

After the NSX Edge is upgraded successfully, the Status is Deployed and the Version column displays the new NSX version. If the upgrade fails with the error message “Failed to deploy edge appliance,” make sure that the host on which the NSX Edge appliance is deployed is connected and not in maintenance mode.

  1. If an edge fails to upgrade and does not roll back to the old version, click the Redeploy NSX Edge icon and then retry the upgrade.
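
Rather than clicking through each edge, the deployed versions can also be listed in bulk from the NSX-v REST API; a minimal sketch, assuming admin credentials (the hostname is illustrative, and the exact XML field names can vary by NSX version):

  # List all edges; each edge summary includes its ID and deployed version
  curl -k -u 'admin:password' 'https://nsx-manager.example.com/api/4.0/edges' \
    | grep -Ei '<id>|version'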

For more information in this series, please continue to the next part.

vCNS to NSX Upgrades – Host Upgrades

Host Upgrades

  1. Place DRS into manual mode (do not disable DRS).
  2. Click Networking & Security and then click Installation.
  3. Click the Host Preparation tab.

All clusters in your infrastructure are displayed.

  4. For each cluster, click Update or Install in the Installation Status column. Each host in the cluster receives the new logical switch software.

The host upgrade initiates a host scan. The old VIBs are removed (though they are not completely deleted until after the reboot). New VIBs are installed on the altboot partition. To view the new VIBs on a host that has not yet rebooted, you can run the esxcli software vib list --rebooting-image | grep esx command.
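
For example, to compare the staged image with the running image from the ESXi shell (standard esxcli commands):

  # Before the reboot: NSX VIBs staged in the alternate boot image
  esxcli software vib list --rebooting-image | grep esx
  # After the reboot: the same VIBs should appear in the running image
  esxcli software vib list | grep esx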

  5. Monitor the installation until the Installation Status column displays a green check mark.
  6. After manually evacuating the hosts, select the cluster and click Resolve. The Resolve action attempts to complete the upgrade and reboot all hosts in the cluster. If the host reboot fails for any reason, the Resolve action halts. Check the hosts in the Hosts and Clusters view, make sure the hosts are powered on, connected, and contain no running VMs, then retry the Resolve action (see the host-side checks sketched after this list).
  7. You may have to repeat the above process for each host.
  8. You can confirm connectivity by performing the following checks:
    1. Verify that VXLAN segments are functional. Make sure to set the packet size correctly and include the don’t-fragment bit (a 1472-byte ICMP payload plus 28 bytes of ICMP/IP headers fills the standard 1500-byte MTU; 1572 plus 28 fills the 1600-byte MTU required for VXLAN).
    2. Ping between two VMs that are on same virtual wire but on two different hosts (one host that has been upgraded and one host that has not)
      1. From a Windows VM: ping -l 1472 -f <dest VM>
      2. From a Linux VM: ping -s 1472 -M do <dest VM>
    3. Ping between two hosts’ VTEP interfaces.
      1. ping ++netstack=vxlan -d -s 1572 <dest VTEP IP>
    4. All virtual wires from your 5.5 infrastructure are renamed to NSX logical switches, and the VXLAN column for the cluster displays Enabled.
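
If the Resolve action halts, the host-side state can be confirmed from the ESXi shell before retrying; a minimal sketch using standard commands (the VDS name is illustrative):

  # The host must be connected and out of maintenance mode for Resolve to succeed
  esxcli system maintenanceMode get          # expect: Disabled
  # Confirm the new NSX VIBs are active after the reboot
  esxcli software vib list | grep esx
  # Confirm the host's VXLAN configuration on the distributed switch
  esxcli network vswitch dvs vmware vxlan network list --vds-name=Compute-VDS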

For more information in this series, please continue to the next part.

vCNS to NSX Upgrades – vShield Manager Upgrade Steps

vShield Manager Upgrade Steps

1. Confirm the steps in 2.1.2 have been actioned.

2. Take an FTP backup of the vCNS Manager to be upgraded and shut down the VM.

3. Check that a snapshot has been taken of vShield Manager.

4. Check that a support bundle has been taken.

5. Shut down vShield Manager and check that the appliance has 4 vCPUs and 16 GB of memory.

6. Power on vShield Manager.

7. Upload the vShield-to-NSX upgrade bundle, apply it and reboot (the upgrade file is 2.4 GB!).

8. Check that you can log in to NSX Manager A once the upgrade has completed.

9. You may need to restart the vCenter Web Client service in order to see the plugin in the vSphere Web Client.

10. Check the SSO (single sign-on) registration in the NSX Manager configuration; you may need to re-register.

11. Configure the segment IDs and multicast addresses (recorded earlier from vShield Manager).

12. Configure the backup to the FTP location and take a backup of NSX Manager A.

13. Create a snapshot of NSX Manager A.

14. Shut down NSX Manager A.

15. Deploy a new NSX Manager B from OVF with the same IP as A.

16. Restore the FTP backup from NSX Manager A.

17. Check the vCenter registration and NSX Manager login.

18. Check NSX Manager for the list of edges and logical switches.

19. Once you are happy that connectivity is functioning correctly, continue with the upgrade.
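
Several of the checks above (version, vCenter registration, segment IDs, edges, logical switches) can also be scripted against the NSX Manager REST API rather than the UI; a hedged sketch, assuming admin credentials and NSX-v 6.2 API paths (the hostname is illustrative):

  NSX=https://nsx-manager.example.com
  # Step 8: confirm the appliance version information
  curl -k -u 'admin:password' "$NSX/api/1.0/appliance-management/global/info"
  # Step 11: record the segment ID pools and multicast address ranges
  curl -k -u 'admin:password' "$NSX/api/2.0/vdn/config/segments"
  curl -k -u 'admin:password' "$NSX/api/2.0/vdn/config/multicastranges"
  # Step 17: confirm the vCenter registration
  curl -k -u 'admin:password' "$NSX/api/2.0/services/vcconfig"
  # Step 18: list the edges and logical switches
  curl -k -u 'admin:password' "$NSX/api/4.0/edges"
  curl -k -u 'admin:password' "$NSX/api/2.0/vdn/virtualwires"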

For more information in this series, please continue to the next part.

vCNS to NSX Upgrades – Pre-Upgrade Steps

Pre-Upgrade Steps

One or more days before the upgrade, do the following:

  1. Verify that vCNS is at least the minimum version required for the upgrade (see point 12 below).
  2. Check that you are running one of the following recommended builds: vSphere 5.5 U3 or vSphere 6.0 U2.
  3. Verify that all required ports are open (please see Appendix A; a quick spot-check is sketched after this list).
  4. Verify that your vSphere environment has sufficient resources for the NSX components.
  5. Verify that all your applicable vSphere clusters have sufficient resources to allow DRS to migrate running workloads during the host preparation stage (n+1).
  6. Verify that you can retrieve uplink port name information for vSphere Distributed Switches. See VMware KB 2129200 for further information. (Note: this is not applicable to NHS Manchester, as we are expected to upgrade to NSX 6.2.4.)
  7. Ensure that forward and reverse DNS, NTP, and the Lookup Service are working.
  8. If any vShield Endpoint partner services are deployed, verify compatibility before upgrading:
    • Consult the VMware Compatibility Guide for Networking and Security.
    • Consult the partner documentation for compatibility and upgrade details.
  9. If you have Data Security in your environment, uninstall it before upgrading vShield Manager.
  10. Check that all running edges are on the same latest version as the vShield Manager.
  11. Verify that the vShield Manager vNIC adapter is VMXNET3. This should be the case depending on the version originally deployed; however, the E1000 vNIC may have been retained if you have previously upgraded the vShield Manager. To replace the vNIC, follow the steps in KB 2114813, which in part involves deploying a fresh vShield Manager and restoring the configuration. See Appendix C or http://kb.vmware.com/kb/2114813.
  12. Increase the vShield Manager memory to 16 GB.
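
A quick way to spot-check a couple of the required ports (point 3) is with netcat; the hostnames are illustrative and the ports shown are only examples, so see Appendix A for the full list:

  # From an ESXi host: the message-bus port hosts use to reach the manager
  nc -z nsx-manager.example.com 5671
  # From the manager or a jump host: vCenter reachability on 443
  nc -zv vcenter.example.com 443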

Pre-Upgrade Validation Steps

Immediately before you begin the upgrade, do the following to validate the existing installation.

  1. Identify administrative user IDs and passwords.
  2. Verify that forward and reverse name resolution is working for all components.
  3. Verify you can log in to all vSphere and vShield components.
  4. Note the current versions of vShield Manager, vCenter Server, ESXi, and the other vShield components.
  5. Check that the multicast address ranges are valid (the recommended multicast address range starts at 239.0.1.0/24 and excludes 239.128.0.0/24).
  6. Verify that VXLAN segments are functional. Make sure to set the packet size correctly and include the don’t fragment bit.
    1. Ping between two VMs that are on same virtual wire but on two different hosts.
      1. From a Windows VM: ping -l 1472 -f <dest VM>
      2. From a Linux VM: ping -s 1472 -M do <dest VM>
    2. Ping between two hosts’ VTEP interfaces.
      1. ping ++netstack=vxlan -d -s 1572 <dest VTEP IP>
    3. Validate North-South connectivity by pinging out from a VM.
    4. Visually inspect the vShield environment to make sure all status indicators are green, normal, or deployed.
    5. Verify that syslog is configured.
    6. If possible, in the pre-upgrade environment, create some new components and test their functionality.
      1. Validate netcpad and vsfwd user-world agent (UWA) connections.
        1. On an ESXi host, run esxcli network vswitch dvs vmware vxlan network list --vds-name=<vds_name> and check the controller connection state.
        2. On vShield Manager, run the show tech-support save session command, and search for “5671” to ensure that all hosts are connected to vShield Manager.
      2. Check Firewall functionality via Telnet or netcat to confirm the edge firewalls are working as expected.
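
For that last check, a netcat probe through the edge makes the permit/deny behaviour explicit; the addresses and ports below are illustrative:

  # Expect success where the edge firewall permits the flow...
  nc -zv -w 3 10.0.1.10 80
  # ...and a timeout or refusal where the rule set denies it
  nc -zv -w 3 10.0.1.10 23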

Final Pre-Upgrade Steps

Immediately before you begin the upgrade, do the following:

  1. Verify that you have a current backup of the vShield Manager, vCenter and other vCloud Networking and Security components. See Appendix B for the necessary steps to accomplish this.
  2. Purge old logs from the vShield Manager using the CLI commands purge log manager and purge log system.
  3. Take a snapshot of the vShield Manager, including its virtual memory.
  4. Take a backup of the vDS configuration.
  5. Create a Tech Support Bundle.
  6. Record the segment IDs and multicast address ranges in use.
  7. Increase the vShield Manager resources to 16 GB of memory and 4 vCPUs.
  8. Ensure that forward and reverse domain name resolution is working, using the nslookup command.
  9. If VUM is in use in the environment, ensure that the bypassVumEnabled flag is set to true in vCenter. This setting configures EAM to install the VIBs directly to the ESXi hosts even when VUM is installed but unavailable.
  10. Download and stage the upgrade bundle and validate it with md5sum (see the sketch after this list).
  11. Do not power down or delete any vCloud Networking and Security components or appliances until instructed to do so.
  12. VMware recommends performing the upgrade in a maintenance window as defined by your company.
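
For step 10, the bundle can be validated from any Linux shell before staging it; the filename below is illustrative:

  # Compare the output against the checksum published on the VMware download page
  md5sum VMware-vShield-Manager-upgrade-bundle-to-NSX-6.2.4.tar.gz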

For more information in this series, please continue to the next part.

Hardware VTEPS – Design Considerations

Remember that a logical switch with a LIF on the DLR cannot be extended using a hardware VTEP. The GUI will not let you select that logical switch, so you have to create the LIF on an ESG (Edge Services Gateway) instead. This affects traffic flow: all VM-to-VM traffic that needs to be routed must now traverse your edge cluster and that ESG, which is not optimal. This needs to be carefully considered in your design.

Also remember that Universal Logical Switches cannot be used in conjunction with a hardware VTEP.

For a list of current supported Hardware VTEPs please see https://www.vmware.com/resources/compatibility/pdf/vi_hvxg_guide.pdf


VMware VMworld 2016 Content Preview Video – The Future of Network Virtualization

Check out the highlights from one of our VMworld 2015 star sessions – “The Future of Network Virtualization with VMware NSX”. Join us at VMworld 2016 to continue the conversation with CTO Bruce Davie as he presents cutting-edge content in “The Architectural Future of Network Virtualization” [NET8193R]. Now that you’ve seen the highlights, want to see the entire video?


VMware Social Media Advocacy

NSX: Using the DFW? Then don’t upgrade to 6.2.3

If you are using the DFW, which I would guess most NSX customers are, then please read the following KB closely before upgrading to 6.2.3.

VMware’s current advice is not to upgrade if you are using the DFW, due to an issue with the new Global Address Set optimization feature introduced in 6.2.3.

Symptoms

After upgrading to NSX for vSphere 6.2.3 with Distributed Firewall (DFW) and Security Groups (SG) configured, you experience these symptoms:

  • Traffic disruption may be encountered after a vMotion of compute virtual machines that is followed by changes to the configuration of the Global Address Sets in the SG referenced by those virtual machines.

Cause

In NSX-V 6.2.3, a new Global Address Set (Addrset) is introduced as an optimization feature. Any virtual machines created on NSX-V 6.2.3 use the shared Global Addrset and refer to it for address updates.

After upgrading to NSX for vSphere 6.2.3, virtual machines that were part of an SG created in an earlier version and are then migrated to another host running NSX-V 6.2.3 continue to refer to the old local copy of the Addrsets and ignore new updates in the Global Addrsets.
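
You can see which address sets a VM’s DFW filter is actually holding on a given host from the ESXi shell; a minimal sketch (the filter name is hypothetical, take it from the summarize-dvfilter output):

  # Find the VM's firewall filter name...
  summarize-dvfilter | grep -A 3 vmware-sfw
  # ...then dump the address sets that filter currently references
  vsipioctl getaddrsets -f nic-1000-eth0-vmware-sfw.2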

Resolution

This is a known issue affecting VMware NSX for vSphere 6.2.3.

Currently, there is no resolution.

To work around this issue:

If you have already upgraded to NSX for vSphere 6.2.3

  1. Disable vMotion on the VMK interface on all hosts in the compute cluster.
  2. If your Default_Rule rule is set to DENY, change it to ALLOW.
  3. Disable Distributed Firewall (DFW), per cluster, one at a time.
  4. Wait 15 minutes between each cluster change.
  5. Enable Distributed Firewall (DFW), per cluster, one cluster at a time.
  6. Wait for all applications to recover. (Note: this is application dependent, and recovery can take some time.)
  7. Change the Default_Rule rule to DENY.

If you have not yet upgraded to NSX for vSphere 6.2.3

VMware recommends not upgrading to this version if you are using the Distributed Firewall (DFW) feature.

Source: VMware KB 2146227


NSX IS-IS Routing Protocol Support

Something that has finally been cleared up is the deprecation of the IS-IS routing protocol from NSX. It has always been a bit of a confusing issue: is it supported or is it not?

Well, it has been in the UI and the manual for a while; however, it was always my understanding that eventually this would be removed. From 6.2.3 this is finally the case, which should help clarify our position on using this protocol.

http://pubs.vmware.com/Release_Notes/en/nsx/6.2.3/releasenotes_nsx_vsphere_623.html

Deprecated and Discontinued Functionality

IS-IS is not a supported routing protocol for the Edge Services Gateway router.

It will be removed from the UI and APIs in a future release. (Issue 1498251)