VMware Announces General Availability of vSphere 6.5 Update 1

vSphere 6.5 Update 1 is the Update You’ve Been Looking For! Today, VMware is excited to announce the general availability of vSphere 6.5 Update 1. This is the first major update to the well-received vSphere 6.5 that was released in November of 2016. With this update release, VMware builds upon the already robust industry-leading virtualization… The post VMware Announces General Availability of vSphere 6.5 Update 1 appeared first on the VMware vSphere Blog.


VMware Social Media Advocacy

vRA 7 & NSX 6.3 – The Security Tag Gotcha!

Let’s assume you’re deploying a vRA 7.x blueprint into an environment where NSX 6.3.x has been deployed and the DFW default rule is set to deny. During provisioning, the vRA VMs will of course need firewall access to services such as Active Directory and DNS so that they can customize successfully, and herein lies the problem.

You might assume you could go about creating your security policies and security groups as normal and simply include the security tag within the blueprint to grant access to these services. However, vRA won’t assign the security tag until after the machine has finished customizing. This creates a potential issue: because the default DFW rule is set to deny, the VM won’t have access to the network resources it needs, such as AD and DNS, to finish customizing successfully.

To design around this, consider placing some shared-services rules at the top of the DFW rule table that allow these VMs access to services such as Active Directory and DNS; this gives the vRA VMs the network access they need to finish deploying and customizing successfully. You could achieve this in a number of ways, such as creating a security group with dynamic membership based on a guest OS name containing “Windows” and a VM name that matches your vRA naming convention. That way, as soon as vRA creates the VM object it is placed in the shared-services security group and given the correct access, and you can then layer in additional services using security tags as originally intended, as sketched below.
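As a rough illustration, that dynamic security group could also be created through the NSX-v REST API rather than the vSphere Web Client. This is a hedged sketch only: the NSX Manager address, credentials, group name and VM name prefix are hypothetical, and the exact XML schema and membership keys should be verified against the NSX 6.3 API guide before use.

# Create a shared-services security group with dynamic membership
# (guest OS contains "Windows" AND VM name starts with the vRA naming prefix - placeholders)
curl -k -u admin:'password' -X POST \
  -H "Content-Type: application/xml" \
  https://<nsx-mgr-ip>/api/2.0/services/securitygroup/bulk/globalroot-0 \
  -d '<securitygroup>
        <name>SG-vRA-Shared-Services</name>
        <dynamicMemberDefinition>
          <dynamicSet>
            <operator>OR</operator>
            <dynamicCriteria>
              <operator>AND</operator>
              <key>VM.GUEST_OS_FULL_NAME</key>
              <criteria>contains</criteria>
              <value>Windows</value>
            </dynamicCriteria>
            <dynamicCriteria>
              <operator>AND</operator>
              <key>VM.NAME</key>
              <criteria>starts_with</criteria>
              <value>vRA-</value>
            </dynamicCriteria>
          </dynamicSet>
        </dynamicMemberDefinition>
      </securitygroup>'

The shared-services DFW rules at the top of the rule table then reference this group as their destination, so AD and DNS access applies as soon as the VM object appears in vCenter, before vRA attaches any security tags.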

vSphere 6.5 PSCs – Multisite SSO Domains & Failures

Since vSphere 6.5 came along, a PSC/SSO topology that I have seen customers deploy quite frequently has been deprecated, so I wanted to quickly explain what has changed and why.

As of vSphere 6.5 it is no longer possible to re-point a vCenter Server between SSO sites – see VMware KB 2131191.

This has a big knock-on effect for some customers, as traditionally they only deployed a single (external) PSC and a single vCenter at each site. In the event of a failure of the PSC at Site 1, the vCenter Server at Site 1 could be re-pointed to the PSC in Site 2. This would then get you back up and running until you recovered the failed PSC.

However, in vSphere 6.5 this is no longer possible – if you have two SSO sites, let’s call them Site 1 and Site 2, you cannot re-point a vCenter Server between the two sites. So if you only had a single PSC per site (think back to the above example and vSphere 6.0), you would not be able to recover that site successfully.

So how do I get around this?

Deploy two PSCs at each site, within the same SSO site/domain (see the below diagram). You don’t even need to use a load balancer, as you can manually re-point your vCenter to the other PSC running at the same site in the event of a failure, as sketched below.
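For reference, on the vCenter Server Appliance this intra-site re-point is a single command. A minimal sketch, assuming an appliance deployment; the PSC FQDN is a placeholder:

# Re-point this vCenter to another PSC in the same SSO site/domain (placeholder FQDN)
cmsso-util repoint --repoint-psc psc02.lab.local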

Therefore, when deploying multiple SSO sites and PSCs across physical sites, consider the above and make sure you are deploying a supported and recoverable topology.

For a list of supported topologies in vSphere 6.5, and for further reading, please see VMware KB 2147672.

 

Implementing a multi-tenant networking platform with NSX

So, we have covered the typical challenges of a multi-tenant network and designed a solution to one of these; it’s time to get down to the bones of it and do some configuration! Let’s implement it in the lab. I have set up an NSX ESG, Cust_1-ESG, and an NSX DLR control VM, Cust_1-DLR, with the below IP configuration:


VMware Social Media Advocacy

vCNS to NSX Upgrades – T+1 Post-Upgrade Steps

T+1 Post-Upgrade Steps

After the upgrade, do the following:

  1. Delete the snapshot of the NSX Manager taken before the upgrade.
  2. Create a current backup of the NSX Manager after the upgrade.
  3. Check that VIBs have been installed on the hosts.

NSX installs these VIBs:

esxcli software vib get --vibname esx-vxlan

esxcli software vib get --vibname esx-vsip
  4. If Guest Introspection has been installed, also check that this VIB is present on the hosts:

esxcli software vib get --vibname epsec-mux

  5. Resynchronize the host message bus. VMware advises that all customers perform a resync after an upgrade. You can use the following API call to perform the resynchronization on each host.
URL: https://<nsx-mgr-ip>/api/4.0/firewall/forceSync/<host-id>

HTTP Method: POST

Headers:

Authorization: base64-encoded value of username:password

Accept: application/xml

Content-Type: application/xml
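Consolidated into a single call with curl, the resync might look like this; a hedged sketch in which the Manager address, credentials and host ID are placeholders (the host ID is the vCenter MOID of the host, e.g. host-42):

# Force a message bus resync for one host; curl builds the base64 Authorization header from -u
curl -k -u admin:'password' -X POST \
  -H "Accept: application/xml" \
  -H "Content-Type: application/xml" \
  https://<nsx-mgr-ip>/api/4.0/firewall/forceSync/<host-id>

Repeat the call for each host ID in the upgraded clusters.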

vCNS to NSX Upgrades – vShield Endpoint to NSX Guest Introspection

vShield Endpoint to NSX Guest Introspection

  1. In the Installation tab, click Service Deployments. The Installation Status column says Upgrade Available.
  2. Select the Guest Introspection deployment that you want to upgrade. The Upgrade icon in the toolbar above the services table is enabled.
  3. Click the Upgrade icon and follow the UI prompts.

After Guest Introspection is upgraded, the installation status is Succeeded and service status is Up. Guest Introspection service virtual machines are visible in the vCenter Server inventory.

For more information in this series, please continue on to the next part.

vCNS to NSX Upgrades – vShield Edges to NSX Edges

vShield Edge to NSX Edge Upgrade Steps

  1. In the vSphere Web Client, select Networking & Security > NSX Edges
  2. For each NSX Edge instance, double click the edge and check for the following configuration settings before upgrading
    1. Click Manage > VPN > L2 VPN and check if L2 VPN is enabled. If it is, take note of the configuration details and then delete all L2 VPN configuration
    2. Click Manage > Routing > Static Routes and check if any static routes are missing a next hop setting. If they are, add the next hop before upgrading the NSX Edge
  3. For each NSX Edge instance, select Upgrade Version from the Actions menu

After the NSX Edge is upgraded successfully, the Status is Deployed and the Version column displays the new NSX version (see the API sketch after this list for a non-UI check). If the upgrade fails with the error message “Failed to deploy edge appliance,” make sure that the host on which the NSX Edge appliance is deployed is connected and not in maintenance mode.

  4. If an Edge fails to upgrade and does not roll back to the old version, click the Redeploy NSX Edge icon and then retry the upgrade
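If you prefer to confirm the result outside the UI, the edges can also be listed from the NSX Manager REST API. A hedged sketch, assuming NSX-v with placeholder credentials; the exact fields returned (name, state, version and so on) vary by release, so treat this as a starting point:

# List all NSX Edges; inspect the XML response for the expected post-upgrade state
curl -k -u admin:'password' -H "Accept: application/xml" \
  https://<nsx-mgr-ip>/api/4.0/edges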

For more information in this series, please continue on to the next part.

vCNS to NSX Upgrades – Host Upgrades

Host Upgrades

  1. Place DRS into manual mode (do not disable DRS).
  2. Click Networking & Security and then click Installation.
  3. Click the Host Preparation tab.

All clusters in your infrastructure are displayed.

  4. For each cluster, click Update or Install in the Installation Status column. Each host in the cluster receives the new logical switch software.

The host upgrade initiates a host scan. The old VIBs are removed (though they are not completely deleted until after the reboot). New VIBs are installed on the altboot partition. To view the new VIBs on a host that has not yet rebooted, you can run the esxcli software vib list --rebooting-image | grep esx command.

  5. Monitor the installation until the Installation Status column displays a green check mark (a hedged API alternative is sketched after this list).
  6. After manually evacuating the hosts, select the cluster and click Resolve. The Resolve action attempts to complete the upgrade and reboot all hosts in the cluster. If the host reboot fails for any reason, the Resolve action halts. Check the hosts in the Hosts and Clusters view, make sure the hosts are powered on, connected, and contain no running VMs, then retry the Resolve action.
  7. You may have to repeat the above process for each host.
  8. You can confirm connectivity by performing the following checks:
    1. Verify that VXLAN segments are functional. Make sure to set the packet size correctly and include the don’t-fragment bit.
    2. Ping between two VMs that are on the same virtual wire but on two different hosts (one host that has been upgraded and one host that has not).
      1. From a Windows VM: ping -l 1472 -f <dest VM>
      2. From a Linux VM: ping -s 1472 -M do <dest VM>
    3. Ping between two hosts’ VTEP interfaces.
      1. ping ++netstack=vxlan -d -s 1572 <dest VTEP IP>
    4. Confirm that all virtual wires from your 5.5 infrastructure have been renamed to NSX logical switches, and that the VXLAN column for the cluster says Enabled.
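As a scripted alternative to watching the Installation Status column (referenced in step 5 above), the fabric preparation status of a cluster can also be queried from the NSX Manager REST API. A hedged sketch, assuming NSX-v; the Manager address, credentials and cluster MOID (e.g. domain-c7) are placeholders, and the response fields should be checked against your NSX API guide:

# Query host-preparation / VXLAN status for one cluster (placeholders throughout)
curl -k -u admin:'password' -H "Accept: application/xml" \
  "https://<nsx-mgr-ip>/api/2.0/nwfabric/status?resource=domain-c7"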

For more information in this series, please continue on to the next part.