vCNS to NSX Upgrades – vShield Manager Upgrade Steps

vShield Manager Upgrade Steps

1. Confirm the steps in 2.1.2 have been actioned.

2. Take an FTP backup of the vCNS (vShield Manager) appliance that is to be upgraded, then shut down the VM.

3. Check Snapshot has been taken of vShield Manager.

4. Check Support Bundle has been taken.

5. Shut down vShield Manager and check the appliance has 4 vCPUs and 16GB of memory.

6. Power on vShield Manager.

7. Upload the vShield-to-NSX upgrade bundle, apply it and reboot (the upgrade file is around 2.4GB!).

8. Check you can log in to NSX Manager A once the upgrade has completed.

9. You may need to restart the vSphere Web Client service on the vCenter Server in order to see the NSX plugin in the vSphere Web Client.

10. Check the SSO (single sign-on) configuration in NSX Manager; you may need to re-register it.

11. Configure the Segment IDs and Multicast address range (recorded earlier from vShield Manager).

12. Configure the backup to the FTP location and take a backup of NSX Manager A.

13. Create Snapshot on NSX Manager A.

14. Shutdown NSX Manager A.

15. Deploy new NSX Manager B from OVF with same IP as A.

16. Restore the FTP backup taken from NSX Manager A.

17. Check vCenter Registration and NSX Manager Login.

18. Check NSX Manager shows the expected list of Edges and Logical Switches (see the sketch after this list).

19. Once you are happy that connectivity is functioning correctly, continue with the upgrade.
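For steps 17 and 18, a quick way to double-check from the command line is the NSX Manager REST API. Below is a rough sketch only, using a hypothetical hostname (nsxmanager.example.local) and the built-in admin account; the calls shown are the NSX-v vCenter configuration, Edge and logical switch (virtual wire) endpoints:

    # Hypothetical hostname and credentials - adjust for your environment
    curl -k -u admin 'https://nsxmanager.example.local/api/2.0/services/vcconfig'    # vCenter registration
    curl -k -u admin 'https://nsxmanager.example.local/api/4.0/edges'                # list of NSX Edges
    curl -k -u admin 'https://nsxmanager.example.local/api/2.0/vdn/virtualwires'     # logical switches

If the registration is healthy and the expected objects are returned, you can be reasonably confident before moving on with the swing to NSX Manager B.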

For more information in this series, please continue on to the next part.

vCNS to NSX Upgrades – Pre Upgrade Steps

Pre-Upgrade Steps

One or more days before the upgrade, do the following:

  1. Verify that vCNS is at least the minimum required version (see point 8 below).
  2. Check you are running one of the recommended builds: vSphere 5.5 U3 or vSphere 6.0 U2.
  3. Verify that all required ports are open (please see Appendix A).
  4. Verify that your vSphere environment has sufficient resources for the NSX components.
  1. Verify that all your applicable vSphere clusters have sufficient resources to allow DRS to migrate running workloads during the host preparation stage (n+1).
  2. Verify that you can retrieve uplink port name information for vSphere Distributed Switches. See VMware KB 2129200 for further information. (Note: this is not applicable to NHS Manchester as we are expected to upgrade to NSX 6.2.4.)
  3. Ensure that forward and reverse DNS, NTP and the Lookup Service are all working (see the sketch after this list).
  4. If any vShield Endpoint partner services are deployed, verify compatibility before upgrading:
    • Consult the VMware Compatibility Guide for Networking and Security.
    • Consult the partner documentation for compatibility and upgrade details
  5. If you have Data Security in your environment, uninstall it before upgrading vShield Manager.
  6. Check all running edges are on the same (latest) version as the vShield Manager.
  7. Verify that the vShield Manager vNIC adapter is VMXNET3. This should be the case on a recent vShield Manager version; however, the E1000 vNIC may have been retained if you have previously upgraded the vShield Manager. To replace the vNIC, follow the steps in KB 2114813, which in part involves deploying a fresh vShield Manager and restoring the configuration. See Appendix C or (http://kb.vmware.com/lb/2114813)
  8. Increase the vShield Manager memory to 16GB.
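A simple way to confirm the DNS item above is a couple of nslookup queries from a management host; a minimal sketch, using hypothetical names and addresses:

    # Hypothetical names/addresses - substitute your vShield Manager, vCenter and ESXi hosts
    nslookup vsm01.example.local     # forward lookup should return the manager's IP
    nslookup 192.168.10.42           # reverse lookup should return vsm01.example.local

Repeat for vCenter, the Lookup Service address and each ESXi host.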

Pre-Upgrade Validation Steps

Immediately before you begin the upgrade, do the following to validate the existing installation.

  1. Identify administrative user IDs and passwords.
  2. Verify that forward and reverse name resolution is working for all components.
  3. Verify you can log in to all vSphere and vShield components.
  4. Note the current versions of vShield Manager, vCenter Server, ESXi and the other vShield components.
  5. Check Multicast address ranges are valid (The recommended multicast address range starts at 239.0.1.0/24 and excludes 239.128.0.0/24.)
  6. Verify that VXLAN segments are functional. Make sure to set the packet size correctly and include the don’t-fragment bit.
    1. Ping between two VMs that are on the same virtual wire but on two different hosts.
      1. From a Windows VM: ping -l 1472 -f <dest VM>
      2. From a Linux VM: ping -s 1472 -M do <dest VM>
    2. Ping between two hosts’ VTEP interfaces.
      1. ping ++netstack=vxlan -d -s 1572 <dest VTEP IP>
  7. Validate North-South connectivity by pinging out from a VM.
  8. Visually inspect the vShield environment to make sure all status indicators are green, normal, or deployed.
  9. Verify that syslog is configured.
  10. If possible, in the pre-upgrade environment, create some new components and test their functionality.
  11. Validate the netcpad and vsfwd user-world agent (UWA) connections.
    1. On an ESXi host, run esxcli network vswitch dvs vmware vxlan network list --vds-name=<vds name> and check the controller connection state.
    2. On vShield Manager, run the show tech-support save session command and search for “5671” to ensure that all hosts are connected to vShield Manager.
  12. Check firewall functionality via telnet or netcat to confirm the edge firewalls are working as expected (see the sketch after this list).
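Two quick command-line checks that can help with the last two items; a rough sketch, using hypothetical addresses and ports:

    # On an ESXi host: confirm a message bus connection to the manager on port 5671
    esxcli network ip connection list | grep 5671
    # From a Linux VM behind an edge: test a flow the firewall should allow and one it should block
    nc -zv -w 3 10.20.30.40 443      # expect success against an allowed rule
    nc -zv -w 3 10.20.30.40 3389     # expect a timeout/refusal against a blocked rule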

Pre-Upgrade Steps

Immediately before you begin the upgrade, do the following:

  1. Verify that you have a current backup of the vShield Manager, vCenter and other vCloud Networking and Security components. See Appendix B for the necessary steps to accomplish this.
  2. Purge old logs from the vShield Manager using the purge log manager and purge log system commands.
  3. Take a snapshot of the vShield Manager, including its virtual memory.
  4. Take a backup of the vDS
  5. Create a Tech Support Bundle.
  6. Record the Segment IDs and Multicast address ranges in use.
  7. Increase the vShield Manager resources to 16GB of memory and 4 vCPUs.
  8. Ensure that forward and reverse domain name resolution is working, using the nslookup command.
  9. If VUM is in use in the environment, ensure that the bypassVumEnabled flag is set to true in vCenter. This setting configures EAM to install the VIBs directly on the ESXi hosts even when VUM is installed and/or unavailable.
  10. Download and stage the upgrade bundle and validate it with md5sum (see the sketch after this list).
  11. Do not power down or delete any vCloud Networking and Security components or appliances before instructed to do so.
  12. VMware recommends doing the upgrade work in a maintenance window as defined by your company.
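For the md5sum check, a minimal sketch (the bundle filename below is hypothetical; compare the output against the checksum published on the My VMware download page):

    # Hypothetical filename - use the bundle you actually downloaded
    md5sum VMware-vShield-Manager-upgrade-bundle-to-NSX-6.2.4.tar.gz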

For more information in this series, please continue on to the next part.


VMware VMworld 2016 Content Preview Video – The Future of Network Virtualization


Check out the highlights from one of our VMworld 2015 star sessions – “The Future of Network Virtualization with VMware NSX”. Join us at VMworld 2016 to continue the conversation with CTO Bruce Davie as he presents cutting-edge content in “The Architectural Future of Network Virtualization” [NET8193R]. Now that you’ve seen the highlights, want to see the entire video?


VMware Social Media Advocacy

vSphere Authentication Proxy – No Support for 2012/2012R2 Domains

Update: after checking this internally, it turns out that vSphere Authentication Proxy is in fact supported with 2012 and 2012 R2 functional levels, and the KB will be updated.

Surprisingly, vSphere Authentication Proxy currently has no support for domains that have a functional level of 2012 or 2012 R2.

The product will install correctly and register the CAM account but will be unable to authenticate the ESXi host with the domain.

 

Supported Active Directory Domain Functional Levels by ESXi version (see notes below):

ESXi Version | Windows 2000 native | Windows Server 2003 | Windows Server 2008 | Windows Server 2008 R2 | Windows Server 2012 | Windows Server 2012 R2
4.x          | Yes                 | Yes                 | Yes                 | No                     | No (1,2)            | No (1,2)
5.0          | Yes                 | Yes                 | Yes                 | Yes                    | Yes (2,3)           | No (1,2)
5.1          | No                  | Yes                 | Yes                 | Yes                    | Yes (2)             | Yes (2,4)
5.5          | No                  | Yes                 | Yes                 | Yes                    | Yes (2)             | Yes (2,5)
6.0          | No                  | Yes                 | Yes                 | Yes                    | Yes (2)             | Yes (2)

 

Notes:
  1. The most recent revisions of this ESXi version were released before this Domain Functional Level, so at the time of writing the combination is untested.
  2. Due to limitations in the vSphere Authentication Proxy, this version of Active Directory will not work. vSphere Authentication Proxy will only work with Windows Server 2008 R2 or lower. For more information, see Install or Upgrade vSphere Authentication Proxy section in the vSphere Installation Guide. If you are not using vSphere Authentication Proxy, this may be ignored.
  3. As of vSphere 5.0 Update 3, this Active Directory Domain Functionality Level is now supported.
  4. As of vSphere 5.1 Update 3, this Active Directory Domain Functionality Level is now supported.
  5. As of vSphere 5.5 Update 1, this Active Directory Domain Functionality Level is now supported.

Source: https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2113023

 

SRM 6.1.1 Released

What’s New in Site Recovery Manager 6.1.1

SRM 6.1.1 was released yesterday, so I thought I would share with you what’s new!

VMware Site Recovery Manager 6.1.1 delivers new bug fixes described in the Resolved Issues section.

VMware Site Recovery Manager 6.1.1 provides the following new features:

  • Support for VMware vSphere 6.0 update 2.
  • Support for Two-factor authentication with RSA SecurID for vCenter Server 6.0U2.
  • Support for Smart Card (Common Access Card) authentication for vCenter Server 6.0U2.
  • Site Recovery Manager 6.1.1 now supports the following external databases:
    • Microsoft SQL Server 2012 Service Pack 3
    • Microsoft SQL Server 2014 Service Pack 1
  • Localization support for the Spanish language.

See the release notes for further information.

 

 

NSX Design & Deploy – Course Notes

So I have just finished a week in Munich on an internal NSX design & deploy course which had some great content. I have made a few notes and wanted to very briefly blog those, mainly for my benefit but also in the hope they may be useful or interesting to others as well.

In no particular order:

  • vSphere 5.5 – when upgrading the vDS from 5.1 to 5.5 you also need to upgrade the vDS to Advanced mode after you do the version upgrade. This enables features such as LACP and NIOC v3 and is a requirement for NSX to function correctly.
  • When considering Multicast mode with VTEPs in different subnets, be aware that PIM is not included in the Cisco standard licence and requires the advanced licence! PIM is a requirement when using multiple VTEPs in different subnets.
  • When using dynamic routing with an HA-enabled DLR, the total failover time could be up to 120 seconds, made up of:
    • 15 second dead time (6 seconds is the minimum, 9-12 seconds is recommended; 6 is not recommended as it is only pinging every 3 seconds)
    • claiming interfaces / starting services
    • ESXi update
    • route reconvergence
  • Consider setting a static route (summary route) with the correct weight, which is lower than the dynamic route, so traffic isn’t disrupted during DLR failover.
  • Different subnets for virtual and physical workloads let you have one summary route for the virtual world (helpful for the above event – this is likely not possible in a brownfield deployment).
  • Note that during a DLR failover the routing adjacency is lost, so the upstream routing table entry is removed during this period and the route is lost. This could be avoided by increasing the OSPF timers upstream so they won’t lose the route (see the sketch after this list).
  • If you fail to spread the transport zone across all the applicable hosts that are part of the same vDS, the port groups will exist but won’t be able to communicate (no traffic will flow if someone accidentally selects that port group).
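To watch the adjacency and route behaviour described above during a failover test, the NSX Edge/DLR console has Cisco-style show commands; a rough sketch (treat these as illustrative, as availability can vary between NSX versions):

    # On the DLR/ESG console, plus the equivalent commands on the upstream router
    show ip ospf neighbor     # is the adjacency still up?
    show ip route             # are the expected prefixes still present?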

Part two coming soon!

Don’t forget the logs

Scratch Partition

A design consideration when using ESXi hosts that have no local storage, or that boot from an SD/USB device smaller than 5.2GB, is where to place the scratch location.

You may be aware that if you use Auto Deploy, or an SD or USB device, to boot your hosts, then the scratch partition runs on a ramdisk, which is non-persistent and is therefore lost after a host reboot.

Consider configuring a dedicated NFS or VMFS volume to store the scratch logs, making sure it has the IO characteristics to handle this. You also need to create a separate folder for each ESXi host.

The scratch location can be specified by configuring the ScratchConfig.ConfiguredScratchLocation advanced setting on each ESXi host.

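A rough sketch of setting this from the ESXi shell, assuming a hypothetical datastore and folder name (one folder per host, as noted above); VMware KB 1033696 covers this in more detail, and the host needs a reboot for the change to take effect:

    # Hypothetical datastore and folder names
    mkdir /vmfs/volumes/scratch-datastore/.locker-esxi01
    vim-cmd hostsvc/advopt/update ScratchConfig.ConfiguredScratchLocation string /vmfs/volumes/scratch-datastore/.locker-esxi01
    # Reboot the host for the new scratch location to take effect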

 

 

Syslog Server

Specifying a remote syslog server allows ESXi to ship its logs to a remote location, which helps mitigate some of the issues discussed above. However, the placement of the syslog server is just as important as remembering to configure it in the first place.

You don’t want to place your syslog server in the same cluster as the ESXi hosts you’re configuring it on, or on the same primary storage for that matter; consider placing it inside your management cluster or on another cluster altogether.

The syslog location can be specified by configuring the Syslog.global.logHost advanced setting on each ESXi host.

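The same thing can also be done from the command line with esxcli; a minimal sketch, assuming a hypothetical syslog target on the default UDP port:

    # Hypothetical syslog target - adjust the protocol, host and port for your environment
    esxcli system syslog config set --loghost='udp://syslog01.example.local:514'
    esxcli system syslog reload
    # Allow outbound syslog traffic through the ESXi firewall
    esxcli network firewall ruleset set --ruleset-id=syslog --enabled=true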

Disabling vSphere vDS Network Rollback

There are certain situations when temporarily disabling network rollback might be a good idea; notice the use of the word temporarily, as it’s not something you would want to leave disabled.

Let’s say you need to change the VLAN configuration on a network port which is used for vSphere management traffic from an access port to a trunk port. Before doing this you would need to change the vDS port group to tag the management traffic VLAN; however, in doing this the host would lose management network access until the correct configuration was applied on the switch port.

The vDS would detect this change and attempt to roll back the configuration to untagged. Obviously we don’t want this to happen in this instance, so VMware allows us to disable the rollback feature.

We can do this by configuring the following settings.

  1. In the vSphere Web Client, navigate to a vCenter Server Instance.
  2. On the Manage tab, click Settings.
  3. Select Advanced Settings and click Edit.
  4. Select the config.vpxd.network.rollback key, and change the value to false. If the key is not present, you can add it and set the value to false.
  5. Click OK.
  6. Restart the vCenter Server to apply the changes.

Remember to revert the change once you are finished.

VMware removes the Enterprise edition of vSphere


Yesterday (10th February 2016) VMware announced on its blog that it is simplifying its line-up by removing the Enterprise edition. Existing Enterprise users can either stay on the Enterprise edition and be supported until its official end of support, or upgrade to the Enterprise Plus edition at a special promotional 50%-off upgrade price.

This change has been reflected in the Pricing White paper and also on the VMware Site.

 

 

 

To TPS, or not to TPS, that is the question?


Something that crops up again and again when discussing vSphere designs with customers is whether or not they should enable inter-VM TPS (Transparent Page Sharing), since at the end of 2014 VMware decided to disable it by default.

To understand whether you should enable TPS, you first need to understand what it does. TPS is quite a misunderstood beast, as many people attribute a 20-30% memory overcommitment to TPS, when in reality it’s not even close to that. That is because, on modern hosts using large pages, TPS only really comes into effect when the ESXi host is close to memory exhaustion. TPS is the first trick in the box that ESXi uses to try to reduce memory pressure on the host; if that is not enough, the host starts ballooning, closely followed by compression and finally swapping, none of which are good news!

So should I enable (Inter-VM) TPS?… well as that great IT saying goes… it depends!

The reason VMware disabled inter-VM TPS in the first place was their stronger stance on security (their secure-by-default stance): the concern was that a malicious VM could use shared pages as a side channel to infer data belonging to another VM. So in a nutshell, you need to consider the risk of enabling inter-VM TPS. If you are running and offering a public cloud solution from your vSphere environment then it may be best to leave TPS disabled, as you could argue you are under a greater risk of attack and have a greater responsibility for your customers’ data.

If, however, you are running a private vSphere compute environment, then the chances are only IT admins have access to the VMs, so the risk is much lower. Therefore, to reduce the risk of running into performance issues caused by ballooning and swapping, you may want to consider enabling inter-VM TPS, which would help mitigate both of these.
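If you do decide to re-enable inter-VM page sharing, VMware exposes this through the Mem.ShareForceSalting host advanced setting (0 re-enables inter-VM sharing, 2 is the salted per-VM default – see KB 2097593). A minimal sketch from the ESXi shell:

    # Check the current value (default is 2: per-VM salting, i.e. inter-VM TPS disabled)
    esxcli system settings advanced list -o /Mem/ShareForceSalting
    # Re-enable inter-VM page sharing on this host
    esxcli system settings advanced set -o /Mem/ShareForceSalting -i 0

Remember this is a per-host setting, so apply it consistently across the cluster and document the risk decision behind it.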