VMware removes the Enterprise edition of vSphere


Yesterday (10th February 2016) VMware announced on its blog that it is simplifying its lineup by removing the Enterprise edition. Existing Enterprise customers can either stay on the Enterprise edition and remain supported until its official End of Support, or upgrade to the Enterprise Plus edition at a special promotional price of 50% off the upgrade.

This change has been reflected in the vSphere pricing white paper and also on the VMware site.

 

 

 

To TPS, or not to TPS, that is the question?


Something that crops up again and again when discussing vSphere designs with customers is whether or not they should enable inter-VM TPS (Transparent Page Sharing), since at the end of 2014 VMware decided to disable it by default.

To understand whether you should enable TPS, you first need to understand what it does. TPS is quite a misunderstood beast, as many people attribute a 20-30% memory overcommitment to TPS, when in reality it's not even close to that. That is because TPS only comes into effect when the ESXi host is close to memory exhaustion. TPS is the first trick in the box that ESXi uses to try to reduce memory pressure on the host; if that is not enough, the host will then start ballooning, closely followed by compression and finally swapping, none of which are cool guys!

So should I enable inter-VM TPS? Well, as that great IT saying goes… it depends!

The reason VMware disabled inter-VM TPS in the first place was its stronger stance on security (its "secure by default" stance): the concern was that, under certain controlled conditions, page sharing between VMs could be abused to leak data from one VM to another. So, in a nutshell, you need to consider the risk of enabling inter-VM TPS. If you are offering a public cloud solution from your vSphere environment, it may be best to leave TPS disabled, as you could argue you are under a greater risk of attack and have a greater responsibility for your customers' data.

If, however, you are running a private vSphere compute environment, the chances are only IT admins have access to the VMs, so the risk is much lower. In that case, to reduce the risk of running into performance issues caused by ballooning and swapping, you may want to consider enabling TPS, which would help mitigate both.
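If you do decide to re-enable inter-VM TPS, it is controlled per host by the Mem.ShareForceSalting advanced setting (0 re-enables inter-VM sharing; 2 is the current default). As a minimal sketch from the ESXi shell (the same setting can also be changed in the vSphere Client under Advanced System Settings):

# show the current salting behaviour (2 = inter-VM TPS disabled, the post-2014 default)
esxcli system settings advanced list -o /Mem/ShareForceSalting

# set it to 0 to allow page sharing between VMs on this host; the change is picked up
# without a reboot, but pages are only shared as they are scanned over time
esxcli system settings advanced set -o /Mem/ShareForceSalting -i 0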

 

 

 

Migrating from a vDS to a VSS

So you need to remove an NSX-enabled host from a cluster, but first you need to migrate your host networking from the vDS to a VSS. There are a few ways to do this, but by far the easiest is to migrate your vmkernel portgroups rather than recreate them manually. Doing it this way preserves your IP config and saves you having to set up each of the vmkernel interfaces (Management, vMotion, iSCSI) again.

Note. This method assumes you have more than one NIC in your host/vDS.

Firstly we need to identify the NICs we will use to stage our portgroups during the migration (in our case two NICs), remove them from the vDS and make them available to the VSS. If you are tagging different VLANs over different NICs (e.g. NICs used just for iSCSI), make sure you know which NICs are tagged with which VLAN.


We have removed NIC 0 and NIC 2 from our vDS. This can be done by finding the host in the vSphere Client and clicking “Manage Physical Adapters”; on the next screen, click remove next to the adapters you wish to remove. (It’s good practice to make sure your teaming and failover orders are set not to use the NICs you will remove.)


Once the NICs have been removed, we need to create two virtual switches (in our case) ready to receive the migrated vmkernel portgroups.
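If you prefer the command line, the empty standard switches (and optionally their portgroups) can also be pre-created from the ESXi shell instead of the client; the switch name, uplink, portgroup name and VLAN ID below are only examples for this sketch:

# create a new standard vSwitch and give it one of the freed-up uplinks
esxcli network vswitch standard add --vswitch-name=vSwitch1
esxcli network vswitch standard uplink add --uplink-name=vmnic0 --vswitch-name=vSwitch1

# optionally pre-create a portgroup and tag it with the VLAN it needs
esxcli network vswitch standard portgroup add --portgroup-name=Management --vswitch-name=vSwitch1
esxcli network vswitch standard portgroup set --portgroup-name=Management --vlan-id=10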


We can then click “Manage Virtual Adapters” next to where we clicked “Manage Physical Adapters” above.


We then select the vmkernel interface we wish to migrate and click migrate.


Select the switch the vmkernel portgroup will be migrated to and click next.


Give the new portgroup a name, e.g. Management, vMotion, etc. (your existing vDS portgroup name can be used here).


If you are using VLAN trunking then specify the applicable VLAN as well.


Now click finish and the wizard will create the new VSS portgroup for you.


Repeat this process for each portgroup you need to migrate; in this example I migrated three portgroups.


 

If you have any VMs running on this host, you can create your VM portgroups and migrate the VMs onto these new portgroups at this point.

If we edit one of the portgroups, we can see that it is using the NIC we migrated over previously.


Make sure to set the MTU if needed, as migrating the portgroups does not always carry over the previous MTU value.
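The MTU can be set per vSwitch and per vmkernel interface. A quick sketch from the ESXi shell, where vSwitch1, vmk1 and 9000 are just example values:

# raise the MTU on the standard switch and on the migrated vmkernel interface
esxcli network vswitch standard set --vswitch-name=vSwitch1 --mtu=9000
esxcli network ip interface set --interface-name=vmk1 --mtu=9000

# confirm the values
esxcli network ip interface list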


Once you have migrated all your vmkernel portgroups over, you can go back, remove the remaining NICs from the vDS and assign them to the necessary standard vSwitch (VSS).

Make sure to set the correct teaming and failover order, and to go back and change the teaming and failover order on your vDS if you changed it at the start of the process.
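Before removing the vDS it is worth double-checking from the ESXi shell that all the vmkernel interfaces and uplinks are now on the standard switches and nothing you still need is left on the distributed switch:

# list the standard vSwitches, their uplinks and portgroups
esxcli network vswitch standard list

# list the vmkernel interfaces and the portgroups they now live on
esxcli network ip interface list

# show what (if anything) is still attached to the distributed switch
esxcli network vswitch dvs vmware list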


You can now safely remove the vDS from the host.


 

 

Demystifying The ESXi Upgrade Process

A common question which gets asked a lot is: should I upgrade my ESXi hosts or do a fresh install?

Most people, when asked that question, would be inclined to say “fresh install”. When challenged, the excuses I hear are along the lines of “it will carry over a load of crap from the previous release”. This seems to be a mindset we have got ourselves into from managing other vendors’ OSes (not pointing the finger).

In fact, if we look at the process in a bit more detail, an upgraded ESXi host is almost identical to a freshly installed ESXi host, because:

  • The boot disk is not repartitioned
  • The boot bank contents are overwritten
  • The configuration and VMFS volume are preserved

Upgrade Process Walkthrough

Step 1: Save the config (state.tgz)

Step 2: Replace VIBs (overwrite)

Step 3: Reboot

Step 4: Config re-applied on reboot


Note. The partition layout is the same for 6.x as it was for 5.x.
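If you do upgrade in place, one common route (besides Update Manager or booting the installer ISO) is an esxcli upgrade from an offline bundle; the datastore path and profile name below are placeholders for whichever bundle you download:

# put the host into maintenance mode first
esxcli system maintenanceMode set --enable true

# list the image profiles contained in the offline bundle, then apply the one you want
esxcli software sources profile list -d /vmfs/volumes/datastore1/ESXi-offline-bundle.zip
esxcli software profile update -d /vmfs/volumes/datastore1/ESXi-offline-bundle.zip -p <profile-name-from-the-list>

# reboot to load the upgraded boot bank
reboot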

 

Don’t be afraid to upgrade!

 

Horizon View: USB Redirection Problems

Problem

USB redirection in View Client fails to make local devices available on the remote desktop, or some devices do not appear to be available for redirection in View Client.

Cause

The following are possible causes of USB redirection failing to function correctly or as expected:
  • USB redirection is not supported for Windows 2003 or Windows 2008 systems or for View desktops that are managed by Microsoft Terminal Services.
  • Webcams are not supported for redirection.
  • The redirection of USB audio devices depends on the state of the network and is not reliable. Some devices require a high data throughput even when they are idle.
  • USB redirection is not supported for boot devices. If you run View Client on a Windows system that boots from a USB device, and you redirect this device to the remote desktop, the local operating system might become unresponsive or unusable. See http://kb.vmware.com/kb/1021409.
  • By default, View Client for Windows does not allow you to select Human Interface Devices (HIDs) and Bluetooth devices that are paired with an HID for redirection. See http://kb.vmware.com/kb/1011600.
  • RDP does not support the redirection of USB HIDs for the console session, or of smart card readers. See http://kb.vmware.com/kb/1011600.
  • RDP can cause unexpected problems when using USB flash cards. See http://kb.vmware.com/kb/1019547.
  • Windows Mobile Device Center can prevent the redirection of USB devices for RDP sessions. See http://kb.vmware.com/kb/1019205.
  • For some USB HIDs, you must configure the virtual machine to update the position of the mouse pointer. See http://kb.vmware.com/kb/1022076.
  • Some audio devices might require changes to policy settings or to registry settings. See http://kb.vmware.com/kb/1023868.
  • Network latency can cause slow device interaction or cause applications to appear frozen because they are designed to interact with local devices. Very large USB disk drives might take several minutes to appear in Windows Explorer.
  • USB flash cards formatted with the FAT32 file system are slow to load. See http://kb.vmware.com/kb/1022836.
  • A process or service on the local system opened the device before you connected to the remote desktop.
  • A redirected USB device stops working if you reconnect a desktop session even if the desktop shows that the device is available.
  • USB redirection is disabled in View Administrator.
  • Missing or disabled USB redirection drivers on the guest.
  • Missing or disabled USB redirection drivers or missing or disabled drivers for the device that is being redirected on the client.

Solution

  • If a redirected device remains unavailable or stops working after a temporary disconnection, remove the device, plug it in again, and retry the redirection.
  • In View Administrator, go to Policies > Global Policies, and verify that USB access is set to Allow under View Policies.
  • Examine the log on the guest for entries of class wssm_usb, and the log on the client for entries of class wswc_usb.
  • Entries with these classes are written to the logs if a user is not an administrator, or if the USB redirection drivers are not installed or are not working.
  • Open the Device Manager on the guest, expand Universal Serial Bus controllers, and reinstall the VMware View Virtual USB Device Manager and VMware View Virtual USB Hub drivers if these drivers are missing or re-enable them if they are disabled.
  • Open the Device Manager on the client, expand Universal Serial Bus controllers, and reinstall the VMware View Generic USB Device driver and the USB driver for the redirected device if these drivers are missing or re-enable them if they are disabled.

The View virtual machine is not accessible and the View Administration console shows the virtual machine status as “Already Used”

Cause

If a desktop that is set to refresh or delete after log off is reset, the desktop goes into the Already Used state, or possibly the Agent Disabled state.

This security feature prevents any previous session data from being available during the next log in, but leaves the data intact to enable administrators to access the desktop and retrieve lost data or investigate the root cause of the reset. Administrators can then refresh or delete the desktop.

The View desktop can also go into the Already Used state if a virtual machine is powered on on another ESXi host in the cluster in response to an HA event, or if it was shut down without reporting to the broker that the user had logged out.

 

Resolution

To resolve this issue, perform a refresh of the desktop using the View Administration console. For more information, see the VMware Horizon View Administration guide relevant to your version.

Alternatively, in View 5.1.2 and later releases, you can add a View LDAP attribute, pae-DirtyVMPolicy, under OU=Server Groups, DC=vdi, DC=vmware, DC=int, and set one of the values below for the attribute.

 

The pae-DirtyVMPolicy values provide these options for the Refresh on logoff policy:

  • pae-DirtyVMPolicy=0: Mark virtual machines that were not cleanly logged off as Already used and block user access to them. This is the default behavior in View 4.6 and later releases.
  • pae-DirtyVMPolicy=1: Allow virtual machines that were not cleanly logged off to become available without being refreshed. View Client users can access these desktops.
  • pae-DirtyVMPolicy=2: Automatically refresh virtual machines that were not cleanly logged off. View Client users can access these desktops after the refresh operation is completed.

vSphere 6 Update 1 Breaks Veeam Backups

So after updating my vSphere 6 home lab to Update 1, I noticed when I woke this morning that all my Veeam jobs had failed overnight… bugger.


After digging around in the logs (C:\ProgramData\Veeam\Backup) I discovered the following errors.


I then did a bit of searching on the internet and found the following VMware KB and Veeam post.

So I edited the /etc/vmware/config file with WinSCP and added vmauthd.ssl.noSSLv3 = false at the end of the file.

I then restarted the rhttpproxy service by enabling SSH on the host and using PuTTY to run the following command:

/etc/init.d/rhttpproxy restart


Once the service had been restarted, I repeated this on the remaining hosts in the cluster; I was then able to successfully re-run the Veeam jobs.
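For reference, the whole per-host workaround boils down to appending that one line and bouncing the reverse proxy. A rough sketch for an SSH session on each host, assuming the config line syntax used above (and taking a backup of the file first):

# keep a copy of the original config file
cp /etc/vmware/config /etc/vmware/config.bak

# append the workaround line from the VMware KB
echo 'vmauthd.ssl.noSSLv3 = false' >> /etc/vmware/config

# restart the reverse proxy so the change takes effect
/etc/init.d/rhttpproxy restart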


So to summarise, Gostev posted the following on the Veeam forums on Monday.

“As far as the original issue that created this topic, the research has now been completed.

SSL usage seems to be isolated to NFC API, and is caused by a bug in parsing the list of supported SSL/TLS protocol versions on our side. Despite TLS was added at some point, this bug went unnoticed, as things continued to work normally over SSL – until now.

The good news is that this will be very easy for us to fix – so, we are planning to provide the hotfix for v8 Update 2 soon, and also include this fix into v8 Update 3 scheduled to be released late September (with the main feature being Windows 10 support). Thanks”

 

It’s good to see Veeam reacting so quickly to this, as it appears to have caught even VMware themselves off guard, since it also affected View Composer connectivity as per the above VMware KB.

Veeam have also released their own KB, which can be found here.

Horizon View – vCenter Server (7 of 7)


The vCenter Server is the management layer for the ESXi hosts. View integrates with vCenter Server to allow it to power desktops on and off. View Composer talks to vCenter for any provisioning tasks that need to be run, e.g. creating or deleting desktops and refreshing desktops at logoff.

The vCenter Server should be dedicated purely to the VDI desktop environment and separate from any existing server environment.

Horizon View – View Client (6 of 7)


The View Client is the primary method for accessing View desktops; it can run on iOS, Android, thin clients, Mac, and PCs/laptops.


With the latest release, VMware have provided us with the ability to redirect client drives, a feature which has been available to Citrix users for quite some time. Check the latest Brian Madden review of this here.

 

Horizon View – View Agent (5 of 7)


The View Agent resides on the virtual desktop and provides features such as connection monitoring, virtual printing and access to local USB devices. The View Agent is installed by running the appropriate View Agent installer. An additional installer used to be required to add HTML (Blast) access; this was referred to as the Feature Pack installer, but it is now included within the agent installer.