Update: after checking this internally, it turns out that vSphere Authentication Proxy is in fact supported with 2012 and 2012 R2 functional levels, and the KB will be updated.

Surprisingly, vSphere Authentication Proxy currently has no documented support for domains running at a functional level of 2012 or 2012 R2. The product will install correctly and register the CAM account, but will be unable to authenticate the ESXi host with the domain.
Domain Functional Level:
- Windows 2000 native
- Windows Server 2003
- Windows Server 2008
- Windows Server 2008 R2
- Windows Server 2012
- Windows Server 2012 R2
- The most recent revisions of ESXi were released before this Domain Functional Level existed, so at the time of this article's writing this ESXi version is untested against it.
- Due to limitations in vSphere Authentication Proxy, this version of Active Directory will not work. vSphere Authentication Proxy only works with Windows Server 2008 R2 or lower. For more information, see the Install or Upgrade vSphere Authentication Proxy section in the vSphere Installation Guide. If you are not using vSphere Authentication Proxy, this can be ignored.
- As of vSphere 5.0 Update 3, this Active Directory Domain Functional Level is supported.
- As of vSphere 5.1 Update 3, this Active Directory Domain Functional Level is supported.
- As of vSphere 5.5 Update 1, this Active Directory Domain Functional Level is supported.
A design consideration when using ESXi hosts that have no local storage, or that boot from an SD/USB device under 5.2GB, is where to place the scratch location.
You may be aware that if you boot your hosts with Auto Deploy or from an SD or USB device, the scratch partition lives on a ramdisk, which is non-persistent and therefore lost after a host reboot.
Consider configuring a dedicated NFS or VMFS volume to store the scratch logs, making sure it has the I/O characteristics to handle this, and create a separate folder on it for each ESXi host.
The scratch location can be specified by configuring the ScratchConfig.ConfiguredScratchLocation advanced setting.
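As a sketch, the setting can also be applied from the ESXi shell; the datastore and folder names below are placeholder assumptions, and the change only takes effect after a reboot:

```shell
# Create a per-host scratch folder on the shared datastore
# (datastore and folder names are examples only)
mkdir -p /vmfs/volumes/shared-datastore/.locker-esx01

# Point this host's scratch location at that folder
esxcli system settings advanced set \
  -o /ScratchConfig/ConfiguredScratchLocation \
  -s /vmfs/volumes/shared-datastore/.locker-esx01

# Verify the configured value (a reboot is required for it to take effect)
esxcli system settings advanced list -o /ScratchConfig/ConfiguredScratchLocation
```

Using a unique folder per host matters because two hosts pointed at the same scratch folder will overwrite each other's logs.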
Specifying a remote syslog server allows ESXi to ship its logs to a remote location, which helps mitigate some of the issues discussed above. However, the location of the syslog server is just as important as remembering to configure it in the first place.
You don't want to place your syslog server in the same cluster as the ESXi hosts you're configuring it on, or on the same primary storage for that matter; consider placing it inside your management cluster or in another cluster altogether.
The syslog location can be specified by configuring the Syslog.global.logHost advanced setting.
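A sketch of the per-host configuration from the ESXi shell (the syslog server address and port are placeholder assumptions); note the outbound syslog firewall ruleset also needs to be enabled:

```shell
# Ship logs to a remote syslog server (address and port are examples only)
esxcli system syslog config set --loghost='udp://syslog.mgmt.example.com:514'

# Reload the syslog agent so the change takes effect
esxcli system syslog reload

# Allow outbound syslog traffic through the ESXi firewall
esxcli network firewall ruleset set --ruleset-id=syslog --enabled=true
esxcli network firewall refresh
```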
Something that crops up again and again when discussing vSphere designs with customers is whether or not they should enable inter-VM TPS (Transparent Page Sharing), since at the end of 2014 VMware decided to disable it by default.
To understand whether you should enable TPS, you first need to understand what it does. TPS is quite a misunderstood beast: many people attribute a 20-30% memory overcommitment to TPS, when in reality it's not even close to that, because TPS only comes into effect when the ESXi host is close to memory exhaustion. TPS is the first trick in the box that ESXi uses to try to reduce memory pressure on the host; if that is not enough, the host starts ballooning, closely followed by compression and then finally swapping, none of which are cool guys!
So should I enable inter-VM TPS?... well, as that great IT saying goes... it depends!
The reason VMware disabled inter-VM TPS in the first place was their stronger stance on security (their "secure by default" stance): the concern was that a man-in-the-middle-style attack could be launched and shared pages compromised. So in a nutshell, you need to weigh the risk of enabling inter-VM TPS. If you are offering a public cloud solution from your vSphere environment, it may be best to leave TPS disabled, as you could argue you are under a greater risk of attack and have a greater responsibility for your customers' data.
If, however, you are running a private vSphere compute environment, the chances are only IT admins have access to the VMs, so the risk is much lower. Therefore, to reduce the risk of running into performance issues caused by ballooning and swapping, you may want to consider enabling inter-VM TPS, which would help mitigate both.
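If you do decide to re-enable inter-VM page sharing, VMware exposes this through the Mem.ShareForceSalting advanced setting (2 is the restrictive default; 0 restores the old inter-VM sharing behaviour). A sketch from the ESXi shell:

```shell
# Check the current salting mode
# (2 = inter-VM sharing disabled, the post-2014 default)
esxcli system settings advanced list -o /Mem/ShareForceSalting

# Set it to 0 to re-enable inter-VM Transparent Page Sharing
esxcli system settings advanced set -o /Mem/ShareForceSalting -i 0
```

The same setting can be managed centrally per host from the vSphere Client's Advanced System Settings.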
A common question which gets asked a lot is: should I upgrade my ESXi hosts or do a fresh install?
Most people, when asked that question, would be inclined to say "fresh install"; when challenged, some of the excuses I hear are "it will carry over a load of crap from the previous release". This seems to be a mindset we have got ourselves into from managing other vendors' operating systems (not pointing the finger).
In fact, if we look at the process in a bit more detail, an upgraded ESXi host is almost identical to a freshly installed one:
- Boot disk is not repartitioned
- Overwrites boot bank contents
- Configuration and VMFS volume preserved
Upgrade Process Walkthrough
Step 1: Save the config (state.tgz)
Step 2: Replace VIBs (overwrite)
Step 3: Reboot
Step 4: Config re-applied on reboot
Note: the partition layout is the same for 6 as it was for 5.x.
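The steps above can also be driven from the command line with esxcli against the VMware online depot; a sketch, where the image profile name is an example only and should be checked against the depot first:

```shell
# Put the host into maintenance mode first
esxcli system maintenanceMode set --enable true

# Upgrade in place from the VMware online depot
# (the image profile name below is an example only; list the available
#  profiles first with: esxcli software sources profile list -d <depot-url>)
esxcli software profile update \
  -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml \
  -p ESXi-6.0.0-20160302001-standard

# Reboot to apply; the saved configuration is re-applied on boot
reboot
```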
Don’t be afraid to upgrade!
Check out the latest fling to hit VMware Labs. It's still in development, but VMware are keen to get some user feedback on it; the article is copied below.
- VM operations (power on, off, reset, suspend, etc.)
- Creating a new VM, from scratch or from OVF/OVA (limited OVA support)
- Configuring NTP on a host
- Displaying summaries, events, tasks and notifications/alerts
- Providing a console to VMs
- Configuring host networking
- Configuring host advanced settings
- Configuring host services
A nice article has just been published by Network Communications News on a recent VDI and vSphere project I have just completed for Bernicia Group; I have copied the article below.
Bernicia Group, the housing organisation, has completed a major overhaul of its IT infrastructure, adopting a virtualised environment and reducing its disaster recovery (DR) period from days to less than 30 minutes. The project is expected to cut costs, speed up its processes and bolster security.
Bernicia, which has over 8,000 homes in the North East of England, worked with SITS to virtualise over 80 physical servers and switch from Microsoft Hyper-V to VMware software. The organisation's storage architecture has been reduced from 18 rack units to three, with a second virtual infrastructure deployed securely off-site.
SITS has also implemented a resilient Virtual Desktop Infrastructure (VDI) using VMware Horizon View, providing a faster and universal experience for remote and in-office staff.
More than 300 users can now access software via a virtual PC operating centrally on Bernicia's servers. Existing PCs are being converted into thin clients and are now centrally managed by IGEL's Universal Management Suite. Horizon View software has been installed on laptops, tablets and off-site PCs, which are increasingly used by Bernicia staff as the organisation expands and remote working rises.
Gary Hind, head of ICT at Bernicia, said: ‘Overall, our new technology infrastructure has allowed us to make major savings in several areas, including in licensing, power consumption and DR contracts, as well as significantly improving our productivity.’
SITS specialises in using best-of-breed products to provide a range of services, including server and desktop virtualisation, business continuity, enterprise storage, data centre facilities and health check and planning services. Earlier this year the business won the coveted Customer Choice Award from Data Protection Specialists Veeam Software.
Source : http://www.networkcommunicationsnews.co.uk/index.php/1624-virtualisation-investment-boosts-Bernicia