Open VM Tools is now supported with NSX Distributed Firewall
It went largely unnoticed, but starting in NSX 6.3.2, Open VM Tools is supported with the Distributed Firewall (DFW) for VM-to-IP mapping.
VMware Social Media Advocacy
VMware Announces General Availability of vSphere 6.5 Update 1
vSphere 6.5 Update 1 is the update you’ve been looking for! Today, VMware is excited to announce the general availability of vSphere 6.5 Update 1. This is the first major update to the well-received vSphere 6.5 that was released in November 2016. With this update release, VMware builds upon an already robust, industry-leading virtualization platform. The post appeared first on the VMware vSphere Blog.
Let’s assume you want to deploy a vRA 7.x blueprint into an environment where NSX 6.3.x has been deployed and the DFW default rule is set to deny. During provisioning, the vRA VMs will of course need firewall access to services such as Active Directory and DNS in order to customise successfully, and herein lies the problem.
You might assume you could create your security policies and security groups as normal and simply include the security tag within the blueprint to grant access to these services. However, vRA won’t assign the security tag until after the machine has finished customizing. This creates a potential issue: because the default DFW rule is set to deny, the VM won’t have access to the network resources such as AD and DNS that it needs to finish customizing successfully.
To design around this, consider placing some shared-services rules at the top of the DFW rule table that allow these VMs access to services such as Active Directory and DNS; this gives the vRA VMs the network access they need to finish deploying and customizing successfully. You could achieve this in a number of ways, such as creating a security group with dynamic membership based on an OS name containing “Windows” and a VM name matching your vRA VMs’ naming convention. As soon as vRA creates the VM object it is assigned to the correct shared-services security group and given the correct access, and you can then layer in additional services using security tags as originally intended. A rough sketch of such a security group is shown below.
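For illustration only, here is a rough sketch of how such a dynamically-populated security group could be created via the NSX API with curl. The NSX Manager address, credentials, group name, naming convention and the exact dynamic-criteria keys/values are assumptions on my part – verify the payload against the NSX for vSphere API guide for your version before using it:

# Hypothetical example: shared-services security group with dynamic membership
# matching Windows VMs whose name follows a vRA naming convention.
curl -k -u admin:password -X POST \
  -H "Content-Type: application/xml" \
  https://nsx-manager.lab.local/api/2.0/services/securitygroup/bulk/globalroot-0 \
  -d '<securitygroup>
        <name>SG-vRA-Shared-Services</name>
        <dynamicMemberDefinition>
          <dynamicSet>
            <operator>OR</operator>
            <dynamicCriteria>
              <operator>AND</operator>
              <key>VM.GUEST_OS_FULL_NAME</key>
              <criteria>contains</criteria>
              <value>Windows</value>
            </dynamicCriteria>
            <dynamicCriteria>
              <operator>AND</operator>
              <key>VM.NAME</key>
              <criteria>starts_with</criteria>
              <value>vra-</value>
            </dynamicCriteria>
          </dynamicSet>
        </dynamicMemberDefinition>
      </securitygroup>'

The shared-services allow rules at the top of the DFW table would then simply use this group as their destination.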
Since vSphere 6.5 came along, a PSC/SSO topology that I have seen customers deploy quite frequently has been deprecated, so I wanted to quickly explain what has changed and why.
Since vSphere 6.5 it is no longer possible to re-point a vCenter Server between SSO sites – see VMware KB 2131191.
This has a big knock-on effect for some customers, as traditionally they deployed only a single external PSC and a single vCenter at each site. In the event of a failure of the PSC at Site 1, the vCenter Server at Site 1 could be re-pointed to the PSC at Site 2, getting you back up and running until the failed PSC was recovered.
However, in vSphere 6.5 this is no longer possible – if you have two SSO sites, let’s call them Site 1 and Site 2, you cannot re-point a vCenter Server between them. So if you only had a single PSC per site (think back to the above example and vSphere 6.0), you would not be able to recover that site successfully.
So how do I get round this?
Deploy two PSCs at each site, within the same SSO site/domain (see the below diagram). You don’t even need to use a load balancer, as you can manually re-point your vCenter to the other PSC running at the same site in the event of a failure.
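As a rough illustration, the manual re-point is a single command run from the appliance shell of the vCenter Server Appliance (the PSC hostname below is an example; check the procedure for your specific build):

# Re-point this vCenter Server to a different PSC in the same SSO site/domain
cmsso-util repoint --repoint-psc psc02.lab.local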
Therefore, when deploying multiple SSO sites and PSCs across physical sites, consider the above and make sure you are deploying a supported and recoverable topology.
For a list of supported topologies in vSphere 6.5 and for further reading please see VMware KB 2147672
Using the NSX API you can set your own SSH login banner. Simply connect using your API client of choice (in my example I used Postman) and change the following…
Insert Your Login Banner Here
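As an illustration of the API call pattern only – the endpoint path below is a placeholder (look up the exact SSH banner resource in the NSX API guide for your version), and the NSX Manager address and credentials are examples:

curl -k -u admin:password -X PUT \
  -H "Content-Type: application/xml" \
  "https://nsx-manager.lab.local/api/<ssh-banner-endpoint-per-the-API-guide>" \
  -d "Insert Your Login Banner Here"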
Here is an example of a really simple but cool query that can be set up in vRealize Log Insight to track accepted and failed SSH logins to Edge devices.
appname contains “sshd”
text contains “failed password” (This can be changed to “accepted password” to track accepted logins)
hostname contains “<Edge hostname>” (the hostname of the Edge device you want to track)
vDS is NOT a pre-requisite for NSX Guest Introspection
This seems to be a common misconception among both customers and third-party vendors. The vDS is also not licensed as part of NSX under the “NSX for Endpoint” license. Therefore a normal standard vSwitch (VSS) is fully supported for deploying NSX Guest Introspection for use with third-party solutions, e.g. for antivirus.
Please refer to the NSX 6.x Installation Guide for details on how to use the “Specified on host” option when deploying Guest Introspection.
Extract from the NSX 6.x Installation Guide Below…
Initial Troubleshooting – Start with the basics: the transport network and the control plane
Identifying Controller Deployment Issues:
Verify connectivity from NSX Manager to vCenter:
Identify EAM common issues:
NOTE: You can also access the Managed Object Browser by accessing address:
https://<VCIP or hostname>/eam/mob/
UWAs – vsfwd or netcpa – not functioning correctly? This manifests itself as the firewall showing a bad status or the control plane between the hypervisor(s) and the controllers being down.
Common Deployment Issues:
1. Connecting NSX to vCenter
2. Controller Deployment
3. Host Preparation
NSX Controller CLI VXLAN Commands:
NSX Controller CLI cluster status and health:
VXLAN namespace for esxcli:
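A few hedged examples of the commands these headings refer to (VNI 5000 and the vDS name are placeholders; verify the exact syntax against the NSX CLI reference for your version):

# On an NSX Controller – cluster health and VXLAN/logical switch state
show control-cluster status
show control-cluster logical-switches vni 5000
show control-cluster logical-switches mac-table 5000
show control-cluster logical-switches arp-table 5000

# On an ESXi host – the VXLAN namespace under esxcli
esxcli network vswitch dvs vmware vxlan network list --vds-name Compute_VDS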
Troubleshooting Components – Understand the component interactions to narrow the problem focus
No connectivity for new VMs, increased BUM traffic (ARP cache misses).
General NSX Controller troubleshooting steps:
Verify VTEPs have sent network information to Controllers.
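For example (a sketch; the VNI is a placeholder), the VTEP report for a given logical switch can be checked on a controller with:

show control-cluster logical-switches vtep-table 5000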
User World Agent (UWA) issues:
General UWA troubleshooting steps:
Check if UWAs are connected to NSX Manager and Controllers:
esxcli network ip connection list | grep 5671 (Message bus TCP connection)
esxcli network ip connection list | grep 1234 (Controller TCP connection)
Check the configuration file /etc/vmware/netcpa/config-by-vsm.xml on the ESXi host that has the settings under UserVars/Rmq* (In particular UserVars/RmqipAddress).
The UserVars currently needed for the message bus all live under UserVars/Rmq*.
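A quick way to see the Rmq* UserVars actually present on a host, and the configuration pushed by NSX Manager, is along these lines (a sketch; the output format varies by build):

# Dump advanced settings and filter for the message bus (Rmq) UserVars
esxcli system settings advanced list | grep -i Rmq

# Inspect the configuration pushed by NSX Manager
cat /etc/vmware/netcpa/config-by-vsm.xml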
NVS Issues – Limited/intermittent connectivity for VMs on the same logical switch.
General VXLAN troubleshooting steps:
Verify connectivity between VTEPs:
Verify VXLAN component:
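Two hedged examples of these checks from an ESXi host (the vmk interface and remote VTEP IP are placeholders):

# Test VTEP-to-VTEP connectivity over the VXLAN netstack with a full-size, non-fragmented packet
ping ++netstack=vxlan -d -s 1572 -I vmk3 192.168.250.52

# Review the VXLAN configuration applied to the host
esxcli network vswitch dvs vmware vxlan list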
Distributed Firewall issues – Flow Monitoring provides vNIC level visibility of VM traffic flow.
Note: Add a VM to the Exclusion List to remove it from the DFW. This allows you to determine if it’s a DFW issue. If there is still a problem, then it is not DFW-related.
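To dig further on the host itself, commands along these lines show the DFW filters and the rules programmed on a given vNIC (a sketch; the filter name comes from the first command’s output and is an example here):

# List the dvfilters/DFW filters applied on this host
summarize-dvfilter

# Show the rules programmed on a specific filter
vsipioctl getrules -f nic-12345-eth0-vmware-sfw.2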
NSX Manager Log (collected via WEB UI)
Edge (VDR/ESG) Log (collected via WEB UI)
NSX Controller Logs
Issues and Corresponding Logs
Installation/upgrade related issues
Edge Services Gateway issues
NSX Manager issues
VXLAN data plane: /var/log/vmkernel.log.
VXLAN control plane: /var/log/netcpa.log.
Management plane: /var/log/vsfwd.log and /var/log/netcpa.log.
Distributed Firewall (DFW) issues
Implementing a multi-tenant networking platform with NSX
So, we have covered the typical challenges of a multi-tenant network and designed a solution to one of them; it’s time to get down to the bones of it and do some configuration! Let’s implement it in the lab. I have set up an NSX ESG, Cust_1-ESG, and an NSX DLR control VM, Cust_1-DLR, with the below IP configuration:
I have put together a short guide on how to upgrade from vCNS to NSX; please find links to each section below.