So, having done the VMware NSX 6.0 ICM course in December and having had NSX running in my lab for the last 2-3 weeks, I recently decided to deploy micro-segmentation and thought I would share my experiences with you (scroll down to the bottom for this).
Why use Micro-Segmentation?
Before you ever decide to deploy micro-segmentation in the datacentre, you need to understand why it hasn’t been operationally feasible in the past.
Traditional firewalls implement control as physical or even virtual “choke points” on the network. As application workload traffic is directed through these control points, rules are enforced and packets are either allowed or blocked. Using the traditional firewall approach to achieve micro-segmentation quickly runs into two key operational barriers – throughput capacity and operations/change management. This is why micro-segmentation has traditionally not been commonplace in the datacentre.
NSX looks like it’s going to change that. I was able to deploy NSX Manager on my existing vSphere 6 platform (it could equally have been 5.5) and configure the distributed firewall with most of the rules I needed for my environment within a day – albeit for my home lab rather than a production environment.
So why do it?
1) Isolation
Isolation is the foundation of most network security, whether for compliance, containment or simply keeping development, test and production environments from interacting. While manually configured and maintained routing, ACLs and/or firewall rules on physical devices have traditionally been used to establish and enforce isolation, isolation and multi-tenancy are inherent to network virtualization. Virtual networks are isolated from any other virtual network and from the underlying physical network by default, delivering the security principle of least privilege. No physical subnets, no VLANs, no ACLs, no firewall rules are required to enable this isolation. This is worth repeating…NO configuration required. Virtual networks are created in isolation and remain isolated unless specifically connected together.
2) Segmentation
Related to isolation, but applied within a multi-tier virtual network, is segmentation. Traditionally, network segmentation is a function of a physical firewall or router, designed to allow or deny traffic between network segments or tiers. For example, segmenting traffic between a web tier, application tier and database tier. Traditional processes for defining and configuring segmentation are time consuming and highly prone to human error, resulting in a large percentage of security breaches. Implementation requires deep and specific expertise in device configuration syntax, network addressing, application ports and protocols.
Network segmentation, like isolation, is a core capability of the VMware NSX network virtualization platform. A virtual network can support a multi-tier network environment, meaning multiple L2 segments with L3 segmentation, or micro-segmentation on a single L2 segment using distributed firewalling defined by workload security policies. As in the example above, these could represent a web tier, application tier and database tier. Physical firewalls and access control lists deliver a proven segmentation function, trusted by network security teams and compliance auditors. Confidence in this approach for cloud data centers, however, has been shaken, as more and more attacks, breaches and downtime are attributed to human error in manual network security provisioning and change management processes.
In a virtual network, network services (L2, L3, ACL, Firewall, QoS etc.) that are provisioned with a workload are programmatically created and distributed to the hypervisor vSwitch. Network services, including L3 segmentation and firewalling, are enforced at the virtual interface. Communication within a virtual network never leaves the virtual environment, removing the requirement for network segmentation to be configured and maintained in the physical network or firewall.
3) Cost
An SDDC approach leveraging VMware NSX not only makes micro-segmentation operationally feasible, it does it cost effectively. Typically, micro-segmentation designs begin by engineering east-west traffic to “hairpin” through high-capacity physical firewalls. As noted above, this approach is expensive and operationally intensive, to the point of infeasibility in most large environments. The entire NSX platform typically represents a fraction of the cost of the physical firewalls alone in these designs, and scales out linearly as customers add more workloads.
Thoughts on Deployment
It certainly takes some time to work out which rules you need to configure before setting the main default rule to block or reject, because once you do, anything you haven’t captured and created a rule for will be blocked.
Having a good working knowledge and understanding of your environment will make this process a lot easier. For large environments you would want to structure your VMs or applications into sections, as I have below, allowing you to add rules for each application or VM. You can then gather the rule set for each application/section in turn, even setting the main default rule to reject and adding an allow rule for each individual application/section, until you have digested it down into individual ports if you so wish.
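To make the idea concrete, here is a minimal sketch (not the NSX API – all names, IPs and ports are hypothetical) of a rule table organised into per-application sections with a final default rule set to reject, evaluated first-match-wins as a distributed firewall would:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Rule:
    name: str
    source: str          # "any" or an identifier (VM name, IP, group)
    dest: str
    port: Optional[int]  # None matches any port
    action: str          # "allow" or "reject"

    def matches(self, src: str, dst: str, port: int) -> bool:
        return (self.source in ("any", src)
                and self.dest in ("any", dst)
                and (self.port is None or self.port == port))

# Sections mirror the per-application grouping described above;
# the "default" section carries the catch-all reject rule.
sections = {
    "web-app": [Rule("web-in", "any", "web-01", 443, "allow")],
    "db-app":  [Rule("app-to-db", "app-01", "db-01", 3306, "allow")],
    "default": [Rule("default", "any", "any", None, "reject")],
}

def evaluate(src: str, dst: str, port: int) -> str:
    """First matching rule wins, as in a firewall rule table."""
    for rules in sections.values():
        for rule in rules:
            if rule.matches(src, dst, port):
                return rule.action
    return "reject"
```

With this structure, adding a new application means adding a new section of allow rules; anything not captured falls through to the default reject – exactly the behaviour you want to have digested before flipping that last rule.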
One to watch out for: currently in NSX 6.0-1, if VMware Tools is not running you cannot reference the VM by its object name and can only write the rule against its IP. I have been told this will be fixed in the upcoming 6.2 release, which is due soon. An example of this is shown below, where I have used the IPs.
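The workaround can be sketched as follows (a hypothetical illustration, not NSX code – the VM names and IPs are made up): without VMware Tools the platform cannot resolve a VM object to its addresses, so the rule has to name the IP directly.

```python
def rule_source(vm: dict) -> str:
    """Return the identifier usable in a firewall rule for this VM.

    Object-based rules rely on the platform resolving the VM's
    addresses, which (in NSX 6.0/6.1) requires VMware Tools to be
    running; otherwise fall back to a static IP-based rule.
    """
    if vm.get("tools_running"):
        return vm["name"]   # object-based rule, resolved dynamically
    return vm["ip"]         # static IP-based rule

web01 = {"name": "web-01", "ip": "192.168.10.11", "tools_running": True}
legacy = {"name": "legacy-01", "ip": "192.168.10.50", "tools_running": False}
```

The downside of the IP fallback is that the rule no longer follows the VM if its address changes, which is why the object-name fix in 6.2 matters.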