I’ll cover a recent design decision we had to make: whether or not to inject a default route from your on-premises network into VMConAWS.
At first glance this may not sound like a big deal; however, depending on the customer’s topology and footprint (i.e. whether they have existing on-premises locations), it can be something you need to consider carefully.
For example, egress costs for internet connectivity directly out of AWS are charged at a higher rate than egress across a Direct Connect connection. Therefore, if your customer already has an on-premises presence and infrastructure in place, it may well be more cost-effective to route traffic back and egress out of that instead.
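A quick back-of-envelope comparison makes the cost difference concrete. The per-GB rates and monthly volume below are placeholder assumptions, not real AWS pricing or figures from this design; substitute the current published rates for your region:

```python
# Hypothetical egress cost comparison. The per-GB rates below are
# placeholders, NOT real AWS pricing -- look up current rates for your region.
internet_egress_per_gb = 0.09   # assumed rate: egress straight out of AWS to the internet
dx_egress_per_gb = 0.02         # assumed rate: egress over a Direct Connect connection

monthly_egress_gb = 5_000       # example workload volume

internet_cost = monthly_egress_gb * internet_egress_per_gb
dx_cost = monthly_egress_gb * dx_egress_per_gb
print(f"internet: ${internet_cost:.2f}/mo, direct connect: ${dx_cost:.2f}/mo")
```

Even with rough numbers, the gap scales linearly with egress volume, which is why topology matters once an on-premises breakout already exists.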
The internet breakout within VMConAWS via the IGW is also unfiltered and uninspected; only the NSX L4 firewalls protect that traffic. If you did want to egress directly out of AWS, you would need to stand up a transit VPC and deploy your own Layer 7 next-generation firewalls to inspect that traffic for you.
We also have an AWS limitation of only being able to receive 100 routes from your network into AWS via BGP. Depending on the scale and topology of your network, summarising it may be very difficult and may well push you over the 100-route limit. If you do exceed that limit, the BGP session is terminated and connectivity over the Direct Connect is lost, so it is something you need to design against.
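As a sanity check during design, you can estimate how far summarisation gets you with Python’s standard library. The prefixes below are made-up examples, not this customer’s actual advertisements:

```python
# Sketch: check whether an advertised route set fits under the 100-route
# BGP limit after summarisation. Prefixes are illustrative assumptions.
import ipaddress

advertised = [
    "10.1.0.0/24", "10.1.1.0/24", "10.1.2.0/24", "10.1.3.0/24",
    "10.2.0.0/24", "10.2.1.0/24",
    "192.168.10.0/24",
]

ROUTE_LIMIT = 100  # AWS limit on routes received via BGP over Direct Connect

networks = [ipaddress.ip_network(p) for p in advertised]
# collapse_addresses merges adjacent/overlapping prefixes into summaries.
summarised = list(ipaddress.collapse_addresses(networks))

print(f"before: {len(networks)} routes, after: {len(summarised)} routes")
for net in summarised:
    print(" ", net)
print("within limit" if len(summarised) <= ROUTE_LIMIT else "EXCEEDS LIMIT")
```

Here the four contiguous 10.1.x.0/24s collapse into 10.1.0.0/22 and the two 10.2.x.0/24s into 10.2.0.0/23; real networks with non-contiguous addressing summarise far less cleanly, which is exactly the risk called out above.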
This particular customer had already invested in standing up an on-premises SDDC as well as the VMConAWS SDDCs, which meant from a connectivity perspective they had already invested in L7 firewalls and security appliances for internet connectivity. In the immediate term, then, we would route traffic across the Direct Connect and out of the on-premises internet breakout. This decision can always be revisited in the future if the VMC instance outgrows the on-premises deployments.
The diagram above shows how this would look from a traffic-flow perspective, with internet traffic originating in VMC being routed over the Direct Connect and out of the on-premises egress point.
However, making that design decision raised a question around how it might affect VPN connectivity. This is especially important if you want to use a route-based IPsec VPN as a backup connectivity method. You have to understand that internet connectivity is terminated on the IGW (Internet Gateway) inside the shadow VPC and not directly on the Tier-0 (that looks to be a future release item). Therefore, if we receive a default route from on-premises, we would not be able to route out to the IGW from the Tier-0 for VPN connectivity, as all traffic would be sent down the Direct Connect.
VMware has a solution for this issue: inject a /32 route into the Tier-0 for the destination VTI (Virtual Tunnel Interface). For example, to reach the destination VTI1 on-premises, traffic goes via the Tier-0 —&gt; IGW rather than over the Direct Connect, where the default route would otherwise push it.
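The reason the /32 injection works is plain longest-prefix-match routing: a /32 is always more specific than the 0.0.0.0/0 default, so it wins for the VTI endpoint alone while everything else still follows the default down the Direct Connect. A minimal sketch, using made-up addresses rather than anything from this design:

```python
# Minimal longest-prefix-match sketch showing why a /32 host route for the
# VPN endpoint overrides the default route learned from on-premises.
# Addresses and next-hop names are illustrative assumptions.
import ipaddress

# Assumed Tier-0 routing table: default route points down the Direct Connect,
# while a host route for the on-premises VTI endpoint points at the IGW.
routes = {
    ipaddress.ip_network("0.0.0.0/0"): "direct-connect",   # default from on-prem
    ipaddress.ip_network("203.0.113.10/32"): "igw",        # example VTI endpoint
}

def next_hop(dst: str) -> str:
    """Pick the matching route with the longest prefix (most specific)."""
    addr = ipaddress.ip_address(dst)
    matches = [net for net in routes if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return routes[best]

print(next_hop("203.0.113.10"))  # VTI endpoint traffic exits via the IGW
print(next_hop("198.51.100.7"))  # everything else follows the default route
```

Only the tunnel endpoint itself takes the IGW path; the workload traffic inside the tunnel is unaffected and still honours the default route policy.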