As part of a new hybrid cloud installation, I've chosen to implement Fortinet FortiGate firewalls running on ESXi 5.1 in our on-prem data centers. It would have been nice to choose a vendor with capability both in Azure (our chosen cloud provider) and on-prem; however, due to the lack of any really decent offerings in Azure, we needed two solutions.

Several days were spent trying to get a High Availability active-passive (A-P) cluster working in a single HP blade chassis across several hosts. The symptom was that when the two firewall guests were on a single host, clustering worked fine; when one VM was migrated to another host, they couldn't see each other, resulting in a split-brain scenario.

Long story short: if you need to establish an HA heartbeat between different hosts, then the standard vSwitch created in ESXi must allow both MAC address changes (the default) and promiscuous mode (not the default). The Fortinet documentation only lists this as a requirement when you deploy the firewalls in transparent mode, not NAT mode. It turns out you need to enable it for both. If you are also having FortiGate VM64 High Availability problems, it would pay to check this!
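If you'd rather script that change than click through the vSphere Client, a rough PowerCLI sketch is below. The vCenter address, host name, and vSwitch name are placeholders for your own environment, and if the heartbeat port group overrides the vSwitch-level policy you'll need to set it there as well.

# Load PowerCLI (a snap-in on the 5.x releases) and connect to vCenter
Add-PSSnapin VMware.VimAutomation.Core
Connect-VIServer -Server "vcenter.example.local"

# Allow promiscuous mode and MAC address changes on the vSwitch carrying the HA heartbeat
$vswitch = Get-VirtualSwitch -VMHost "esxhost01.example.local" -Name "vSwitch1"
$vswitch | Get-SecurityPolicy | Set-SecurityPolicy -AllowPromiscuous $true -MacChanges $true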

In addition to a whole host of cool stuff announced last month as part of Azure's product updates, VNet gateways now come in multiple sizes. I've posted previously on speed limitations through conventional VNet-to-VNet IPsec connections (i.e. less than 100 Mbps); however, these new gateway sizes look to address this.

As far as I can tell these cannot yet be created through the web portal (like a lot of things), but you can use the following syntax to provision a high-performance VNet gateway on which to terminate your on-prem VPN, ExpressRoute, or VNet-to-VNet connections.

New-AzureVNetGateway -VNetName "ExistingVnetName" -GatewaySKU HighPerformance -GatewayType DynamicRouting
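Once the provisioning job finishes (it takes a while), you should be able to confirm the gateway is up and grab its public VIP for the on-prem side with something like the following; again, the VNet name is just a placeholder:

# Check provisioning state and the gateway's public IP address
Get-AzureVNetGateway -VNetName "ExistingVnetName"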

It's exciting to see that multiple interfaces are now supported on Azure VMs. This is bound to open lots of opportunities for the networking vendors that we know and love but that aren't represented in Azure (it's still pretty much only Barracuda Networks there at the moment). You're not yet able to manipulate routing tables, so this is still limited in that you couldn't create a Linux VM running iptables with interfaces in each subnet (i.e. a DMZ firewall) and route other VMs through it. It's bound to be supported soon enough, which will open up these do-it-yourself type approaches.
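For what it's worth, a rough sketch of a multi-NIC VM with the current service management cmdlets looks something like the following. All of the names, the image filter, the subnets, and the credentials are purely illustrative, and multi-NIC only works on the larger instance sizes.

# Placeholder values - substitute your own image, credentials, cloud service and VNet
$imageName = (Get-AzureVMImage | Where-Object { $_.Label -like "*Ubuntu*14*" } | Select-Object -First 1).ImageName
$password  = "ChangeMe123!"

# Primary NIC lands in the Frontend subnet, secondary NIC in the Backend subnet
$vm = New-AzureVMConfig -Name "fw01" -InstanceSize "Large" -ImageName $imageName |
      Add-AzureProvisioningConfig -Linux -LinuxUser "azureuser" -Password $password |
      Set-AzureSubnet -SubnetNames "Frontend" |
      Add-AzureNetworkInterfaceConfig -Name "eth1" -SubnetName "Backend"

New-AzureVM -ServiceName "fw-service" -VMs $vm -VNetName "ExistingVnetName" -Location "West Europe"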

I've not yet had a chance to run through the same tests I did in this post, but I will shortly. Hopefully we'll see throughput of 1 Gbps and upwards.