With a large collection of traditional SQL servers, our business has started to look into the feasibility of moving our databases over to Microsoft's 'SQL Azure' service. The current databases run on IaaS machines and have their own internal IPs joined to our VNETs. They have static IPs, so providing access to them from a machine protected by an NSG is as simple as adding a new rule:
Set-AzureNetworkSecurityRule -Action Allow -DestinationAddressPrefix 188.8.131.52/32 -DestinationPortRange 1433 -Name "SQL_Database" -Priority 100 -Protocol TCP -SourceAddressPrefix 10.10.10.10/32 -SourcePortRange * -Type Outbound
The PaaS SQL server is a different kettle of fish altogether. Its dynamic nature means it will not have a static IP address. If you download the list of subnets here, you will see that maintaining an accurate set of NSG rules would be an exercise in futility. As of writing, there are 216 subnets in the North Europe datacenter alone, and these are likely to change fairly regularly as a datacenter grows in size.
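To illustrate how fragile subnet-based rules are here, a short script can check whether a given address still falls inside the published ranges. The CIDRs below are placeholders for illustration, not the real Microsoft list, which runs to hundreds of entries and changes without notice:

```python
import ipaddress

# Illustrative CIDRs only - the real published North Europe list ran to
# 216 subnets at the time of writing and changes as the region grows.
north_europe_subnets = [
    "104.41.64.0/18",
    "137.135.128.0/17",
    "168.63.32.0/19",
]

def covered_by(ip, subnets):
    """Return the first subnet containing ip, or None if no rule matches."""
    addr = ipaddress.ip_address(ip)
    for cidr in subnets:
        if addr in ipaddress.ip_network(cidr):
            return cidr
    return None
```

Any endpoint that drifts outside the ranges you last imported simply stops matching, and the NSG silently blocks it.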
I can see no way of allowing access to these PaaS SQL instances other than adding a rule permitting access to the entire Internet.
Set-AzureNetworkSecurityRule -Action Allow -DestinationAddressPrefix INTERNET -DestinationPortRange 1433 -Name "SQL_PaaS_Database" -Priority 100 -Protocol TCP -SourceAddressPrefix 10.10.10.10/32 -SourcePortRange * -Type Outbound
With free access to any INTERNET SQL server, shouldn’t these apps now all be in a DMZ?
Microsoft have been adding networking appliances to their marketplace recently; I see firewall offerings from Checkpoint, Barracuda, Fortinet, and Cisco, to name a few. Given the laborious situation I'm in, where all of our NSGs need to be updated manually from CSV files, I thought I would take a closer look at the Cisco ASAv.
The ASAv is only supported on the ARM / Azure v2 deployment model and requires a D3 as a bare minimum. This provides 4 interfaces: one for management and 3 for joining to your inside network or DMZs. The basic license (effectively perpetual; you pay for just the compute time) provides 100 connections and a throughput of just 100 Kbps, which will be fine for the sake of testing. I won't cover the setup as this is covered in the Cisco quick start document here.
Having played around a bit with this I feel it’s not really ready for enterprise use.
- You are limited to one public address on the ASA, and even this is NATed automatically to a private address before it hits the ASA. Although you can add several VIP addresses to the cloud service, the firewall isn't aware of these. NATing between those other VIPs and the private address on the ASA means that by the time the traffic reaches the ASA you cannot distinguish what was sent to, say, 184.108.40.206 versus 220.127.116.11. This means that if you wanted to run two web servers behind the ASA, one would need to be on port 80 and one on port 81. Awful. Microsoft should allow these public addresses to be allocated directly to the firewall.
- Clustering is not supported. In order to get Azure SLAs you need to have two devices in an availability set. If you can't cluster the two devices, you need to make configuration changes on each device separately; this isn't sensible, and you are likely to end up with differing configs if you're not careful. You could use a firewall manager like CSM, but this would require another machine in Azure.
- No console access. If you fat finger a config update and lock yourself out, how are you supposed to recover?
- Traffic is routed from each subnet via a user defined route table. There is nothing stopping another admin simply changing the route table on a machine to circumvent your firewall! This may be an old way of thinking, as separation of duties has become rather blurred in 'the cloud'. In the old world, a VLAN assigned to a machine would mean this could never happen.
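The best mitigation I can see for that last point is a periodic audit: export the user defined routes and flag any next hop that isn't the firewall. A minimal sketch, assuming the routes have been exported as (prefix, next hop) pairs and that 10.10.10.4 is the ASA's inside interface (both values invented for illustration):

```python
FIREWALL_IP = "10.10.10.4"  # assumed inside interface of the ASA

def bypassing_routes(route_table, firewall_ip=FIREWALL_IP):
    """Flag any user defined route whose next hop is not the firewall."""
    return [(prefix, hop) for prefix, hop in route_table if hop != firewall_ip]

# Example route table, as (address prefix, next hop) pairs you might
# export from the Azure PowerShell route table cmdlets.
routes = [
    ("0.0.0.0/0", "10.10.10.4"),    # default route via the ASA - fine
    ("10.20.0.0/16", "10.20.0.1"),  # a route that skips the firewall
]
```

It's detection rather than prevention, but at least a rogue route change would show up in the next audit run.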
I look forward to seeing how these networking appliances evolve, but I won’t be suggesting we change from using the native NSG’s just yet.
We've been steadily moving more and more of our servers into the Azure cloud and have now started to look at moving servers from our DMZ environments there too. Traditional network security says to put any internet-accessible servers in a multi-tiered DMZ environment, where your web servers, application servers, and databases sit in separate network segments. A security event would then be localised to just that area of your network, reducing the risk to the rest of it.
Trying to replicate this way of thinking in Azure proves rather difficult currently. There is no concept of a network layer firewall (except for Network Security Groups, which I will get to), and the only security player in this field at the moment is Barracuda. With their solution (the NG Firewall) you essentially create a big VPN mesh where each server is VPN'd to a firewall and you write your enforcement policy there. A big problem with this approach is that you are reliant on a piece of software running on the server OS; any malicious user could simply disable the software and circumvent it entirely.
Using host OS firewalls such as Windows Firewall or Linux's iptables carries the same risk as above. Anyone with access to your server can simply disable or change the firewall rules to suit their needs.
Network Security Groups, which were implemented only recently, offer some form of protection, and these ACLs can be applied to subnets or individual VMs within a VNET. I have several gripes with these at the moment.
- Visibility – Traffic is accepted or blocked silently. Traditional firewalls offer an interface which allows you to filter logs to show whether any particular connection was allowed or blocked, and why. Without this, troubleshooting traffic is difficult, particularly with big policies.
- No Object Groups – You cannot create an object that represents multiple hosts or networks. For example, you cannot reference an object named 'ActiveDirectory' which contains all your AD servers. With most traditional firewalls you simply update that object, and wherever it is referenced in your rulesets gets updated too.
- Administration – The only way to update NSGs at the moment is through PowerShell, which is administratively very laborious. Each change takes anywhere between 5 and 30 seconds to apply.
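In the absence of native object groups, the grouping has to live in a script that flattens each group into one Set-AzureNetworkSecurityRule command per host. A rough sketch of that workaround (the group name, addresses, and priority scheme are invented for illustration):

```python
# Hypothetical object groups - NSGs have no native equivalent, so the
# grouping lives in the script and is flattened into one rule per member.
object_groups = {
    "ActiveDirectory": ["10.10.1.10/32", "10.10.1.11/32"],
}

def expand_rules(group, port, source, base_priority=100):
    """Emit one Set-AzureNetworkSecurityRule command per group member."""
    commands = []
    for i, dest in enumerate(object_groups[group]):
        commands.append(
            "Set-AzureNetworkSecurityRule -Action Allow "
            f"-DestinationAddressPrefix {dest} -DestinationPortRange {port} "
            f"-Name {group}_{i} -Priority {base_priority + i} "
            "-Protocol TCP "
            f"-SourceAddressPrefix {source} -SourcePortRange * -Type Outbound"
        )
    return commands
```

Changing a group member then means regenerating and reapplying every derived rule, which, at 5 to 30 seconds per rule, adds up quickly.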
Examples shown on the Azure website present a very simplified view of this: port 80 to the web server, and a database call to your database. The reality is there's much more to it. Each of our servers has a requirement to connect to a lot of infrastructure services, including Active Directory, patch management, antivirus, etc. Many of these services require bidirectional rules to allow the client / server connections. In a company like ours, which is moving servers to the cloud aggressively, IP addresses change frequently, and this amount of change simply cannot be managed with NSGs.
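The scale of the problem is easier to see with a sketch that generates the bidirectional rule pairs a single server needs. The service names, ports, and addresses below are assumptions for illustration (8530 is the usual WSUS port; the antivirus port is invented):

```python
# Assumed per-service ports; each service needs an Outbound rule from the
# server plus the matching Inbound rule for the return direction.
services = {
    "ActiveDirectory": 389,
    "PatchManagement": 8530,
    "AntiVirus": 8081,
}

def rule_pairs(server_ip, service_ips):
    """Return (direction, service, source, destination, port) tuples for
    every infrastructure service the server must reach."""
    pairs = []
    for name, port in services.items():
        for dst in service_ips.get(name, []):
            pairs.append(("Outbound", name, server_ip, dst, port))
            pairs.append(("Inbound", name, dst, server_ip, port))
    return pairs
```

Two rules per service endpoint per server, multiplied across a whole estate, is exactly the volume of change that makes hand-maintained NSGs unworkable.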
I'm very much hoping Cisco, Juniper, Fortinet, Checkpoint, and other firewall vendors will arrive in this space, as I'm worried there is no existing solution for designing a proper DMZ architecture in Azure yet.
As part of a new hybrid cloud installation, I've chosen to implement some Fortinet FortiGate firewalls running on ESXi 5.1 in our on-prem data centers. It would have been nice to choose a vendor with capability both in Azure (our chosen cloud provider) and on-prem; however, due to the lack of any real decent offerings in Azure, we needed two solutions.
Several days were spent trying to get a high availability active/passive (A-P) cluster working in a single HP blade chassis across several hosts. The symptom was that when the two firewall guests were on a single host, clustering worked fine; when a VM was migrated to a new host, they couldn't see each other, resulting in a split-brain scenario.
Long story short: if you need to establish an HA heartbeat between different hosts, then the standard switch created in ESXi must support both MAC address changes (the default) and promiscuous mode (not the default). The Fortinet documentation only specifies this as a requirement when you are deploying the firewalls in transparent mode, not NAT mode. It turns out you need to enable it for both implementations. If you are also having FortiGate VM64 high availability problems, it would pay to check this!
I'm currently involved in a project to move a data centre to 'the cloud'. For commercial reasons, Azure was the chosen platform, and I was tasked with evaluating the networking capability there. While Amazon AWS has the luxury of a few years' head start and better adoption from most networking/security players, Azure is very immature in this area. There is currently only one firewall vendor present in Azure, and that is Barracuda.
Some of the Azure networking limitations which exist as of today (06/2014):
- No network level ACLs between guests in a single subnet. Any host in a subnet has free-for-all access to the other guests in the same subnet. You cannot create VACLs like you would in a traditional DMZ environment. If one machine is compromised, there's a good chance others will go with it.
- There is a big reliance on guest OS firewalling. All the technical guides suggest you use some sort of firewall on the guest OS itself: generally iptables for Linux and Windows Firewall for Windows. Other vendors don't seem to be recommended.
- Access between virtual networks must use public endpoints. This means public IP addresses and NATing. A public IP address may represent several guests within a group, so the actual source of the traffic is obfuscated, and controlling this access is less granular.
- No role based access – your platforms team have as much access to network changes as your network team does.
- By default, guests have full outbound access if they are internet accessible (i.e. have at least one endpoint). Once again, a firewall on the guest OS must be used to restrict this.
- No gateway changes – there is no way to add a new default route to send traffic through a particular networking device, e.g. a firewall.
- Only one NIC per guest, no internal/external NIC topology permitted.
My impression is that Microsoft is pretty proactive about the Azure platform; it's being improved constantly, but the networking doesn't seem to get much love. I'll be doing a lot of work on this over the coming months, so I'll post more information as I discover it.
Have a look at the currently requested features; some of this stuff is pretty much networking 101! http://feedback.azure.com/forums/217313-networking-dns-traffic-manager-vpn-vnet.