We have been successfully running a VPN between our on-prem network and a VNET in Azure, from an ASA 5510 running ASA 8.2. I now want to set up a VNET1-to-VNET2 VPN, as well as on-prem to VNET1 and VNET2 (like a big VPN mesh). I built this via the XML config, uploaded it to Azure, and got the VNET-to-VNET tunnel working by changing the pre-shared key. I'm now having difficulty getting the VNET-to-on-prem VPNs up and running. I have configured our ASA, run some debugging, and am getting these errors:
Jun 18 14:53:48 [IKEv1]: IP = 23.100.xx.xx, Received an un-encrypted NO_PROPOSAL_CHOSEN notify message, dropping
Jun 18 14:53:48 [IKEv1]: IP = 23.100.xx.xx, Information Exchange processing failed
It looks like a Phase 1/ISAKMP issue, however the config at our end is still the same (i.e. the same as when on-prem to one VNET was working fine). I don't know why changing the config on the Azure end has broken this, so I am a bit stumped. One pertinent change was switching from static routing to dynamic routing – dynamic routing is required for this scenario to work.
Technically only ASA 8.3 is supported, however it was working fine before, so I don't think that's the issue.
My question is this: what does changing the routing from static to dynamic actually do as far as third-party VPN devices are concerned? Is there then a requirement to change the ISAKMP properties?
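For reference, this is roughly the IKEv1 phase 1 policy our ASA was using against the static-routing gateway (a sketch from memory – the policy number is arbitrary and the exact parameters on your device may differ; Azure's static-routing gateways negotiate AES-128/SHA1/DH group 2 with a 28,800-second lifetime):

```
crypto isakmp policy 10
 authentication pre-share
 encryption aes
 hash sha
 group 2
 lifetime 28800
```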
–Update– It turns out dynamic routing uses IKEv2, which is supported from ASA 8.4 onwards. Even so, Azure lists the ASA (even on the newest code) as an unsupported device. ASRs are supported, however.
I'm trying to modify the pre-shared keys in an existing VNET gateway. Most user guides refer to using PowerShell to modify pre-shared keys (as per here):
PS C:\> Set-AzureVNetGatewayKey
Set-AzureVNetGatewayKey : The term ‘Set-AzureVNetGatewayKey’ is not recognized as the name of a cmdlet, function,
script file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is
correct and try again.
At line:1 char:1
+ CategoryInfo : ObjectNotFound: (Set-AzureVNetGatewayKey:String) , CommandNotFoundException
+ FullyQualifiedErrorId : CommandNotFoundException
I get the above error, suggesting the cmdlet is not available. Am I missing a PowerShell module?
It turns out you need to run an update on the Azure PowerShell tools.
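Once the tools are updated (at the time of writing, via the Microsoft Web Platform Installer), the cmdlet is available. A sketch of the call – the VNET name, local network site name, and key here are placeholders for your own values:

```powershell
# Check which version of the Azure module is installed
Get-Module Azure -ListAvailable | Select-Object Version

# Set a new pre-shared key for the tunnel to a given local network site
Set-AzureVNetGatewayKey -VNetName "VNET1" -LocalNetworkSiteName "OnPremSite" -SharedKey "MyNewSharedKey"
```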
I've been monitoring Google DNS for a few months now, just as a latency/connectivity test from our headquarters. I noticed a big drop in the round-trip time to 8.8.8.8 on May 7th. A ping of around 10–20 ms is typical from the UK to Europe, which would suggest that until recently Google DNS had no POPs in the UK and was based somewhere in Europe. This link shows a number of European locations in Germany, the Netherlands, and Belgium. The drop down to 2 ms suggests that there is now a POP located in the UK – London, perhaps. For anyone that still uses their ISP's (flaky?) DNS service, this reduced latency might be enough to persuade you to switch to Google DNS.
I'm currently involved in a project to move a data centre to 'the cloud'. For commercial reasons, Azure was the chosen platform, and I have been tasked with evaluating its networking capability. While Amazon AWS has the luxury of a few years' head start, and better adoption from most networking/security players, Azure is very immature in this area. There is currently only one firewall vendor present in Azure, and that is Barracuda.
Some of the Azure networking limitations which exist as of today (06/2014):
- No network-level ACLs between guests in a single subnet. Any host in a subnet has free-for-all access to other guests in the same subnet. You cannot create VACLs like you would in a traditional DMZ environment. If one machine is compromised, there's a good chance others will go with it.
- There is a big reliance on guest-OS firewalling. All the technical guides suggest you use some sort of firewall on the guest OS itself – generally iptables for Linux and Windows Firewall for Windows. Other vendors don't seem to be recommended.
- Access between virtual networks must use public endpoints. This means public IP addresses and NATing. A public IP address may represent several guests within a group, so the actual source of the traffic is obfuscated, and controlling this access is less granular.
- No role-based access – your platforms team has as much access to network changes as your network team does.
- By default, guests have full outbound access if they are internet-accessible (i.e. have at least one endpoint). Once again, a firewall on the guest OS must be used to restrict this.
- No gateway changes – there is no way to add a new default route to send traffic through a particular networking device, e.g. a firewall.
- Only one NIC per guest; no internal/external NIC topology permitted.
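Given the reliance on guest-OS firewalling above, locking down intra-subnet traffic ends up looking something like this on a Linux guest (a sketch only – the subnet range and management address are hypothetical examples, not anything Azure assigns):

```shell
# Allow return traffic for connections the guest initiated
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
# Permit SSH from a single management host (example address)
iptables -A INPUT -p tcp -s 10.0.1.5 --dport 22 -j ACCEPT
# Drop everything else arriving from the local subnet (example range)
iptables -A INPUT -s 10.0.1.0/24 -j DROP
```

You would have to repeat and maintain rules like these on every guest, which is exactly the kind of per-host sprawl a subnet-level ACL would avoid.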
My impression is that Microsoft is pretty proactive about the Azure platform – it's being improved constantly – but the networking doesn't seem to get much love. I'll be doing a lot of work on this over the coming months, so I'll post more information as I discover it.
Have a look at the currently requested features – some of this stuff is pretty much networking 101! http://feedback.azure.com/forums/217313-networking-dns-traffic-manager-vpn-vnet.