Microsoft have been adding networking appliances to their marketplace recently; I see firewall offerings from Check Point, Barracuda, Fortinet, and Cisco, to name a few. Given the laborious situation I’m in, where all of our NSGs need to be updated manually via CSV files, I thought I would take a closer look at the Cisco ASAv.

The ASAv is only supported on the ARM / Azure v2 deployment model and requires a D3 as a bare minimum. This provides 4 interfaces: one for management and 3 for joining to your inside network or DMZs. The basic license (effectively perpetual; you pay only for the compute time) provides 100 connections and a throughput of just 100 Kbps, which is fine for the sake of testing. I won’t cover the setup as this is covered in the Cisco quick start document here.

Having played around a bit with this I feel it’s not really ready for enterprise use.

  • You are limited to one public address on the ASA, and even this is NATted automatically to a private address before it hits the ASA. Although you can add several VIP addresses to the cloud service, the firewall isn’t aware of them. Because traffic from those other VIPs is NATted to the ASA’s single private address, by the time it reaches the ASA you cannot distinguish which public address it was originally sent to. This means that if you wanted to run two web servers behind the ASA, one would need to be on port 80 and the other on port 81. Awful. Microsoft should allow these public addresses to be allocated directly to the firewall.
  • Clustering is not supported. In order to get Azure SLAs you need to have two devices in an availability set. If you can’t cluster the two devices, you would need to make configuration changes on each device; it’s not sensible to do this, and you are likely to end up with differing configs if you’re not careful. You could use a firewall manager like CSM, but this would require another machine in Azure.
  • No console access. If you fat-finger a config update and lock yourself out, how are you supposed to recover?
  • Traffic is routed from each subnet via a user-defined route table. There is nothing stopping another admin from simply changing the route table on a machine to circumvent your firewall! This may be an old way of thinking, as separation of duties has become rather blurred in ‘the cloud’. In the old world, a VLAN assigned to a machine would mean this could never happen.

I look forward to seeing how these networking appliances evolve, but I won’t be suggesting we change from using the native NSGs just yet.

I’ve used Cisco firewalls in their various forms for the last 10 years or so, from baby PIX 501s through to the new ASA 5500-Xs. While I’ve always been a big fan of the platform, one area which has always been deficient is their logging and reporting capability. There really is no comparison when you line up other vendors such as Palo Alto and Check Point alongside Cisco when it comes to this.

I’ve recently started playing around with what people call the ELK stack, and have found it to be excellent at visualising Cisco ASA logs. The ELK stack consists of three applications: Elasticsearch, Logstash, and Kibana.


Elasticsearch is a search server based on Lucene. It provides a distributed, multitenant-capable full-text search engine with a RESTful web interface and schema-free JSON documents.


Logstash is a tool for managing events and logs. You can use it to collect logs, parse them, and store them for later use (for searching, for example).


Kibana is a great tool for real time data analytics.

The end result is a system which is able to turn simple syslog messages into a screen which looks like my example below. Broken down are graphs representing the top protocols, actions (i.e. accept, deny), destination ports, origin countries, and source and destination IPs. Within each of these views all of the content is dynamic: if you’re only interested in dropped traffic, say, you can click this in the graph and the whole filter will change to represent only that data.


Clicking “Deny” as the action and “TCP” as the protocol shows the top sources of this traffic. It immediately becomes easy to see the usual volumes of this type of traffic over any period of time you specify. Anomalies are easy to spot and can be drilled down into more closely.
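The same drill-down can also be expressed directly against Elasticsearch. The sketch below is a minimal, hypothetical example assuming the default logstash-* indices, a local Elasticsearch on port 9200, and the action/protocol/src_ip field names produced by the grok patterns further down; your setup may differ.

```python
import json
import urllib.request

# Assumed endpoint and index pattern for a default Logstash deployment.
ES_URL = "http://localhost:9200/logstash-*/_search"

def build_drilldown_query(action="Deny", protocol="tcp"):
    # Equivalent to clicking "Deny" and "tcp" in the dashboard:
    # filter to denied TCP traffic, then aggregate the top source IPs.
    return {
        "query": {
            "query_string": {
                "query": 'action:"{0}" AND protocol:"{1}"'.format(action, protocol)
            }
        },
        "aggs": {
            "top_sources": {"terms": {"field": "src_ip", "size": 10}}
        },
        "size": 0,  # we only want the aggregation, not the raw hits
    }

def run_query(query):
    # POST the query to Elasticsearch and return the parsed JSON response.
    req = urllib.request.Request(
        ES_URL,
        data=json.dumps(query).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

The returned aggregation buckets correspond to the bars Kibana draws for the top talkers.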

While setting up an ELK stack is outside what I’d planned on covering here, DigitalOcean have a good guide on setting this all up. The pertinent Logstash config I have used for the above is as follows:

input {
  udp {
    port => 10514
    type => "syslog"
  }
}

filter {
  if [type] == "syslog" {
    grok {
      patterns_dir => "./patterns"
      match => { "message" => "%{CISCOFW106014}" }
      match => { "message" => "%{CISCOFW106001}" }
      match => { "message" => "%{CISCOFW106023}" }
      match => { "message" => "%{CISCOFW313005}" }
      match => { "message" => "%{CISCOFW302020_302021}" }
      match => { "message" => "%{CISCOFW302013_302014_302015_302016}" }
      match => { "message" => "%{CISCOFW106015}" }
      match => { "message" => "%{CISCOFW106006_106007_106010}" }
      match => { "message" => "%{CISCOFW106100}" }
      add_tag => [ "firewall" ]
    }
    geoip {
      source => "src_ip"
      target => "geoip"
      database => "/opt/logstash/vendor/geoip/GeoLiteCity.dat"
      add_field => [ "[geoip][coordinates]", "%{[geoip][longitude]}" ]
      add_field => [ "[geoip][coordinates]", "%{[geoip][latitude]}" ]
    }
    mutate {
      convert => [ "[geoip][coordinates]", "float" ]
      lowercase => [ "protocol" ]
    }
  }
}

output {
  redis {
    host => ""
    data_type => "list"
    key => "logstash"
  }
}

The above system will listen for syslog messages on UDP 10514, run them through grok to extract the pertinent parts of each message, add GeoIP data, and forward the result on to a local Redis cache. Another system (the ELK receiver/indexer) polls this Redis list, retrieves the logs, and finally displays them to the user through the Kibana interface.
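To check the UDP input end to end, you can fire a sample ASA deny message at the listener yourself. This is a hypothetical test sender: the host, port, interface names, and IPs in the message are illustrative assumptions, chosen so the message matches the CISCOFW106023 grok pattern used above.

```python
import socket

# Assumed location of the Logstash UDP syslog input configured above.
LOGSTASH_HOST = "127.0.0.1"
LOGSTASH_PORT = 10514

def build_test_message():
    # A sample ASA deny log line shaped like the %{CISCOFW106023} pattern.
    # Interface names, IPs, and the ACL name are made up for the test.
    return ('%ASA-4-106023: Deny tcp src outside:198.51.100.7/51000 '
            'dst inside:10.0.0.5/443 by access-group "outside_in"')

def send_test_message():
    # Send the sample message as a single UDP datagram to Logstash.
    msg = build_test_message()
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(msg.encode("utf-8"), (LOGSTASH_HOST, LOGSTASH_PORT))
    return msg
```

If the pipeline is healthy, the event should appear in Kibana tagged "firewall" with the src/dst fields extracted.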

As part of a new hybrid cloud installation, I’ve chosen to implement some Fortinet FortiGate firewalls running on ESXi 5.1 in our on-premises data centres. It would have been nice to have chosen a vendor with capability both in Azure (our chosen cloud provider) and on-prem; however, due to the lack of any really decent offerings in Azure, we needed two solutions.

Several days were spent trying to get a high-availability a-p (active/passive) cluster working in a single HP blade chassis across several hosts. The symptom appeared to be that when the two firewall guests were on a single host, clustering worked fine; when a VM was migrated to another host, they couldn’t see each other, resulting in a split-brain scenario.

Long story short: if you need to establish an HA heartbeat between different hosts, then the standard switch created in ESXi must support both MAC address changes (the default) and promiscuous mode (not the default). The Fortinet documentation only specifies this as a requirement when you are deploying the firewalls in transparent mode, not NAT mode. It turns out you need to enable it for both implementations. If you are also having FortiGate VM64 high-availability problems, it would pay to check this!

I’m currently involved in a project to move a data centre to ‘the cloud’. For commercial reasons Azure was the chosen platform, and I had been tasked with evaluating the networking capability there. While Amazon AWS has the luxury of a few years’ head start, and better adoption from most networking/security players, Azure is very immature in this area. There is currently only one firewall vendor present in Azure, and that is Barracuda.

Some of the Azure networking limitations which exist as of today (06/2014):

  • No network-level ACLs between guests in a single subnet. Any host in a subnet has free-for-all access to other guests in the same subnet. You cannot create VACLs like you would in a traditional DMZ environment. If one machine is compromised, there’s a good chance others will go with it.
  • There is a big reliance on guest OS firewalling. All the technical guides suggest you use some sort of firewall on the guest OS itself: generally iptables for Linux, Windows Firewall for Windows. Other vendors don’t seem to be recommended.
  • Access between virtual nets must use public endpoints. This means public IP addresses and NAT. A public IP address may represent several guests within a group, so the actual source of the traffic is obfuscated, and controlling this access is less granular.
  • No role-based access: your platform team has as much access to network changes as your network team does.
  • By default, guests have full outbound access if they are internet accessible (i.e. have at least one endpoint). Once again, a firewall on the guest OS must be used to restrict this.
  • No gateway changes: there is no way to add a new default route to send traffic through a particular networking device, i.e. a firewall.
  • Only one NIC per guest; no internal/external NIC topology is possible.

My impression is that Microsoft are pretty proactive about the Azure platform: it’s being improved constantly, but the networking side doesn’t seem to get much love. I’ll be doing a lot of work on this over the coming months, so I’ll post more information as I discover it.

Have a look at the currently requested features; some of this stuff is pretty much networking 101!