It’s been a while since I’ve posted anything as it’s been a fairly busy period personally. I got married, moved countries, found a new place to live, and changed jobs (twice, so far).

I’m now working for a cybersecurity company doing consultancy work, and am now based in Auckland, New Zealand. The consultancy work should bring more varied exposure to new environments.

As most of the content here is based on what I’m working on professionally, the content will change with changes in jobs. It means the focus will (probably) shift away from mostly Azure network and security stuff to broader areas of security.

It is with a heavy heart that I must announce I’m about to turn off our Juniper IVEs (aka SSL VPN). In reality, all we were using these for was publishing applications and presenting them as bookmarks on a landing page. It’s been a very capable, reliable product over the years, so it will be with some hesitation that I hit the shutdown button on these later this week.

I started this blog back in 2011 and most of the posts were about the trials and tribulations on that platform. Being fully “Azured”, we decided to go with Azure RemoteApp and move away from the RSA/Juniper SSL combo. Commercially it makes a lot of sense; RSA has been replaced by Azure MFA, which is offered essentially for free when you have Azure AD Premium users.

It’s more than a little unfortunate that they have now shitcanned the product and won’t offer it after August 31st, 2017. We intend to keep using it until we have something that offers similar functionality. The announcement is here. They are pushing a Citrix solution, so it would seem a little ironic if we ended up on that, given we retired our entire Citrix environment some time ago.

Gartner stopped doing an SSL Gateway Quadrant a number of years ago and had implied that there would no longer be dedicated devices for this purpose; instead, the function would be rolled into other network infrastructure. I have seen this to be the case, with products such as F5’s LTM being modularised to include this (theirs is called APM). I did a POC (a long time ago) on both APM and NetScaler and found them to be capable but expensive products.

We need a device which will terminate SSL connections, present applications to users, and ideally also offer an RDP gateway (as this is, quite frankly, awful on RemoteApp). NetScaler has an Azure Marketplace image, so I might see whether that is a worthy successor. I’m less excited about another IaaS instance to look after, but I don’t know of any managed cloud services which fit this space currently.

Following on from the quick guide I did on showing ASA logs with Kibana, I thought it’d be a good idea to show off how great this is at visualising squid logs too. I use squid extensively, and while trawling through /var/log/squid/access.log works, having this information presented visually lets you spot anomalies or performance and configuration problems. Questions such as “Why are there so many 404s all of a sudden?” or “What is the top URL visited in the last 24 hours?” can be answered without the need to run the logs through dedicated software or doing a heap of awk, sed, and cat on log files.
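For comparison, answering the top-URL question at the command line looks something like the one-liner below; the field position assumes squid’s default native log format, and restricting it to the last 24 hours would take considerably more work.

awk '{print $7}' /var/log/squid3/access.log | sort | uniq -c | sort -rn | head -20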

visualising squid logs with kibana

There are a few moving parts here so I will address each individually. My ELK stack looks like the following.

Squid (Ubuntu Server) -> Logstash -> Elasticsearch. Squid talks to Logstash using Filebeat, and Logstash speaks to Elasticsearch over the standard HTTP API.


In order to ship logs from the squid server to Logstash you will need to download and install Filebeat, which can be downloaded here. As I am using Ubuntu, installing it is as simple as downloading the Debian package and installing it with ‘dpkg -i filebeat-5.0.2-amd64.deb’. This will create a configuration file under /etc/filebeat/ named filebeat.yml which you will need to edit. It’s pretty straightforward, but there are some important changes which need to be made.

Edit the paths section to point to the squid access log:

- /var/log/squid3/access.log

Edit your output section to send logs to Logstash, replacing the empty entries with your own Logstash hosts. It’s recommended to have at least two Logstash instances in production environments, for obvious reasons.

# The Logstash hosts
hosts: ["", ""]

That’s it! Reload the daemon by typing ‘service filebeat restart’.
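Putting those two changes together, a minimal filebeat.yml for this setup looks roughly like the following; the Logstash host names are placeholders for your own instances, and port 5044 matches the beats input defined later on.

# read the squid access log
filebeat.prospectors:
- input_type: log
  paths:
    - /var/log/squid3/access.log

# ship everything to Logstash (placeholder host names)
output.logstash:
  hosts: ["logstash01:5044", "logstash02:5044"]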


Next in line is Logstash. This will take the logs shipped from Filebeat, format them (or add, remove, or mutate fields) and send them to their final resting place: Elasticsearch. The same installation process can be followed to install Logstash on a Debian or Ubuntu system. Logstash can be downloaded here. Install it with ‘dpkg -i logstash-5.0.2.deb’. This will create a new directory, /etc/logstash/, with configuration files stored under /etc/logstash/conf.d. You can create any file here which is pertinent to your deployment – I have gone with ‘logstash-forwarder.conf’, although it can be named almost anything. Edit this file with your favourite text editor.

Inputs. The following will create two inputs: one for syslog messages (which we used previously for ASA logs), and a second input named beats.

input {
  udp {
    port => 10514
    type => syslog
  }
  beats {
    port => 5044
  }
}

Filtering. The following is some grok syntax (essentially regex) which will pick up messages and strip out the relevant fields so they can be displayed in Kibana. (Sorry about the formatting; it doesn’t wrap nicely in WordPress.) This could probably be achieved with fewer lines of grok, but I’m lazy and don’t want to spend too much time playing with the various formats squid _may_ output its log files in (there are subtle differences between, say, HTTP and HTTPS connections).

filter {
  grok {
    match => [ "message", "%{POSINT:timestamp}.%{POSINT:timestamp_ms}\s+%{NUMBER:response_time} %{IPORHOST:src_ip} %{WORD:squid_request_status}/%{NUMBER:http_status_code} %{NUMBER:reply_size_include_header} %{WORD:http_method} %{NOTSPACE:request_url} %{NOTSPACE:user} %{WORD:squid}/%{IP:server_ip} %{NOTSPACE:content_type}" ]
    match => [ "message", "%{NUMBER:timestamp}\s+%{NUMBER:response_time} %{IPORHOST:src_ip} %{WORD:squid_request_status}/%{NUMBER:http_status_code} %{NUMBER:reply_size_include_header} %{WORD:http_method} %{URI:request_url} %{USERNAME:user} %{WORD:squid_hierarchy_status}/%{IPORHOST:server_ip_or_peer_name} (?<content_type>\S+\/\S+)" ]
    match => [ "message", "%{NUMBER:timestamp}\s+%{NUMBER:response_time} %{IPORHOST:src_ip} %{WORD:squid_request_status}/%{NUMBER:http_status_code} %{NUMBER:reply_size_include_header} %{WORD:http_method} %{HOSTNAME:request_url}:%{NUMBER:tcp.port} %{NOTSPACE:user} %{WORD:squid}/%{GREEDYDATA:server_ip} %{NOTSPACE:content_type}" ]
    match => [ "message", "%{NUMBER:timestamp}\s+%{NUMBER:response_time} %{IPORHOST:src_ip} %{WORD:squid_request_status}/%{NUMBER:http_status_code} %{NUMBER:reply_size_include_header} %{WORD:http_method} %{URI:request_url} %{NOTSPACE:user} %{WORD:squid}/%{GREEDYDATA:server_ip} %{NOTSPACE:content_type}" ]
  }
}
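For reference, a typical line from squid’s native access.log (the sort of thing the first pattern above is aimed at) looks like this; the addresses and URL are made up:

1481722342.312    142 192.168.1.50 TCP_MISS/200 4521 GET http://www.example.com/index.html - HIER_DIRECT/93.184.216.34 text/html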

Finally, we need to tell Logstash where to send its messages. This is configured by defining an output:

output {
  elasticsearch {
    hosts => ["",""]
  }
}

Pretty straightforward: these are your Elasticsearch hosts, and once again, you should have more than one!
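Once the configuration is saved, restart Logstash and check that documents are actually arriving. A quick way to do this is to list the indices on one of your Elasticsearch nodes (substitute your own host; 9200 is the default port) and look for the daily logstash-* indices:

# restart Logstash to pick up the new pipeline
service logstash restart
# list indices; expect to see daily logstash-* entries growing
curl -XGET 'http://localhost:9200/_cat/indices?v'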


The last step is to create some searches and a dashboard in Kibana. These were exported from Kibana 5, so I expect you will need to use this version or later. Download this zip file and extract it.

Under Management, click on Saved Objects > Import. Select these files and import them one at a time. With a bit of luck, when you click ‘Dashboard’ and then open it, you will now have a dashboard named ‘Squid Proxy’.

This is a work in progress. You will see there is a section named ‘Squid Response Times’ which lists no results. I’m looking at getting this to graph the average response time of proxied requests. The reason this is a little difficult is that my response_time field was originally indexed as a string rather than an integer, which means Kibana won’t let me graph it. I’m looking into the best way to convert this field so that I can create a date histogram of this data.
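One option, at least for new documents, is to convert the field in the Logstash filter before it reaches Elasticsearch, along the lines of the sketch below; existing documents and the index mapping would still need re-indexing before Kibana treats the field as numeric.

filter {
  mutate {
    # convert the grok-extracted string into an integer so Kibana can aggregate on it
    convert => { "response_time" => "integer" }
  }
}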

That’s a wrap

Well, that’s the gist of it. Let me know if you have any questions or improvements to make.


I’ve gone a bit git mad. I’ve ‘gitted’ my PowerShell scripts, my network security group CSV files, and my Linux configuration files. Leveraging our department’s Visual Studio Online account, I’ve found storing these resources in a git repository a fantastic way to keep track of how, when, and why files are changing, and by whom. It also serves as a good way of backing up these files, as anything can be restored simply by re-cloning the repository.
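If you haven’t used it before, the day-to-day loop is only a handful of commands, roughly as follows (the repository URL is a made-up placeholder):

# grab a copy of the repository and make it your working directory
git clone https://contoso.visualstudio.com/DefaultCollection/_git/scripts
cd scripts
# ...edit some files...
git add .
git commit -m "Describe what changed and why"
git push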

With the way automation is changing how we work, it’s ever more important to understand how these tools can help. For teams that manage files collaboratively, I highly recommend spending an afternoon getting the hang of git.

Microsoft recently announced that it will be possible to connect to Office 365 services via an ExpressRoute connection. Leveraging an ExpressRoute connection, which in most cases will offer faster speeds than your internet link, means a faster and more optimal path through to this Azure service. Additionally, many other Azure public services have already been made available over ExpressRoute by creating a special ‘public peering’. In my case, the on-premises StorSimple appliance was continually using all of our internet bandwidth while doing site snapshots, and this necessitated setting this up.

Simply put, by setting up an Azure public peering ExpressRoute connection you will have two paths to Azure services.


Two paths to express route services


The documentation around this is fairly sparse, and I had some questions about how it would be achieved technically. Following on from my trial and error, I have the following information, which has been proven in our environment.

  • What routes are advertised from a Public Express Route BGP peer?
    Upon creating an adjacency you will receive (as of writing) 128 prefixes. These range from /17s through to a couple of /29s. If you receive a default route on your internet link, these prefixes will be more specific and therefore should become the chosen path.
  • How should we NAT egress traffic?
    I have experimented with this a little, NATing the traffic both behind the same addresses as our internet connection uses and behind unique addresses. I was worried that using the same PI space would cause asymmetric routing, with traffic leaving via our ExpressRoute circuit and returning via our internet link (because our internet AS advertises that range). It seems, however, that Microsoft must mark this traffic somehow (MPLS?), as traffic returns via the same path it arrived on, even if the source address is the same. This is handy if you have a firewall NATing all the traffic; it means you don’t need to change the NAT depending on the path it takes.

This post is a work in progress and will be updated as and when new information comes to light.