Opensvp, a new tool to analyse the security of firewalls using ALGs

Following my talk at SSTIC, I’ve released a new tool called opensvp. Its aim is to implement the attacks described in the talk: it has been published so that administrators can determine whether their firewall policy related to Application Layer Gateways (ALGs) is correctly implemented.

Opensvp implements two types of attacks:

  • Abusive usage of protocol commands: a protocol message can be forged to open a pinhole in the firewall. Opensvp currently implements message forging for the IRC and FTP ALGs.
  • Spoofing attack: if anti-spoofing is not correctly set up, an attacker can send commands which result in arbitrary pinholes being opened to a server.
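To make the first attack class concrete, here is a minimal sketch (not code from opensvp itself, whose real implementation relies on scapy) of how the argument of an FTP PORT command is encoded; a permissive ALG that blindly trusts such a command may open the corresponding pinhole:

```python
def ftp_port_argument(ip, port):
    """Encode an IPv4 address and TCP port in the h1,h2,h3,h4,p1,p2
    format used by the FTP PORT command (RFC 959)."""
    return ",".join(ip.split(".") + [str(port // 256), str(port % 256)])

# A forged command claiming the client listens on 192.168.1.10:8000
# (addresses are illustrative). An FTP ALG parsing this line may open
# a pinhole towards that address and port.
command = "PORT " + ftp_port_argument("192.168.1.10", 8000) + "\r\n"
print(command.strip())  # PORT 192,168,1,10,31,64
```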

It has been developed in Python and uses scapy to implement the spoofing attack on ALGs.

For usage and download, have a look at the opensvp page. The attack against helpers is described in opensvp’s README.

The document “Secure use of iptables and connection tracking helpers”, published some months ago, describes how to protect a Netfilter firewall when using connection tracking helpers (the Netfilter name for ALGs).

Slides of my SSTIC talk

The slides of my SSTIC talk are available: Utilisation malveillante des suivis de connexions (malicious use of connection tracking). Thanks to the SSTIC organizers for accepting my paper!

Demonstration videos are available in this post: Playing with Network Layers to Bypass Firewalls’ Filtering Policy

The opensvp testing tool is available on this page.

Playing with Network Layers to Bypass Firewalls’ Filtering Policy

The slides of my CansecWest talk can now be downloaded: Playing with Network Layers to Bypass Firewalls’ Filtering Policy.

The required counter-measures are described in the “Secure use of iptables and connection tracking helpers” document.

The associated video demonstrations are available:

The first video demonstrates how a forged IRC protocol command (DCC request) can be used to open a connection to a NATed client from the Internet.

The second video demonstrates the effect of the attack on helpers on an unprotected Netfilter firewall.

The third video demonstrates the effect of the attack on helpers on a badly configured Checkpoint firewall.

More information will come in upcoming posts.

Using AF_PACKET zero copy mode in Suricata

Victor Julien has just pushed a new feature to Suricata’s git tree. It brings improvements to the AF_PACKET capture mode.

This capture mode can be used on Linux, where it is the native way to capture packets. Suricata is able to use the interesting new multithreading feature provided by AF_PACKET on recent kernels: it is possible to have multiple capture threads receiving the packets of a single interface.

The commits add mmap’ed ring buffer support to the AF_PACKET capture and also provide a zero copy mode. The mmap’ed ring buffer is a mechanism similar to the one used by PF_RING: the kernel allocates some memory to store the packets and shares this memory with the capture process. Instead of sending messages, the kernel just writes to the shared memory and the capture process reads from it. This consumes less CPU and helps increase the capture rate. But the main advantage of this technique is that the capture process can treat the packets without making a copy, and this saves a lot of time.

To activate these features, you need a Suricata compiled from the latest git and you need to modify some entries in your suricata.yaml file. You have to tell Suricata that you want to activate the mmap feature. For example, to activate it on eth0, add ‘use-mmap’ to your configuration:
[code]
af-packet:
  - interface: eth0
    use-mmap: yes
[/code]

You can then run Suricata with the command:

suricata -c suricata.yaml --af-packet=eth0

This setup will not activate the zero copy feature, which currently depends on the running mode. You will need to activate the workers mode to enable zero copy. To do so, run Suricata with a command similar to this one:

suricata -c suricata.yaml --af-packet=eth0 --runmode=workers

This code should provide an interesting performance boost to the AF_PACKET capture system. I have no numbers to share yet, but I will be happy to hear about any tests you run.

Ecosystem of Suricata

Suricata is an IDS/IPS engine. To build a complete solution, you will need to use other tools.

The following schema represents a possible software setup when Suricata is used as an IDS or IPS on the network. It only uses open source components:

Suricata is used to sniff and analyse the traffic. To detect malicious traffic, it uses signatures (or rules). You can download a set of specialised rules from EmergingThreats.

To analyse the alerts generated by Suricata, you will need an interface such as snorby.

But this interface needs access to the data, which is done by storing the alerts in a database. The database can be fed by barnyard2, which takes the unified2 files output by Suricata and converts and sends them to a database (or to some other format).
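For reference, the unified2 output that barnyard2 consumes is enabled in the outputs section of suricata.yaml with something like this (a sketch; the exact keys depend on your Suricata version):

```yaml
outputs:
  - unified2-alert:
      enabled: yes
      filename: unified2.alert
```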

If you want counter-measures to be taken (such as temporarily blocking the attacker at the firewall level), you can use snortsam.

More complex setups can include interactions with a SIEM solution.

About the publication of EdenWall’s code

I co-founded the company INL in 2004. Renamed EdenWall in 2009, following a fundraising round and a change of business, the
company’s new business model was to sell security appliances based on the free software NuFW, which I had started in 2003.
NuFW, a software layer adding flow authentication to Netfilter, remained the company’s technological engine, but it was not
easy to approach, since its deployment required low-level skills. So, starting in 2005, we released complementary building
blocks under free licenses: Nulog, a log analysis project I had begun in 2001, and Nuface, a filtering policy configuration
interface, in 2005. This opening process culminated in NuFirewall, a standalone firewall solution based on the EdenWall
components, released in 2010. It was a free version of the EdenWall appliances, distributed as an independent distribution
published under the GPL. The founders’ idea was to have a product structure similar to an offering like VirtualBox’s, with
a dual-licensed distribution: a free solution suitable for most users and a version with Enterprise features.

Following a series of unfortunate events, EdenWall went bankrupt during 2011. The various free software projects then
found themselves orphaned. The FSF, together with some of the company’s former developers, decided to coordinate the
efforts to take over these free software projects. I had fought for the publication of these components and solutions,
so I was very enthusiastic about the idea of setting up a site hosting what had been published under a free license.
That is what I express in the quote included in the FSF’s press release.

However, it is the major part of EdenWall’s code that has been published. This publication may be valid from a legal
point of view, as explained in the FSF’s statement, but it in no way reflects my personal moral position. I therefore
do not endorse this global publication, only the takeover of the NuFW and NuFirewall projects and, more generally, of
the code that EdenWall had already published under a free license.

Securing Netfilter connection tracking helpers

Following the presentation I made during the 8th Netfilter Workshop, it was decided to write a document containing the best practices for a secure use of iptables and connection tracking helpers.

This document, called “Secure use of iptables and connection tracking helpers”, is now available on this site. It contains recommendations that should be followed carefully if you are the administrator of a Netfilter/Iptables firewall or the developer of Netfilter-based software.
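To give an idea of the kind of recommendations it contains, the safe pattern is to assign a helper explicitly to the traffic that really needs it instead of relying on automatic assignment. A sketch for the FTP helper (the server address is hypothetical; see the document for the exact rules matching your kernel and setup):

```shell
# Assign the FTP helper only to control connections going to the real
# FTP server, using the CT target (raw table, kernel >= 2.6.34).
iptables -t raw -A PREROUTING -p tcp -d 192.0.2.10 --dport 21 -j CT --helper ftp
# Accept only the RELATED data connections created by that helper.
iptables -A FORWARD -m conntrack --ctstate RELATED -m helper --helper ftp -j ACCEPT
```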

Acquisition systems and running modes evolution of Suricata

Some new features have recently reached Suricata’s git tree and will be available in the next development release. I’ve worked on some of them and will describe them here.

Multi interfaces support and new running modes

Configuration update

The IDS live modes in Suricata (pcap, pf_ring, af_packet) now support capture on multiple interfaces. The syntax of the YAML configuration file has evolved and it is now possible to set per-interface variables.

For example, it is possible to define a pfring configuration with the following syntax:

pfring:
  - interface: eth4
    threads: 8
    cluster-id: 99
    cluster-type: cluster_flow
  - interface: eth1
    threads: 2
    cluster-id: 98
    cluster-type: cluster_round_robin

This sets different parameters for the eth4 and eth1 interfaces. With that configuration, if the user launches suricata with

suricata -c suricata.yaml --pfring

it will listen on both interfaces, with 8 threads receiving packets on eth4 using flow-based load balancing and 2 threads on eth1.

If you want to run suricata on a single interface, simply do:

suricata -c suricata.yaml --pfring=eth4

This syntax can be used with the new AF_PACKET acquisition module described below.

New running modes

The running modes have been extended with a new mode, available for pfring and af_packet, which is called workers. This mode starts a configurable number of threads which do all the treatment, from packet acquisition to logging.

List of running modes

Here is the list of current running modes:

  • auto: Multi threaded mode (available for all packet acquisition modules)
  • single: Single threaded mode (available in pcap, pcap file, pfring, af_packet)
  • workers: Workers mode (available in AF_PACKET and pfring)
  • autofp: Multi threaded mode. Packets from each flow are assigned to a single detect thread.

af_packet support

Suricata now supports acquisition via AF_PACKET. This Linux packet acquisition socket has recently evolved and now supports load balancing the capture on one interface between several userspace sockets. This module can be configured like shown at the start of this post. It will run on almost any Linux, but you will need a 3.0 kernel to be able to use the load balancing features. For example, to listen on eth4:

suricata -c suricata.yaml --af-packet=eth4
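By analogy with the pfring example above, an af-packet section with per-interface variables could look like this (a sketch; check the suricata.yaml shipped with your build for the exact supported keys):

```yaml
af-packet:
  - interface: eth4
    threads: 8
    cluster-id: 99
    cluster-type: cluster_flow
    use-mmap: yes
```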

About Suricata performance boost between 1.0 and 1.1beta2

Discovering the performance boost

While coding on both the 1.0 and 1.1 branches of Suricata, I noticed that there was a huge performance improvement of the 1.1 branch over the 1.0 branch: the parsing of a given real-life pcap file took 200 seconds with 1.0 but only 30 seconds with 1.1. This boost was huge, so I decided to double check and study how such a performance gain was possible and how it was obtained.

A git bisection showed me that the performance improvement was made in at least two main steps. I then decided to do a more systematic study by iterating over the revisions and, for each one, running the same test with the same basic, untuned configuration:

suricata -c ~eric/builds/suricata/etc/suricata.yaml  -r benches/sandnet.pcap

and storing the log output.
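With all the runs stored, locating the big steps is straightforward; here is a minimal sketch with made-up timings (the real measurements came from the stored log output of each revision):

```python
# Hypothetical (commit, seconds) pairs ordered from oldest to newest;
# the real data was the run time logged for each revision tested.
timings = [("aaaa111", 200.0), ("bbbb222", 190.0),
           ("c61c68f", 90.0), ("72b0fcf", 45.0), ("dddd444", 35.0)]

# Speed-up factor introduced by each commit relative to its parent.
steps = [(commit, before / after)
         for (_, before), (commit, after) in zip(timings, timings[1:])]

# The commit with the largest factor is the first candidate to inspect.
biggest = max(steps, key=lambda s: s[1])
print(biggest[0])
```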

Graphing the improvements

The following graph shows the evolution of treatment time by commits between suricata 1.0.2 and suricata 1.1beta2:

It is impressive to see that the improvements are located over a really short period: in terms of commit dates, almost everything happened between December 1st and December 9th.

The following graph shows the same data with a zoom on the critical period:

One can see that there are two big steps and a last, less noticeable phase.

Identifying the commits

The first big step in the improvement is due to commit c61c68fd:

commit c61c68fd365bf2274325bb77c8092dfd20f6ca87
Author: Anoop Saldanha
Date:   Wed Dec 1 13:50:34 2010 +0530

    mpm and fast pattern support for http_header. Also support relative modifiers for http_header

This commit more than doubled the previous performance.

The second step is a commit which also doubled performance. It is again by Anoop Saldanha:

commit 72b0fcf4197761292342254e07a8284ba04169f0
Author: Anoop Saldanha
Date:   Tue Dec 7 16:22:59 2010 +0530

    modify detection engine to carry out uri mpm run before build match array if alproto is http and if sgh has atleast one sig with uri mpm set

Other improvements were made a few hours later by Anoop, who achieved a further 20% improvement with:

commit b140ed1c9c395d7014564ce077e4dd3b4ae5304e
Author: Anoop Saldanha
Date:   Tue Dec 7 19:22:06 2010 +0530

    modify detection engine to run hhd mpm before building the match array

The motivation for this development was that the developers knew the match on http_headers was not optimal because it was using a single pattern search algorithm. By switching to a multi-pattern match algorithm, they knew it would do a great prefiltering job and increase the speed. Here’s Victor Julien’s comment and explanation:

We knew that at the time we inspected the http_headers and a few other buffers for each potential signature match over and over again using a single pattern search algorithm. We already knew this was inefficient, so moving to a multi-pattern match algorithm that would prefilter the signatures made a lot of sense even without benching it.

Finally, two days later, there is a series of two commits which brings another 20-30% improvement:

commit 8bd6a38318838632b019171b9710b217771e4133
Author: Anoop Saldanha
Date:   Thu Dec 9 17:37:34 2010 +0530

    support relative pcre for http header. All pcre processing for http header moved to hhd engine

commit 2b781f00d7ec118690b0e94405d80f0ff918c871
Author: Anoop Saldanha
Date:   Thu Dec 9 12:33:40 2010 +0530

    support relative pcre for client body. All pcre processing for client body moved to hcbd engine

Conclusion

It appears that all the improvements are linked to modifications of the HTTP handling. Working hard on improving the HTTP features has led to an impressive performance boost. Thanks a lot to Anoop for this awesome work. As HTTP now makes up most of the traffic on the Internet, this is really good news for Suricata users!