Thanks a lot to Christophe Brocas and Mathieu Blanc for the organisation of the security track of LSM.
Discovering the performance boost
While doing some coding on both the 1.0 and 1.1 branches of Suricata, I noticed that there was a huge performance improvement of the 1.1 branch over the 1.0 branch: parsing a given real-life pcap file was taking 200 seconds with 1.0 but only 30 seconds with 1.1. This boost was so large that I decided to double-check and to study how it was possible and how it was obtained.
A git bisection showed me that the performance improvement was made in at least two main steps. I then decided to study the improvement more systematically, by iterating over the revisions and each time running the same test with the same basic, untuned configuration:
suricata -c ~eric/builds/suricata/etc/suricata.yaml -r benches/sandnet.pcap
and storing the log output.
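The per-revision loop can be sketched as follows. This is only a rough sketch: the build commands, tag names, and log format are assumptions on my part; only the suricata invocation itself comes from the test above.

```shell
#!/bin/sh
# Iterate over every revision between the two releases and time the same run.
# Everything except the suricata command line is an assumed reconstruction.
for rev in $(git rev-list --reverse suricata-1.0.2..master); do
    git checkout -q "$rev" || continue
    ./autogen.sh >/dev/null 2>&1 && \
    ./configure  >/dev/null 2>&1 && \
    make         >/dev/null 2>&1 || continue
    start=$(date +%s)
    ./src/suricata -c ~eric/builds/suricata/etc/suricata.yaml \
        -r benches/sandnet.pcap >/dev/null 2>&1
    end=$(date +%s)
    # store one "commit seconds" line per revision for later graphing
    echo "$rev $((end - start))" >> bench.log
done
```

Each line of bench.log then gives one data point for the graphs below.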
Graphing the improvements
The following graph shows the evolution of processing time by commit between Suricata 1.0.2 and Suricata 1.1beta2:
It is impressive to see that the improvements are concentrated in a really short period. In terms of commit dates, almost everything happened between December 1st and December 9th.
The following graph shows the same data with a zoom on the critical period:
One can see that there are two big steps and a final, less noticeable phase.
Identifying the commits
The first big step in the improvement is due to commit c61c68fd:
commit c61c68fd365bf2274325bb77c8092dfd20f6ca87
Author: Anoop Saldanha
Date:   Wed Dec 1 13:50:34 2010 +0530

    mpm and fast pattern support for http_header. Also support relative modifiers for http_header
This commit more than doubled the previous performance.
The second step is a commit which also doubled performance. It is again by Anoop Saldanha:
commit 72b0fcf4197761292342254e07a8284ba04169f0
Author: Anoop Saldanha
Date:   Tue Dec 7 16:22:59 2010 +0530

    modify detection engine to carry out uri mpm run before build match array if alproto is http and if sgh has atleast one sig with uri mpm set
Other improvements were made a few hours later by Anoop, who achieved a further 20% improvement with:
commit b140ed1c9c395d7014564ce077e4dd3b4ae5304e
Author: Anoop Saldanha
Date:   Tue Dec 7 19:22:06 2010 +0530

    modify detection engine to run hhd mpm before building the match array
The motivation for this development was that the developers knew the match on http_headers was not optimal, because it was using a single-pattern search algorithm. By switching to a multi-pattern match algorithm, they knew it would do a great prefiltering job and increase the speed. Here's Victor Julien's comment and explanation:
We knew that at the time we inspected the http_headers and a few other buffers for each potential signature match over and over again using a single pattern search algorithm. We already knew this was inefficient, so moving to a multi-pattern match algorithm that would prefilter the signatures made a lot of sense even without benching it.
Finally, two days later, there is a series of two commits which brought another 20-30% improvement:
commit 8bd6a38318838632b019171b9710b217771e4133
Author: Anoop Saldanha
Date:   Thu Dec 9 17:37:34 2010 +0530

    support relative pcre for http header. All pcre processing for http header moved to hhd engine

commit 2b781f00d7ec118690b0e94405d80f0ff918c871
Author: Anoop Saldanha
Date:   Thu Dec 9 12:33:40 2010 +0530

    support relative pcre for client body. All pcre processing for client body moved to hcbd engine
It appears that all the improvements are linked to modifications of the HTTP handling. Working hard on improving the HTTP features has led to an impressive performance boost. Thanks a lot to Anoop for this awesome work. As HTTP now makes up most of the traffic on the Internet, this is really good news for Suricata users!
My collaboration with OISF has been announced today. It is an honor for me to join this excellent team on this wonderful project. I've taken a lot of pleasure in contributing to the project over the past months, and I'm sure the start of an official collaboration will lead to good things. The challenge is high and I will do my best to live up to the trust.
A big thanks to all the people who congratulated me on this nomination.
Suricata 1.1beta2 has brought OpenBSD to the list of supported operating systems. I'm a total newbie to OpenBSD, so excuse my lack of respect for OpenBSD standards and usages in this documentation.
Here are the different steps I used to finalize the port, starting from a fresh install of OpenBSD.
If you want to use source taken from git, you will need to install building tools:
pkg_add git libtool
automake and autoconf need to be installed too. For OpenBSD 4.8, one can run:
pkg_add autoconf-2.61p3 automake-1.10.3
For OpenBSD 5., one can run:
pkg_add autoconf-2.61p3 automake-1.10.3p3
For OpenBSD 5.2:
pkg_add autoconf-2.61p3 automake-1.10.3p6
Autoconf 2.61 is known to work; some other versions trigger a compilation failure.
Then you can simply clone the repository and run autogen:
git clone git://phalanx.openinfosecfoundation.org/oisf.git
cd oisf
export AUTOCONF_VERSION=2.61
export AUTOMAKE_VERSION=1.10
./autogen.sh
Before running configure, you need to add the dependencies:
pkg_add gcc pcre libyaml libmagic libnet-1.1.2.1p0
Now we're almost done, and we can run configure:
CPPFLAGS="-I/usr/local/include" CFLAGS="-L/usr/local/lib" ./configure --prefix=/opt/suricata
You can now run make install to build and install Suricata.
The IDS/IPS Suricata has native support for Netfilter queue. This brings IPS functionality to users running Suricata on Linux.
Suricata 1.1beta2 introduces a lot of new features related to the NFQ mode.
New stream inline mode
One of the main improvements of Suricata's IPS mode is the new stream engine dedicated to inline operation. Victor Julien has a great blog post about it.
Multiqueue support
Suricata can now be started on multiple queues by specifying several queue identifiers on the command line. The following syntax:
suricata -q 0 -q 1 -c /etc/suricata.yaml
will start Suricata listening on Netfilter queues 0 and 1.
This option has been added to improve the performance of Suricata in NFQ mode. One observed limitation is the number of packets per second that can be sent to a single queue. By being able to specify multiple queues, it is possible to increase performance. The impact is greatest on multicore systems, where it adds scalability.
This feature can be used with the queue-balance option of the NFQUEUE target, which was added in Linux 2.6.31. When using --queue-balance x:x+n instead of --queue-num, packets are balanced across the given queues, and packets belonging to the same connection are put into the same queue.
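As a sketch, the iptables rule and the matching Suricata invocation could look like this (the chain and queue numbers are illustrative, not taken from the post):

```shell
# Balance packets over queues 0 to 3; packets of the same connection
# always land on the same queue, so stream reassembly stays consistent
iptables -I FORWARD -j NFQUEUE --queue-balance 0:3

# Have Suricata listen on all four queues
suricata -q 0 -q 1 -q 2 -q 3 -c /etc/suricata.yaml
```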
NFQ mode setting
One of the difficulties of IPS usage is building an adapted firewall ruleset. Following my blog post on this issue, I've decided to implement most of the existing modes in Suricata.
When running in NFQ inline mode, it is now possible to use a simulated non-terminal NFQUEUE verdict by using the 'repeat' mode.
This permits sending all needed packets to Suricata via a rule like:
iptables -I FORWARD -m mark ! --mark $MARK/$MASK -j NFQUEUE
And below it, you can have your standard filtering ruleset.
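Put together, a minimal ruleset could be sketched as follows. $MARK and $MASK must match the repeat_mark and repeat_mask values configured in the nfq section; the filtering rules after the NFQUEUE rule are purely illustrative.

```shell
MARK=1
MASK=1
# Send every packet not yet marked by Suricata to the queue
iptables -I FORWARD -m mark ! --mark $MARK/$MASK -j NFQUEUE
# Standard filtering ruleset, reached on the NF_REPEAT pass
# once Suricata has marked the packet (example rules)
iptables -A FORWARD -p tcp --dport 22 -j ACCEPT
iptables -A FORWARD -j DROP
```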
Configuration and other details
To activate this mode, you need to set mode to ‘repeat’ in the nfq section.
nfq:
  mode: repeat
  repeat_mark: 1
  repeat_mask: 1
This option uses the NF_REPEAT verdict instead of the standard NF_ACCEPT verdict. The effect is that the packet is sent back to the start of the table where the queuing decision was taken. As Suricata has put the mark $repeat_mark (with respect to the mask $repeat_mask) on the packet, when the packet reaches the iptables NFQUEUE rule again it no longer matches because of the mark, and the ruleset following this rule is used.
The ‘mode’ option in nfq section can have two other values:
- accept (which is the default, simple mode)
- route (which is a little bit tricky)
The idea behind the route option is that a program using NFQ can issue a verdict that has the effect of sending the packet to another queue. It is thus possible to chain software using NFQ.
To activate this option, you have to use the following syntax:
nfq:
  mode: route
  route_queue: 2
There are not many uses for this option. One mad mind could think of chaining Suricata and Snort in IPS mode 😉
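For the curious, such a chain could be sketched like this. The Snort command line is an assumption on my side (based on its nfq DAQ module), not something tested here.

```shell
# Send traffic to queue 0, where Suricata listens with mode: route
# and route_queue: 2 in its nfq section
iptables -I FORWARD -j NFQUEUE --queue-num 0
suricata -q 0 -c /etc/suricata.yaml

# Suricata's route verdict pushes packets to queue 2,
# where a second inline program such as Snort could pick them up
snort -Q --daq nfq --daq-var queue=2 -c /etc/snort/snort.conf
```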
The nfq_set_mark keyword
Effect and usage of nfq_set_mark
A new keyword, nfq_set_mark, has been added to the rule options.
This is an enhancement of the NFQ mode. If a packet matches a rule using nfq_set_mark in NFQ mode, it is marked with the mark/mask specified in the option during the verdict.
The usage is really simple. In any rule, you can add the nfq_set_mark keyword to specify the mark and mask to put on a packet:
pass tcp any any -> any 80 (msg:"TCP port 80"; sid:91000004; nfq_set_mark:0x80/0x80; rev:1;)
That’s great but what can I do with that ?
If you are not familiar with advanced networking capabilities, you may be wondering how this can be used. The Netfilter mark can be used to modify the handling of a packet by the network stack: routing, quality of service, Netfilter. The concept is thus the following: you can use Suricata to detect a suspect packet and decide to change the way it is handled by the network stack.
Thanks to the CONNMARK target, you can modify the way all packets of the connection are handled.
Among the possibilities:
- Degrade QoS for a connection when a suspect packet has been seen
- Trigger Netfilter logging of all subsequent packets of the connection (logging could be done in pcap for instance)
- Change routing to send the traffic to a dedicated piece of equipment
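For example, combining the nfq_set_mark rule above with CONNMARK, one could log all subsequent packets of a flagged connection. This is only a sketch: the mangle-table placement and the NFLOG group number are my own choices, not from the post.

```shell
# Copy the packet mark set by Suricata (0x80/0x80) to the connection mark
iptables -t mangle -A FORWARD -j CONNMARK --save-mark --mask 0x80
# Restore it onto later packets of the same connection...
iptables -t mangle -A FORWARD -j CONNMARK --restore-mark --mask 0x80
# ...and log those packets via NFLOG (ulogd can write them to a pcap)
iptables -A FORWARD -m mark --mark 0x80/0x80 -j NFLOG --nflog-group 1
```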