Ecosystem of Suricata

Suricata is an IDS/IPS engine. To build a complete solution, you will need to use other tools.

The following diagram represents a possible software setup when Suricata is used as an IDS or IPS on the network. It uses only open-source components:

Suricata is used to sniff and analyse the traffic. To detect malicious traffic, it uses signatures (or rules). You can download a set of specialised rules from EmergingThreats.

To analyse the alerts generated by Suricata, you will need an interface such as Snorby.

But this interface needs access to the data, which is provided by storing the alerts in a database. The database can be fed by barnyard2, which takes the unified2 files output by Suricata and converts and sends them to a database (or to some other format).
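For reference, here is a minimal sketch of the corresponding suricata.yaml output section that produces the unified2 files barnyard2 reads (exact option names and units may differ between Suricata versions; check the suricata.yaml shipped with your release):

```yaml
outputs:
  # Write alerts in unified2 format for barnyard2 to pick up
  - unified2-alert:
      enabled: yes
      filename: unified2.alert   # actual files are named unified2.alert.<timestamp>
      limit: 32mb                # rotate the file once it reaches this size
```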

If you want counter-measures to be taken (such as temporarily blocking an attacker at the firewall level), you can use SnortSam.

More complex setups can include interactions with a SIEM solution.

Evolution of Suricata's acquisition systems and running modes

Some new features have recently reached Suricata's git tree and will be available in the next development release. I've worked on some of them, which I will describe here.

Multi interfaces support and new running modes

Configuration update

IDS live mode in Suricata (pcap, pf_ring, af_packet) now supports capture on multiple interfaces. The syntax of the YAML configuration file has evolved, and it is now possible to set per-interface variables.

For example, it is possible to define the pfring configuration with the following syntax:

pfring:
  - interface: eth4
    threads: 8
    cluster-id: 99
    cluster-type: cluster_flow
  - interface: eth1
    threads: 2
    cluster-id: 98
    cluster-type: cluster_round_robin

This sets different parameters for the eth4 and eth1 interfaces. With that configuration, if the user launches Suricata with

suricata -c suricata.yaml --pfring

it will listen on both interfaces, with 8 threads receiving packets on eth4 using flow-based load balancing and 2 threads on eth1 using round-robin load balancing.

If you want to run suricata on a single interface, simply do:

suricata -c suricata.yaml --pfring=eth4

This syntax can also be used with the new AF_PACKET acquisition module described below.

New running modes

The running modes have been extended with a new mode, available for pfring and af_packet, called workers. This mode starts a configurable number of threads, each of which handles the complete treatment of a packet, from acquisition to logging.
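As a sketch, the runmode can be selected globally in the YAML (the option names below are those of current git builds; check suricata --help and the shipped suricata.yaml on your version):

```yaml
# suricata.yaml: select the workers runmode for the chosen capture method
runmode: workers
```

Recent builds also accept the equivalent command-line form, e.g. suricata -c suricata.yaml --af-packet --runmode=workers.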

List of running modes

Here is the list of current running modes:

  • auto: multi-threaded mode (available for all packet acquisition modules)
  • single: single-threaded mode (available for pcap, pcap file, pfring, af_packet)
  • workers: workers mode (available for af_packet and pfring)
  • autofp: multi-threaded mode in which packets from each flow are assigned to a single detect thread

af_packet support

Suricata now supports acquisition via AF_PACKET. This Linux packet acquisition socket has recently evolved, and it now supports load balancing of the capture on an interface across userspace sockets. The module can be configured as shown at the start of this post. It will run on almost any Linux kernel, but you will need a 3.0 kernel to be able to use the load-balancing features.

suricata -c suricata.yaml --af-packet=eth4
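The YAML syntax mirrors the pfring one shown earlier; a possible af-packet section could look like the following sketch (the cluster-id/cluster-type fanout settings only take effect with the load-balancing support of a 3.0 kernel):

```yaml
af-packet:
  - interface: eth4
    threads: 8
    cluster-id: 99
    cluster-type: cluster_flow          # kernel-side flow-based load balancing
  - interface: eth1
    threads: 2
    cluster-id: 98
    cluster-type: cluster_round_robin   # round-robin packet distribution
```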

OISF brainstorming: planning phase 3 (take 3)

GEO IP

The idea is to add a keyword that would interact with a GeoIP database (at least a free one) and use it to detect things like command-and-control channels. For example, an IRC server in an uncommon country is very likely a command-and-control channel.

Live ruleset swap

A must-have! This is vital for critical environments. It is very costly in memory, so it should be optional to avoid blowing up low-memory boxes.

Qosmos integration / API for data exchange

Bringing in protocol analysis is an interesting point, as it will help increase the performance and accuracy of the engine. Knowing the protocol makes it possible to run only protocol-related rules against flows of that protocol, and this avoids false detections caused by running rules against the wrong protocol. Integration of both OpenDPI and Qosmos technologies is discussed; a common API is needed to be able to use both systems.

Global shared flowvars

Global flow variables will change the way we build rules: no longer being constrained to per-stream variables will increase the power of rules.

Host/app/OS table import

The idea is to load host types from a file to be able to tune host settings precisely.

IPFIX support

IPFIX support as input or output could bring some advantages.

Conclusion

Matt Jonkman and Victor Julien will now summarize the input and publish on the OISF website the planned features for phase 3, based on the discussions about task priority that have been held.

OISF brainstorming: planning phase 3 (take 2)

DNS fast flux/anomaly detection

The idea is to detect malware and other threats by collecting DNS requests and their answers and detecting anomalies, for example a host making a lot of requests to a single domain.

The first part of the job on the Suricata side is to log all requests and their answers. Analysis can then occur in the database.

File extraction

This is work in progress linked to a third-party contract. It permits storing exchanged files on disk for some application-level protocols. It is possible to say: "store the file if the content type is different from the extension". File extraction currently works on HTTP and focuses on POST requests to detect uploaded files.

Time-based counters

This aims at detecting regular behaviours, which are often linked with command-and-control connections. For example, an alert could be triggered if a specific DNS request is made every five minutes.
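The underlying heuristic can be sketched in a few lines: events whose inter-arrival times are nearly constant look like beaconing. This is my own illustration of the idea, not Suricata code; the function name and the tolerance threshold are arbitrary.

```python
from statistics import mean, pstdev

def looks_periodic(timestamps, tolerance=0.1):
    """Flag a series of event timestamps (in seconds) whose inter-arrival
    times are nearly constant, as a beaconing C&C connection would be."""
    if len(timestamps) < 3:
        return False
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    avg = mean(gaps)
    # Nearly constant interval: spread is small relative to the average gap.
    return avg > 0 and pstdev(gaps) / avg < tolerance

# A DNS request every five minutes (~300 s) with small jitter is flagged;
# irregular traffic is not.
print(looks_periodic([0, 301, 599, 902, 1200]))  # beacon-like
print(looks_periodic([0, 40, 600, 610, 1200]))   # irregular
```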

HTTP keyword improvement

HTTP keyword improvements are discussed; some specific keywords could be added to avoid the cost of pcre. A two-parameter header match is suggested to cover all possible headers.
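To illustrate the cost being discussed, here is a hedged example pair: the first rule uses the http_header content modifier (which can benefit from multi-pattern matching), while the second does the same match with pcre restricted to the header buffer. The sid values and the matched string are arbitrary, invented for illustration.

```
alert http $HOME_NET any -> $EXTERNAL_NET any (msg:"UA match via http_header"; \
    content:"User-Agent|3A| BadBot"; http_header; sid:1000001; rev:1;)
alert http $HOME_NET any -> $EXTERNAL_NET any (msg:"UA match via pcre"; \
    pcre:"/^User-Agent\x3A BadBot/Hm"; sid:1000002; rev:1;)
```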

Output

The discussion is about logging the normalised application-level content. Currently, only the packet triggering the alert is logged, so the information about why Suricata raised the alert is lost. It would therefore be interesting to log the reconstructed application-level message, to permit the analyst to analyse the reason for the alert.

Regarding the output module, it could be interesting to add support for CEE logging, which would be able to carry the resulting composite alert. For performance reasons, a barnyard output of this composite alert is interesting; it may be necessary to suggest some changes to barnyard to support this application-level alert.

OISF brainstorming: planning phase 3 (take 1)

Performance improvement

As shown by Victor's latest work on performance counters, there is a lot of work that can be done to improve performance. It is currently good, but there is room for improvement. A proposal to provide off-loading or clustering is made. This is heavily discussed, but as Victor points out, it would be more interesting to do this in the next phase: phase 3 should focus on improving the current code. This will make it possible to use the upcoming Suricata killer features like global flow variables.

SSL preprocessor

Following the recent certificate authority attacks, an SSL preprocessor able to detect blacklisted certificates and other things would be really interesting. It could also detect certificate property changes when connecting to a host, or similar things.

Decryption is not seen by the participants of the webex as important: it would be a performance killer without a hardware accelerator. Limiting the work to certificate analysis is therefore an interesting first target.

IP and DNS reputation

The idea is to exchange IP reputation between sensors, so that all of them are protected against an offensive host as soon as it has been identified on one sensor. This requires exchanges between nodes, which is already done by other projects. Doing it alone in Suricata would be too big a task; a first step could be adding a way to interact easily with those projects.

Matt Jonkman: development progress

Phase 2 development is almost over now. Among the completed major features:

  • multithreading
  • protocol discovery
  • SMB logging
  • HTTP logging
  • flow variables (flowvars)

One of the advantages of Suricata over Snort is protocol discovery combined with HTTP parsing by libhtp. It provides a huge improvement, as a lot of bad flows use HTTP on non-standard ports.

Victor Julien: Development status

Work started in September 2007. It depends on some external libraries, such as multithreading and input handling libraries. The main external dependency is libhtp, initially developed by Ivan Ristic.

The development is managed in a single git repository. Victor is the only one with commit rights. Reviews are done by Victor, and cross-reviews are made by the developers.

The work units for developers are tickets written by Victor, each describing a specific task to do. These tasks are mainly done by OISF-funded developers. Some simpler tasks are left to the community, and everyone can help with them.

To reduce Victor's load, subsystem maintainers have been nominated:

  • Eric Leblond: packet acquisition
  • Anoop Saldanha: detection part

They will have freedom in how they improve the subsystems they are in charge of.

The development currently uses two branches (1.0, which is bug-fix only, and master). Victor is not happy with this and would like to switch to time-based releases. This is debated, as frequent updates can be difficult for companies to deal with; funding of maintenance by companies could help keep the current system working.

Peter Manev is in charge of QA. There is a lot of work to do in this area: unit testing is currently good, but a lot of work remains to improve regression detection.

Performance has been improved in 1.1, with a focus on efficient algorithms.

CUDA support needs help. Performance is still lower with it than without, so this really needs developer power!

Performance profiling has been added recently by Victor, and it clearly shows that work is needed in this area.

To sum up, the two main areas where help will be more than welcome are QA and performance profiling.

About Suricata performance boost between 1.0 and 1.1beta2

Discovering the performance boost

When doing some coding on both the 1.0 and 1.1 branches of Suricata, I noticed a huge performance improvement of the 1.1 branch over the 1.0 branch: parsing a given real-life pcap file took 200 seconds with 1.0 but only 30 seconds with 1.1. This boost was huge, and I decided to double-check and study how it was possible and how it was obtained.

A git bisection showed me that the performance improvement was made in at least two main steps. I then decided to do a more systematic study by iterating over the revisions and each time running the same test with the same basic, untuned configuration:

suricata -c ~eric/builds/suricata/etc/suricata.yaml  -r benches/sandnet.pcap

and storing the log output.

Graphing the improvements

The following graph shows the evolution of treatment time by commits between suricata 1.0.2 and suricata 1.1beta2:

It is impressive to see that the improvements are concentrated in a really short period. In terms of commit dates, almost everything happened between December 1st and December 9th.

The following graph shows the same data with a zoom on the critical period:

One can see that there are two big steps and a last, less noticeable phase.

Identifying the commits

The first big step in the improvement is due to commit c61c68fd:

commit c61c68fd365bf2274325bb77c8092dfd20f6ca87
Author: Anoop Saldanha
Date:   Wed Dec 1 13:50:34 2010 +0530

    mpm and fast pattern support for http_header. Also support relative modifiers for http_header

This commit more than doubled the previous performance.

The second step is a commit which also doubles performance. It is again by Anoop Saldanha:

commit 72b0fcf4197761292342254e07a8284ba04169f0
Author: Anoop Saldanha
Date:   Tue Dec 7 16:22:59 2010 +0530

    modify detection engine to carry out uri mpm run before build match array if alproto is http and if sgh has atleast one sig with uri mpm set

Other improvements were made a few hours later by Anoop, who achieved a further 20% improvement with:

commit b140ed1c9c395d7014564ce077e4dd3b4ae5304e
Author: Anoop Saldanha
Date:   Tue Dec 7 19:22:06 2010 +0530

    modify detection engine to run hhd mpm before building the match array

The motivation for this development was that the developers knew the match on http_headers was not optimal, because it used a single-pattern search algorithm. By switching to a multi-pattern match algorithm, they knew it would do a great prefiltering job and increase the speed. Here is Victor Julien's comment and explanation:

We knew that at the time we inspected the http_headers and a few other buffers for each potential signature match over and over again using a single pattern search algorithm. We already knew this was inefficient, so moving to a multi-pattern match algorithm that would prefilter the signatures made a lot of sense even without benching it.

Finally, two days later, there is a series of two commits which brings another 20-30% improvement:

commit 8bd6a38318838632b019171b9710b217771e4133
Author: Anoop Saldanha
Date:   Thu Dec 9 17:37:34 2010 +0530

    support relative pcre for http header. All pcre processing for http header moved to hhd engine

commit 2b781f00d7ec118690b0e94405d80f0ff918c871
Author: Anoop Saldanha
Date:   Thu Dec 9 12:33:40 2010 +0530

    support relative pcre for client body. All pcre processing for client body moved to hcbd engine
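As a sanity check (my own arithmetic, not something from the commits), multiplying the per-step gains quoted above roughly reproduces the overall boost measured at the start of this post:

```python
# Per-step gains quoted in the text: more than 2x, 2x, ~20%, ~20-30%
# (taking the lower bound of each range)
steps = [2.0, 2.0, 1.2, 1.25]

combined = 1.0
for s in steps:
    combined *= s

# ~6x as a lower bound, consistent with the measured 200 s -> 30 s (~6.7x)
print(f"combined speedup ~ {combined:.1f}x")
print(f"measured speedup ~ {200 / 30:.1f}x")
```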

Conclusion

It appears that all the improvements are linked to modifications of the HTTP handling. Working hard on improving HTTP features has led to an impressive performance boost. Thanks a lot to Anoop for this awesome work. As HTTP now makes up most of the traffic on the Internet, this is really good news for Suricata users!