A new unix command mode in Suricata

Introduction

I’ve been working for the past few days on a new Suricata feature. It is available in Suricata 1.4rc1.

Suricata can now listen on a unix socket and accept commands from the user. The exchange protocol is JSON-based and the message format has been designed to be generic; it is described in the corresponding commit message. An example script called suricatasc is provided in the source and installed automatically when updating Suricata.

Unix socket command

By setting enabled to yes under unix-command in the Suricata YAML configuration file, the creation of the socket is activated:

unix-command:
  enabled: yes
  #filename: custom.socket # use this to specify an alternate file

For now, this provides a set of really exciting commands like:

  • shutdown: this shuts down Suricata
  • iface-list: list interfaces where Suricata is sniffing packets
  • iface-stat: list statistics for an interface

For example, a typical session with suricatasc will look like:

# suricatasc 
>>> iface-list
Success: {'count': 2, 'ifaces': ['eth0', 'eth1']}
>>> iface-stat eth0
Success: {'pkts': 378, 'drop': 0, 'invalid-checksums': 0}

Useful, but not really sexy for now. The main part, though, is the dedicated running mode.

Unix socket running mode

This mode is one of the main motivations behind this code. The idea is to be able to ask Suricata to process different pcap files without having to restart Suricata between files.
This saves a huge amount of time, as you don’t need to wait for the signature engine to initialize for each file.

To use this mode, start Suricata with your preferred YAML file and provide the --unix-socket option as argument:

suricata -c /etc/suricata-full-sigs.yaml --unix-socket

Then, you can use the provided suricatasc script to connect to the command socket and ask for pcap processing:

root@tiger:~# suricatasc
>>> pcap-file /home/benches/file1.pcap /tmp/file1
Success: Successfully added file to list
>>> pcap-file /home/benches/file2.pcap /tmp/file2
Success: Successfully added file to list

Yes, you can add multiple files without waiting for the result: they will be processed sequentially and the generated log/alert files will be put into the directory specified as the second argument of the pcap-file command. You need to provide absolute paths to the files and the directory, as Suricata doesn’t know from where the script has been run.

To know how many files are waiting to be processed, you can do:

>>> pcap-file-number
Success: 3

To get the list of queued files, do:

>>> pcap-file-list
Success: {'count': 2, 'files': ['/home/benches/file1.pcap', '/home/benches/file2.pcap']}

The suricatasc script is just intended to be a basic tool to send commands. The protocol has been chosen to be simple so people can easily build their own scripts.

Some words about the protocol

The client connects to the socket and sends its protocol version:

{ "version": "$VERSION_ID" }

The server sends an answer:

{ "return": "OK|NOK" }

If the server returns OK, the client is then allowed to send commands.

The format of a command is the following:

{
   "command": "$COMMAND_NAME",
   "arguments": { $KEY1: $VAL1, ..., $KEYN: $VALN }
}

For example, to ask Suricata to process a pcap file, the command is something like:

{
  "command": "pcap-file",
  "arguments": { "filename": "smtp-clean.pcap", "output-dir": "/tmp/out" }
}

The server will try to execute the “command” specified with the (optional) provided “arguments”.
The answer from the server has the following format:

{
  "return": "OK|NOK",
  "message": JSON_OBJECT or information string
}
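
To illustrate how simple this is, here is a minimal sketch of a client written in Python using only the standard library. It is not the suricatasc implementation itself: the socket path and the protocol version string below are assumptions, so check the suricatasc script shipped with your Suricata release for the exact values.

#!/usr/bin/env python
# Minimal sketch of a client for the Suricata unix socket protocol.
# SOCKET_PATH and VERSION are assumptions; check the suricatasc script
# shipped with your Suricata release for the exact values.
import json
import socket

SOCKET_PATH = "/var/run/suricata/suricata-command.socket"  # assumed default location
VERSION = "0.1"  # assumed protocol version string

def transact(sock, message):
    # Send one JSON message and read the JSON answer (simplified: single recv).
    sock.send(json.dumps(message).encode())
    return json.loads(sock.recv(4096).decode())

sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
sock.connect(SOCKET_PATH)

# Version handshake: the server must answer OK before commands are accepted.
if transact(sock, {"version": VERSION})["return"] != "OK":
    raise SystemExit("protocol version rejected by the server")

# Queue a pcap file for processing (absolute paths are required).
answer = transact(sock, {"command": "pcap-file",
                         "arguments": {"filename": "/home/benches/file1.pcap",
                                       "output-dir": "/tmp/file1"}})
print(answer)

sock.close()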

Building it

This new feature uses jansson for JSON encoding/decoding, so it needs to be installed on your system before you can do the build. On Debian (testing or sid) or Ubuntu, you can install the libjansson-dev package.

Now, you can proceed with your normal build. For example:

./autogen.sh
./configure
make
make install

You can check whether the support will be built in the message at the end of configure:

Suricata Configuration:
  AF_PACKET support:                       yes
...
  Unix socket enabled:                     yes

  libnss support:                          no
  libnspr support:                         no
  libjansson support:                      yes

If this is your first install, you may want to run make install-full to get a working Suricata configuration with Emerging Threats rules downloaded and set up.

Now, you can just do:

suricata --unix-socket

Conclusion

This new feature can be really interesting for people who are using Suricata to parse a large number of pcap files. The unix command mode will become even more interesting when more commands are available. Regarding this, don’t hesitate to give me some feedback or ideas.

Coccigrep improved func operation

Coccigrep 1.11 is now available and mainly features some improvements related to the func search. The func operation can be used to search where a structure is used as an argument of a function. For example, to search where Packet structures are freed inside the Suricata project, one can run:

$ coccigrep -t Packet -a "SCFree" -o func src/
src/alert-unified2-alert.c:1156 (Packet *p):         SCFree(p);
src/alert-unified2-alert.c:1161 (Packet *p):         SCFree(p);
...
src/alert-unified2-alert.c:1368 (Packet *pkt):         SCFree(pkt);

With coccigrep 1.11, it is now possible to look for a function with a regular expression. For example, to see how a time_t is used in the print functions of Suricata, which all start with SCLog (SCLogDebug, SCLogWarning, …), you can simply run:

$ coccigrep -t time_t -a "SCLog.*" -o func src/ 
src/defrag.c:480 (time_t *dc->timeout):     SCLogDebug("\tTimeout: %"PRIuMAX, (uintmax_t)dc->timeout);

With the 1.11 version, the func operation is now more useful. It is also more accurate, as cast parameters or direct use of a structure (instead of usage through a pointer) are now supported.

New AF_PACKET IPS mode in Suricata

A new Suricata IPS mode

Suricata IPS capabilities are not new. It is possible to use Suricata with Netfilter or ipfw to build a state-of-the-art IPS. On Linux, however, this system does not have the best throughput performance. Patrick McHardy’s work on netlink: memory mapped I/O should bring some real improvement, but it is not yet available.

I’ve thus decided to implement an IPS mode based on AF_PACKET (read: raw sockets). The idea is based on one of Snort’s running modes. It peers two network interfaces, and all packets received on one interface are sent to the other interface (unless a signature with the drop keyword fired on the packet). This requires dedicating two network interfaces to Suricata, but it provides a simple bridge system. As Suricata is using the latest AF_PACKET features (read: load balancing), it was possible to build something really promising.

Victor Julien has committed my implementation, which is now available in the current git tree and will be part of Suricata 1.4beta1.

Configuration

You need to dedicate two network interfaces to this mode. The configuration is made via some new configuration variables available in the description of an AF_PACKET interface.

For example, the following configuration will create a Suricata acting as an IPS between interfaces eth0 and eth1:

af-packet:
  - interface: eth0
    threads: 1
    defrag: yes
    cluster-type: cluster_flow
    cluster-id: 98
    copy-mode: ips
    copy-iface: eth1
    buffer-size: 64535
    use-mmap: yes
  - interface: eth1
    threads: 1
    cluster-id: 97
    defrag: yes
    cluster-type: cluster_flow
    copy-mode: ips
    copy-iface: eth0
    buffer-size: 64535
    use-mmap: yes

Basically, we’ve got an af-packet configuration with two interfaces. Interface eth0 will copy all received packets to eth1 because of the copy-* configuration variables:

    copy-mode: ips
    copy-iface: eth1

The configuration on eth1 is symmetric:

    copy-mode: ips
    copy-iface: eth0

There are some important facts to consider when setting up this mode:

  • The implementation of this mode depends on the zero copy mode of AF_PACKET. Thus you need to set use-mmap to yes on both interfaces.
  • The MTU on both interfaces has to be equal: the copy from one interface to the other is direct, and packets bigger than the MTU will be dropped by the kernel (see the sanity-check sketch after this list).
  • Set different values of cluster-id on both interfaces to avoid conflicts.
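
Since a mismatched MTU silently leads to dropped packets, it can be worth checking the MTUs before starting Suricata. Here is a small sanity-check sketch in Python that reads the MTU of both peered interfaces from sysfs; the interface names are just examples matching the configuration above.

#!/usr/bin/env python
# Small sanity check: verify that the two peered interfaces have the same MTU.
# The interface names are examples; adapt them to your af-packet configuration.
import sys

def mtu(iface):
    # On Linux, the MTU of an interface is exposed in sysfs.
    with open("/sys/class/net/%s/mtu" % iface) as f:
        return int(f.read().strip())

a, b = "eth0", "eth1"
mtu_a, mtu_b = mtu(a), mtu(b)
if mtu_a != mtu_b:
    print("MTU mismatch: %s=%d, %s=%d" % (a, mtu_a, b, mtu_b))
    sys.exit(1)
print("MTU OK (%d) on %s and %s" % (mtu_a, a, b))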

The copy-mode variable can take the following values:

  • ips: the drop keyword is honored and matching packets are dropped.
  • tap: no drop occurs, Suricata acts as a bridge

Please note that the mode can be different on the two peered interfaces.

Current limitation

One other important point is that the threads variable must be equal on both interfaces. This is due to the fact that the peering is done at the socket level: each packet received on a socket is sent to a peered socket listening on the peered interface.

Last but not least, you need at least Linux kernel 3.6 (or 3.4.12) to be able to set a threads value higher than 1. There is a bug in prior kernels that causes an infinite loop (a packet being sent out on an interface and read back in again, …). If you can’t wait or can’t do a full upgrade of your kernel, you can use my patch, which will be part of Linux 3.6 and of Linux 3.4.12.

Running it

Running Suricata in AF_PACKET IPS mode is straightforward. Once the configuration has been updated, you can simply run:

suricata -c /etc/suricata/suricata.yaml --af-packet

Please note that it is possible to have normal IDS interfaces running simultaneously. For example, eth3 could be added to the af-packet configuration and used as a regular interface.

Some implementation details

You can find some additional technical information in the commit.

As mentioned above, the peering is done at the socket level. In Suricata, each capture thread opens a socket and binds it to the cluster-id of the interface.
Once a thread is started, it is peered with a thread listening on the copy-iface interface, and that thread’s socket is used as the send socket.
This system avoids locking issues in the workers running mode, as only one thread will write to the peered socket at a time because a capture thread treats only one packet at a time.

We need to reuse the capture socket of the peered interface for sending for a simple reason: if a new dedicated socket were used, the packets sent on that socket would be received by Suricata’s read socket instead of being ignored. The Linux kernel implements the natural feature of not delivering a packet back to the socket that sent it.

This behavior was implemented in the Linux kernel for regular sockets, but this was not the case for clustered sockets. This is why at least Linux 3.6 or my patch is needed to be able to use a threads value bigger than one.

Conclusion

This mode is new in Suricata and is really promising, but it has only had limited testing in high-bandwidth environments. So, as usual, feedback is more than welcome. Don’t hesitate to send me an email or to ask questions on the Suricata mailing lists.