Patrick McHardy: Oops, I did it: IPv6 NAT

Introduction

When asked about IPv6 NAT, Harald Welte used to answer: “it will be over my dead body”. It is now available in the official kernel.

Reasons for adding IPv6 NAT

  • Dynamic IPv6 prefixes: ISPs assign dynamic IPv6 prefixes, so internal network addresses change. NAT can bring you stability.
  • Easier test setup.
  • Users are asking for it and most operating systems already have it.

To sum up the arguments about NAT, Patrick McHardy used this video:

A single, clean, bug-free implementation is better than the many incomplete, unofficial implementations that were already available.

Release and implementation

It has been implemented in 2011 and merged in Linux 3.7.

It is mainly based on the IPv4 implementation and, from a user perspective, provides an interface similar to the IPv4 one.
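
For example, a masquerading setup reads almost exactly like its IPv4 counterpart. A minimal sketch (the interface name is a placeholder):

# NAT the source address of everything leaving eth0, as in IPv4
ip6tables -t nat -A POSTROUTING -o eth0 -j MASQUERADE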

It also features an implementation of RFC 6296: stateless network prefix translation. It operates on a per-packet basis and is completely stateless.

It is implemented as two ip6tables targets, SNPT and DNPT, which have to be used on both directions of the traffic to handle the modifications required by RFC 6296.
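
A sketch of an NPT setup between an internal and a public prefix (prefixes and interface are placeholders):

# outgoing traffic: rewrite the internal prefix to the public one
ip6tables -t mangle -A POSTROUTING -o eth0 -j SNPT --src-pfx fd00::/64 --dst-pfx 2001:db8::/64
# incoming traffic: rewrite the public prefix back to the internal one
ip6tables -t mangle -A PREROUTING -i eth0 -j DNPT --src-pfx 2001:db8::/64 --dst-pfx fd00::/64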

Current work

An implementation of NAT64 is currently in progress and should be available in the coming weeks.

Pablo Neira Ayuso: nftables, a new packet filtering framework for Netfilter

Introduction

nftables is a kernel packet filtering framework that aims to replace iptables. It brings no changes to the core (conntrack, hooks).

The match logic has changed: you fetch keys and, once you have your key set, you perform operations on them. Advanced and specialized matches are built on top of this system.

nftables vs iptables

In iptables, extensions were coded in separate files which had to be put in the iptables source tree. To change the ruleset, the tools had to modify a binary array storing the whole ruleset and inject it back into the kernel. So every update involved a full download and upload of the whole ruleset.

nftables works on a message basis (messages are exchanged via netlink), which allows better handling of incremental modifications.

Nftables usage

nftables will provide a high-level library which can be used to manipulate rulesets from dedicated tools.

From userspace, backward compatibility is preserved: utilities fully compatible with iptables and ip6tables are provided. Even existing scripts will not have to be changed.

But there are some new things brought by the change:

  • event notifications: software can listen to ruleset changes and log them. This is an interesting feature, as traceability is often required in secure environments
  • Better incremental rule update support
  • Chains can be enabled or disabled per table (which allows performance optimisation)

There is work in progress on a new utility called nft. It will provide a new syntax allowing more efficient matching: it will be possible to combine several keys and do high-speed matching on them, as in the sketch below.
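
As an illustration of the direction taken, this is what matching on a set of keys looks like with the nft tool as it was eventually released (table and chain names are placeholders):

nft add table ip filter
nft add chain ip filter input { type filter hook input priority 0 \; }
# one rule matching a whole set of destination ports
nft add rule ip filter input tcp dport { 22, 80, 443 } counter accept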

Ulogd 2.0.2, my first release as maintainer

Objectives of this release

So this is my first ulogd2 release as maintainer. I’ve been in charge of the project since October 30th 2012, and this release was an opportunity for me to increase my development work on the project. The roadmap was almost empty, so I’ve decided to work on the issues that were bothering me as a user of the project. I’ve also included two new features: connection tracking event filtering and a Graphite output module. Ulogd is available on the Netfilter web site.

Conntrack event filtering

When logging connection entries, there are potentially a lot of events. Filtering the events on network parameters is thus a good idea. This can now be done via a series of options (an example configuration is shown after this list):

  • accept_src_filter: log a connection only if its source IP belongs to the specified networks. This can be a list of networks, for example 192.168.1.0/24,1:2::/64
  • accept_dst_filter: log a connection only if its destination IP belongs to the specified networks. This can be a list of networks too.
  • accept_proto_filter: log only connections using the specified layer 4 protocols. It can be a list, for example tcp,sctp
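
A minimal sketch of what this looks like in ulogd.conf, assuming an NFCT input plugin instance named ct1:

[ct1]
# only log conntrack entries matching these networks and protocols
accept_src_filter = 192.168.1.0/24,1:2::/64
accept_proto_filter = tcp,sctp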

A GRAPHITE output module

This is the sexiest part of this release. Seth Hall introduced me to Graphite, a scalable realtime graphing solution. At the time, I was playing with the new Netfilter accounting plugin of ulogd2, and my first thought was that it would be a good idea to add a new ulogd2 output plugin to export data to a Graphite server.

You can read more about the Graphite output plugin in this dedicated post.

The result is really cool, as shown by the following dashboard:

A better command line

In case of error, ulogd was just dying and telling you to read a log file. It is now possible to add the -v flag, which will redirect the output to stdout and let you see what’s going on.

If it is too verbose for you, you can also set the log level from the command line via the -l option.
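
For example (a sketch; ulogd uses numeric log levels, 1 being the most verbose):

# log to stdout instead of the log file
ulogd -v
# same, explicitly selecting the log level
ulogd -v -l 1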

Improved build system

I’ve made some light improvements to the build system. First of all, a configuration status is printed at the end of configure. It displays the compiled input and output plugins:

Ulogd configuration:
  Input plugins:
    NFLOG plugin:			yes
    NFCT plugin:			yes
    NFACCT plugin:			yes
  Output plugins:
    PCAP plugin:			yes
    PGSQL plugin:			yes
    MySQL plugin:			yes
    SQLITE3 plugin:			no
    DBI plugin:				no

I’ve also added configure options to disable the building of some input plugins:

  --enable-nflog          Enable nflog module [default=yes]
  --enable-nfct           Enable nfct module [default=yes]
  --enable-nfacct         Enable nfacct module [default=yes]

For example, to disable Netfilter conntrack logging, you can use:

./configure --disable-nfct


Visualize Netfilter accounting in Graphite

Ulogd Graphite output plugin

I’ve committed a new output plugin for ulogd. The idea is to send NFACCT accounting data to a Graphite server to be able to display them. Graphite is a web application which provides real-time visualization and storage of numeric time-series data.

Once data are sent to the Graphite server, it is possible to use the web interface to set up different dashboards and graphs (including combinations and mathematical operations):

Nfacct setup

One really interesting thing is that Graphite uses a tree hierarchy for data, and this hierarchy is built by using a dot separator. So it really matches ulogd’s key system, and on top of that nfacct can be used to create this hierarchy:

nfacct add ipv4.http
nfacct add ipv6.http

Once a counter is created in NFACCT, it is instantly sent by ulogd to Graphite and can be used to create graphs. To really use the counter, some iptables rules need to be set up. To continue with the previous example, we can use:

ip6tables -I INPUT -p tcp --sport 80 -m nfacct --nfacct-name ipv6.http
ip6tables -I OUTPUT -p tcp --dport 80 -m nfacct --nfacct-name ipv6.http
iptables -I INPUT -p tcp --sport 80 -m nfacct --nfacct-name ipv4.http
iptables -I OUTPUT -p tcp --dport 80 -m nfacct --nfacct-name ipv4.http

To save counters, you can use:

nfacct list >nfacct.dump

and you can restore them with:

nfacct restore <nfacct.dump

Ulogd setup

Ulogd setup is easy: simply add a new stack to ulogd.conf:

stack=acct1:NFACCT,graphite1:GRAPHITE

The configuration of NFACCT is simple: there is only one option, the polling interval. The plugin will dump all nfacct counters at the given interval:

[acct1]
pollinterval = 2

The Graphite output module is easy to set up: you only need to specify the host and the port of the Graphite collector:

[graphite1]
host="127.0.0.1"
port="2003"
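
For reference, the collector listening on port 2003 speaks the Graphite plaintext protocol: one metric per line, in the form “key value timestamp”. The keys below are a hypothetical illustration of the dotted hierarchy; actual keys depend on your nfacct counter names:

ipv4.http.pkts 1337 1356966000
ipv4.http.bytes 424242 1356966000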

Some statistics about Suricata 1.4

A huge work

Suricata 1.4 has been released on December 13th 2012, and it has been a huge amount of work. The number of modifications is just impressive:

390 files changed, 25299 insertions(+), 11982 deletions(-)

The following video uses gource to display the evolution of the Suricata IDS/IPS source code between version 1.3 and version 1.4. It only displays the modified files and does not show the files existing at the start.

A collaborative work

A total of 11 different authors have participated in this release. The following graph, generated by gitstats, shows the number of lines of code by author:

Some words about activity

The activity shows that most of the work is done during weekdays, but there is some work done on Sundays:

As shown in the following graph, the activity decreased during the stabilization process:

[Graph: number of commits per week over the 32 weeks preceding the release, tapering off during the stabilization phase]

The defense blues

Mother Nature has been really unfair to me. She has given me two strong interests in life: building things and information security. Once that was done, my doom was sealed and I’ve become an infosec defense guy. Nowadays this is one of the worst fates possible in computer science.

Today, this burden is really hard to bear. I know some of you will try to encourage me by saying things like:

It is not that bad. You could have been a Microsoft Exchange administrator.

And they are right. I’m doing really interesting things and taking a lot of pleasure in doing them. That’s not the point. The point is the way the information security community is evolving. It cheers the offensive guys, and as everybody wants to be loved, this leads to absurd and dangerous behaviors. And it is just getting worse every day.

My latest example is a conference given at Blackhat by Antonios Atlasis, Chief/Research at the Center for Strategic Cyberspace + Security Science. The talk is advertised on the CSCSS website, which is a first sign of the importance of the infosec circus for this kind of entity. But let’s get back to the main issue. The talk shows some results of a study made by Mr. Atlasis on the security impact of IPv6 extension headers. Among the results: successful evasion of two well-known IDSes, Snort and Suricata. And this completely pissed me off.

I’m one of the developers of Suricata, and we have never been contacted by the guy before the event. So to sum up, a guy working for a not-for-profit organization is publishing attacks on software without even having warned the vendor beforehand. That’s just insane. And this shows the current spirit in information security:

I publish vulnerabilities without warning the vendor, to maximize the impact of my talk

The worst thing is that I know that a possible defense will be:

I’m a good guy because I could have sold it as a 0-day

That’s not a real excuse. 0-day sellers are just blackhats in suits. Or, to be more accurate, as I know some of them don’t wear suits, blackhats doing public business thanks to the legal void around the sale of cyberweapons. A guy working for a not-for-profit organization has to be a whitehat. I think this is even mandatory in the USA, as it seems a not-for-profit organization must act for the public good.

Yes, being a defensive guy is not fair. You build huge and complex structures, and all the light (and sometimes the money) goes to the one who demonstrates how one of the thousand engines you’ve built can be abused. And the climax is when the guy disrespects you by not giving you a chance to fix the issue before it goes public.

About Suricata and a kernel oops in AF_PACKET

Introduction

Kernel oopses have been reported by some users running Suricata with AF_PACKET multi-threaded capture activated. This is due to a bug I introduced in AF_PACKET when fixing another bug.

Which kernel not to use with Suricata in AF_PACKET mode

The following kernel versions will surely crash if Suricata, or any other program, uses AF_PACKET capture with multiple capture threads:

  • Linux 3.2.30 to 3.2.33
  • Linux 3.4.12 to 3.4.18
  • Linux 3.5.5 to 3.5.7
  • Linux 3.6.0 to 3.6.6

If only one capture thread is used, there is no risk of crash. If you are running a vulnerable kernel, your configuration should look like:

af-packet:
  - interface: eth0
    # Number of receive threads (>1 will crash with bad kernel)
    threads: 1
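
Once you run a kernel containing the fix, multi-threaded capture can be re-enabled. A sketch (thread count and cluster-id are arbitrary examples):

af-packet:
  - interface: eth0
    # safe again on a fixed kernel
    threads: 8
    cluster-id: 99
    cluster-type: cluster_flow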

Some explanations

AF_PACKET capture is one of my favorite capture methods on Linux with Suricata. It is mainline and it offers really good performance with recent kernels. For example, it is deployed on one of our boxes and runs at 10Gbps speed on inexpensive hardware.
This speed is achieved by using load-balanced capture. This feature allows multiple threads or processes to listen to the same interface, the load balancing being done by the kernel.
This feature has been developed by David Miller and is available since Linux 3.1.

In summer 2012, I worked on adding an AF_PACKET IPS mode, and I discovered a kernel bug which was causing a packet sending loop in the IPS code. I proposed a fix, af_packet: don’t emit packet on orig fanout group. The patch reached mainline with Linux 3.6. As it was fixing a real problem, it was propagated to most Linux stable branches. Some distributions, like Ubuntu, have also backported the patch to their official kernels.

But the patch was buggy, and some Suricata users have reported kernel oopses. I’ve fixed the bug with af-packet: fix oops when socket is not present, and the patch will be available in Linux 3.7.
The kernel stable team has incorporated this patch in their subsequent releases, so most stable branches but 3.5 don’t suffer from this problem anymore.

Note

Ubuntu Quantal has a patched kernel since at least 3.5.0-25.

Flow reconstruction and normalization in Suricata

A naive approach would be to consider that an IDS just takes packets and does a lot of matching on them. In fact, this is not at all what happens. An IDS/IPS like Suricata actually rebuilds the data stream and, in the case of known protocols, even normalizes it and provides keywords which can be used to match on specific fields of a protocol.

Let’s say we have a rule to match on an HTTP request where the method is GET and the URL is “/download.php”.

But what happens inside Suricata when we want to do such a match? Let’s try to visualize this matching on a fictive example packet stream. With such a stream, we could have the following process:

Flow reconstruction by Suricata

If you click on the image, you will get access to an interactive SVG showing the alerting signatures.

A series of 7 IP packets is seen. They belong to the same flow, but because of fragmentation they need to be assembled by the defrag engine. Packet #4 is invalid due to an invalid checksum.
Suricata can alert on this packet if the following rule is activated:

alert ip any any -> any any (msg:"SURICATA IPv4 invalid checksum"; ipv4-csum:invalid; sid:2200073; rev:1;)

This rule is included in the provided signature file decoder-events.rules.

To match on an individual TCP packet after defragmentation, one can use the following rule:

alert tcp-pkt any any -> any 80 (msg:"HTTP dl"; content:"Get /download.php"; sid:1; rev:1;)

It will try to find a per-packet match. This means that if the request is cut in two parts, there will be no detection.

At the TCP level, we’ve got three packets, but one of them is invalid because of an invalid TCP window. Suricata can alert on this by using the following rule:

alert tcp any any -> any any (msg:"SURICATA STREAM ESTABLISHED packet out of window"; stream-event:est_packet_out_of_window; sid:2210020; rev:1;)

This rule is included in the provided signature file stream-events.rules.

So at the stream level, we’ve got data made of packets #1, #2, #3 and #5. A matching rule at this level would be:

alert tcp any any -> any any (msg:"HTTP download"; flow:established,to_server; content:"Get /download.php";)

This rule is case sensitive, and it is not resistant to basic transformations of the request, such as adding spaces between Get and the URI; a partial workaround is sketched below.
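
For instance, the case-sensitivity part alone can be addressed with the nocase keyword (a sketch; the sid is arbitrary):

alert tcp any any -> any any (msg:"HTTP download"; flow:established,to_server; content:"get /download.php"; nocase; sid:2; rev:1;)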

The data is part of an HTTP stream, and it is normalized to avoid any application-level manipulation that could alter the detection by signatures.
In Suricata, we can use the following rule to match on normalized fields:

alert http any any -> any any (msg:"Download"; content: "GET"; http_method; content: "/download.php"; http_uri;)

The match is made when the traffic is identified as HTTP, and we want the HTTP request method to be GET and the URL to match “/download.php”. We’ve got here one of the biggest advantages of Suricata: dedicated keywords and protocol recognition allow us to write rules which are almost a direct expression of our thinking.

I’m sure most of you are happy not to have to do this job yourselves, and to have it cleanly done by Suricata!

Installing Debian sid on a XPS 15

Since this morning, I’m the owner of an XPS 15, end-2012 edition. The model I have comes with a hard drive and a SSD, and it is pre-installed with Windows 8. As this is not a good choice of OS when you want to use the laptop for real work, I’ve installed a Debian sid on it.

On the laptop I’ve received, the 32GB SSD was formatted with a single 8GB partition which was not used by Windows. The Windows installation was made on the classic hard drive.

From Windows, I was able to change the size of the system partition on the hard drive (the big one hosting OS and data), and doing so, I’ve managed to create a new partition to host my Linux data.

I’ve made the installation using a USB stick prepared with unetbootin. I had to choose the HD media mode to be able to correctly run the install procedure.

To start the install, you need to go into the BIOS by pressing F2. You then have to activate legacy mode, which will allow you to run an OS like Debian. To switch between the different OSes, you have to hit F12 at startup to be able to choose between Legacy (for GNU/Linux) and UEFI (for Windows).

The major trick during the install was to delete the partition table on the SSD, which was in GPT style (and thus poorly supported by grub). I’ve created instead a standard old-style partition table on it. I was then able to put my Debian system on the SSD without any problem.
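
From a shell, that boils down to something like this (a sketch, assuming the SSD shows up as /dev/sdb; double-check the device name before wiping anything):

# replace the GPT partition table with an old-style MBR (msdos) label
parted /dev/sdb mklabel msdos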

All devices seem to work fine with Debian sid. So if we omit the Nvidia Optimus card issue, this is a computer that I can recommend.

Display Suricata signatures in LaTeX

lstlisting is a convenient way to display code when using LaTeX. It has no definition for the Suricata rules language, so I’ve cooked one up:

\lstdefinelanguage{suricata}
{morekeywords= {alert, tcp, http, tls, ip, ipv4, drop, pass, sid, priority, rev, classtype, threshold, metadata, reference, tag, msg, content, uricontent, pcre, ack, seq, depth, distance, within, offset, replace, nocase, fast\_pattern, rawbytes, byte\_test, byte\_jump, sameip, ip\_proto, flow, window, ftpbounce, isdataat, id, rpc, dsize, flowvar, flowint, pktvar, noalert, flowbits, stream\_size, ttl, itype, icode, tos, icmp\_id, icmp\_seq, detection\_filter, ipopts, flags, fragbits, fragoffset, gid, nfq\_set\_mark, tls.version, tls.subject, tls.issuerdn, tls.fingerprint, tls.store, http\_cookie, http\_method, urilen, http\_client\_body, http\_server\_body, http\_header, http\_raw\_header, http\_uri, http\_raw\_uri, http\_stat\_msg, http\_stat\_code, http\_user\_agent, ssh.protoversion, ssh.softwareversion, ssl\_version, ssl\_state, byte\_extract, file\_data, dce\_iface, dce\_opnum, dce\_stub\_data, asn1, filename, fileext, filestore, filemagic, filemd5, filesize, l3\_proto, luajit},
otherkeywords={ipv4-csum, tcpv4-csum, tcpv6-csum, udpv4-csum, udpv6-csum, icmpv4-csum, icmpv6-csum, decode-event, app-layer-event, engine-event, stream-event},
sensitive=true,
morecomment=[l]{//},
morecomment=[s]{/*}{*/},
morestring=[b]",
}

To use it, you can simply add this code at the start of your .tex file, and you can then use it like this:

\begin{lstlisting}[language=suricata]
alert tcp any 21 -> any any (msg:"Overlap data";   \
  flow:to_client; dsize:>0;                        \
  stream-event:reassembly_overlap_different_data;  \
  classtype:protocol-command-decode; sid:1; rev:1;)
\end{lstlisting}

which gives you the following result:

By the way, the list of keywords has been obtained by running the until-now hidden command:

suricata --list-keywords