Tomasz Bursztyka, ConnMan usage of Netfilter: a close overview

Introduction

ConnMan is a connection manager that integrates all critical networking components. It provides a smart D-Bus API for developing a user interface. It is plugin oriented: the different network stacks are implemented in separate modules.
Connection sharing (aka tethering) uses Netfilter to set up NAT masquerading, so it is a simple usage.
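As an illustration, a tethering setup of the kind ConnMan automates boils down to a few commands (interface names are illustrative, not the ones ConnMan uses internally):

```shell
# Enable forwarding and masquerade the tethered network behind the uplink
echo 1 > /proc/sys/net/ipv4/ip_forward
iptables -t nat -A POSTROUTING -o wlan0 -j MASQUERADE
iptables -A FORWARD -i tether0 -o wlan0 -j ACCEPT
iptables -A FORWARD -i wlan0 -o tether0 \
    -m state --state RELATED,ESTABLISHED -j ACCEPT
```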

Switching to nftables

Application connectivity is a more advanced use of Netfilter, as it makes use of statistics and differentiated routing. For example, in a car, service data must be sent to the manufacturer's operator and not over the owner's network.

To do so, a session system has been implemented. Applications can be modified to open a session to ConnMan. This allows defining a per-session policy for routing and accounting.

The ConnMan team wanted to use a C API to modify rules, but this was difficult with iptables and xtables: it is not an official API, so it is subject to bugs and change.

The ConnMan team has since switched to nftables and is currently working on stabilizing it, to ensure the acceptance of the project and the long-term maintainability of their solution. This work is not yet upstream, but there is a good chance it will be accepted.

Julien Vehent, AFW: Automating host-based firewalls with Chef

The problem

A centralized firewall design does not scale well when dealing with a lot of servers; it begins to collapse after a few thousand rules.
Furthermore, allowing an application A to connect to server B requires a workflow, and possibly three weeks, to get the opening.

From Service Oriented Architecture to Service Oriented Security

Services are autonomous. They call each other using a standard protocol. The architecture is described by a list of dependencies between services.
You can then specify security via rules like ACCEPT Caching TO Frontend ON PORT 80.
But this forces you to do provisioning each time a server starts.

Using Chef for firewall provisioning

Chef is an open source automation server that is queried by servers to get information about their configuration.

Chef maintains a centralized database of all services, so it can derive the security policy from it. If we manage to express the constraints in terms of Netfilter rules, we will be able to build a firewall policy.

AFW is just that.

AFW uses a service-oriented syntax where objects like the destination can be specified by doing a Chef search. Thus, each time Chef runs, the objects get reevaluated and the filtering rules are updated to the current state of the network.

Custom rules can be added using traditional iptables syntax. AFW writes outbound rules using the owner match. This allows defining a policy per service, and this policy can easily be audited by a developer by simply looking at the per-user rules.
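As a sketch of the kind of outbound rule AFW generates with the owner match (the service user and destination are illustrative):

```shell
# Only the "webapp" service user may reach the database backend
iptables -A OUTPUT -m owner --uid-owner webapp \
    -d 10.0.1.20 -p tcp --dport 3306 -j ACCEPT
```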

Limitations

The JSON document pushed to the Chef node contains everything, and it can be modified by the server. So it is possible for a server to trigger the opening of ports by changing some of its parameters. A solution is to split Chef into one Chef for configuration and one Chef for policy.

Jozsef Kadlecsik, Faster firewalling with ipset

Why ipset ?

iptables is sufficient in most cases, but some limits are reached:

  • High number of rules: iptables evaluation is linear
  • Need to change the rules often

An independent study available at d(a)emonkeeper’s purgatory has shown that the performance of ipset is almost constant with respect to the number of filtered hosts:

History

The originating project was ippool, featuring a basic set type. After some time it was taken over by Jozsef and renamed ipset. A lot of set types are now handled.

ipset 6.x is the current version and features an impressive number of sets.

Architecture

The communication between kernel and userspace is done via netlink.

It is not possible to delete a set while it is referenced in the kernel by iptables. This may appear to be a problem, but the renaming and swapping operations can be used to work around it.
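For example, to atomically replace the content of a set referenced by iptables (set names are illustrative):

```shell
ipset create blacklist-new hash:ip      # build the replacement set
ipset add blacklist-new 192.0.2.1
ipset swap blacklist-new blacklist      # rules now see the new content
ipset destroy blacklist-new             # old content, no longer referenced
```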

The set types are numerous, as pointed out here: ipset features

Using different sets, it is possible to express a global policy in a few iptables rules.
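A sketch of this: thousands of hosts collapse into a single rule via a set lookup (names and networks are illustrative):

```shell
ipset create trusted hash:net
ipset add trusted 192.168.1.0/24
ipset add trusted 10.0.0.0/8
iptables -A INPUT -p tcp --dport 22 \
    -m set --match-set trusted src -j ACCEPT
```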

Thanks to Florian Westphal, it is possible to access sets from tc.

Future of ipset

It will soon be possible to have per-element counters, and this will allow some interesting accounting.

Patrick McHardy: Oops, I did it: IPv6 NAT

Introduction

When asked about IPv6 NAT, Harald Welte used to answer: “it will be over my dead body”. It is now available in the official kernel.

Reasons for adding IPv6 NAT

  • Dynamic IPv6 prefixes: ISPs assign dynamic IPv6 prefixes, so internal network addresses change. NAT can bring you stability.
  • Easier test setups.
  • Users are asking for it and most other operating systems have it.

To sum up the arguments about NAT, Patrick McHardy used this video:

A single, clean, bug-free implementation is better than the lot of incomplete, unofficial implementations that were already available.

Release and implementation

It has been implemented in 2011 and merged in Linux 3.7.

It is mainly based on the IPv4 implementation and, from a user perspective, provides an interface similar to the IPv4 one.

It also features an implementation of RFC 6296, stateless network prefix translation. It operates on a per-packet basis and is completely stateless.

It is implemented as two ip6tables targets, SNPT and DNPT, which have to be used in both directions to handle the modifications required by RFC 6296.
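A sketch of such a setup, assuming an internal ULA prefix translated to a global prefix on eth0 (prefixes and interface are illustrative):

```shell
# Outbound: rewrite the internal source prefix to the external one
ip6tables -t mangle -A POSTROUTING -o eth0 \
    -j SNPT --src-pfx fd00:1234::/64 --dst-pfx 2001:db8:1::/64
# Inbound: rewrite the external destination prefix back to the internal one
ip6tables -t mangle -A PREROUTING -i eth0 \
    -j DNPT --src-pfx 2001:db8:1::/64 --dst-pfx fd00:1234::/64
```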

Current work

An implementation of NAT64 is currently in progress and should be available in the coming weeks.

Pablo Neira Ayuso: nftables, a new packet filtering framework for Netfilter

Introduction

nftables is a kernel packet filtering framework that replaces iptables. It brings no changes to the core (conntrack, hooks).

The match logic is changed: you fetch keys and, once you have your key set, you make operations on them. Advanced and specialized matches are built upon this system.

nftables vs iptables

In iptables, extensions are coded in separate files and must be put in the iptables source tree. To act, they must modify a binary array storing the ruleset and inject it back into the kernel. So every update involves a full download and upload of the whole ruleset.

nftables works on a message basis (messages are exchanged via netlink) and thus allows better handling of incremental modifications.

Nftables usage

nftables will provide a high-level library which can be used to manipulate rulesets from dedicated tools.

From userspace, backward compatibility is preserved, with utilities fully compatible with iptables and ip6tables. Even scripts will not have to be changed.

But the change brings some new things:

  • Event notifications: software can listen to rule changes and log them. This is an interesting feature, as traceability is often required in secure environments
  • Better incremental rule update support
  • The ability to enable or disable chains per table (which will provide performance optimisations)

There is work in progress on a new utility, nft. It will provide a new syntax allowing more efficient matching. It will be possible to form tuples of keys and do high-speed matching on them.
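To give an idea, here is a sketch in the nft syntax as it later stabilized (it may differ from what was demoed at the time; table and chain names are illustrative):

```shell
# Create a table and a base chain hooked on input, then one rule
# matching a whole set of ports in a single lookup.
nft add table ip filter
nft add chain ip filter input '{ type filter hook input priority 0 ; }'
nft add rule ip filter input tcp dport '{ 22, 80, 443 }' accept
```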

Ulogd 2.0.2, my first release as maintainer

Objectives of this release

So this is my first ulogd2 release as maintainer. I’ve been in charge of the project since October 30th, 2012, and this was an opportunity for me to increase my development work on the project. The roadmap was almost empty, so I decided to work on issues that were bothering me as a user of the project. I’ve also included two features: connection tracking event filtering and a Graphite output module. Ulogd is available on the Netfilter web site.

Conntrack event filtering

When logging connection entries, there are potentially a lot of events. Filtering the events on network parameters is thus a good idea. This can now be done via a series of options:

  • accept_src_filter: log a connection only if its source IP belongs to the specified networks. This can be a list of networks, for example 192.168.1.0/24,1:2::/64
  • accept_dst_filter: log a connection only if its destination IP belongs to the specified networks. This can be a list of networks too.
  • accept_proto_filter: log only connections using the specified layer 4 protocols. It can be a list, for example tcp,sctp
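For instance, assuming a stack using an NFCT input plugin instance named ct1, the filtering options go in its configuration section (section name and values are illustrative):

```ini
[ct1]
accept_src_filter=192.168.1.0/24,1:2::/64
accept_proto_filter=tcp,sctp
```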

A GRAPHITE output module

This is the sexiest part of this release. Seth Hall pointed me to Graphite, a scalable realtime graphing solution. I was playing at the time with the new Netfilter accounting plugin of ulogd2, and my first thought was that it would be a good idea to add a new ulogd2 output plugin to export data to a Graphite server.

You can read more about Graphite output plugin on this dedicated post.

The result was really cool, as the following dashboard shows:

A better command line

In case of error, ulogd used to just die and tell you to read a log file. It is now possible to add the -v flag, which redirects the output to stdout and lets you see what’s going on.

If that is too verbose for you, you can also set the log level from the command line via the -l option.
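For example, to run in the foreground with a verbose log level (the numeric value is illustrative; check ulogd's help for the exact levels):

```shell
ulogd -v -l 1 -c /etc/ulogd.conf
```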

Improved build system

I’ve made some light improvements to the build system. First of all, a configuration status is printed at the end of configure. It displays the compiled input and output plugins:

Ulogd configuration:
  Input plugins:
    NFLOG plugin:			yes
    NFCT plugin:			yes
    NFACCT plugin:			yes
  Output plugins:
    PCAP plugin:			yes
    PGSQL plugin:			yes
    MySQL plugin:			yes
    SQLITE3 plugin:			no
    DBI plugin:				no

I’ve also added configure options to disable the building of some input plugins:

  --enable-nflog          Enable nflog module [default=yes]
  --enable-nfct           Enable nfct module [default=yes]
  --enable-nfacct         Enable nfacct module [default=yes]

For example, to disable Netfilter conntrack logging, you can use:

./configure --disable-nfct


Visualize Netfilter accounting in Graphite

Ulogd Graphite output plugin

I’ve committed a new output plugin for ulogd. The idea is to send NFACCT accounting data to a Graphite server to be able to display them. Graphite is a web application which provides real-time visualization and storage of numeric time-series data.

Once the data are sent to the Graphite server, it is possible to use the web interface to set up different dashboards and graphs (including combinations and mathematical operations):

Nfacct setup

One really interesting thing is that Graphite uses a tree hierarchy for data, and this hierarchy is built using a dot separator. So it really matches the ulogd key system; on top of that, nfacct can be used to create the hierarchy:

nfacct add ipv4.http
nfacct add ipv6.http

Once a counter is created in NFACCT, it is instantly sent by ulogd to Graphite and can be used to create graphs. To actually feed the counter, some iptables rules need to be set up. Continuing the previous example, we can use:

ip6tables -I INPUT -p tcp --sport 80 -m nfacct --nfacct-name ipv6.http
ip6tables -I OUTPUT -p tcp --dport 80 -m nfacct --nfacct-name ipv6.http
iptables -I INPUT -p tcp --sport 80 -m nfacct --nfacct-name ipv4.http
iptables -I OUTPUT -p tcp --dport 80 -m nfacct --nfacct-name ipv4.http

To save counters, you can use:

nfacct list >nfacct.dump

and you can restore them with:

nfacct restore <nfacct.dump

Ulogd setup

Ulogd setup is easy: simply add a new stack to ulogd.conf:

stack=acct1:NFACCT,graphite1:GRAPHITE

The configuration of NFACCT is simple: there is only one option, the polling interval. The plugin will dump all nfacct counters at the given interval:

[acct1]
pollinterval = 2

The Graphite output module is easy to set up: you only need to specify the host and port of the Graphite collector:

[graphite1]
host="127.0.0.1"
port="2003"

Defend your network from Microsoft Word upload with Suricata and Netfilter

Introduction

Some time ago, I blogged about the new IPS features in Suricata 1.1 and did not find at the time
any killer application for the nfq_set_mark keyword. When using Suricata in Netfilter IPS mode, this keyword allows you to set the Netfilter mark on the packet when a rule matches.
This mark can be used by Netfilter or by other network subsystems to differentiate the treatment applied to the packet.

It took some time, but I’ve found the idea. People around me know I don’t like Microsoft Office. One of the main reasons is that it is as awful as LibreOffice or OpenOffice.org but, on top of that, you’ve got to pay for it. And, even if I don’t consider that people sending MS-Word files over the network should pay for that, I think they should at least be limited to a really slow bandwidth. I thus decided to implement this with Suricata and Netfilter.

I’ve already told you that we will use nfq_set_mark to mark the packets, but one big task remains: how can I detect when someone uploads an MS-Word file? The answer is in the file extraction capabilities of Suricata. My preferred IDS/IPS is able to inspect HTTP traffic and to extract files that are uploaded or downloaded. You can then access information about them. One of the new keywords is filemagic, which returns the same output as if you had run the unix file command on the file.

The setup

Now everything is in place. We just need to:

  • Write a signature to mark the packets related to the upload
  • Set up Suricata as IPS
  • Set up Netfilter to send all HTTP traffic to Suricata

The signature is really simple. We want HTTP traffic and a transferred file of type “Composite Document File V2 Document”. When this is the case, we alert and set the mark to 1:

alert http any any -> any any (msg:"Microsoft Word upload"; \
          nfq_set_mark:0x1/0x1; \
          filemagic:"Composite Document File V2 Document"; \
          sid:666; rev:1;)

Let’s suppose we’ve saved this signature into word.rules. Now we can start Suricata with:

suricata -q 0 -S word.rules

This is cool: we now have a single packet that will be marked. But, as we want to slow down the whole connection, we need to propagate the mark. We call Netfilter to the rescue, and our friend here is CONNMARK, which will transfer the mark to all packets of the connection:

iptables -A PREROUTING -t mangle -j CONNMARK --restore-mark
iptables -A POSTROUTING -t mangle -j CONNMARK --save-mark

If we are masochistic enough to try to slow ourselves down, we need to add the following line to take care of OUTPUT packets:

iptables -A OUTPUT -t mangle -j CONNMARK --restore-mark

Now that the marking is done, let’s suppose that eth0 is the interface to the Internet. In that case, we can set up the quality of service.
We create an HTB qdisc and a 1kbps class for our MS-Word uploaders:

tc qdisc add dev eth0 root handle 1: htb default 0
tc class add dev eth0 parent 1: classid 1:1 htb rate 1kbps ceil 1kbps

Now, we can make the link with our packet marking by assigning packets with fw handle 1 to the 1kbps class (flowid 1:1):

tc filter add dev eth0 parent 1: protocol ip prio 1 handle 1 fw flowid 1:1

The job is almost done; the last step is to send the port 80 packets to Suricata:

iptables -I FORWARD -p tcp --dport 80 -j NFQUEUE
iptables -I FORWARD -p tcp --sport 80 -j NFQUEUE

If we are sadistic, we can also treat the local traffic:

iptables -I OUTPUT -p tcp --dport 80 -j NFQUEUE
iptables -I INPUT -p tcp --sport 80 -j NFQUEUE

That’s all. With that setup, all MS-Word uploaders share a 1kbps bandwidth.

Effort should pay

Even among MS-Word users, there are some people with brains, and they will try to escape the MS-Word curse by changing the extension of their documents before uploading them. Effort must pay, so we will change the setup to provide 10kbps for such uploads. To do so, we add a signature to word.rules that will detect when an MS-Word file is uploaded with a changed extension:

alert http any any -> any any (msg:"Tricky Microsoft Word upload"; \
                nfq_set_mark:0x2/0x2; \
                fileext:!"doc"; \
                filemagic:"Composite Document File V2 Document"; \
                filestore; \
                sid:667; rev:1;)

The end of the task is to send packets with mark 2 through a privileged pipe:

tc class add dev eth0 parent 1: classid 1:2 htb rate 10kbps ceil 10kbps
tc filter add dev eth0 parent 1: protocol ip prio 1 handle 2 fw flowid 1:2

I’m sure a lot of you think we have been too kind and that the little cheaters must be watched over. We’ve already used the filestore keyword in their signature to store the uploaded file on disk, but that is not enough.

Keep an eye on cheaters

The traffic of the cheaters must be watched over. To do so, we will send all the packets they exchange into a pcap file. We will need multiple tools for this. The first of them is ipset, which we will use to maintain a list of bad guys. And we will use ulogd to store their traffic into a pcap file.

To create the blacklist, we will just do:

ipset create cheaters hash:ip timeout 3600
iptables -A POSTROUTING -t mangle -m mark \
    --mark 0x2/0x2 \
    -j SET --add-set cheaters src --exists

The first line creates a set named cheaters that will contain IP addresses. Every element will stay in the set for one hour. The second line adds the source IP of every packet carrying mark 2. If the IP is already in the set, its timeout is reset to 3600 seconds thanks to the --exists option.

Now we will ask Netfilter to log all traffic from or to IPs of the cheaters set:

iptables -A PREROUTING -t raw \
    -m set --match-set cheaters src,dst \
    -j NFLOG --nflog-group 1

The last step is to use ulogd to store the traffic in a pcap file. We need to modify a standard ulogd.conf to uncomment the following lines:

plugin="/home/eric/builds/ulogd/lib/ulogd/ulogd_output_PCAP.so"
stack=log2:NFLOG,base1:BASE,pcap1:PCAP

Now, we can start ulogd:

ulogd -c ulogd.conf

A pcap file named /var/log/ulogd.pcap will be created and will contain all the packets of the cheaters.
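The resulting file can then be examined with any pcap-aware tool, for example:

```shell
tcpdump -n -r /var/log/ulogd.pcap
```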

Conclusion

It has been hard. We’ve fought a lot. We’ve coded a lot. But now they are done for! We’re able to know more about them than a Facebook admin does!

Flow accounting with Netfilter and ulogd2

Introduction

Starting with Linux kernel 3.3, there’s a new module called nfnetlink_acct.
This new feature, added by Pablo Neira, brings interesting accounting capabilities to Netfilter.
Pablo has made an extensive description of the feature in the commit.

System setup

We need to build a set of tools to get everything that’s necessary:

  • libmnl
  • libnetfilter_acct
  • nfacct

The build is the same for all projects:

git clone git://git.netfilter.org/PROJECT
cd PROJECT
autoreconf -i
./configure
make
sudo make install

nfacct

This command line tool is used to set up and interact with the accounting subsystem. To get help, you can use the man page or run:

# nfacct help
nfacct v1.0.0: utility for the Netfilter extended accounting infrastructure
Usage: nfacct command [parameters]...

Commands:
  list [reset]		List the accounting object table (and reset)
  add object-name	Add new accounting object to table
  delete object-name	Delete existing accounting object
  get object-name	Get existing accounting object
  flush			Flush accounting object table
  version		Display version and disclaimer
  help			Display this help message

Configuration

Objectives

Our objective is to gather the following statistics:

  • IPv6 data usage
    • http on IPv6 usage
    • https on IPv6 usage
  • IPv4 data usage
    • http on IPv4 usage
    • https on IPv4 usage

Ulogd2 configuration

Let’s start with a simple output to XML. For that, we will add the following stack:

stack=acct1:NFACCT,xml1:XML

and this one:

stack=acct1:NFACCT,gp1:GPRINT

nfacct setup

modprobe nfnetlink_acct
nfacct add ipv6
nfacct add http-ipv6
nfacct add https-ipv6
nfacct add ipv4
nfacct add http-ipv4
nfacct add https-ipv4

iptables setup

iptables -I INPUT -m nfacct --nfacct-name ipv4
iptables -I INPUT -p tcp --sport 80 -m nfacct --nfacct-name http-ipv4
iptables -I INPUT -p tcp --sport 443 -m nfacct --nfacct-name https-ipv4
iptables -I OUTPUT -m nfacct --nfacct-name ipv4
iptables -I OUTPUT -p tcp --dport 80 -m nfacct --nfacct-name http-ipv4
iptables -I OUTPUT -p tcp --dport 443 -m nfacct --nfacct-name https-ipv4

ip6tables setup

ip6tables -I INPUT -m nfacct --nfacct-name ipv6
ip6tables -I INPUT -p tcp --sport 80 -m nfacct --nfacct-name http-ipv6
ip6tables -I INPUT -p tcp --sport 443 -m nfacct --nfacct-name https-ipv6
ip6tables -I OUTPUT -m nfacct --nfacct-name ipv6
ip6tables -I OUTPUT -p tcp --dport 80 -m nfacct --nfacct-name http-ipv6
ip6tables -I OUTPUT -p tcp --dport 443 -m nfacct --nfacct-name https-ipv6

Starting the beast

Now we can start ulogd2:

ulogd2 -c /etc/ulogd.conf

In /var/log, we will have the following files:

  • ulogd_gprint.log: display stats for each counter on one line
  • ulogd-sum-14072012-224313.xml: XML dump
  • ulogd.log: the daemon log file to look at in case of problem

Output examples

In the ulogd_gprint.log file, the output for a given timestamp looks like this:

timestamp=2012/07/14-22:48:17,sum.name=http-ipv6,sum.pkts=0,sum.bytes=0
timestamp=2012/07/14-22:48:17,sum.name=https-ipv6,sum.pkts=0,sum.bytes=0
timestamp=2012/07/14-22:48:17,sum.name=ipv4,sum.pkts=20563,sum.bytes=70
timestamp=2012/07/14-22:48:17,sum.name=http-ipv4,sum.pkts=0,sum.bytes=0
timestamp=2012/07/14-22:48:17,sum.name=https-ipv4,sum.pkts=16683,sum.bytes=44
timestamp=2012/07/14-22:48:17,sum.name=ipv6,sum.pkts=0,sum.bytes=0

The XML output is the following:

<obj><name>http-ipv6</name><pkts>00000000000000000000</pkts><bytes>00000000000000000000</bytes><hour>22</hour><min>43</min><sec>15</sec><wday>7</wday><day>14</day><month>7</month><year>2012</year></obj>
<obj><name>https-ipv6</name><pkts>00000000000000000000</pkts><bytes>00000000000000000000</bytes><hour>22</hour><min>43</min><sec>15</sec><wday>7</wday><day>14</day><month>7</month><year>2012</year></obj>
<obj><name>ipv4</name><pkts>00000000000000000052</pkts><bytes>00000000000000006291</bytes><hour>22</hour><min>43</min><sec>15</sec><wday>7</wday><day>14</day><month>7</month><year>2012</year></obj>
<obj><name>http-ipv4</name><pkts>00000000000000000009</pkts><bytes>00000000000000000408</bytes><hour>22</hour><min>43</min><sec>15</sec><wday>7</wday><day>14</day><month>7</month><year>2012</year></obj>
<obj><name>https-ipv4</name><pkts>00000000000000000031</pkts><bytes>00000000000000004041</bytes><hour>22</hour><min>43</min><sec>15</sec><wday>7</wday><day>14</day><month>7</month><year>2012</year></obj>
<obj><name>ipv6</name><pkts>00000000000000000002</pkts><bytes>00000000000000000254</bytes><hour>22</hour><min>43</min><sec>15</sec><wday>7</wday><day>14</day><month>7</month><year>2012</year></obj>

Conclusion

nfacct and ulogd2 offer an easy and efficient way to gather network statistics. Some tools and glue are currently lacking to feed the data into reporting tools, but the log structure is clean and writing them should not be too difficult.
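As a sketch of such glue, the key=value lines of the gprint output shown above are easy to parse. For example, a small shell function (illustrative, not part of ulogd):

```shell
# last_bytes NAME reads gprint-formatted lines on stdin and prints
# the most recent byte count for counter NAME.
last_bytes() {
  grep "sum.name=$1," | tail -n 1 | sed 's/.*sum\.bytes=//'
}

# Example on a line from the log above:
printf 'timestamp=2012/07/14-22:48:17,sum.name=ipv4,sum.pkts=20563,sum.bytes=70\n' \
  | last_bytes ipv4
# prints 70
```

On a live system, one would run it as last_bytes ipv4 < /var/log/ulogd_gprint.log and push the value to the reporting tool of choice.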

Opensvp, a new tool to analyse the security of firewalls using ALGs

Following my talk at SSTIC, I’ve released a new tool called opensvp. Its aim is to cover the attacks described in this talk. It has been published to make it possible to determine whether the firewall policy related to Application Layer Gateways is correctly implemented.

Opensvp implements two types of attacks:

  • Abusive usage of protocol commands: a protocol message can be forged to open a pinhole in the firewall. Opensvp currently implements message sending for the IRC and FTP ALGs.
  • Spoofing attack: if anti-spoofing is not correctly set up, an attacker can send commands which result in an arbitrary pinhole being opened to a server.

It has been developed in Python and uses scapy to implement the spoofing attack on ALGs.

For usage and download, you can have a look at opensvp page. The attack against helpers is described in the README of opensvp.

The document “Secure use of iptables and connection tracking helpers” was published some months ago and it describes how to protect a Netfilter firewall when using connection tracking helpers (the Netfilter name for ALGs).