Pablo Neira Ayuso, nftables strikes back

Introduction

nftables is a new kernel packet filtering framework. The only change is to iptables: the Netfilter hooks, connection tracking system and NAT are unchanged.
It provides backward compatibility. nftables was first released in March 2009 by Patrick McHardy and has been revived over the last months by Pablo Neira Ayuso and other hackers.

Architecture

It uses a pseudo-state machine in kernel space, similar to BPF:

  • registers: 4 general purpose (128 bits each) + 1 verdict register
  • an instruction set (which can be extended)

Here is an example of existing instructions:

reg = pkt.payload[offset, len]
reg = cmp(reg1, reg2, EQ)
reg = byteorder(reg1, NTOH)
reg = pkt.meta(mark)
reg = lookup(set, reg1)
reg = ct(reg1, state)
reg = lookup(set, data)

Complex matches can be developed in userspace using this instruction set. For example, you can check whether the source and destination ports are equal by doing two pkt.payload operations followed by a comparison:

reg1 = pkt.payload[offset_src_port, len]
reg2 = pkt.payload[offset_dst_port, len]
reg = cmp(reg1, reg2, EQ)

With iptables, this was strictly impossible without coding a C kernel module.
C modules will still need to be written for matches that cannot be expressed with the existing instructions (for example, the hashlimit module cannot be expressed with the current instructions). In this case, the way forward is to add a new instruction that will allow more combinations in the future.

nftables shows great flexibility. Rule counters are now optional, and they come in two flavors: a system-wide counter with a lock (and thus a performance impact) and a per-CPU counter. You can choose which type you want when writing the rule.
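As an illustration of the trade-off (a generic userspace sketch, not nftables code), the system-wide counter serializes every increment behind a lock, while per-CPU counters are contention-free and only summed when read:

#include <pthread.h>
#include <stdint.h>

#define NCPUS 4  /* stands in for the number of CPUs */

/* Flavor 1: one shared counter, every increment takes the lock. */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static uint64_t shared_counter;

static void shared_inc(void)
{
    pthread_mutex_lock(&lock);
    shared_counter++;
    pthread_mutex_unlock(&lock);
}

/* Flavor 2: one counter per CPU, no contention on the hot path;
   the total is only computed when the counter is read. */
static uint64_t percpu_counter[NCPUS];

static void percpu_inc(int cpu)
{
    percpu_counter[cpu]++;
}

static uint64_t percpu_read(void)
{
    uint64_t sum = 0;
    for (int i = 0; i < NCPUS; i++)
        sum += percpu_counter[i];
    return sum;
}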

Nftables operation

All communications are made through Netlink messages, and this is the main change from a user perspective. It makes incremental changes possible. iptables used a different approach: it fetched the whole ruleset from the kernel in binary format, made updates to this binary blob and sent it back to the kernel, so the size of the ruleset had a big impact on update performance.

Rules have a generation mask attached, encoding whether the rule is currently active, will be active in the next generation, or will be destroyed in the next generation. Adding rules means starting a transaction and sending Netlink messages that add or delete rules; the commit operation then asks the kernel to switch the ruleset.
This allows the active ruleset to change at once, without any transition state where only part of the rules is deployed.
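A minimal sketch of the generation mechanism (the flag names and layout are illustrative, not the kernel's actual encoding):

#include <stdbool.h>
#include <stdint.h>

/* Illustrative generation flags, not the kernel's real encoding. */
#define RULE_ACTIVE        0x1  /* active in the current generation */
#define RULE_NEXT_ACTIVE   0x2  /* becomes active at the next commit */
#define RULE_NEXT_DELETED  0x4  /* removed at the next commit */

struct rule {
    uint32_t genmask;
    /* expressions, counters, ... */
};

/* Packet path: only rules active in the current generation match. */
static bool rule_is_active(const struct rule *r)
{
    return r->genmask & RULE_ACTIVE;
}

/* Commit: promote the pending generation for every rule at once. */
static void commit_rule(struct rule *r)
{
    if (r->genmask & RULE_NEXT_DELETED) {
        /* unlink and free the rule */
    } else if (r->genmask & RULE_NEXT_ACTIVE) {
        r->genmask = RULE_ACTIVE;
    }
}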

Impact of the changes for users

iptables backward compatibility will be provided, so there is no impact on users. xtables is a binary provided by nftables that uses the same syntax as iptables. Symlinking iptables and ip6tables to the xtables binary will be enough to keep existing scripts working.

Users will nevertheless have a growing interest in switching, due to new features that will be possible in nftables but not in iptables.

A high-level library will be provided to handle rules. It will allow easy rule writing and will have an official public API.

The main issue in the migration process will be for frontends and tools using the IPTC library, which is not officially public but is used by some. As IPTC is an internal iptables library relying on the binary exchange, it will not work on a kernel using nftables. Applications using IPTC will therefore need to switch to the upcoming nftables high-level library.

The discussion between the attendees also came to the conclusion that providing an easy-to-use search operation in the library would be a good way to improve the various firewall frontends. By updating their logic to search first and add a rule only if needed, they would avoid conflicts with each other, as currently happens for example with NetworkManager, which destroys any masquerading policy in place by blindly inserting a masquerading rule on top whenever it thinks it needs one.
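A sketch of that search-then-insert logic, written against a purely hypothetical API (ruleset_find() and ruleset_insert() are invented names for illustration, not the actual future library):

#include <stdbool.h>

/* Hypothetical types and calls standing in for the upcoming
   nftables high-level library; none of these names are real. */
struct ruleset;
struct rule_spec;
bool ruleset_find(struct ruleset *rs, const struct rule_spec *spec);
int  ruleset_insert(struct ruleset *rs, const struct rule_spec *spec);

/* A well-behaved frontend inserts its masquerading rule only when
   an equivalent rule is not already present. */
static int ensure_masquerade(struct ruleset *rs,
                             const struct rule_spec *masq)
{
    if (ruleset_find(rs, masq))
        return 0;  /* a rule is already in place, leave it alone */
    return ruleset_insert(rs, masq);
}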

Release date

A lot of work is still needed, but there is a good chance the project will be proposed upstream before the end of 2013. The API needs to be carefully reviewed to make sure it stays correct in the long term. The atomic rule swapping system also needs a good review.

More and more manpower is coming to the project, and with luck it may be ready in around 6 months.

Simon Horman, MPLS Enlightened Open vSwitch

Open vSwitch is a multi-layer switch. It is designed to enable network automation through programmatic extension, while still supporting standard management interfaces and protocols.

OpenFlow is a management protocol supported by Open vSwitch. OpenFlow has basic support for MPLS: it features a minimal operation set to configure MPLS correctly.
This OpenFlow MPLS support is partially implemented in Open vSwitch, but there are some difficulties.

Some of the operations update L3+ parameters such as the TTL. These must be updated consistently in the MPLS header and in the encapsulated packet header, which is quite complicated, since it requires decoding the packet below MPLS. But the MPLS header does not include the encapsulated Ethernet type, so it is almost impossible to access the packet structure correctly.

A possible solution is to reinject the packet after each modification, handling one layer per pass. This is currently a work in progress.

Victor Julien, Suricata and Netfilter

Suricata and Netfilter could be better friends, as they do some common work, like decoding packets and maintaining a flow table.

In IPS mode, Suricata receives raw packets from libnetfilter_queue. It has to parse these packets, but this work has already been done by the kernel, so it should be possible to avoid duplicating it.
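For reference, receiving packets from NFQUEUE looks roughly like this (a minimal sketch of the libnetfilter_queue API, error handling omitted):

#include <stdint.h>
#include <stdio.h>
#include <sys/socket.h>
#include <arpa/inet.h>
#include <linux/netfilter.h>  /* NF_ACCEPT */
#include <libnetfilter_queue/libnetfilter_queue.h>

/* Called once per queued packet: fetch the payload, parse it,
   then return a verdict to the kernel. */
static int cb(struct nfq_q_handle *qh, struct nfgenmsg *nfmsg,
              struct nfq_data *nfa, void *data)
{
    uint32_t id = 0;
    struct nfqnl_msg_packet_hdr *ph = nfq_get_msg_packet_hdr(nfa);
    if (ph)
        id = ntohl(ph->packet_id);

    unsigned char *payload;
    int len = nfq_get_payload(nfa, &payload);
    /* here Suricata re-parses the raw packet (IP header, TCP/UDP
       header, ...): work the kernel has already done once */
    (void)len;

    return nfq_set_verdict(qh, id, NF_ACCEPT, 0, NULL);
}

int main(void)
{
    struct nfq_handle *h = nfq_open();
    struct nfq_q_handle *qh = nfq_create_queue(h, 0, &cb, NULL);
    nfq_set_mode(qh, NFQNL_COPY_PACKET, 0xffff);

    char buf[65536];
    int fd = nfq_fd(h), rv;
    while ((rv = recv(fd, buf, sizeof(buf), 0)) >= 0)
        nfq_handle_packet(h, buf, rv);

    nfq_destroy_queue(qh);
    nfq_close(h);
    return 0;
}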

In practice the duplicated work in Netfilter is limited, as only the IP header structures are used. Patrick McHardy proposed that Netfilter could provide the header offsets, but this parsing is not the most costly part.

The flow discussion was more interesting, because conntrack does work really similar to Suricata's. Using the CT extension of libnetfilter_queue, Suricata will be able to get access to all the conntrack information, and at first glance it seems to contain almost everything needed. So it should be possible to remove the flow engine from Suricata. The garbage-collection operation would not even be necessary, as Suricata would get the information via NFCT destroy events.
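Getting the conntrack data attached to queued packets is then a matter of setting a queue flag; a sketch, reusing the queue handle qh from the example above (assuming a libnetfilter_queue recent enough to expose nfq_set_queue_flags() and NFQA_CFG_F_CONNTRACK):

/* Ask the kernel to attach conntrack information to every
   queued packet. */
nfq_set_queue_flags(qh, NFQA_CFG_F_CONNTRACK, NFQA_CFG_F_CONNTRACK);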

Jozsef Kadlecsik proposed using Tproxy to redirect flows and provide a “socket” stream to Suricata instead of individual packets. This would change Suricata a lot but could provide an interesting alternative mode.

Pablo Neira Ayuso, Netfilter summary of changes since last workshop

Pablo Neira Ayuso gave an overview of the Netfilter changes since the last workshop.

On the userspace side, the first main change published after the last workshop is libnetfilter_cttimeout. It allows you to define different timeout policies and to apply them to connections using the CT target.

Another important new feature is the possibility to disable automatic helper assignment. More information is available in
Secure use of iptables and connection tracking helpers.

The ‘fail-open’ option of nfnetlink_queue changes the default kernel behavior when packets are queued. When the maximum length of a queue is reached, the kernel drops all further incoming packets by default. For some software this is not the desired behavior; the ‘fail-open’ socket option lets you change this and have the kernel accept packets when they cannot be queued.
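With libnetfilter_queue this is a per-queue flag; a sketch, again reusing a queue handle qh from nfq_create_queue() (assuming a version that exposes NFQA_CFG_F_FAIL_OPEN):

/* When the queue is full, accept packets instead of dropping them. */
nfq_set_queue_flags(qh, NFQA_CFG_F_FAIL_OPEN, NFQA_CFG_F_FAIL_OPEN);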

Another interesting feature is userspace cthelper: it is now possible to implement a connection tracking helper in userspace.

nfnetlink_queue now has CT support: it is possible to queue a packet with its conntrack information attached.

IPv6 NAT has been added by Patrick McHardy.

In October 2012, the old QUEUE target was removed; a switch to NFQUEUE is now required.

connlabel has been added to the kernel to provide more categories than was possible using connmark.

A new bpf match has been committed, but the final iptables part is still missing.

Logging in helpers has been improved. Helpers can reject packets, but the user used to get no packet data and could not understand the reason for the drop. The nf_log_packet infrastructure can now be used to log the reason for the drop (along with the packet), which should help users understand why packets are dropped.

Martin Topholm: DDoS experiences with Linux and Netfilter

Martin works for one.com, a local ISP, and is facing DDoS attacks. SYN cookies were implemented, but the performance was too low, below 300 kpps, which is not what was expected. In fact, SYN handling is on a slow path, with a single spinlock protecting the SYN backlog queue, so the system behaves like a single-core system with respect to SYN attacks.

Jesper Dangaard Brouer proposed a patch to move the SYN cookie handling out of the lock, but it has some downsides and could not be accepted. In particular, the syncookie system needs to check every packet to see whether it belongs to a previous SYN cookie response, so a central point is needed.

Alternate protection methods include filtering in Netfilter. Performance-wise, connection tracking is very costly, cutting the packet rate by more than half: with conntrack activated the rate was 757 kpps, and without conntrack it was 1738 kpps.

A Netfilter module implementing SYN cookie offloading was proposed. The idea is to fake the SYN/ACK part of the TCP handshake in the module, which acts as a proxy for connection initiation. This would handle the SYN cookie algorithm via a small dedicated table and provide better performance.
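For background, a SYN cookie is essentially a keyed hash of the connection 4-tuple and a coarse timestamp, echoed in the SYN/ACK sequence number and verified on the final ACK. A toy sketch of the principle (real implementations use a cryptographic hash and also encode the negotiated MSS):

#include <stdbool.h>
#include <stdint.h>

static uint32_t secret = 0x5eedf00d;  /* per-boot random in reality */

static uint32_t mix(uint32_t h, uint32_t v)
{
    h ^= v;
    h *= 0x9e3779b1;  /* multiplicative mixing, NOT crypto-safe */
    return h ^ (h >> 16);
}

static uint32_t syn_cookie(uint32_t saddr, uint32_t daddr,
                           uint16_t sport, uint16_t dport,
                           uint32_t minute)
{
    uint32_t h = mix(secret, saddr);
    h = mix(h, daddr);
    h = mix(h, ((uint32_t)sport << 16) | dport);
    return mix(h, minute);
}

/* On the final ACK, recompute the cookie for the current and the
   previous time window and compare with the echoed value. */
static bool cookie_valid(uint32_t echoed,
                         uint32_t saddr, uint32_t daddr,
                         uint16_t sport, uint16_t dport,
                         uint32_t minute)
{
    return echoed == syn_cookie(saddr, daddr, sport, dport, minute) ||
           echoed == syn_cookie(saddr, daddr, sport, dport, minute - 1);
}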

David Miller: routing cache is dead, now what ?

The routing cache maintained a list of routing decisions in a hash table that was highly dynamic and changed with traffic. One of the major problems was the garbage collector. Another severe issue was the possibility of DoS attacks exploiting the growth of the cache.

The routing cache was removed in Linux 3.6 after a two-year effort by David and the other Linux kernel developers. The global cache has been suppressed, and some of the stored information has been moved to more local resources like the socket.

There were a lot of side effects following this big transformation. On the user side, there are no more “neighbour cache overflow” errors, thanks to the synchronized sizes of the routing and neighbour tables.

Metrics were stored in the routing cache entries, which have disappeared, so it was necessary to introduce a separate TCP metrics cache. A netlink interface is available to add, update or delete entries in this cache.

Another side effect of these modifications concerns xt_owner: it could now be used on input TCP sockets, but the code needs to be updated first.

On the security side, reverse path filtering has been updated. When activated, it causes up to two extra FIB lookups, but when deactivated there is now no overhead at all.

Fabio Massimo Di Nitto: Kronosnet.org

Kronosnet is an “I conceived it when drunk but it works well” VPN implementation. It uses an Ethernet TAP to provide a layer 2 VPN. To avoid reinventing the wheel, it delegates most of the work to the kernel. It supports multilink and server redundancy: up to 8 links can be configured per host to improve redundancy.

One use of this project is the creation of private networks in the cloud, as it can easily be set up to provide redundancy and connectivity for a lot of clients (64k simultaneous clients), and because a layer 2 VPN is really useful for this type of usage.

Configuration is done via a CLI comparable to that of classical routers.

Fabio ran a demonstration on 4 servers and showed that cutting a link has no impact on a ping flood, thanks to the multilink system.

Eric Leblond: ulogd2, Netfilter logging reloaded

Introduction

Yesterday I gave a presentation of ulogd2 at Open Source Days in Copenhagen. After a brief history of Netfilter logging, I described the key features of ulogd2 and demonstrated two interfaces, nf3d and djedi.

The slides are available:
Ulogd2, Netfilter logging reloaded.

Screencasts

This video demonstrates some features of nf3d:

This screencast is showing some of the capabilities of djedi:

Thanks a lot to the organizers for this cool event.

Jan Engelhardt, Xtables2: Packet Filter Evolved

Introduction

iptables duplicates work for each family and uses a socket protocol which is far too static. Xtables2 is an ongoing effort to evolve the packet filter.
It aims at providing finer-grained modifications (instead of whole-ruleset replacement).

Capabilities

  • rule packing: increases cache hits.
  • family independent: no more IPv4- and IPv6-specific code. Only the hooks remain specific, as they depend on the core network stack.
  • xt extension support
  • atomic replace support

The xtables syntax is quite similar but not identical. libxtadm is a high-level library for ruleset inspection and manipulation.

More info: xtables.de

Daniel Borkmann: Packet Sockets, BPF and Netsniff-NG

PF_PACKET introduction

This provides access to raw packets inside Linux. It is used by libpcap and by other projects like Suricata.
PF_PACKET performance can be improved via dedicated features:

  • Zero-copy RX/TX
  • Socket clustering
  • Linux socket filtering (BPF)

The BPF architecture looks like a small virtual machine with registers and memory stores. It has various instructions, and the kernel has its own extensions to access things like the CPU number or the VLAN tag.
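As an illustration, here is how a classic BPF filter is attached to a packet socket from C; this minimal sketch keeps only ARP frames (needs CAP_NET_RAW to run):

#include <stdio.h>
#include <sys/socket.h>
#include <arpa/inet.h>
#include <linux/if_ether.h>
#include <linux/filter.h>

int main(void)
{
    /* ldh [12]: load the EtherType, keep the frame only if ARP. */
    struct sock_filter code[] = {
        BPF_STMT(BPF_LD  | BPF_H   | BPF_ABS, 12),
        BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, ETH_P_ARP, 0, 1),
        BPF_STMT(BPF_RET | BPF_K, 0xffff),  /* accept up to 64k */
        BPF_STMT(BPF_RET | BPF_K, 0),       /* drop */
    };
    struct sock_fprog prog = {
        .len    = sizeof(code) / sizeof(code[0]),
        .filter = code,
    };

    int s = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ALL));
    if (s < 0 || setsockopt(s, SOL_SOCKET, SO_ATTACH_FILTER,
                            &prog, sizeof(prog)) < 0) {
        perror("socket/setsockopt");
        return 1;
    }
    /* recvfrom(s, ...) now only returns ARP frames. */
    return 0;
}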

Netsniff-NG

Netsniff-ng is a set of minimal tools:

  • netsniff-ng, a high-performance zero-copy analyzer, pcap capturing and replaying tool
  • trafgen, a high-performance zero-copy network traffic generator
  • mausezahn, a packet generator and analyzer for HW/SW appliances with a Cisco-CLI
  • bpfc, a Berkeley Packet Filter (BPF) compiler with Linux extensions
  • ifpps, a top-like kernel networking and system statistics tool
  • flowtop, a top-like netfilter connection tracking tool
  • curvetun, a lightweight multiuser IP tunnel based on elliptic curve cryptography
  • astraceroute, an autonomous system (AS) trace route utility

netsniff-ng can be used to capture with advanced options, like using a dedicated CPU or using a BPF filter compiled via bpfc instead of a tcpdump-like expression.

trafgen can be used to generate high-speed traffic, using multiple CPUs, with complex configuration/setup even including fuzzing.
A description of the packet can be given where each element is built using different functions.
It can even be combined with tc (for example netem, to simulate specific network conditions).

Future work includes finding a way to utilize multiple cores efficiently with packet_fanout and disk writing.