Jesper Dangaard Brouer: the missing conntrack garbage collector

There is a fixed maximum number of connection tracking entries. When the maximum is reached, new connections are simply dropped. The default maximum is ridiculously low: it amounts to using about 20 MB of memory on a computer with 12 GB of RAM.

The kernel syslog message "nf_conntrack: table full, dropping packet" is not strictly correct: the packets simply have no state from conntrack's point of view. They usually end up blocked by rules matching the INVALID state, but an adapted ruleset could still let them through.

Another problem is that adjusting the connection tracking table size does not change the hash table size. This results in longer lookups, because conntrack often has to walk a longer list inside each hash bucket.
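As a rough illustration of these two knobs, the sketch below reads the counters the kernel exports and estimates the average bucket list length; the paths are the usual ones on a Linux system with nf_conntrack loaded, and the arithmetic is only meant to show why raising the maximum without raising the hash size lengthens the lists:

```python
#!/usr/bin/env python3
# Rough illustration: read the conntrack counters exported by the kernel
# and estimate the average hash bucket list length when the table is full.

def read_int(path):
    with open(path) as f:
        return int(f.read().strip())

count = read_int("/proc/sys/net/netfilter/nf_conntrack_count")
nf_max = read_int("/proc/sys/net/netfilter/nf_conntrack_max")
# The hash size is a module parameter, not a sysctl: raising
# nf_conntrack_max alone leaves it untouched.
hashsize = read_int("/sys/module/nf_conntrack/parameters/hashsize")

print(f"entries: {count}/{nf_max} ({100.0 * count / nf_max:.1f}% full)")
print(f"hash buckets: {hashsize}, average list length at max: {nf_max / hashsize:.1f}")
```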

Running out of entries is mostly due to connections that are at the end of their life. But as the timeouts are long, the number of such entries can be significant. Lowering the timeouts when the connection tracking table is almost full can help release the pressure. Changing these parameters automatically is something that could be considered, but finding a correct logic is not easy.
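A minimal sketch of such an automatic adjustment could look like the following; the 80% threshold, the 10-minute value and the choice of nf_conntrack_tcp_timeout_established as the knob to lower are arbitrary assumptions, not something decided at the workshop:

```python
#!/usr/bin/env python3
# Sketch: when the conntrack table is almost full, shorten the
# established-TCP timeout so stale entries expire sooner, and restore
# the default once the pressure is gone.
import time

PROC = "/proc/sys/net/netfilter/"
KNOB = PROC + "nf_conntrack_tcp_timeout_established"
HIGH_WATERMARK = 0.80      # start shrinking timeouts above 80% fill (example value)
LOW_TIMEOUT = 600          # 10 minutes instead of the 5-day default (example value)

def read_int(path):
    with open(path) as f:
        return int(f.read().strip())

def write_int(path, value):
    with open(path, "w") as f:
        f.write(str(value))

default_timeout = read_int(KNOB)

while True:
    fill = read_int(PROC + "nf_conntrack_count") / read_int(PROC + "nf_conntrack_max")
    if fill > HIGH_WATERMARK:
        write_int(KNOB, LOW_TIMEOUT)
    elif fill < HIGH_WATERMARK / 2:
        write_int(KNOB, default_timeout)
    time.sleep(5)
```

Note that lowering the sysctl only affects entries whose timer gets refreshed by new traffic; it does not rescan entries that are already sitting in the table, which is exactly the costly operation discussed below.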

Destroying non-important connection tracking entries is something that could really help, but an adapted logic has to be found. Adjusting timeouts dynamically requires a full scan of the list, which is really costly, and the algorithm also has to be resistant to DoS attacks. Finding a generic strategy is difficult. Pablo proposes trying a userspace solution. It could be used to experiment with different policies, and it could also use information taken from other subsystems and/or from a configuration file.
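One way to prototype such a userspace policy is to build it on the conntrack(8) tool from conntrack-tools. The sketch below evicts TCP entries still tracked in TIME_WAIT when the table is almost full; treating TIME_WAIT entries as "non-important" and the 90% threshold are example policy choices, not the ones discussed at the workshop:

```python
#!/usr/bin/env python3
# Sketch of a userspace eviction policy: when the conntrack table is
# almost full, delete TCP entries in TIME_WAIT via the conntrack(8) tool.
import subprocess
import time

PROC = "/proc/sys/net/netfilter/"
HIGH_WATERMARK = 0.90  # example threshold

def read_int(path):
    with open(path) as f:
        return int(f.read().strip())

while True:
    fill = read_int(PROC + "nf_conntrack_count") / read_int(PROC + "nf_conntrack_max")
    if fill > HIGH_WATERMARK:
        # Drop entries that are only waiting for their TIME_WAIT timer to expire.
        subprocess.run(["conntrack", "-D", "-p", "tcp", "--state", "TIME_WAIT"],
                       check=False, stdout=subprocess.DEVNULL)
    time.sleep(2)
```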

Samir suggests lowering nf_conntrack_tcp_timeout_syn_sent when under attack. This could make the bad entries disappear faster.
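A sketch of that idea, assuming an arbitrary 10-second value during the attack and the usual 120-second default afterwards:

```python
#!/usr/bin/env python3
# Sketch of Samir's suggestion: under a SYN flood, temporarily lower the
# timeout of half-open connections so bogus SYN_SENT entries expire quickly.

KNOB = "/proc/sys/net/netfilter/nf_conntrack_tcp_timeout_syn_sent"

def set_syn_sent_timeout(seconds):
    with open(KNOB, "w") as f:
        f.write(str(seconds))

# During an attack (10 s is an arbitrary example):
set_syn_sent_timeout(10)
# Once the attack is over, restore the default (120 s on current kernels):
set_syn_sent_timeout(120)
```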
