Commit 7750c41
net: speed up skb_rbtree_purge()
As measured in my prior patch ("sch_netem: faster rb tree removal"),
rbtree_postorder_for_each_entry_safe() is nice looking but much slower
than using rb_next() directly, except when the tree is small enough
to fit in CPU caches (in which case the cost is the same).
Also note that there is no increase in text size:
$ size net/core/skbuff.o.before net/core/skbuff.o
text data bss dec hex filename
40711 1298 0 42009 a419 net/core/skbuff.o.before
40711 1298 0 42009 a419 net/core/skbuff.o
From: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
(cherry picked from commit 7c90584)
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
1 file changed: 7 additions & 4 deletions
[Diff table not recoverable from this capture: a single hunk in net/core/skbuff.c, original lines 2850–2861 → new lines 2850–2864 (7 additions, 4 deletions).]