
Commit 7750c41

edumazet authored and gregkh committed
net: speed up skb_rbtree_purge()
As measured in my prior patch ("sch_netem: faster rb tree removal"),
rbtree_postorder_for_each_entry_safe() is nice looking but much slower
than using rb_next() directly, except when the tree is small enough to
fit in CPU caches (then the cost is the same).

Also note that there is not even an increase of text size:

$ size net/core/skbuff.o.before net/core/skbuff.o
   text    data     bss     dec     hex filename
  40711    1298       0   42009    a419 net/core/skbuff.o.before
  40711    1298       0   42009    a419 net/core/skbuff.o

From: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
(cherry picked from commit 7c90584)
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
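Editor's note: the message takes for granted why the old code needed a
trailing *root = RB_ROOT; and the new code does not. The postorder macro
lets entries be freed during the walk but never rebalances the tree, so
the root has to be reset by hand; the rb_next() loop erases each node as
it goes, leaving the tree empty on exit. A minimal sketch of the two
idioms, assuming <linux/rbtree.h> and a hypothetical struct item
(illustration only, not part of the commit):

        #include <linux/rbtree.h>
        #include <linux/slab.h>

        struct item {
                struct rb_node node;
        };

        /* Old idiom: postorder walk. Entries may be freed as we go,
         * but the tree is never rebalanced, so the root must be
         * reset by hand once the loop is done. */
        static void purge_postorder(struct rb_root *root)
        {
                struct item *it, *next;

                rbtree_postorder_for_each_entry_safe(it, next, root, node)
                        kfree(it);
                *root = RB_ROOT;
        }

        /* New idiom: in-order walk with rb_next(). The iterator is
         * advanced before the node is erased, and rb_erase() keeps
         * the tree consistent, so it is empty when the loop ends. */
        static void purge_rb_next(struct rb_root *root)
        {
                struct rb_node *p = rb_first(root);

                while (p) {
                        struct item *it = rb_entry(p, struct item, node);

                        p = rb_next(p);
                        rb_erase(&it->node, root);
                        kfree(it);
                }
        }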
1 parent 1c44969 commit 7750c41

1 file changed: net/core/skbuff.c

Lines changed: 7 additions & 4 deletions
@@ -2850,12 +2850,15 @@ EXPORT_SYMBOL(skb_queue_purge);
  */
 void skb_rbtree_purge(struct rb_root *root)
 {
-        struct sk_buff *skb, *next;
+        struct rb_node *p = rb_first(root);
 
-        rbtree_postorder_for_each_entry_safe(skb, next, root, rbnode)
-                kfree_skb(skb);
+        while (p) {
+                struct sk_buff *skb = rb_entry(p, struct sk_buff, rbnode);
 
-        *root = RB_ROOT;
+                p = rb_next(p);
+                rb_erase(&skb->rbnode, root);
+                kfree_skb(skb);
+        }
 }
 
 /**
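For context on how the helper is used: a caller hands it the root of an
skb rbtree and every queued skb is dropped in one call. A hypothetical
caller sketch (the TCP out-of-order receive queue is one such rbtree in
kernels of this era; flush_ofo_queue() is an invented name):

        #include <net/tcp.h>

        static void flush_ofo_queue(struct sock *sk)
        {
                struct tcp_sock *tp = tcp_sk(sk);

                /* Frees every out-of-order skb and leaves the
                 * rbtree empty. */
                skb_rbtree_purge(&tp->out_of_order_queue);
        }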
