
Commit 8166876

Christoph Hellwig authored and cmaiolino committed
xfs: don't decrement the buffer LRU count for in-use buffers
XFS buffers are added to the LRU when they are unused, but are only removed from the LRU lazily, when the LRU list scan finds a buffer that is in use. So far this only happens when the LRU counter hits 0, which is suboptimal: buffers that were added to the LRU but are in use again still consume LRU scanning resources and are aged while actually in use.

Fix this by checking for in-use buffers and removing them from the LRU before decrementing the LRU counter.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Darrick J. Wong <djwong@kernel.org>
Signed-off-by: Carlos Maiolino <cem@kernel.org>
1 parent 497560b commit 8166876

1 file changed: fs/xfs/xfs_buf.c (12 additions, 10 deletions)
```diff
@@ -1524,23 +1524,25 @@ xfs_buftarg_isolate(
 		return LRU_SKIP;
 
 	/*
-	 * Decrement the b_lru_ref count unless the value is already
-	 * zero. If the value is already zero, we need to reclaim the
-	 * buffer, otherwise it gets another trip through the LRU.
+	 * If the buffer is in use, remove it from the LRU for now. We can't
+	 * free it while someone is using it, and we should also not count
+	 * eviction passed for it, just as if it hadn't been added to the LRU
+	 * yet.
 	 */
-	if (atomic_add_unless(&bp->b_lru_ref, -1, 0)) {
+	if (bp->b_lockref.count > 0) {
+		list_lru_isolate(lru, &bp->b_lru);
 		spin_unlock(&bp->b_lockref.lock);
-		return LRU_ROTATE;
+		return LRU_REMOVED;
 	}
 
 	/*
-	 * If the buffer is in use, remove it from the LRU for now as we can't
-	 * free it. It will be freed when the last reference drops.
+	 * Decrement the b_lru_ref count unless the value is already
+	 * zero. If the value is already zero, we need to reclaim the
+	 * buffer, otherwise it gets another trip through the LRU.
 	 */
-	if (bp->b_lockref.count > 0) {
-		list_lru_isolate(lru, &bp->b_lru);
+	if (atomic_add_unless(&bp->b_lru_ref, -1, 0)) {
 		spin_unlock(&bp->b_lockref.lock);
-		return LRU_REMOVED;
+		return LRU_ROTATE;
 	}
 
 	lockref_mark_dead(&bp->b_lockref);
```
