
Commit fb4ddf2

Authored by David Hildenbrand (Red Hat); committed by akpm00
mm/memory: handle non-split locks correctly in zap_empty_pte_table()

While we handle pte_lockptr() == pmd_lockptr() correctly in
zap_pte_table_if_empty(), we don't handle it in zap_empty_pte_table(),
making the spin_trylock() always fail and forcing us onto the slow path.

So let's handle the scenario where pte_lockptr() == pmd_lockptr() better,
which can only happen if CONFIG_SPLIT_PTE_PTLOCKS is not set. This is
only relevant once we unlock CONFIG_PT_RECLAIM on architectures that are
not x86-64.

Link: https://lkml.kernel.org/r/20260119220708.3438514-3-david@kernel.org
Signed-off-by: David Hildenbrand (Red Hat) <david@kernel.org>
Reviewed-by: Qi Zheng <zhengqi.arch@bytedance.com>
Cc: Liam Howlett <liam.howlett@oracle.com>
Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Mike Rapoport <rppt@kernel.org>
Cc: Suren Baghdasaryan <surenb@google.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
1 parent 4c640eb commit fb4ddf2

1 file changed: mm/memory.c (6 additions, 4 deletions)
@@ -1830,16 +1830,18 @@ static bool pte_table_reclaim_possible(unsigned long start, unsigned long end,
 	return details && details->reclaim_pt && (end - start >= PMD_SIZE);
 }
 
-static bool zap_empty_pte_table(struct mm_struct *mm, pmd_t *pmd, pmd_t *pmdval)
+static bool zap_empty_pte_table(struct mm_struct *mm, pmd_t *pmd,
+		spinlock_t *ptl, pmd_t *pmdval)
 {
 	spinlock_t *pml = pmd_lockptr(mm, pmd);
 
-	if (!spin_trylock(pml))
+	if (ptl != pml && !spin_trylock(pml))
 		return false;
 
 	*pmdval = pmdp_get(pmd);
 	pmd_clear(pmd);
-	spin_unlock(pml);
+	if (ptl != pml)
+		spin_unlock(pml);
 	return true;
 }
 
@@ -1931,7 +1933,7 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
 	 * from being repopulated by another thread.
 	 */
 	if (can_reclaim_pt && direct_reclaim && addr == end)
-		direct_reclaim = zap_empty_pte_table(mm, pmd, &pmdval);
+		direct_reclaim = zap_empty_pte_table(mm, pmd, ptl, &pmdval);
 
 	add_mm_rss_vec(mm, rss);
 	lazy_mmu_mode_disable();
