Commit c062048

rppt authored and akpm committed

userfaultfd: introduce mfill_copy_folio_locked() helper
Patch series "mm, kvm: allow uffd support in guest_memfd", v4.

These patches enable support for userfaultfd in guest_memfd.

As the groundwork, I refactored userfaultfd handling of PTE-based memory types (anonymous and shmem) and converted them to use vm_uffd_ops for allocating a folio or getting an existing folio from the page cache. shmem also implements callbacks that add a folio to the page cache after the data passed in UFFDIO_COPY was copied, and that remove the folio from the page cache if the page table update fails.

For guest_memfd to notify userspace about page faults, there are new VM_FAULT_UFFD_MINOR and VM_FAULT_UFFD_MISSING codes that a ->fault() handler can return to inform the page fault handler that it needs to call handle_userfault() to complete the fault. Nikita helped plumb these new goodies into guest_memfd and provided basic tests to verify that guest_memfd works with userfaultfd. Handling UFFDIO_MISSING in guest_memfd requires the ability to remove a folio from the page cache; the best way I could find was exporting filemap_remove_folio() to KVM.

I deliberately left hugetlb out, at least for the most part. hugetlb handles acquisition of the VMA and, more importantly, establishment of the parent page table entry differently than PTE-based memory types. This is a different abstraction level than what vm_uffd_ops provides, and people objected to exposing such low-level APIs as part of VMA operations. Also, refactoring hugetlb is not needed to enable uffd in guest_memfd, and I prefer to delay it until the dust settles after the changes in this set.

This patch (of 4):

Split the copying of data while locks are held out of mfill_atomic_pte_copy() into a helper function, mfill_copy_folio_locked(). This improves code readability and makes the complex mfill_atomic_pte_copy() function easier to comprehend.

No functional change.
Link: https://lore.kernel.org/20260402041156.1377214-1-rppt@kernel.org
Link: https://lore.kernel.org/20260402041156.1377214-2-rppt@kernel.org
Signed-off-by: Mike Rapoport (Microsoft) <rppt@kernel.org>
Acked-by: Peter Xu <peterx@redhat.com>
Reviewed-by: David Hildenbrand (Arm) <david@kernel.org>
Reviewed-by: Harry Yoo (Oracle) <harry@kernel.org>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Andrei Vagin <avagin@google.com>
Cc: Axel Rasmussen <axelrasmussen@google.com>
Cc: Baolin Wang <baolin.wang@linux.alibaba.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: James Houghton <jthoughton@google.com>
Cc: Liam Howlett <liam.howlett@oracle.com>
Cc: Lorenzo Stoakes (Oracle) <ljs@kernel.org>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Mike Rapoport <rppt@kernel.org>
Cc: Muchun Song <muchun.song@linux.dev>
Cc: Oscar Salvador <osalvador@suse.de>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Sean Christopherson <seanjc@google.com>
Cc: Shuah Khan <shuah@kernel.org>
Cc: Suren Baghdasaryan <surenb@google.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Harry Yoo <harry.yoo@oracle.com>
Cc: Nikita Kalyazin <kalyazin@amazon.com>
Cc: David Carlier <devnexen@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
1 parent dc44f32 commit c062048

1 file changed: mm/userfaultfd.c (35 additions & 24 deletions)
@@ -238,14 +238,47 @@ int mfill_atomic_install_pte(pmd_t *dst_pmd,
 	return ret;
 }
 
+static int mfill_copy_folio_locked(struct folio *folio, unsigned long src_addr)
+{
+	void *kaddr;
+	int ret;
+
+	kaddr = kmap_local_folio(folio, 0);
+	/*
+	 * The read mmap_lock is held here. Despite the
+	 * mmap_lock being read recursive a deadlock is still
+	 * possible if a writer has taken a lock. For example:
+	 *
+	 * process A thread 1 takes read lock on own mmap_lock
+	 * process A thread 2 calls mmap, blocks taking write lock
+	 * process B thread 1 takes page fault, read lock on own mmap lock
+	 * process B thread 2 calls mmap, blocks taking write lock
+	 * process A thread 1 blocks taking read lock on process B
+	 * process B thread 1 blocks taking read lock on process A
+	 *
+	 * Disable page faults to prevent potential deadlock
+	 * and retry the copy outside the mmap_lock.
+	 */
+	pagefault_disable();
+	ret = copy_from_user(kaddr, (const void __user *) src_addr,
+			     PAGE_SIZE);
+	pagefault_enable();
+	kunmap_local(kaddr);
+
+	if (ret)
+		return -EFAULT;
+
+	flush_dcache_folio(folio);
+	return ret;
+}
+
 static int mfill_atomic_pte_copy(pmd_t *dst_pmd,
 				 struct vm_area_struct *dst_vma,
 				 unsigned long dst_addr,
 				 unsigned long src_addr,
 				 uffd_flags_t flags,
 				 struct folio **foliop)
 {
-	void *kaddr;
 	int ret;
 	struct folio *folio;
 
@@ -256,27 +289,7 @@ static int mfill_atomic_pte_copy(pmd_t *dst_pmd,
 		if (!folio)
 			goto out;
 
-		kaddr = kmap_local_folio(folio, 0);
-		/*
-		 * The read mmap_lock is held here. Despite the
-		 * mmap_lock being read recursive a deadlock is still
-		 * possible if a writer has taken a lock. For example:
-		 *
-		 * process A thread 1 takes read lock on own mmap_lock
-		 * process A thread 2 calls mmap, blocks taking write lock
-		 * process B thread 1 takes page fault, read lock on own mmap lock
-		 * process B thread 2 calls mmap, blocks taking write lock
-		 * process A thread 1 blocks taking read lock on process B
-		 * process B thread 1 blocks taking read lock on process A
-		 *
-		 * Disable page faults to prevent potential deadlock
-		 * and retry the copy outside the mmap_lock.
-		 */
-		pagefault_disable();
-		ret = copy_from_user(kaddr, (const void __user *) src_addr,
-				     PAGE_SIZE);
-		pagefault_enable();
-		kunmap_local(kaddr);
+		ret = mfill_copy_folio_locked(folio, src_addr);
 
 		/* fallback to copy_from_user outside mmap_lock */
 		if (unlikely(ret)) {
@@ -285,8 +298,6 @@ static int mfill_atomic_pte_copy(pmd_t *dst_pmd,
 			/* don't free the page */
 			goto out;
 		}
-
-		flush_dcache_folio(folio);
 	} else {
 		folio = *foliop;
 		*foliop = NULL;