
Commit dec9ed9

6eanut authored and avpatel committed
RISC-V: KVM: Fix use-after-free in kvm_riscv_gstage_get_leaf()
While fuzzing KVM on RISC-V, a use-after-free was observed in
kvm_riscv_gstage_get_leaf(), where ptep_get() dereferences a freed
gstage page table page during gfn unmap. The crash manifests as:

use-after-free in ptep_get include/linux/pgtable.h:340 [inline]
use-after-free in kvm_riscv_gstage_get_leaf arch/riscv/kvm/gstage.c:89

Call Trace:
 ptep_get include/linux/pgtable.h:340 [inline]
 kvm_riscv_gstage_get_leaf+0x2ea/0x358 arch/riscv/kvm/gstage.c:89
 kvm_riscv_gstage_unmap_range+0xf0/0x308 arch/riscv/kvm/gstage.c:265
 kvm_unmap_gfn_range+0x168/0x1fc arch/riscv/kvm/mmu.c:256
 kvm_mmu_unmap_gfn_range virt/kvm/kvm_main.c:724 [inline]

page last free pid 808 tgid 808 stack trace:
 kvm_riscv_mmu_free_pgd+0x1b6/0x26a arch/riscv/kvm/mmu.c:457
 kvm_arch_flush_shadow_all+0x1a/0x24 arch/riscv/kvm/mmu.c:134
 kvm_flush_shadow_all virt/kvm/kvm_main.c:344 [inline]

The UAF is caused by gstage page table walks running concurrently with
gstage pgd teardown. In particular, kvm_unmap_gfn_range() can traverse
gstage page tables while kvm_arch_flush_shadow_all() frees the pgd,
leading to use-after-free of page table pages.

Fix the issue by serializing gstage unmap and pgd teardown with
kvm->mmu_lock. Holding mmu_lock ensures that gstage page tables remain
valid for the duration of unmap operations and prevents concurrent
frees. This matches existing RISC-V KVM usage of mmu_lock to protect
gstage map/unmap operations, e.g. kvm_riscv_mmu_iounmap().

Fixes: dd82e35 ("RISC-V: KVM: Factor-out g-stage page table management")
Signed-off-by: Jiakai Xu <xujiakai2025@iscas.ac.cn>
Signed-off-by: Jiakai Xu <jiakaiPeanut@gmail.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/20260202040059.1801167-1-xujiakai2025@iscas.ac.cn
Signed-off-by: Anup Patel <anup@brainfault.org>
1 parent 8565617 commit dec9ed9

1 file changed: 4 additions & 0 deletions

arch/riscv/kvm/mmu.c
@@ -245,6 +245,7 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm,
 bool kvm_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range)
 {
 	struct kvm_gstage gstage;
+	bool mmu_locked;
 
 	if (!kvm->arch.pgd)
 		return false;
@@ -253,9 +254,12 @@ bool kvm_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range)
 	gstage.flags = 0;
 	gstage.vmid = READ_ONCE(kvm->arch.vmid.vmid);
 	gstage.pgd = kvm->arch.pgd;
+	mmu_locked = spin_trylock(&kvm->mmu_lock);
 	kvm_riscv_gstage_unmap_range(&gstage, range->start << PAGE_SHIFT,
 				     (range->end - range->start) << PAGE_SHIFT,
 				     range->may_block);
+	if (mmu_locked)
+		spin_unlock(&kvm->mmu_lock);
 	return false;
 }