
Commit 2371bd8

mrutland-arm authored and ctmarinas committed
arm64: entry: Don't preempt with SError or Debug masked
On arm64, involuntary kernel preemption has been subtly broken since the move to the generic irqentry code. When preemption occurs, the new task may unexpectedly run with SError and Debug exceptions masked, leading to a loss of RAS events, breakpoints, watchpoints, and single-step exceptions.

Prior to the move to the generic irqentry code, involuntary preemption of kernel mode would only occur when returning from regular interrupts, in a state where interrupts were masked and all other arm64-specific exceptions (SError, Debug, and pseudo-NMI) were unmasked. This is the only state in which it is valid to switch tasks.

As part of the move to the generic irqentry code, the involuntary preemption logic was moved such that involuntary preemption could occur when returning from any (non-NMI) exception. As most exception handlers mask all arm64-specific exceptions before this point, preemption could occur in a state where arm64-specific exceptions were masked. This is not a valid state in which to switch tasks, and it resulted in the loss of exceptions described above.

As a temporary bodge, avoid the loss of exceptions by avoiding involuntary preemption when SError and/or Debug exceptions are masked. Practically speaking, this means that involuntary preemption will only occur when returning from regular interrupts, as was the case before the move to the generic irqentry code.

Fixes: 99eb057 ("arm64: entry: Move arm64_preempt_schedule_irq() into __exit_to_kernel_mode()")
Reported-by: Ada Couprie Diaz <ada.coupriediaz@arm.com>
Reported-by: Vladimir Murzin <vladimir.murzin@arm.com>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Jinjie Ruan <ruanjinjie@huawei.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@kernel.org>
Cc: Will Deacon <will@kernel.org>
Reviewed-by: Jinjie Ruan <ruanjinjie@huawei.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
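The gating logic the patch introduces can be modelled in plain C as a pure function over the DAIF value. This is a hypothetical sketch for illustration only: `may_preempt()`, its `daif` parameter (standing in for `read_sysreg(daif)`), and `prio_masking` (standing in for `system_uses_irq_prio_masking()`) are names invented here; the PSR_* bit values match the arm64 PSTATE layout.

```c
#include <stdbool.h>
#include <stdint.h>

/* PSTATE.DAIF bit positions, as defined for arm64 */
#define PSR_D_BIT 0x00000200u  /* Debug exceptions masked */
#define PSR_A_BIT 0x00000100u  /* SError (asynchronous abort) masked */
#define PSR_I_BIT 0x00000080u  /* IRQ masked */
#define PSR_F_BIT 0x00000040u  /* FIQ masked */

/*
 * Hypothetical model of the fixed arch_irqentry_exit_need_resched() gate.
 * With GIC priority masking, any set DAIF bit means an NMI was handled,
 * so preemption must be skipped entirely. Without it, preemption is only
 * skipped while SError (A) or Debug (D) exceptions are masked.
 */
bool may_preempt(uint32_t daif, bool prio_masking)
{
	if (prio_masking)
		return daif == 0;

	return !(daif & (PSR_D_BIT | PSR_A_BIT));
}
```

Under this model, returning from a regular interrupt (DAIF clear apart from bits the irqchip has already unmasked) permits preemption, while any path that still has SError or Debug masked does not.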
1 parent 041aa7a commit 2371bd8

1 file changed

Lines changed: 13 additions & 8 deletions

File tree

arch/arm64/include/asm/entry-common.h

@@ -29,14 +29,19 @@ static __always_inline void arch_exit_to_user_mode_work(struct pt_regs *regs,
 
 static inline bool arch_irqentry_exit_need_resched(void)
 {
-	/*
-	 * DAIF.DA are cleared at the start of IRQ/FIQ handling, and when GIC
-	 * priority masking is used the GIC irqchip driver will clear DAIF.IF
-	 * using gic_arch_enable_irqs() for normal IRQs. If anything is set in
-	 * DAIF we must have handled an NMI, so skip preemption.
-	 */
-	if (system_uses_irq_prio_masking() && read_sysreg(daif))
-		return false;
+	if (system_uses_irq_prio_masking()) {
+		/*
+		 * DAIF.DA are cleared at the start of IRQ/FIQ handling, and when GIC
+		 * priority masking is used the GIC irqchip driver will clear DAIF.IF
+		 * using gic_arch_enable_irqs() for normal IRQs. If anything is set in
+		 * DAIF we must have handled an NMI, so skip preemption.
+		 */
+		if (read_sysreg(daif))
+			return false;
+	} else {
+		if (read_sysreg(daif) & (PSR_D_BIT | PSR_A_BIT))
+			return false;
+	}
 
 	/*
 	 * Preempting a task from an IRQ means we leave copies of PSTATE