Commit 20ad8b0

vdonnefort authored and rostedt committed
ring-buffer: Enforce read ordering of trace_buffer cpumask and buffers
On CPU hotplug, if it is the first time a trace_buffer sees a CPU, a
ring_buffer_per_cpu will be allocated and its corresponding bit toggled in
the cpumask. Many readers check this cpumask to know if they can safely
read the ring_buffer_per_cpu, but they do so without memory ordering and
may observe the cpumask bit set while still seeing a NULL buffer pointer.

Enforce the read ordering by sending an IPI to all online CPUs. The hotplug
path is a slow path anyway, and this saves us from adding read barriers in
numerous call sites.

Link: https://patch.msgid.link/20260401053659.3458961-1-vdonnefort@google.com
Signed-off-by: Vincent Donnefort <vdonnefort@google.com>
Suggested-by: Steven Rostedt (Google) <rostedt@goodmis.org>
Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
1 parent: 23d1cfc. Commit: 20ad8b0.

1 file changed

kernel/trace/ring_buffer.c: 18 additions, 1 deletion
@@ -7722,6 +7722,12 @@ int ring_buffer_map_get_reader(struct trace_buffer *buffer, int cpu)
 	return 0;
 }
 
+static void rb_cpu_sync(void *data)
+{
+	/* Not really needed, but documents what is happening */
+	smp_rmb();
+}
+
 /*
  * We only allocate new buffers, never free them if the CPU goes down.
  * If we were to free the buffer, then the user would lose any trace that was in
@@ -7760,7 +7766,18 @@ int trace_rb_cpu_prepare(unsigned int cpu, struct hlist_node *node)
 			cpu);
 		return -ENOMEM;
 	}
-	smp_wmb();
+
+	/*
+	 * Ensure trace_buffer readers observe the newly allocated
+	 * ring_buffer_per_cpu before they check the cpumask. Instead of using a
+	 * read barrier for all readers, send an IPI.
+	 */
+	if (unlikely(system_state == SYSTEM_RUNNING)) {
+		on_each_cpu(rb_cpu_sync, NULL, 1);
+		/* Not really needed, but documents what is happening */
+		smp_wmb();
+	}
+
 	cpumask_set_cpu(cpu, buffer->cpumask);
 	return 0;
 }
