Commit 394aa57

printk_ringbuffer: Create a helper function to decide whether more space is needed
The decision whether more space is needed is tricky in the printk ring buffer code:

1. The given lpos values might overflow. A subtraction must be used instead of a simple "lower than" check.

2. Another CPU might reuse the space in the meantime. This can be detected when the result of the subtraction is bigger than DATA_SIZE(data_ring).

3. There is exactly enough space when the result of the subtraction is zero. But more space is needed when the result is exactly DATA_SIZE(data_ring).

Add a helper function to make sure that the check is done correctly in all situations. It also helps to make the code consistent and better documented.

Suggested-by: John Ogness <john.ogness@linutronix.de>
Link: https://lore.kernel.org/r/87tsz7iea2.fsf@jogness.linutronix.de
Reviewed-by: John Ogness <john.ogness@linutronix.de>
Link: https://patch.msgid.link/20251107194720.1231457-3-pmladek@suse.com
[pmladek@suse.com: Updated wording as suggested by John]
Signed-off-by: Petr Mladek <pmladek@suse.com>
Parent: cc3bad1

1 file changed: kernel/printk/printk_ringbuffer.c (28 additions, 4 deletions)
@@ -411,6 +411,23 @@ static bool data_check_size(struct prb_data_ring *data_ring, unsigned int size)
 	return to_blk_size(size) <= DATA_SIZE(data_ring) / 2;
 }
 
+/*
+ * Compare the current and requested logical position and decide
+ * whether more space is needed.
+ *
+ * Return false when @lpos_current is already at or beyond @lpos_target.
+ *
+ * Also return false when the difference between the positions is bigger
+ * than the size of the data buffer. It might happen only when the caller
+ * raced with another CPU(s) which already made and used the space.
+ */
+static bool need_more_space(struct prb_data_ring *data_ring,
+			    unsigned long lpos_current,
+			    unsigned long lpos_target)
+{
+	return lpos_target - lpos_current - 1 < DATA_SIZE(data_ring);
+}
+
 /* Query the state of a descriptor. */
 static enum desc_state get_desc_state(unsigned long id,
 				      unsigned long state_val)
@@ -577,7 +594,7 @@ static bool data_make_reusable(struct printk_ringbuffer *rb,
 	unsigned long id;
 
 	/* Loop until @lpos_begin has advanced to or beyond @lpos_end. */
-	while ((lpos_end - lpos_begin) - 1 < DATA_SIZE(data_ring)) {
+	while (need_more_space(data_ring, lpos_begin, lpos_end)) {
 		blk = to_block(data_ring, lpos_begin);
 
 		/*
@@ -668,7 +685,7 @@ static bool data_push_tail(struct printk_ringbuffer *rb, unsigned long lpos)
 	 * sees the new tail lpos, any descriptor states that transitioned to
 	 * the reusable state must already be visible.
 	 */
-	while ((lpos - tail_lpos) - 1 < DATA_SIZE(data_ring)) {
+	while (need_more_space(data_ring, tail_lpos, lpos)) {
 		/*
 		 * Make all descriptors reusable that are associated with
 		 * data blocks before @lpos.
@@ -1148,8 +1165,15 @@ static char *data_realloc(struct printk_ringbuffer *rb, unsigned int size,
 
 	next_lpos = get_next_lpos(data_ring, blk_lpos->begin, size);
 
-	/* If the data block does not increase, there is nothing to do. */
-	if (head_lpos - next_lpos < DATA_SIZE(data_ring)) {
+	/*
+	 * Use the current data block when the size does not increase, i.e.
+	 * when @head_lpos is already able to accommodate the new @next_lpos.
+	 *
+	 * Note that need_more_space() could never return false here because
+	 * the difference between the positions was bigger than the data
+	 * buffer size. The data block is reopened and can't get reused.
+	 */
+	if (!need_more_space(data_ring, head_lpos, next_lpos)) {
 		if (wrapped)
 			blk = to_block(data_ring, 0);
 		else
