Commit 3a5e96d

Caleb Sander Mateos authored and Jens Axboe committed
io_uring: count CQEs in io_iopoll_check()
A subsequent commit will allow uring_cmds that don't use iopoll on IORING_SETUP_IOPOLL io_urings. As a result, CQEs can be posted without setting the iopoll_completed flag for a request in iopoll_list or going through task work. For example, a UBLK_U_IO_FETCH_IO_CMDS command could call io_uring_mshot_cmd_post_cqe() to directly post a CQE.

The io_iopoll_check() loop currently only counts completions posted in io_do_iopoll() when determining whether the min_events threshold has been met. It also exits early if there are any existing CQEs before polling, or if any CQEs are posted while running task work. CQEs posted via io_uring_mshot_cmd_post_cqe() or other mechanisms won't be counted against min_events.

Explicitly check the available CQEs in each io_iopoll_check() loop iteration to account for CQEs posted in any fashion.

Signed-off-by: Caleb Sander Mateos <csander@purestorage.com>
Link: https://patch.msgid.link/20260302172914.2488599-4-csander@purestorage.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
1 parent 7995be4 commit 3a5e96d

1 file changed: io_uring/io_uring.c

Lines changed: 2 additions & 7 deletions
@@ -1186,7 +1186,6 @@ __cold void io_iopoll_try_reap_events(struct io_ring_ctx *ctx)
 
 static int io_iopoll_check(struct io_ring_ctx *ctx, unsigned int min_events)
 {
-	unsigned int nr_events = 0;
 	unsigned long check_cq;
 
 	min_events = min(min_events, ctx->cq_entries);
@@ -1229,8 +1228,6 @@ static int io_iopoll_check(struct io_ring_ctx *ctx, unsigned int min_events)
 		 * very same mutex.
 		 */
 		if (list_empty(&ctx->iopoll_list) || io_task_work_pending(ctx)) {
-			u32 tail = ctx->cached_cq_tail;
-
 			(void) io_run_local_work_locked(ctx, min_events);
 
 			if (task_work_pending(current) || list_empty(&ctx->iopoll_list)) {
@@ -1239,7 +1236,7 @@ static int io_iopoll_check(struct io_ring_ctx *ctx, unsigned int min_events)
 				mutex_lock(&ctx->uring_lock);
 			}
 			/* some requests don't go through iopoll_list */
-			if (tail != ctx->cached_cq_tail || list_empty(&ctx->iopoll_list))
+			if (list_empty(&ctx->iopoll_list))
 				break;
 		}
 		ret = io_do_iopoll(ctx, !min_events);
@@ -1250,9 +1247,7 @@ static int io_iopoll_check(struct io_ring_ctx *ctx, unsigned int min_events)
 			return -EINTR;
 		if (need_resched())
 			break;
-
-		nr_events += ret;
-	} while (nr_events < min_events);
+	} while (io_cqring_events(ctx) < min_events);
 
 	return 0;
 }
