
Commit 51f4e09

Authored by LivelyCarpet87, committed by kuba-moo

net: stmmac: fix integer underflow in chain mode
The jumbo_frm() chain-mode implementation unconditionally computes

    len = nopaged_len - bmax;

where nopaged_len = skb_headlen(skb) (linear bytes only) and bmax is BUF_SIZE_8KiB or BUF_SIZE_2KiB. However, the caller stmmac_xmit() decides to invoke jumbo_frm() based on skb->len (total length including page fragments):

    is_jumbo = stmmac_is_jumbo_frm(priv, skb->len, enh_desc);

When a packet has a small linear portion (nopaged_len <= bmax) but a large total length due to page fragments (skb->len > bmax), the subtraction wraps as an unsigned integer, producing a huge len value (~0xFFFFxxxx). This causes the while (len != 0) loop to execute hundreds of thousands of iterations, passing skb->data + bmax * i pointers far beyond the skb buffer to dma_map_single(). On IOMMU-less SoCs (the typical deployment for stmmac), this maps arbitrary kernel memory to the DMA engine, constituting a kernel memory disclosure and potential memory corruption from hardware.

Fix this by introducing a buf_len local variable clamped to min(nopaged_len, bmax). Computing len = nopaged_len - buf_len is then always safe: it is zero when the linear portion fits within a single descriptor, causing the while (len != 0) loop to be skipped naturally, and the fragment loop in stmmac_xmit() handles page fragments afterward.

Fixes: 286a837 ("stmmac: add CHAINED descriptor mode support (V4)")
Cc: stable@vger.kernel.org
Signed-off-by: Tyllis Xu <LivelyCarpet87@gmail.com>
Link: https://patch.msgid.link/20260401044708.1386919-1-LivelyCarpet87@gmail.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
1 parent 6dede39 commit 51f4e09

1 file changed

Lines changed: 6 additions & 5 deletions

File tree

drivers/net/ethernet/stmicro/stmmac/chain_mode.c

@@ -20,7 +20,7 @@ static int jumbo_frm(struct stmmac_tx_queue *tx_q, struct sk_buff *skb,
 	unsigned int nopaged_len = skb_headlen(skb);
 	struct stmmac_priv *priv = tx_q->priv_data;
 	unsigned int entry = tx_q->cur_tx;
-	unsigned int bmax, des2;
+	unsigned int bmax, buf_len, des2;
 	unsigned int i = 1, len;
 	struct dma_desc *desc;

@@ -31,17 +31,18 @@ static int jumbo_frm(struct stmmac_tx_queue *tx_q, struct sk_buff *skb,
 	else
 		bmax = BUF_SIZE_2KiB;

-	len = nopaged_len - bmax;
+	buf_len = min_t(unsigned int, nopaged_len, bmax);
+	len = nopaged_len - buf_len;

 	des2 = dma_map_single(priv->device, skb->data,
-			      bmax, DMA_TO_DEVICE);
+			      buf_len, DMA_TO_DEVICE);
 	desc->des2 = cpu_to_le32(des2);
 	if (dma_mapping_error(priv->device, des2))
 		return -1;
 	tx_q->tx_skbuff_dma[entry].buf = des2;
-	tx_q->tx_skbuff_dma[entry].len = bmax;
+	tx_q->tx_skbuff_dma[entry].len = buf_len;
 	/* do not close the descriptor and do not set own bit */
-	stmmac_prepare_tx_desc(priv, desc, 1, bmax, csum, STMMAC_CHAIN_MODE,
+	stmmac_prepare_tx_desc(priv, desc, 1, buf_len, csum, STMMAC_CHAIN_MODE,
			       0, false, skb->len);

 	while (len != 0) {
