Commit c55bf90

kuba-moo authored and isilence committed
eth: bnxt: adjust the fill level of agg queues with larger buffers
The driver tries to provision more agg buffers than header buffers since
multiple agg segments can reuse the same header. The calculation / heuristic
tries to provide enough pages for 65k of data for each header (or 4 frags per
header if the result is too big). This calculation is currently global to the
adapter. If we increase the buffer sizes 8x we don't want 8x the amount of
memory sitting on the rings. Luckily we don't have to fill the rings
completely; adjust the fill level dynamically in case a particular queue has
buffers larger than the global size.

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
[pavel: rebase on top of agg_size_fac]
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
1 parent f57efb3 commit c55bf90

1 file changed

Lines changed: 21 additions & 4 deletions


drivers/net/ethernet/broadcom/bnxt/bnxt.c

@@ -3825,16 +3825,31 @@ static void bnxt_free_rx_rings(struct bnxt *bp)
 	}
 }
 
+static int bnxt_rx_agg_ring_fill_level(struct bnxt *bp,
+				       struct bnxt_rx_ring_info *rxr)
+{
+	/* User may have chosen larger than default rx_page_size,
+	 * we keep the ring sizes uniform and also want uniform amount
+	 * of bytes consumed per ring, so cap how much of the rings we fill.
+	 */
+	int fill_level = bp->rx_agg_ring_size;
+
+	if (rxr->rx_page_size > BNXT_RX_PAGE_SIZE)
+		fill_level /= rxr->rx_page_size / BNXT_RX_PAGE_SIZE;
+
+	return fill_level;
+}
+
 static int bnxt_alloc_rx_page_pool(struct bnxt *bp,
 				   struct bnxt_rx_ring_info *rxr,
 				   int numa_node)
 {
-	const unsigned int agg_size_fac = PAGE_SIZE / BNXT_RX_PAGE_SIZE;
+	unsigned int agg_size_fac = rxr->rx_page_size / BNXT_RX_PAGE_SIZE;
 	const unsigned int rx_size_fac = PAGE_SIZE / SZ_4K;
 	struct page_pool_params pp = { 0 };
 	struct page_pool *pool;
 
-	pp.pool_size = bp->rx_agg_ring_size / agg_size_fac;
+	pp.pool_size = bnxt_rx_agg_ring_fill_level(bp, rxr) / agg_size_fac;
 	if (BNXT_RX_PAGE_MODE(bp))
 		pp.pool_size += bp->rx_ring_size / rx_size_fac;
 
@@ -4412,11 +4427,13 @@ static void bnxt_alloc_one_rx_ring_netmem(struct bnxt *bp,
 					  struct bnxt_rx_ring_info *rxr,
 					  int ring_nr)
 {
+	int fill_level, i;
 	u32 prod;
-	int i;
+
+	fill_level = bnxt_rx_agg_ring_fill_level(bp, rxr);
 
 	prod = rxr->rx_agg_prod;
-	for (i = 0; i < bp->rx_agg_ring_size; i++) {
+	for (i = 0; i < fill_level; i++) {
 		if (bnxt_alloc_rx_netmem(bp, rxr, prod, GFP_KERNEL)) {
 			netdev_warn(bp->dev, "init'ed rx ring %d with %d/%d pages only\n",
 				    ring_nr, i, bp->rx_agg_ring_size);
