
Commit 77c368f

songmuchun authored and akpm00 committed
mm/sparse: fix comment for section map alignment
The comment in mmzone.h currently details exhaustive per-architecture bit-width lists and explains the alignment as min(PAGE_SHIFT, PFN_SECTION_SHIFT). Such details risk falling out of date over time and are easy to forget to update.

We always expect a single section to cover full pages. Therefore, we can safely assume that PFN_SECTION_SHIFT is large enough to accommodate SECTION_MAP_LAST_BIT, and we use BUILD_BUG_ON() to ensure this. Update the comment to accurately reflect this consensus, making it clear that we rely on a single section covering full pages.

Link: https://lore.kernel.org/20260402102320.3617578-1-songmuchun@bytedance.com
Signed-off-by: Muchun Song <songmuchun@bytedance.com>
Acked-by: David Hildenbrand (Arm) <david@kernel.org>
Cc: Liam Howlett <liam.howlett@oracle.com>
Cc: Lorenzo Stoakes (Oracle) <ljs@kernel.org>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Mike Rapoport <rppt@kernel.org>
Cc: Petr Tesarik <ptesarik@suse.com>
Cc: Suren Baghdasaryan <surenb@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
1 parent df620ec commit 77c368f

1 file changed

Lines changed: 10 additions & 15 deletions


include/linux/mmzone.h

@@ -2068,21 +2068,16 @@ static inline struct mem_section *__nr_to_section(unsigned long nr)
 extern size_t mem_section_usage_size(void);
 
 /*
- * We use the lower bits of the mem_map pointer to store
- * a little bit of information. The pointer is calculated
- * as mem_map - section_nr_to_pfn(pnum). The result is
- * aligned to the minimum alignment of the two values:
- * 1. All mem_map arrays are page-aligned.
- * 2. section_nr_to_pfn() always clears PFN_SECTION_SHIFT
- *    lowest bits. PFN_SECTION_SHIFT is arch-specific
- *    (equal SECTION_SIZE_BITS - PAGE_SHIFT), and the
- *    worst combination is powerpc with 256k pages,
- *    which results in PFN_SECTION_SHIFT equal 6.
- * To sum it up, at least 6 bits are available on all architectures.
- * However, we can exceed 6 bits on some other architectures except
- * powerpc (e.g. 15 bits are available on x86_64, 13 bits are available
- * with the worst case of 64K pages on arm64) if we make sure the
- * exceeded bit is not applicable to powerpc.
+ * We use the lower bits of the mem_map pointer to store a little bit of
+ * information. The pointer is calculated as mem_map - section_nr_to_pfn().
+ * The result is aligned to the minimum alignment of the two values:
+ *
+ * 1. All mem_map arrays are page-aligned.
+ * 2. section_nr_to_pfn() always clears PFN_SECTION_SHIFT lowest bits.
+ *
+ * We always expect a single section to cover full pages. Therefore,
+ * we can safely assume that PFN_SECTION_SHIFT is large enough to
+ * accommodate SECTION_MAP_LAST_BIT. We use BUILD_BUG_ON() to ensure this.
  */
 enum {
 	SECTION_MARKED_PRESENT_BIT,
