
Commit 057bbd8

willdeacon authored and ctmarinas committed
arm64: mm: Simplify __TLBI_RANGE_NUM() macro

Since commit e2768b7 ("arm64/mm: Modify range-based tlbi to decrement
scale"), we don't need to clamp the 'pages' argument to fit the range
for the specified 'scale' as we know that the upper bits will have been
processed in a prior iteration.

Drop the clamping and simplify the __TLBI_RANGE_NUM() macro.

Signed-off-by: Will Deacon <will@kernel.org>
Reviewed-by: Ryan Roberts <ryan.roberts@arm.com>
Reviewed-by: Dev Jain <dev.jain@arm.com>
Reviewed-by: Jonathan Cameron <jonathan.cameron@huawei.com>
Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
1 parent 5e63b73 commit 057bbd8

1 file changed: arch/arm64/include/asm/tlbflush.h
Lines changed: 1 addition & 5 deletions
@@ -199,11 +199,7 @@ static inline void __tlbi_level(tlbi_op op, u64 addr, u32 level)
 	 * range.
 	 */
 #define __TLBI_RANGE_NUM(pages, scale)					\
-	({								\
-		int __pages = min((pages),				\
-				  __TLBI_RANGE_PAGES(31, (scale)));	\
-		(__pages >> (5 * (scale) + 1)) - 1;			\
-	})
+	(((pages) >> (5 * (scale) + 1)) - 1)
 
 #define __repeat_tlbi_sync(op, arg...)					\
 	do {								\
