Commit 83929cc

Mike Galbraith authored and Ingo Molnar committed
sched/autogroup: Fix 64-bit kernel nice level adjustment
Michael Kerrisk reported:

> Regarding the previous paragraph... My tests indicate
> that writing *any* value to the autogroup [nice priority level]
> file causes the task group to get a lower priority.

Because autogroup didn't call the then meaningless scale_load()...

Autogroup nice level adjustment has been broken ever since load
resolution was increased for 64-bit kernels. Use scale_load() to
scale group weight.

Michael Kerrisk tested this patch to fix the problem:

> Applied and tested against 4.9-rc6 on an Intel u7 (4 cores).
> Test setup:
>
> Terminal window 1: running 40 CPU burner jobs
> Terminal window 2: running 40 CPU burner jobs
> Terminal window 3: running 1 CPU burner job
>
> Demonstrated that:
> * Writing "0" to the autogroup file for TW1 now causes no change
>   to the rate at which the processes on the terminal consume CPU.
> * Writing -20 to the autogroup file for TW1 caused those processes
>   to get the lion's share of CPU while TW2/TW3 get a tiny amount.
> * Writing -20 to the autogroup files for TW1 and TW3 allowed the
>   process on TW3 to get as much CPU as it was getting when
>   the autogroup nice values for both terminals were 0.

Reported-by: Michael Kerrisk <mtk.manpages@gmail.com>
Tested-by: Michael Kerrisk <mtk.manpages@gmail.com>
Signed-off-by: Mike Galbraith <umgwanakikbuti@gmail.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-man <linux-man@vger.kernel.org>
Cc: stable@vger.kernel.org
Link: http://lkml.kernel.org/r/1479897217.4306.6.camel@gmx.de
Signed-off-by: Ingo Molnar <mingo@kernel.org>
1 parent 10b9dd5 commit 83929cc

1 file changed

Lines changed: 3 additions & 1 deletion

File tree

kernel/sched/auto_group.c

@@ -212,6 +212,7 @@ int proc_sched_autogroup_set_nice(struct task_struct *p, int nice)
 {
 	static unsigned long next = INITIAL_JIFFIES;
 	struct autogroup *ag;
+	unsigned long shares;
 	int err;
 
 	if (nice < MIN_NICE || nice > MAX_NICE)
@@ -230,9 +231,10 @@ int proc_sched_autogroup_set_nice(struct task_struct *p, int nice)
 
 	next = HZ / 10 + jiffies;
 	ag = autogroup_task_get(p);
+	shares = scale_load(sched_prio_to_weight[nice + 20]);
 
 	down_write(&ag->lock);
-	err = sched_group_set_shares(ag->tg, sched_prio_to_weight[nice + 20]);
+	err = sched_group_set_shares(ag->tg, shares);
 	if (!err)
 		ag->nice = nice;
 	up_write(&ag->lock);
