BACKPORT: sched/fair: Fix overutilized update in enqueue_task_fair()

[ Upstream commit 8e1ac4299a6e8726de42310d9c1379f188140c71 ]

enqueue_task_fair() attempts to skip the overutilized update for new
tasks as their util_avg is not accurate yet. However, the flag we check
to do so is overwritten earlier on in the function, which makes the
condition pretty much a nop.

Fix this by saving the flag early on.

Fixes: 2802bf3cd936 ("sched/fair: Add over-utilization/tipping point indicator")
Reported-by: Rick Yiu <rickyiu@google.com>
Signed-off-by: Quentin Perret <qperret@google.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Vincent Guittot <vincent.guittot@linaro.org>
Reviewed-by: Valentin Schneider <valentin.schneider@arm.com>
Link: https://lkml.kernel.org/r/20201112111201.2081902-1-qperret@google.com
Change-Id: I04a99c7db2d0559e838343762a928ac6caa1a9c4

@@ -5574,6 +5574,7 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
 	struct sched_entity *se = &p->se;
 	bool prefer_idle = sched_feat(EAS_PREFER_IDLE) ?
 				(schedtune_prefer_idle(p) > 0) : 0;
+	int task_new = !(flags & ENQUEUE_WAKEUP);

 	/*
 	 * The code below (indirectly) updates schedutil which looks at
@@ -5666,7 +5667,7 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
 		 * overutilized. Hopefully the cpu util will be back to
 		 * normal before next overutilized check.
 		 */
-		if ((flags & ENQUEUE_WAKEUP) &&
+		if ((!task_new) &&
 		    !(prefer_idle && rq->nr_running == 1))
			update_overutilized_status(rq);
	}