Fix compilation issues for !CONFIG_SCHED_WALT caused by the following
two commits:
commit a80cf2007d ("sched: Add support to spread tasks")
Bug: 154086870
Bug: 153823050
Signed-off-by: lucaswei <lucaswei@google.com>
Change-Id: I89e224e18f6700ea2abcd162a5b9f3f938a7ad92
This field is necessary to maintain ABI compatibility with ACK. Add it
back, but leave it unused.
Bug: 153905799
Change-Id: Ic9ef5640fa77c3aada023843658e7e4de3bada82
Signed-off-by: Saravana Kannan <saravanak@google.com>
The push_task field is a WALT related field that shouldn't be needed
since we run PELT. So conditionally compile in the field only when WALT
is enabled. Also add #ifdefs around all the uses of this field.
Bug: 153905799
Change-Id: I12edd3f2180ebab14719ba2548e83519beffacc2
Signed-off-by: Saravana Kannan <saravanak@google.com>
Currently iowait boost doesn't distinguish background from foreground
tasks, and we have seen cases where a device runs at high frequency
unnecessarily while running background I/O. This patch limits iowait
boost to tasks with prefer_idle only. Specifically, on Pixel, those are
foreground and top-app tasks.
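The gating described above can be sketched in miniature. This is an
illustrative userspace model, not the actual kernel code; the struct
and field names (`in_iowait`, `prefer_idle`) are assumptions standing
in for the real task and schedtune state:

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative task state: the real kernel consults the task's
 * iowait flag and its cgroup's prefer_idle (schedtune) hint. */
struct task {
	bool in_iowait;   /* woke up from I/O wait */
	bool prefer_idle; /* e.g. foreground/top-app on Pixel */
};

/* Apply the iowait frequency boost only when the waking task both
 * waited on I/O and belongs to a prefer_idle group, so background
 * I/O no longer drives the CPU to high frequency. */
static bool should_iowait_boost(const struct task *p)
{
	return p->in_iowait && p->prefer_idle;
}
```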
Bug: 130308826
Bug: 144961757
Test: Boot and trace
Change-Id: I2d892beeb4b12b7e8f0fb2848c23982148648a10
Signed-off-by: Wei Wang <wvw@google.com>
Add a new tracepoint sched_capacity_update when capacity value
updated.
Bug: 144177658
Bug: 144961676
Test: Boot and grab trace to check
Change-Id: I30ee55bfcc2fb5a92dd448ad364768ee428f3cc4
Signed-off-by: Wei Wang <wvw@google.com>
Signed-off-by: Jimmy Shiu <jimmyshiu@google.com>
This is a forward port of pa/890483 with modifications from the original
patch due to changes in sched/softirq.c which applies the same logic.
We're finding audio glitches caused by audio-producing RT tasks
that are either interrupted to handle softirq's or that are
scheduled onto cpu's that are handling softirq's.
In a previous patch, we attempted to catch many cases of the
latter problem, but it's clear that we are still losing
significant numbers of races in some apps.
This patch attempts to reduce the most common windows in which
we lose the race between scheduling an RT task on a remote
core and starting to handle softirq's on that core.
We still lose some races, but we lose significantly fewer.
(And we don't want to introduce any heavyweight forms
of synchronization on these paths.)
Bug: 64912585
Bug: 136771796
Bug: 144961676
Change-Id: Ida89a903be0f1965552dd0e84e67ef1d3158c7d8
Signed-off-by: Miguel de Dios <migueldedios@google.com>
Signed-off-by: Jimmy Shiu <jimmyshiu@google.com>
Current cpu util includes the util of runnable tasks plus the recent
utilization of currently non-runnable tasks, so it may return a
non-zero value even when there is no task running on a cpu. When the
scheduler is selecting a cpu for a task, it checks whether cpu util is
over its capacity, so it may skip a cpu even when it is idle. Let the
scheduler skip the util check if the task prefers an idle cpu and the
cpu is idle.
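A minimal sketch of that placement rule (illustrative names; the real
check lives in the fair-class cpu selection path):

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative per-cpu snapshot. util is the PELT-style estimate,
 * which can stay non-zero on an idle cpu because it still carries
 * the decaying contribution of recently-run tasks. */
struct cpu_info {
	unsigned long util;     /* current utilization estimate */
	unsigned long capacity; /* original cpu capacity */
	bool idle;
};

/* Skip the util-vs-capacity fit check when a prefer-idle task is
 * being placed on a cpu that is actually idle, so stale util cannot
 * veto an idle cpu. */
static bool cpu_fits_task(const struct cpu_info *cpu,
			  unsigned long task_util, bool prefer_idle)
{
	if (prefer_idle && cpu->idle)
		return true;
	return cpu->util + task_util <= cpu->capacity;
}
```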
Bug: 133284637
Bug: 144961676
Test: cpu selected as expected
Change-Id: I2c15d6b79b1cc83c72e84add70962a8e74c178b8
Signed-off-by: Rick Yiu <rickyiu@google.com>
Signed-off-by: Jimmy Shiu <jimmyshiu@google.com>
Currently when a boosted task is scheduled, we use prefer_idle to try
to place it on an idle core. Once it's scheduled, there is a
possibility that we schedule a non-boosted task on the same core where
the boosted task is running. This change aims to mitigate that
possibility by checking if the core we're targeting has a boosted task
and, if so, using the next best idle core instead.
Bug: 131626264
Bug: 144961676
Change-Id: I3d321e1c71f96526f55f7f3a56e32db411311aa2
Signed-off-by: Miguel de Dios <migueldedios@google.com>
Signed-off-by: Jimmy Shiu <jimmyshiu@google.com>
For a top-app task, if it fits in mid cluster, then prefer the first
cpu in this cluster whenever possible, which happens to be the top-app
exclusive cpu in our cpuset design.
Bug: 128477368
Bug: 144961676
Test: top-app tasks assigned as expected
Change-Id: Ifdd0614f6c8c03edde4ed674c4193f4ba31aac16
Signed-off-by: Rick Yiu <rickyiu@google.com>
Signed-off-by: Jimmy Shiu <jimmyshiu@google.com>
None of these functions does what its name implies when
CONFIG_SCHED_WALT=n. While all are currently unused, future patches
could introduce subtle bugs by calling any of them from non WALT
specific code. Delete the functions so it's obvious if new callers are
added.
Bug: 144961676
Test: build kernel
Change-Id: Ib7552afb5668b48fe2ae56307016e98716e00e63
Signed-off-by: Connor O'Brien <connoro@google.com>
Signed-off-by: Jimmy Shiu <jimmyshiu@google.com>
With CONFIG_SCHED_WALT disabled, is_min_capacity_cpu() is defined to
always return true, which breaks the intended behavior of
task_fits_max(). Revise is_min_capacity_cpu() to return correct
results.
An earlier version of this patch failed to handle the case when
min_cap_orig_cpu == -1 while sched domains are being updated due to
hotplug. Add a check for this case.
Test: trace shows increased top-app placement on medium cores
Bug: 117499098
Bug: 128477368
Bug: 130756111
Bug: 144961676
Change-Id: Ia2b41aa7c57f071c997bcd0e9cdfd0808f6a2bf9
Signed-off-by: Connor O'Brien <connoro@google.com>
Signed-off-by: Jimmy Shiu <jimmyshiu@google.com>
Refine some changes from AU90. One is to allow a boosted task to run
on a min-capacity cpu if it fits. The other is to check the fast-exit
path for prefer-idle tasks first.
Bug: 128477368
Bug: 130576120
Bug: 144961676
Test: task rq selection behavior is as expected
Change-Id: Ied57b37a361ed137d10167f0346f52a149d08cd6
Signed-off-by: Rick Yiu <rickyiu@google.com>
Signed-off-by: Jimmy Shiu <jimmyshiu@google.com>
If the sync flag is ignored because the current cpu is not in
the affinity mask for the target of a sync wakeup (usually a
binder call), prefer to place the task in the mid cluster if
possible. The main case is a "top-app" task waking a
"foreground" task when the top-app task is running on
a CPU that is not in the foreground cpuset. This patch makes
the search order start from the mid-capacity cpus when the
sync flag is ignored.
backport from commit 98ae57d9eaf7
("ANDROID: sched/fair: if sync flag ignored, try to place in same cluster")
Bug: 117438867
Bug: 144961676
Test: boot to home, operation normal
Change-Id: I68d0cc05db1bc2cb02d4445c71b02215209e8c04
Signed-off-by: Rick Yiu <rickyiu@google.com>
Signed-off-by: Jimmy Shiu <jimmyshiu@google.com>
Part of the fix from commit d86ab9cff8 ("cpufreq: schedutil: use now
as reference when aggregating shared policy requests") is reversed in
commit 05d2ca242067 ("cpufreq: schedutil: Ignore CPU load older than
WALT window size") due to a porting mistake. Restore it while keeping
the relevant change from the latter patch.
Bug: 117438867
Bug: 144961676
Test: build & boot
Change-Id: I21399be760d7c8e2fff6c158368a285dc6261647
Signed-off-by: Connor O'Brien <connoro@google.com>
Signed-off-by: Jimmy Shiu <jimmyshiu@google.com>
CONFIG_LOCK_STAT shows warnings in move_queued_task() about releasing
a pinned lock. The warnings are due to the calls to
double_unlock_balance() added to snapshot WALT. Let's disable them
when not building with SCHED_WALT.
Bug: 123720375
Bug: 148940637
Change-Id: I8bff8550c4f79ca535556f6ec626f17ff5fce637
Signed-off-by: Miguel de Dios <migueldedios@google.com>
Signed-off-by: Jimmy Shiu <jimmyshiu@google.com>
CONFIG_LOCK_STAT shows warnings in detach_task() about releasing a
pinned lock. The warnings are due to the calls to
double_unlock_balance() added to snapshot WALT. Let's disable them
when not building with SCHED_WALT.
Bug: 123720375
Bug: 148940637
Change-Id: Ibfa28b1434fa6006fa0117fd2df1a3eadb321568
Signed-off-by: Miguel de Dios <migueldedios@google.com>
Signed-off-by: Jimmy Shiu <jimmyshiu@google.com>
Since CONFIG_SCHED_WALT is disabled, we need another way to boost
perf as sched_boost does, and skipping EAS has a similar effect. We
use powerhal to handle it. Also apply sync wake-up so that the pure
CFS path (taken when skipping EAS) can benefit from it.
(Combine the following two commits
2d21560126cb sched/fair: apply sync wake-up to pure CFS path
9917d5335479 sched/fair: refine check for sync wake-up)
Bug: 119932121
Bug: 117438867
Bug: 144961676
Test: boot to home, operation normal
Change-Id: I970852540839881a926b7e7da5f70ef7e0185349
Signed-off-by: Rick Yiu <rickyiu@google.com>
Signed-off-by: Jimmy Shiu <jimmyshiu@google.com>
This reverts commit 63c27502786646271b4c4ba32268b727e294bbb2.
Bug: 117438867
Bug: 144961676
Test: Tracing confirms EAS is no longer always used
Change-Id: If321547a86592527438ac21c3734a9f4decda712
Signed-off-by: Connor O'Brien <connoro@google.com>
Signed-off-by: Jimmy Shiu <jimmyshiu@google.com>
Important threads can get forced to little CPUs
when the sync or prev_bias hints are followed
blindly. This patch adds a check to see whether
those paths are forcing the task to a cpu that
has less capacity than other CPUs available for
the task. If so, we ignore the sync and prev_bias
hints and allow the scheduler to make a free decision.
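The capacity check described above reduces to a one-line predicate;
the names below are illustrative, not the kernel's:

```c
#include <assert.h>
#include <stdbool.h>

/* Honor the sync/prev_bias hint only if the hinted cpu is at least
 * as capable as the best cpu otherwise available to the task;
 * otherwise fall back to a free scheduler decision. */
static bool honor_placement_hint(unsigned long hinted_cpu_capacity,
				 unsigned long best_available_capacity)
{
	return hinted_cpu_capacity >= best_available_capacity;
}
```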
Bug: 117438867
Bug: 144961676
Change-Id: Ie5a99f9a8b65ba9382a8d0de2ae0aad843e558d1
Signed-off-by: Miguel de Dios <migueldedios@google.com>
Signed-off-by: Jimmy Shiu <jimmyshiu@google.com>
Introduce a new sysctl for this option, 'sched_cstate_aware'.
When this is enabled, select_idle_sibling in CFS is modified to
choose the idle CPU in the sibling group which has the lowest
idle state index - idle state indexes are assumed to increase
as sleep depth and hence wakeup latency increase. In this way,
we attempt to minimise wakeup latency when an idle CPU is
required.
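The selection rule can be sketched under the stated assumption that
the idle-state index grows with sleep depth. This models the
cstate-aware scan with illustrative array-based state, not the actual
select_idle_sibling() code:

```c
#include <assert.h>

#define NR_CPUS 4

/* Among the idle cpus in the group, pick the one in the shallowest
 * idle state (lowest index), minimizing expected wakeup latency.
 * Returns -1 if no cpu in the group is idle. */
static int select_shallowest_idle_cpu(const int idle_state[NR_CPUS],
				      const int is_idle[NR_CPUS])
{
	int best = -1, best_state = 0;

	for (int cpu = 0; cpu < NR_CPUS; cpu++) {
		if (!is_idle[cpu])
			continue;
		if (best == -1 || idle_state[cpu] < best_state) {
			best = cpu;
			best_state = idle_state[cpu];
		}
	}
	return best;
}
```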
Signed-off-by: Srinath Sridharan <srinathsr@google.com>
Includes:
sched: EAS: fix select_idle_sibling
when sysctl_sched_cstate_aware is enabled, the best_idle cpu will not
be chosen in the original flow because it hits the `goto done` path
directly
Bug: 30107557
Bug: 144961676
Change-Id: Ie09c2e3960cafbb976f8d472747faefab3b4d6ac
Signed-off-by: martin_liu <martin_liu@htc.com>
Signed-off-by: Andres Oportus <andresoportus@google.com>
[refactored and fixed conflicts]
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
Signed-off-by: Jimmy Shiu <jimmyshiu@google.com>
All cpus are running at max freq. The reason is that in the
sugov_up_down_rate_limit() check in cpufreq_schedutil.c, the time
passed in is always 0, so the check is always true and hence the freq
is never updated. This is caused by sched_ktime_clock() returning 0 if
CONFIG_SCHED_WALT is not set. Fix it by replacing sched_ktime_clock()
with ktime_get_ns().
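The failure mode reduces to this comparison (illustrative signature
mirroring the rate-limit check): with a broken clock that always reads
0, the elapsed time is 0 and the limiter never lets an update through.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Rate-limit check: an update is suppressed if not enough time has
 * passed since the last one. When the clock source always returns 0
 * (sched_ktime_clock() with !CONFIG_SCHED_WALT), now and last_update
 * are both 0, so this is always true and the frequency never changes;
 * a real clock such as ktime_get_ns() restores normal behavior. */
static bool sugov_rate_limited(uint64_t now_ns, uint64_t last_update_ns,
			       uint64_t rate_limit_ns)
{
	return now_ns - last_update_ns < rate_limit_ns;
}
```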
Bug: 119932718
Test: cpu freq could change after fix
Change-Id: I62a0b35208dcd7a1d23da27f909cce3e59208d1f
Signed-off-by: Rick Yiu <rickyiu@google.com>
For compilation issues for !CONFIG_SCHED_WALT of the following two
commits:
commit fbd15b6297 ("sched/fair: Avoid force newly idle load balance if have
iowait task")
commit 5250fc5df0 ("sched/fair: Force gold cpus to do idle lb when silver has
big tasks")
Bug: 148766738
Test: build pass and boot to home
Change-Id: Ia2c6ed57d1385a8105bbd7f0aefad6efd7d76c01
Signed-off-by: Jimmy Shiu <jimmyshiu@google.com>
update_cpu_capacity() updates cpu_capacity_orig capped by thermal_cap.
In the non-WALT case, thermal_cap is the previous cpu_capacity_orig,
which caused cpu_capacity_orig to be capped incorrectly.
Test: Build
Bug: 144143594
Change-Id: I1ff9d9c87554c2d2395d46b215276b7ab50585c0
Signed-off-by: Wei Wang <wvw@google.com>
(cherry picked from commit dac65a5a494f8d0c80101acc5d482d94cda6f158)
Signed-off-by: Danny Lin <danny@kdrag0n.dev>
The current code sets per-cpu variable sd_asym_cpucapacity while
building sched domains even when there are no asymmetric CPUs.
This is done to make sure that EAS remains enabled on a b.L system
after hotplugging out all big/LITTLE CPUs. However it is causing
the below warning during CPU hotplug.
[13988.932604] pc : static_key_slow_dec_cpuslocked+0xe8/0x150
[13988.932608] lr : static_key_slow_dec_cpuslocked+0xe8/0x150
[13988.932610] sp : ffffffc010333c00
[13988.932612] x29: ffffffc010333c00 x28: ffffff8138d88088
[13988.932615] x27: 0000000000000000 x26: 0000000000000081
[13988.932618] x25: ffffff80917efc80 x24: ffffffc010333c60
[13988.932621] x23: ffffffd32bf09c58 x22: 0000000000000000
[13988.932623] x21: 0000000000000000 x20: ffffff80917efc80
[13988.932626] x19: ffffffd32bf0a3e0 x18: ffffff8138039c38
[13988.932628] x17: ffffffd32bf2b000 x16: 0000000000000050
[13988.932631] x15: 0000000000000050 x14: 0000000000040000
[13988.932633] x13: 0000000000000178 x12: 0000000000000001
[13988.932636] x11: 16a9ca5426841300 x10: 16a9ca5426841300
[13988.932639] x9 : 16a9ca5426841300 x8 : 16a9ca5426841300
[13988.932641] x7 : 0000000000000000 x6 : ffffff813f4edadb
[13988.932643] x5 : 0000000000000000 x4 : 0000000000000004
[13988.932646] x3 : ffffffc010333880 x2 : ffffffd32a683a2c
[13988.932648] x1 : ffffffd329355498 x0 : 000000000000001b
[13988.932651] Call trace:
[13988.932656] static_key_slow_dec_cpuslocked+0xe8/0x150
[13988.932660] partition_sched_domains_locked+0x1f8/0x80c
[13988.932666] sched_cpu_deactivate+0x9c/0x13c
[13988.932670] cpuhp_invoke_callback+0x6ac/0xa8c
[13988.932675] cpuhp_thread_fun+0x158/0x1ac
[13988.932678] smpboot_thread_fn+0x244/0x3e4
[13988.932681] kthread+0x168/0x178
[13988.932685] ret_from_fork+0x10/0x18
The mismatch between increment and decrement of the
sched_asym_cpucapacity static key results in the above warning. The
increment happens only when the system really has asymmetric-capacity
CPUs; this check is done by going through each CPU's capacity, so when
the system becomes SMP during hotplug, the increment never happens.
However, the decrement of this static key is done when any of the
currently online CPUs has a non-NULL per-cpu sd_asym_cpucapacity
value. Since we always set this variable, we run into this issue.
Our goal was to enable EAS on SMP. To achieve that, enable EAS and
build perf domains (required for computing energy) irrespective of the
per-cpu sd_asym_cpucapacity value. This way we no longer have to
enable the sched_asym_cpucapacity feature on SMP to enable EAS.
Change-Id: Id46f2b80350b742c75195ad6939b814d4695eb07
Signed-off-by: Pavankumar Kondeti <pkondeti@codeaurora.org>
*switch to 300Hz timer frequency
*disable cgroup freezer
*disable fair group sched
*disable cpu isolation
*disable WALT
*disable corectl
*disable cpu-boost
*disable msm-performance
*disable sched_autogroup
We have proper names for regulators, so use them instead of meaningless
numbers generated at boot that make it hard to identify regulators
based on their device names alone.
Signed-off-by: Danny Lin <danny@kdrag0n.dev>
Useful when we need to set a node read-only to avoid Android
overriding the custom-set values.
Change-Id: Iad8cf81504d55b8ed75e6b5563f7cf397595ec1a
Signed-off-by: Panchajanya1999 <rsk52959@gmail.com>
Android sets the value to 50ms via vold's IdleMaint service, since
500ms is too long for GC to collect all invalid segments in time,
which results in performance degradation.
On unencrypted devices, vold fails to set this value to 50ms, thus
degrading performance over time.
Based on [1].
[1] https://github.com/topjohnwu/Magisk/pull/5462
Signed-off-by: Panchajanya1999 <rsk52959@gmail.com>
Change-Id: I80f2c29558393d726d5e696aaf285096c8108b23
Signed-off-by: Panchajanya1999 <rsk52959@gmail.com>
On high fs utilization, congestion is hit quite frequently, and
waiting for a whopping 20ms is too expensive, especially on critical
paths.
Reduce it to an amount that is unlikely to affect UI rendering paths.
The new times are as follows:
100 Hz => 1 jiffy (effective: 10 ms)
250 Hz => 2 jiffies (effective: 8 ms)
300 Hz => 2 jiffies (effective: 6 ms)
1000 Hz => 6 jiffies (effective: 6 ms)
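The effective waits in the table follow from one jiffy lasting 1000/HZ
ms; a quick check of the arithmetic (truncating integer division,
matching the table's figures):

```c
#include <assert.h>

/* Effective congestion wait in ms for a given timer frequency and
 * jiffy count: each jiffy lasts 1000/HZ ms. */
static unsigned int congestion_wait_ms(unsigned int hz, unsigned int jiffies)
{
	return jiffies * 1000 / hz;
}
```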
Co-authored-by: Danny Lin <danny@kdrag0n.dev>
Signed-off-by: Park Ju Hyung <qkrwngud825@gmail.com>
Signed-off-by: Adam W. Willis <return.of.octobot@gmail.com>
Signed-off-by: LibXZR <xzr467706992@163.com>
We don't want the background GC work causing UI jitter should it ever
collide with periods of user activity.
Signed-off-by: Danny Lin <danny@kdrag0n.dev>
GC should run as conservatively as possible to reduce latency spikes
for the user. Setting the ioprio to the idle class allows the kernel
to schedule the GC thread's I/O so that it does not affect any other
process's I/O requests.
Signed-off-by: Park Ju Hyung <qkrwngud825@gmail.com>
Signed-off-by: Danny Lin <danny@kdrag0n.dev>