Commit Graph

31062 Commits

mikairyuu
ae82a71adf Merge branch 'android-4.19-stable' of https://android.googlesource.com/kernel/common into 神速 2022-11-19 11:03:02 +10:00
Sultan Alsawaf
5a7dc5ffbc Revert "qos: Don't allow userspace to impose restrictions on CPU idle levels"
This reverts commit b45b40f737237a158530d755785926884a88730e.
2022-11-13 14:13:50 +10:00
Greg Kroah-Hartman
9e134db9c7 This is the 4.19.265 stable release

Merge 4.19.265 into android-4.19-stable

Changes in 4.19.265
	NFSv4.1: Handle RECLAIM_COMPLETE trunking errors
	NFSv4.1: We must always send RECLAIM_COMPLETE after a reboot
	nfs4: Fix kmemleak when allocate slot failed
	net: dsa: Fix possible memory leaks in dsa_loop_init()
	RDMA/qedr: clean up work queue on failure in qedr_alloc_resources()
	nfc: s3fwrn5: Fix potential memory leak in s3fwrn5_nci_send()
	nfc: nfcmrvl: Fix potential memory leak in nfcmrvl_i2c_nci_send()
	net: fec: fix improper use of NETDEV_TX_BUSY
	ata: pata_legacy: fix pdc20230_set_piomode()
	net: sched: Fix use after free in red_enqueue()
	net: tun: fix bugs for oversize packet when napi frags enabled
	ipvs: use explicitly signed chars
	ipvs: fix WARNING in __ip_vs_cleanup_batch()
	ipvs: fix WARNING in ip_vs_app_net_cleanup()
	rose: Fix NULL pointer dereference in rose_send_frame()
	mISDN: fix possible memory leak in mISDN_register_device()
	isdn: mISDN: netjet: fix wrong check of device registration
	btrfs: fix inode list leak during backref walking at resolve_indirect_refs()
	btrfs: fix ulist leaks in error paths of qgroup self tests
	Bluetooth: L2CAP: Fix use-after-free caused by l2cap_reassemble_sdu
	Bluetooth: L2CAP: fix use-after-free in l2cap_conn_del()
	net: mdio: fix undefined behavior in bit shift for __mdiobus_register
	net, neigh: Fix null-ptr-deref in neigh_table_clear()
	ipv6: fix WARNING in ip6_route_net_exit_late()
	media: s5p_cec: limit msg.len to CEC_MAX_MSG_SIZE
	media: cros-ec-cec: limit msg.len to CEC_MAX_MSG_SIZE
	media: dvb-frontends/drxk: initialize err to 0
	HID: saitek: add madcatz variant of MMO7 mouse device ID
	i2c: xiic: Add platform module alias
	Bluetooth: L2CAP: Fix attempting to access uninitialized memory
	block, bfq: protect 'bfqd->queued' by 'bfqd->lock'
	btrfs: fix type of parameter generation in btrfs_get_dentry
	tcp/udp: Make early_demux back namespacified.
	kprobe: reverse kp->flags when arm_kprobe failed
	tracing/histogram: Update document for KEYS_MAX size
	capabilities: fix potential memleak on error path from vfs_getxattr_alloc()
	ALSA: usb-audio: Add quirks for MacroSilicon MS2100/MS2106 devices
	efi: random: reduce seed size to 32 bytes
	parisc: Make 8250_gsc driver dependend on CONFIG_PARISC
	parisc: Export iosapic_serial_irq() symbol for serial port driver
	parisc: Avoid printing the hardware path twice
	ext4: fix warning in 'ext4_da_release_space'
	KVM: x86: Mask off reserved bits in CPUID.80000008H
	KVM: x86: emulator: em_sysexit should update ctxt->mode
	KVM: x86: emulator: introduce emulator_recalc_and_set_mode
	KVM: x86: emulator: update the emulation mode after CR0 write
	linux/bits.h: make BIT(), GENMASK(), and friends available in assembly
	wifi: brcmfmac: Fix potential buffer overflow in brcmf_fweh_event_worker()
	Linux 4.19.265

Change-Id: Ic909280cdb4170c18dbcd7fcfceaa653d8dc18bd
Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
2022-11-12 14:08:30 +00:00
Guanglei Li
0abd7ac721 ANDROID: sched/fair: correct pelt load information in sched-pelt.h
With the following commit:

cb22d9159761 ("sched/fair: add support to tune PELT ramp/decay timings")

PELT introduced 16ms and 8ms options for the load/utilization half-life
decay. The precomputed load information included in sched-pelt.h is
generated by Documentation/scheduler/sched-pelt.c.

With that commit, runnable_avg_yN_sum[]/LOAD_AVG_MAX/LOAD_AVG_MAX_N were
precomputed incorrectly for the 16ms/8ms half-lives.

Bug: 120440300
Change-Id: I83d90963b714449ec8036423ce8bc25f0b4cd6b9
Signed-off-by: Guanglei Li <guanglei.li@unisoc.com>
Signed-off-by: Ke Wang <ke.wang@unisoc.com>
[kdrag0n: Regenerated for android-4.19]
Signed-off-by: Danny Lin <danny@kdrag0n.dev>
2022-11-12 11:25:07 +00:00
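
For reference, the constants in sched-pelt.h can be regenerated for a given
half-life with a small userspace program in the spirit of
Documentation/scheduler/sched-pelt.c. A minimal sketch (the in-tree generator
also emits runnable_avg_yN_inv[] and runnable_avg_yN_sum[], with fixed-point
rounding that this sketch simplifies):

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        int halflife = 16;                   /* PELT periods (~1ms each): 32, 16 or 8 */
        double y = pow(0.5, 1.0 / halflife); /* per-period decay: y^halflife = 0.5 */
        long max = 1024, last = 0;

        /* LOAD_AVG_MAX converges to the fixed point of m = floor(m * y) + 1024 */
        while (max != last) {
            last = max;
            max = (long)(max * y) + 1024;
        }
        printf("y = %.8f, LOAD_AVG_MAX = %ld\n", y, max);
        return 0;
    }
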
Patrick Bellasi
9bbe3c433e FROMLIST: sched/fair: add support to tune PELT ramp/decay timings
The PELT half-life is the time [ms] required by the PELT signal to build
up a 50% load/utilization, starting from zero. This time is currently
hardcoded to be 32ms, a value which seems to make sense for most of the
workloads.

However, 32ms has been verified to be too long for certain classes of
workloads. For example, in the mobile space many tasks affecting the
user-experience run with a 16ms or 8ms cadence, since they need to match
the common 60Hz or 120Hz refresh rate of the graphics pipeline.
This has contributed so far to the idea that "PELT is too slow" to
properly track the utilization of interactive mobile workloads,
especially compared to alternative load tracking solutions which provide
a better representation of task demand in the range of 10-20ms.

A faster PELT ramp-up time could give some advantages to speed-up the
time required for the signal to stabilize and thus to better represent
task demands in the mobile space. As a downside, it also reduces the
decay time, and thus we forget the load/utilization of sleeping tasks
(or idle CPUs) faster.

Fortunately, since the integration of the utilization estimation
support in mainline kernel:

   commit 7f65ea42eb ("sched/fair: Add util_est on top of PELT")

a fast decay time is no longer an issue for tasks utilization estimation.
Although estimated utilization does not slow down the decay of blocked
utilization on idle CPUs, for mobile workloads this seems not to be a
major concern compared to the benefits in interactivity responsiveness.

Let's add a compile time option to choose the PELT speed which better
fits for a specific system. By default the current 32ms half-life is
used, but we can also compile a kernel to use a faster ramp-up time of
either 16ms or 8ms. These two configurations have been verified to give
PELT a further improvement in performance, compared to other out-of-tree
load tracking solutions, when it comes to tracking interactive workloads,
thus better supporting both task placement and frequency selection.

Signed-off-by: Patrick Bellasi <patrick.bellasi@arm.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Paul Turner <pjt@google.com>
Cc: Vincent Guittot <vincent.guittot@linaro.org>
Cc: Joel Fernandes <joelaf@google.com>
Cc: Morten Rasmussen <morten.rasmussen@arm.com>
Cc: linux-doc@vger.kernel.org
Cc: linux-kernel@vger.kernel.org

[
 backport from LKML:
 Message-ID: <20180409165134.707-1-patrick.bellasi@arm.com>
]
Signed-off-by: Patrick Bellasi <patrick.bellasi@arm.com>
Change-Id: I50569748918b799ac4bf4e7d2b387253080a0fd2
[kdrag0n: Forward-ported from kernel/common android-4.14 to
          android-4.19]
Signed-off-by: Danny Lin <danny@kdrag0n.dev>
2022-11-12 11:25:07 +00:00
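
Restating the relationship this patch tunes: with a half-life HL (in ~1ms
PELT periods), each period multiplies the signal by a decay factor y where

    y^HL = 1/2  =>  y = 2^(-1/HL)
    HL = 32: y ~= 0.9786    HL = 16: y ~= 0.9576    HL = 8: y ~= 0.9170

so after sleeping for t ms, a task's remembered utilization has decayed by
2^(-t/HL), which is why a shorter half-life both ramps up and forgets faster.
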
Rick Yiu
7b8fa2379f sched/fair: Fix kernel warning
Fix idle_get_state_idx() being called without holding the RCU read lock.

Bug: 189064175
Signed-off-by: Rick Yiu <rickyiu@google.com>
Change-Id: I719b2917ad45a9a03e8f2aa16f6a4cb356c117be
2022-11-12 11:25:07 +00:00
Quentin Perret
1ff58f0fe0 BACKPORT: sched/fair: Fix overutilized update in enqueue_task_fair()
[ Upstream commit 8e1ac4299a6e8726de42310d9c1379f188140c71 ]

enqueue_task_fair() attempts to skip the overutilized update for new
tasks as their util_avg is not accurate yet. However, the flag we check
to do so is overwritten earlier on in the function, which makes the
condition pretty much a nop.

Fix this by saving the flag early on.

Fixes: 2802bf3cd936 ("sched/fair: Add over-utilization/tipping point indicator")
Reported-by: Rick Yiu <rickyiu@google.com>
Signed-off-by: Quentin Perret <qperret@google.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Vincent Guittot <vincent.guittot@linaro.org>
Reviewed-by: Valentin Schneider <valentin.schneider@arm.com>
Link: https://lkml.kernel.org/r/20201112111201.2081902-1-qperret@google.com
Change-Id: I04a99c7db2d0559e838343762a928ac6caa1a9c4
2022-11-12 11:25:07 +00:00
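
A minimal sketch of the shape of the fix (simplified from the upstream
diff; the enqueue walk itself is elided): the flags argument is clobbered
while walking the sched_entity hierarchy, so the wakeup bit is latched
first.

    static void enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
    {
        /* latch "is this a brand-new task?" before flags is reused below */
        int task_new = !(flags & ENQUEUE_WAKEUP);

        /* ... the enqueue loop may overwrite: flags = ENQUEUE_WAKEUP; ... */

        if (!task_new)  /* a new task's util_avg is not accurate yet */
            update_overutilized_status(rq);
    }
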
Rick Yiu
569cba1415 kernel/sched: spread big prefer_idle tasks to little cores
Spreading those prefer_idle tasks that are supposed to run on big
cores to little cores would help reduce task runnable time. It may
help performance in some cases.

Bug: 166626809
Test: app launch/uibench/cuj test
Change-Id: If496a29128798639db71946fe64847954ff533ca
Signed-off-by: Rick Yiu <rickyiu@google.com>
2022-11-12 11:25:06 +00:00
Daniel Bristot de Oliveira
fdab3bbf27 UPSTREAM: sched/rt: Disable RT_RUNTIME_SHARE by default
The RT_RUNTIME_SHARE sched feature enables the sharing of rt_runtime
between CPUs, allowing a CPU to run a real-time task up to 100% of the
time while leaving more space for non-real-time tasks to run on the CPUs
that lend rt_runtime.

The problem is that a CPU can easily borrow enough rt_runtime to allow
a spinning rt-task to run forever, starving per-cpu tasks like kworkers,
which are non-real-time by design.

This patch disables RT_RUNTIME_SHARE by default, avoiding this problem.
The feature will still be present for users that want to enable it,
though.

Signed-off-by: Daniel Bristot de Oliveira <bristot@redhat.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Tested-by: Wei Wang <wvw@google.com>
Link: https://lkml.kernel.org/r/b776ab46817e3db5d8ef79175fa0d71073c051c7.1600697903.git.bristot@redhat.com
(cherry picked from commit 2586af1ac187f6b3a50930a4e33497074e81762d)
Change-Id: Ibb1b185d512130783ac9f0a29f0e20e9828c86fd

Bug: 169673278
Test: build, boot and check the trace with RT task
Signed-off-by: Kyle Lin <kylelin@google.com>
Change-Id: Iffede8107863b02ad4a0cb902fc8119416931bdb
2022-11-12 11:25:06 +00:00
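
The upstream change itself amounts to flipping one default in
kernel/sched/features.h; with CONFIG_SCHED_DEBUG, the old behavior can
still be restored at runtime:

    /* kernel/sched/features.h: was SCHED_FEAT(RT_RUNTIME_SHARE, true) */
    SCHED_FEAT(RT_RUNTIME_SHARE, false)

    /* re-enable at runtime if desired:
     *   echo RT_RUNTIME_SHARE > /sys/kernel/debug/sched_features
     */
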
Rick Yiu
a1460afbc4 sched: refine code for computing energy
When computing the energy of each cpu, we should simulate the boost
margin as if the task were enqueued on that cpu. This is to sync with
the ACK change aosp/1346105.

Bug: 158637636
Test: build pass
Change-Id: If5cbec9ac04fea46830c32f797c6b09dce1ea1a2
Signed-off-by: Rick Yiu <rickyiu@google.com>
2022-11-12 11:25:06 +00:00
Rick Yiu
da27e9854a sched/fair: refine can_migrate_boosted_task
Ideally, only one task that prefers high capacity and has prio <= 120
will be put on each big core. However, sometimes there is a race
condition in which more than one cpu wakes up such tasks at the same
time, or there are no idle cpus to select at the moment, so more than
one such task is put on the same big core. In this case, we should
allow little cores to pull such tasks.

Bug: 160663228
Test: build pass
Signed-off-by: Rick Yiu <rickyiu@google.com>
Change-Id: Id68a2a831e7c85aa167fcb05891ca085ebf3e81f
2022-11-12 11:25:06 +00:00
Jimmy Shiu
6e37e97e35 sched/core: fix userspace affining threads incorrectly by task name.
To identify certain apps which request max cpu freq and affine their
tasks to specific cpus, besides checking the lib name, the task name is
also a factor by which we can identify the suspicious task.

Test: build and test the 'perfect kick 2' game.
Bug: 163293825
Bug: 161324271
Change-Id: I4359859db743b4c9122e9df40af0b109370e8f1f
Signed-off-by: Jimmy Shiu <jimmyshiu@google.com>
2022-11-12 11:25:06 +00:00
Kyle Lin
f2ae9c47a3 kernel: sched: account for real time utilization
PELT doesn't account for real-time task utilization in cpu_util().
As a result, a CPU busy running an RT task is considered low
utilization by the scheduler. Fix this by taking real-time load
into account.

Bug: 147385228
Bug: 160663228
Bug: 162213449
Test: boot to home and run audio test

Change-Id: Ie4412b186608b9a618f0d35cee9a7310db481f7c
Signed-off-by: Kyle Lin <kylelin@google.com>
(cherry picked from commit 8bc7fc013391af48aea7d0556bacb144a7328c30)
2022-11-12 11:25:05 +00:00
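
A hedged sketch of the idea (4.19 already tracks a separate RT PELT
signal in rq->avg_rt; the actual Pixel patch may differ in detail): fold
RT-class utilization into what cpu_util() reports, so a CPU saturated by
an RT task no longer looks idle to CFS.

    static unsigned long cpu_util(int cpu)
    {
        struct rq *rq = cpu_rq(cpu);
        unsigned long util = READ_ONCE(rq->cfs.avg.util_avg);

        util += READ_ONCE(rq->avg_rt.util_avg);  /* RT-class utilization */

        return min_t(unsigned long, util, capacity_orig_of(cpu));
    }
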
Pavankumar Kondeti
d31aa4ff20 sched: Improve the scheduler
This change is for general scheduler improvement.

Change-Id: I50d41aa3338803cbd45ff6314b2bb3978c59282b
Signed-off-by: Pavankumar Kondeti <pkondeti@codeaurora.org>
2022-11-12 11:25:05 +00:00
Pavankumar Kondeti
029925a787 sched/core: Fix use after free issue in is_sched_lib_based_app()
The is_sched_lib_based_app() function introduced by commit d43b69c4ad2a
("sched/core: fix userspace affining threads incorrectly") traverses all
the executable VMA regions of a task for which an affinity change is
requested by userspace. The mm->mmap_sem lock is acquired to lock the
VMA regions; however, the task's mm itself can go away when the task
exits. get_task_struct() does not prevent this from happening. Add
protection by incrementing the task's mm reference count.

Change-Id: I39d835a8d7d53d9b9eca90baf73d3fcfad9d164b
Signed-off-by: Pavankumar Kondeti <pkondeti@codeaurora.org>
2022-11-12 11:25:05 +00:00
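
A sketch of the protection described above (simplified: the real function
resolves the task from a pid and matches library names supplied through
userspace configuration):

    static bool is_sched_lib_based_app(struct task_struct *p)
    {
        struct vm_area_struct *vma;
        struct mm_struct *mm;
        bool found = false;

        /* get_task_mm() bumps mm_users, so the mm cannot be torn down
         * while we walk it even if the task exits; it returns NULL if
         * the task is already past exit_mm(). */
        mm = get_task_mm(p);
        if (!mm)
            return false;

        down_read(&mm->mmap_sem);
        for (vma = mm->mmap; vma; vma = vma->vm_next) {
            if ((vma->vm_flags & VM_EXEC) && vma->vm_file) {
                /* ... compare the mapped file's name against the
                 * configured library name, set found ... */
            }
        }
        up_read(&mm->mmap_sem);
        mmput(mm);
        return found;
    }
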
Abhijeet Dharmapurikar
b6f796997f sched/core: fix userspace affining threads incorrectly
Certain userspace applications, to achieve max performance, affine
their threads to the cpus that run the fastest. This is not always the
correct strategy. For example, in certain architectures all the cores
have the same max freq but a few of them have a bigger cache. Affining
to the cpus with the bigger cache is advantageous, but such an
application would end up affining its threads to all the cores.
Similarly, if an architecture has just one cpu that runs at max freq,
it ends up crowding all its threads on that single core, which is
detrimental to performance.

To address this issue, we need to detect a suspicious looking affinity
request from userspace and check if it links in a particular library.
The latter can easily be detected by traversing executable vm areas
that map a file and checking for that library name.
When such an affinity request is found, change it to use a proper
affinity. The suspicious affinity request, the proper affinity request
and the library name can be configured from userspace.

Change-Id: I6bb8c310ca54c03261cc721f28dfd6023ab5591a
Signed-off-by: Abhijeet Dharmapurikar <adharmap@codeaurora.org>
2022-11-12 11:25:05 +00:00
Rick Yiu
71b7e54063 sched: fine tune task placement for prioritized tasks
A prioritized task is defined as one that prefers high capacity and has
prio <= 120. We found that on a chipset with only 2 big cores, when one
big core is occupied by busy non-prioritized tasks, prioritized tasks
could all be scheduled to the other big core. This may affect UX because
the prioritized tasks are mostly UI related. Address this problem by
putting at least one prioritized task on a cpu which has no prioritized
task running on it. Besides, if both big cores already have prioritized
tasks on them, then schedule arriving prioritized tasks to an idle
little core, if there is one, in the hope of not being blocked by
existing prioritized tasks.

Bug: 161190988
Bug: 160663228
Test: tasks scheduled as expected
Signed-off-by: Rick Yiu <rickyiu@google.com>
Change-Id: I880e3a68a5c8fbfd0902e1b571a83bfd88ce1b96
2022-11-12 11:25:05 +00:00
Wei Wang
1a8e5a8f3c sched: fair: placement optimization for heavy load
Previously we used the pure CFS wakeup path in the overutilized case.
This is a tweaked version that activates the path only for important
tasks.

Bug: 161190988
Bug: 160883639
Test: boot and systrace
Signed-off-by: Wei Wang <wvw@google.com>
Change-Id: I2a27f241b3ba32a04cf6f88deb483d6636440dcf
2022-11-12 11:25:04 +00:00
Rick Yiu
245daa0d15 Revert "sched: fine tune task placement for prioritized tasks"
This reverts commit 5682519ea38479e69cbb4a5445d811ca3d6cf67e.

Bug: 162300017
Test: ITS pass
Signed-off-by: Rick Yiu <rickyiu@google.com>
Change-Id: I29f8e84118cf62b0bf4d8b5556df828a8f648781
2022-11-12 11:25:04 +00:00
Rick Yiu
aa57fb3819 sched: fine tune task placement for prioritized tasks
A prioritized task is defined as one that prefers high capacity and has
prio <= 120. We found that on a chipset with only 2 big cores, when one
big core is occupied by busy non-prioritized tasks, prioritized tasks
could all be scheduled to the other big core. This may affect UX because
the prioritized tasks are mostly UI related. Address this problem by
putting at least one prioritized task on a cpu which has no prioritized
task running on it. Besides, if both big cores already have prioritized
tasks on them, then schedule arriving prioritized tasks to an idle
little core, if there is one, in the hope of not being blocked by
existing prioritized tasks.

Bug: 161190988
Bug: 160663228
Test: tasks scheduled as expected
Signed-off-by: Rick Yiu <rickyiu@google.com>
Change-Id: I880e3a68a5c8fbfd0902e1b571a83bfd88ce1b96
2022-11-12 11:25:04 +00:00
Rick Yiu
dcbacf46fa sched/fair: schedule lower priority tasks from little cores
With the scheduler placement hint, there could still be several boosted
tasks contending for big cores. On chipsets with fewer big cores, this
might cause problems like jank. To improve it, schedule tasks of prio
>= DEFAULT_PRIO on little cores if they can fit there, even for tasks
that prefer high-capacity cpus, since such a prio means they are less
important.

Bug: 158936596
Test: tasks scheduled as expected
Signed-off-by: Rick Yiu <rickyiu@google.com>
Change-Id: Ic0cc06461818944e3e97ec0493c0d9c9f1a5e217
2022-11-12 11:25:04 +00:00
Rick Yiu
c65251c440 sched/fair: do not use boosted margin for prefer_high_cap case
For the prefer_high_cap case, placement already starts from the mid/max
cpus, so there is no need to use the boosted margin for task placement.

Bug: 160082718
Test: tasks scheduled as expected
Signed-off-by: Rick Yiu <rickyiu@google.com>
Change-Id: I4df27b1e468484f5d9aedfa57ee444f397a8da81
2022-11-12 11:25:04 +00:00
Rick Yiu
d674064463 sched/fair: use actual cpu capacity to calculate boosted util
Currently, when calculating the boosted util for a cpu, a fixed value
of 1024 is used. So when top-app tasks are moved to the little cluster,
which has much lower capacity than the big cluster, the calculated freq
will be high even when the cpu util is low. This results in higher
power consumption, especially on architectures which have more little
cores than big cores. Replacing the fixed value of 1024 with the actual
cpu capacity reduces the freq calculated on the little cluster.

Bug: 152925197
Test: boosted util reduced on little cores
Signed-off-by: Rick Yiu <rickyiu@google.com>
Change-Id: I80cdd08a2c7fa5e674c43bfc132584d85c14622b
2022-11-12 11:25:03 +00:00
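
A hedged sketch of the margin change (based on schedtune's
signal-proportional compensation; the exact Pixel diff may differ):
measure the boost headroom against the cpu's own capacity instead of the
fixed SCHED_CAPACITY_SCALE of 1024.

    /* margin = boost% of the headroom above the current signal */
    static long schedtune_margin(unsigned long capacity, unsigned long signal,
                                 long boost)
    {
        long long margin = capacity - signal; /* was: SCHED_CAPACITY_SCALE - signal */

        margin *= boost;
        margin /= 100;
        return margin;
    }

    /* callers now pass capacity_orig_of(cpu) for the cpu being evaluated */
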
Rick Yiu
1c86bcc138 sched: separate capacity margin for boosted tasks
With the introduction of the placement hint patch, boosted tasks will
not necessarily be scheduled on big cores. We tune the capacity margin
to let important boosted tasks get scheduled on big cores. However, the
capacity margin affects all groups of tasks, so non-boosted tasks get
more chances to be scheduled on big cores, too. Solve this by separating
the capacity margin for boosted tasks.

Bug: 152925197
Test: margin set correctly
Signed-off-by: Rick Yiu <rickyiu@google.com>
Change-Id: I0e059c56efa9bc8513f0ef4b0f6ab8f5d04a592a
2022-11-12 11:25:03 +00:00
Wei Wang
b8399d8efc sched: separate boost signal from placement hint
Test: build and boot
Bug: 144451857
Bug: 147785606
Bug: 152925197
Change-Id: Ib2d86a72cad12971a99c7105813473211a7fbd76
Signed-off-by: Wei Wang <wvw@google.com>
2022-11-12 11:25:03 +00:00
spakkkk
3d6af0d7a9 kernel: sched: merge changes from LA.UM.9.12.R2.10.00.00.685.011
56acc710a6
2022-11-12 11:25:03 +00:00
Jimmy Shiu
20053398a9 Use find_best_target to select cpu for a zero-util task
Always choosing the prev_cpu for a zero-utilization task might lead to
tasks competing for the same cpu and increase the overall task
execution time. Instead, select a cpu with find_best_target to spread
the load onto other cpus.

Bug: 143857473
Test: https://paste.googleplex.com/5570415529295872
Signed-off-by: Jimmy Shiu <jimmyshiu@google.com>
Change-Id: Ibeb766957d2dea5fee85c798d8a9f7b62c2c1a09
2022-11-12 11:25:02 +00:00
lucaswei
574d524746 sched/fair: Fix compilation issues for !CONFIG_SCHED_WALT
Fix compilation issues for !CONFIG_SCHED_WALT from the following two
commits:

commit a80cf2007d ("sched: Add support to spread tasks")

Bug: 154086870
Bug: 153823050
Signed-off-by: lucaswei <lucaswei@google.com>
Change-Id: I89e224e18f6700ea2abcd162a5b9f3f938a7ad92
2022-11-12 11:25:02 +00:00
Saravana Kannan
b80407dfe1 GKI: sched: Add back the root_domain.overutilized field
This field is necessary to maintain ABI compatibility with ACK. Add it
back, but leave it unused.

Bug: 153905799
Change-Id: Ic9ef5640fa77c3aada023843658e7e4de3bada82
Signed-off-by: Saravana Kannan <saravanak@google.com>
2022-11-12 11:25:02 +00:00
Saravana Kannan
9c949d2746 GKI: sched: Compile out push_task field in struct rq
The push_task field is a WALT related field that shouldn't be needed
since we run PELT. So conditionally compile in the field only when WALT
is enabled. Also add #ifdefs around all the uses of this field.

Bug: 153905799
Change-Id: I12edd3f2180ebab14719ba2548e83519beffacc2
Signed-off-by: Saravana Kannan <saravanak@google.com>
2022-11-12 11:25:02 +00:00
Wei Wang
d8903199a6 sched: restrict iowait boost to tasks with prefer_idle
Currently, iowait doesn't distinguish background/foreground tasks, and
we have seen cases where a device runs at high frequency unnecessarily
when running some background I/O. This patch limits the iowait boost to
tasks with prefer_idle only. Specifically, on Pixel, those are the
foreground and top-app tasks.

Bug: 130308826
Bug: 144961757
Test: Boot and trace
Change-Id: I2d892beeb4b12b7e8f0fb2848c23982148648a10
Signed-off-by: Wei Wang <wvw@google.com>
2022-11-12 11:25:02 +00:00
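
A sketch of where the gating lands (in enqueue_task_fair(), where the
iowait flag is forwarded to cpufreq; schedtune_prefer_idle() is the
Android schedtune hook):

    /* only tasks in a prefer_idle schedtune group (foreground/top-app
     * on Pixel) may trigger the schedutil iowait boost */
    if (p->in_iowait && schedtune_prefer_idle(p))
        cpufreq_update_util(rq, SCHED_CPUFREQ_IOWAIT);
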
Wei Wang
ef3ff76413 trace: sched: add capacity change tracing
Add a new tracepoint, sched_capacity_update, emitted when the capacity
value is updated.

Bug: 144177658
Bug: 144961676
Test: Boot and grab trace to check
Change-Id: I30ee55bfcc2fb5a92dd448ad364768ee428f3cc4
Signed-off-by: Wei Wang <wvw@google.com>
Signed-off-by: Jimmy Shiu <jimmyshiu@google.com>
2022-11-12 11:25:01 +00:00
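
A hedged sketch of such a tracepoint definition (the field set is
assumed, not taken from the actual patch):

    TRACE_EVENT(sched_capacity_update,

        TP_PROTO(int cpu, unsigned long capacity),

        TP_ARGS(cpu, capacity),

        TP_STRUCT__entry(
            __field(int,           cpu)
            __field(unsigned long, capacity)
        ),

        TP_fast_assign(
            __entry->cpu      = cpu;
            __entry->capacity = capacity;
        ),

        TP_printk("cpu=%d capacity=%lu", __entry->cpu, __entry->capacity)
    );
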
Miguel de Dios
ca3ca78250 sched: reduce softirq conflicts with RT
This is a forward port of pa/890483, with modifications to the original
patch due to changes in sched/softirq.c; it applies the same logic.

We're finding audio glitches caused by audio-producing RT tasks
that are either interrupted to handle softirq's or that are
scheduled onto cpu's that are handling softirq's.
In a previous patch, we attempted to catch many cases of the
latter problem, but it's clear that we are still losing
significant numbers of races in some apps.

This patch attempts to address the following problem:
   It attempts to reduce the most common windows in which
   we lose the race between scheduling an RT task on a remote
   core and starting to handle softirqs on that core.
   We still lose some races, but we lose significantly fewer.
   (And we don't want to introduce any heavyweight forms
   of synchronization on these paths.)

Bug: 64912585
Bug: 136771796
Bug: 144961676
Change-Id: Ida89a903be0f1965552dd0e84e67ef1d3158c7d8
Signed-off-by: Miguel de Dios <migueldedios@google.com>
Signed-off-by: Jimmy Shiu <jimmyshiu@google.com>
2022-11-12 11:25:01 +00:00
Rick Yiu
060d70c387 sched/fair: let scheduler skip util checking if cpu is idle
The current cpu util includes the util of runnable tasks plus the
recent utilization of currently non-runnable tasks, so it may return a
non-zero value even when there is no task running on a cpu. When the
scheduler is selecting a cpu for a task, it checks whether the cpu util
is over its capacity, so it could skip a cpu even when it is idle. Let
the scheduler skip the util check if the task prefers an idle cpu and
the cpu is idle.

Bug: 133284637
Bug: 144961676
Test: cpu selected as expected
Change-Id: I2c15d6b79b1cc83c72e84add70962a8e74c178b8
Signed-off-by: Rick Yiu <rickyiu@google.com>
Signed-off-by: Jimmy Shiu <jimmyshiu@google.com>
2022-11-12 11:25:01 +00:00
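
A sketch of the intended change inside the cpu-selection loop
(find_best_target-style; the surrounding loop and the prefer_idle flag
are assumed context):

    unsigned long new_util = cpu_util(cpu) + task_util_est(p);

    /* residual (blocked) util can make an idle cpu look over capacity;
     * don't let that disqualify it for a prefer-idle task */
    if (new_util > capacity_orig_of(cpu) &&
        !(prefer_idle && idle_cpu(cpu)))
        continue;
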
Miguel de Dios
da4cf7a00a kernel: sched: Mitigate non-boosted tasks preempting boosted tasks
Currently, when a boosted task is scheduled, we use prefer_idle to try
to get it to an idle core. Once it's scheduled, there is a possibility
that we schedule a non-boosted task on the same core where the boosted
task is running. This change aims to mitigate that possibility by
checking if the core we're targeting has a boosted task and, if so,
using the next best idle core instead.

Bug: 131626264
Bug: 144961676
Change-Id: I3d321e1c71f96526f55f7f3a56e32db411311aa2
Signed-off-by: Miguel de Dios <migueldedios@google.com>
Signed-off-by: Jimmy Shiu <jimmyshiu@google.com>
2022-11-12 11:25:01 +00:00
Wei Wang
34fc5a101d Revert "sched/core: fix userspace affining threads incorrectly"
This reverts commit d43b69c4ad2a977406c84d47fe8a5261e0099e78.

Bug: 133481659
Bug: 144961676
Test: build
Change-Id: I615023c611c4de1eb334e4374af7306991f4216b
Signed-off-by: Wei Wang <wvw@google.com>
Signed-off-by: Jimmy Shiu <jimmyshiu@google.com>
2022-11-12 11:25:01 +00:00
Wei Wang
a756d68b85 Revert "sched/core: Fix use after free issue in is_sched_lib_based_app()"
This reverts commit 0e6ca1640cec57004d702e5e7c3e59ba77541e2f.

Bug: 133481659
Bug: 144961676
Test: build
Change-Id: Ie6a0b5e46386c98882614be19dedc61ffd3870e5
Signed-off-by: Wei Wang <wvw@google.com>
Signed-off-by: Jimmy Shiu <jimmyshiu@google.com>
2022-11-12 11:25:00 +00:00
Wei Wang
79e8a318d9 Revert "sched: Improve the scheduler"
This reverts commit a3dd94a1bb80ec98924070f28ba80d93a4d559a6.

Bug: 133481659
Bug: 144961676
Test: build
Change-Id: Ib23609315f3446223521612621fe54469537c172
Signed-off-by: Wei Wang <wvw@google.com>
Signed-off-by: Jimmy Shiu <jimmyshiu@google.com>
2022-11-12 11:25:00 +00:00
Rick Yiu
1de73888bd sched/fair: prefer exclusive mid cluster cpu for top-app task
For a top-app task, if it fits in the mid cluster, prefer the first
cpu in that cluster whenever possible, which happens to be the top-app
exclusive cpu in our cpuset design.

Bug: 128477368
Bug: 144961676
Test: top-app tasks assigned as expected
Change-Id: Ifdd0614f6c8c03edde4ed674c4193f4ba31aac16
Signed-off-by: Rick Yiu <rickyiu@google.com>
Signed-off-by: Jimmy Shiu <jimmyshiu@google.com>
2022-11-12 11:25:00 +00:00
Connor O'Brien
ca35ac869e sched: delete unused & buggy function definitions
None of these functions does what its name implies when
CONFIG_SCHED_WALT=n. While all are currently unused, future patches
could introduce subtle bugs by calling any of them from non WALT
specific code. Delete the functions so it's obvious if new callers are
added.

Bug: 144961676
Test: build kernel
Change-Id: Ib7552afb5668b48fe2ae56307016e98716e00e63
Signed-off-by: Connor O'Brien <connoro@google.com>
Signed-off-by: Jimmy Shiu <jimmyshiu@google.com>
2022-11-12 11:25:00 +00:00
Connor O'Brien
d2be4585d5 sched/fair: fix implementation of is_min_capacity_cpu()
With CONFIG_SCHED_WALT disabled, is_min_capacity_cpu() is defined to
always return true, which breaks the intended behavior of
task_fits_max(). Revise is_min_capacity_cpu() to return correct
results.

An earlier version of this patch failed to handle the case when
min_cap_orig_cpu == -1 while sched domains are being updated due to
hotplug. Add a check for this case.

Test: trace shows increased top-app placement on medium cores
Bug: 117499098
Bug: 128477368
Bug: 130756111
Bug: 144961676
Change-Id: Ia2b41aa7c57f071c997bcd0e9cdfd0808f6a2bf9
Signed-off-by: Connor O'Brien <connoro@google.com>
Signed-off-by: Jimmy Shiu <jimmyshiu@google.com>
2022-11-12 11:25:00 +00:00
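
A hedged sketch of the revised !CONFIG_SCHED_WALT helper (the commit text
itself names the min_cap_orig_cpu == -1 hotplug window; the root_domain
field name follows the Android EAS backports):

    static inline bool is_min_capacity_cpu(int cpu)
    {
        int min_cap_cpu = cpu_rq(cpu)->rd->min_cap_orig_cpu;

        /* sched domains may be mid-rebuild during hotplug */
        if (min_cap_cpu < 0)
            return false;

        return capacity_orig_of(cpu) == capacity_orig_of(min_cap_cpu);
    }
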
Jimmy Shiu
d11b3619dc sched/fair: refine some scheduler changes from AU drop
Refine some changes from AU90. One is to allow a boosted task to run on
a min-capacity cpu if it fits. The other is to check the fast-exit path
for prefer-idle tasks first.

Bug: 128477368
Bug: 130576120
Bug: 144961676
Test: task rq selection behavior is as expected

Change-Id: Ied57b37a361ed137d10167f0346f52a149d08cd6
Signed-off-by: Rick Yiu <rickyiu@google.com>
Signed-off-by: Jimmy Shiu <jimmyshiu@google.com>
2022-11-12 11:24:59 +00:00
Rick Yiu
51456080e3 BACKPORT: sched/fair: if sync flag ignored, try to place in mid cluster
If the sync flag is ignored because the current cpu is not in
the affinity mask for the target of a sync wakeup (usually
binder call), prefer to place in the mid cluster if
possible. The main case is a "top-app" task waking a
"foreground" task when the top-app task is running on
a CPU that is not in the foreground cpuset. This patch
causes the search order starting from the mid-capacity cpu to be used
when the sync flag is ignored.

backport from commit 98ae57d9eaf7
("ANDROID: sched/fair: if sync flag ignored, try to place in same cluster")

Bug: 117438867
Bug: 144961676
Test: boot to home, operation normal
Change-Id: I68d0cc05db1bc2cb02d4445c71b02215209e8c04
Signed-off-by: Rick Yiu <rickyiu@google.com>
Signed-off-by: Jimmy Shiu <jimmyshiu@google.com>
2022-11-12 11:24:59 +00:00
spakkkk
90313858c4 Revert "sched/walt: Improve the scheduler"
This reverts commit aa790dea926ec08b236cf754bf36e3b1673f4efd.
2022-11-12 11:24:59 +00:00
Connor O'Brien
1a4353af5d cpufreq: schedutil: fix check for stale utilization values
Part of the fix from commit d86ab9cff8 ("cpufreq: schedutil: use now
as reference when aggregating shared policy requests") is reversed in
commit 05d2ca242067 ("cpufreq: schedutil: Ignore CPU load older than
WALT window size") due to a porting mistake. Restore it while keeping
the relevant change from the latter patch.

Bug: 117438867
Bug: 144961676
Test: build & boot
Change-Id: I21399be760d7c8e2fff6c158368a285dc6261647
Signed-off-by: Connor O'Brien <connoro@google.com>
Signed-off-by: Jimmy Shiu <jimmyshiu@google.com>
2022-11-12 11:24:59 +00:00
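
A sketch of the restored check in sugov_next_freq_shared() (names follow
mainline schedutil; the staleness threshold is TICK_NSEC upstream and a
WALT-window-derived value in this tree):

    /* measure staleness against "now" (the current update time), not
     * against some other cpu's last update */
    s64 delta_ns = time - j_sg_cpu->last_update;

    if (delta_ns > stale_ns) {
        j_sg_cpu->iowait_boost = 0;  /* stale: drop this cpu's vote */
        continue;
    }
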
Miguel de Dios
8399535f1b sched: core: Disable double lock/unlock balance in move_queued_task()
CONFIG_LOCK_STAT shows warnings in move_queued_task() for releasing a
pinned lock. The warnings are due to the calls to
double_unlock_balance() added to snapshot WALT. Let's disable them when
not building with SCHED_WALT.

Bug: 123720375
Bug: 148940637
Change-Id: I8bff8550c4f79ca535556f6ec626f17ff5fce637
Signed-off-by: Miguel de Dios <migueldedios@google.com>
Signed-off-by: Jimmy Shiu <jimmyshiu@google.com>
2022-11-12 11:24:59 +00:00
Miguel de Dios
96c3f64cd4 sched: fair: Disable double lock/unlock balance in detach_task()
CONFIG_LOCK_STAT shows warnings in detach_task() for releasing a
pinned lock. The warnings are due to the calls to
double_unlock_balance() added to snapshot WALT. Let's disable them when
not building with SCHED_WALT.

Bug: 123720375
Bug: 148940637
Change-Id: Ibfa28b1434fa6006fa0117fd2df1a3eadb321568
Signed-off-by: Miguel de Dios <migueldedios@google.com>
Signed-off-by: Jimmy Shiu <jimmyshiu@google.com>
2022-11-12 11:24:58 +00:00
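
Both commits above take the same shape; a sketch (rq names illustrative,
from the detach_task()/move_queued_task() context):

    /* the extra rq-lock dance exists only to snapshot WALT state, and it
     * is what trips CONFIG_LOCK_STAT's pinned-lock warning; compile it
     * out entirely when WALT is off */
    #ifdef CONFIG_SCHED_WALT
        double_lock_balance(src_rq, dst_rq);
        /* ... WALT accounting that needs both rq locks held ... */
        double_unlock_balance(src_rq, dst_rq);
    #endif
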
Rick Yiu
7de89e1d8b sched/fair: apply sync wake-up to pure CFS path
Since CONFIG_SCHED_WALT is disabled, we need another way to boost perf
the way sched_boost does, and skipping EAS has a similar effect. We use
the power HAL to handle it. Also apply sync wake-up so that the pure
CFS path (taken when EAS is skipped) can benefit from it.

(Combine the following two commits
  2d21560126cb sched/fair: apply sync wake-up to pure CFS path
  9917d5335479 sched/fair: refine check for sync wake-up)

Bug: 119932121
Bug: 117438867
Bug: 144961676
Test: boot to home, operation normal
Change-Id: I970852540839881a926b7e7da5f70ef7e0185349
Signed-off-by: Rick Yiu <rickyiu@google.com>
Signed-off-by: Jimmy Shiu <jimmyshiu@google.com>
2022-11-12 11:24:58 +00:00
Connor O'Brien
63e23634b7 Revert "sched: fair: Always try to use energy efficient cpu for wakeups"
This reverts commit 63c27502786646271b4c4ba32268b727e294bbb2.

Bug: 117438867
Bug: 144961676
Test: Tracing confirms EAS is no longer always used
Change-Id: If321547a86592527438ac21c3734a9f4decda712
Signed-off-by: Connor O'Brien <connoro@google.com>
Signed-off-by: Jimmy Shiu <jimmyshiu@google.com>
2022-11-12 11:24:58 +00:00
Jimmy Shiu
7d9ac85ec1 sched: fair: avoid little cpus due to sync, prev bias
Important threads can get forced onto little cpus when the sync or
prev_bias hints are followed blindly. This patch adds a check to see
whether those paths are forcing the task to a cpu that has less
capacity than other cpus available to the task. If so, we ignore the
sync and prev_bias hints and allow the scheduler to make a free
decision.

Bug: 117438867
Bug: 144961676
Change-Id: Ie5a99f9a8b65ba9382a8d0de2ae0aad843e558d1
Signed-off-by: Miguel de Dios <migueldedios@google.com>
Signed-off-by: Jimmy Shiu <jimmyshiu@google.com>
2022-11-12 11:24:58 +00:00
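
A sketch of the capacity comparison the patch describes (helper name and
signature hypothetical):

    /* follow the sync/prev_bias hint only if the hinted cpu is not
     * strictly smaller than the best cpu otherwise available */
    static bool hinted_cpu_ok(int hinted_cpu, int best_cpu)
    {
        return capacity_orig_of(hinted_cpu) >= capacity_orig_of(best_cpu);
    }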