If the file already has the underlying blocks/extents allocated,
we don't need to start a journal transaction and can directly return
the underlying mapping. Currently ext4_iomap_begin() is used by
both the DAX & DIO paths. We can check whether the write request is an
overwrite and, if so, directly return the mapping information.
This can give a significant performance boost for multi-threaded writes,
especially random overwrites.
On a PPC64 VM with a simulated pmem (DAX) device, a ~10x performance
improvement was seen in random writes (overwrites), partly because this
optimizes away the spinlock contention during jbd2 slab cache allocation
(jbd2_journal_handle). On an x86 VM, a ~2x performance improvement was
observed.
Reported-by: Dan Williams <dan.j.williams@intel.com>
Reviewed-by: Jan Kara <jack@suse.cz>
Signed-off-by: Ritesh Harjani <riteshh@linux.ibm.com>
Link: https://lore.kernel.org/r/88e795d8a4d5cd22165c7ebe857ba91d68d8813e.1600401668.git.riteshh@linux.ibm.com
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
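The overwrite test above can be sketched as follows (a minimal userspace analog with hypothetical type and function names; the real check inside ext4_iomap_begin() consults ext4's extent status tree). A write is a pure overwrite when its range is fully covered by an already-mapped extent, so no block allocation, and hence no journal handle, is needed:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical simplified extent mapping: blocks [lblk, lblk + len) are
 * already allocated and written. */
struct extent_map {
    unsigned long lblk; /* first mapped logical block */
    unsigned long len;  /* number of mapped blocks */
};

/* Pure overwrite: the write range lies entirely inside the mapped extent,
 * so the fast path can return the mapping without starting a journal txn. */
static bool is_overwrite(const struct extent_map *map,
                         unsigned long lblk, unsigned long len)
{
    return lblk >= map->lblk && lblk + len <= map->lblk + map->len;
}
```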
Most allocations done here are rather small and can fit on the stack,
eliminating the need to allocate them dynamically. Reserve a 1024-byte
stack buffer for this purpose to avoid the overhead of dynamic
memory allocation.
1024 bytes covers most use cases; higher values were observed to cause
stack corruption.
Co-authored-by: Sultan Alsawaf <sultan@kerneltoast.com>
Signed-off-by: Park Ju Hyung <qkrwngud825@gmail.com>
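The pattern can be sketched like this (an illustrative userspace analog, not the exact kernel code): use a fixed on-stack buffer when the request fits, and fall back to the heap only for oversized requests.

```c
#include <stdlib.h>
#include <string.h>

#define STACK_BUF_SIZE 1024

/* Copy src through a scratch buffer into out, avoiding heap allocation
 * for the common small case. Returns the number of bytes written. */
static size_t process(const char *src, size_t len, char *out, size_t outlen)
{
    char stack_buf[STACK_BUF_SIZE];
    char *buf = stack_buf;

    /* Heap fallback only when the request exceeds the stack buffer. */
    if (len > sizeof(stack_buf))
        buf = malloc(len);
    if (!buf)
        return 0;

    memcpy(buf, src, len);
    /* ... transform buf in place ... */
    size_t n = len < outlen ? len : outlen;
    memcpy(out, buf, n);

    if (buf != stack_buf)
        free(buf);
    return n;
}
```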
These get allocated and freed millions of times on this kernel tree.
Use a dedicated kmem_cache pool and avoid costly dynamic memory allocations.
Signed-off-by: Park Ju Hyung <qkrwngud825@gmail.com>
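As a rough userspace analog of a dedicated kmem_cache (the kernel side is kmem_cache_create()/kmem_cache_alloc()/kmem_cache_free(); the types here are made up), a per-object-size pool keeps freed objects on a free list so hot alloc/free cycles skip the general-purpose allocator:

```c
#include <stdlib.h>

/* Minimal free-list pool for fixed-size objects. */
struct pool {
    void *free_list;   /* singly linked list threaded through free objects */
    size_t obj_size;   /* must be >= sizeof(void *) */
};

static void *pool_alloc(struct pool *p)
{
    if (p->free_list) {
        void *obj = p->free_list;
        p->free_list = *(void **)obj;  /* pop from free list (hot path) */
        return obj;
    }
    return malloc(p->obj_size);        /* cold path: grow the pool */
}

static void pool_free(struct pool *p, void *obj)
{
    *(void **)obj = p->free_list;      /* reuse object memory as the link */
    p->free_list = obj;
}
```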
struct kthread_create_info is small enough to fit comfortably on the
stack.
Signed-off-by: Park Ju Hyung <qkrwngud825@gmail.com>
Signed-off-by: Adam W. Willis <return.of.octobot@gmail.com>
struct linux_binprm isn't big and is safe to use from the stack.
Signed-off-by: Park Ju Hyung <qkrwngud825@gmail.com>
[@0ctobot: Adapted for 4.19]
Signed-off-by: Adam W. Willis <return.of.octobot@gmail.com>
These are allocated extremely frequently.
Allocate them for CONFIG_NR_CPUS CPUs as part of struct ravg's allocation.
Note that this breaks WALT debug tracing.
Signed-off-by: Park Ju Hyung <qkrwngud825@gmail.com>
Signed-off-by: Adam W. Willis <return.of.octobot@gmail.com>
healthd queries this extremely frequently, and attrname is allocated
and deallocated repeatedly.
Use the stack instead.
Signed-off-by: Park Ju Hyung <qkrwngud825@gmail.com>
Signed-off-by: Adam W. Willis <return.of.octobot@gmail.com>
These get allocated and freed millions of times on this kernel tree.
Use a dedicated kmem_cache pool and avoid costly dynamic memory allocations.
Most allocations have size
(sizeof(struct dma_buf) + sizeof(struct reservation_object)).
Serve those from the kmem_cache pool and distinguish them with the
dmabuf->from_kmem flag.
Signed-off-by: Park Ju Hyung <qkrwngud825@gmail.com>
[@0ctobot: Adapted for 4.19]
Signed-off-by: Adam W. Willis <return.of.octobot@gmail.com>
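The dual-origin idea can be sketched as follows (a simplified userspace analog; the struct, field, and function names are hypothetical stand-ins for the dma_buf code, and a single static slot stands in for the kmem_cache pool). A flag stored in the object records which allocator it came from, so the free path picks the matching release routine:

```c
#include <stdbool.h>
#include <stdlib.h>

struct buf {
    bool from_pool;  /* analogous to dmabuf->from_kmem */
    /* ... payload ... */
};

#define COMMON_SIZE sizeof(struct buf)

static struct buf pool_slot;      /* stand-in for the kmem_cache pool */
static bool pool_slot_used;

static struct buf *buf_alloc(size_t size)
{
    /* Common-size requests come from the pool; everything else from the
     * general allocator (kmalloc in the kernel). */
    if (size == COMMON_SIZE && !pool_slot_used) {
        pool_slot_used = true;
        pool_slot.from_pool = true;
        return &pool_slot;
    }
    struct buf *b = malloc(size);
    if (b)
        b->from_pool = false;
    return b;
}

static void buf_free(struct buf *b)
{
    /* The flag selects the free path (kmem_cache_free() vs. kfree()). */
    if (b->from_pool)
        pool_slot_used = false;
    else
        free(b);
}
```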
These get allocated and freed millions of times on this kernel tree.
Use a dedicated kmem_cache pool and avoid costly dynamic memory allocations.
Signed-off-by: Park Ju Hyung <qkrwngud825@gmail.com>
Signed-off-by: Adam W. Willis <return.of.octobot@gmail.com>
These get allocated and freed millions of times on this kernel tree.
Use a dedicated kmem_cache pool and avoid costly dynamic memory allocations.
Signed-off-by: Park Ju Hyung <qkrwngud825@gmail.com>
Signed-off-by: Adam W. Willis <return.of.octobot@gmail.com>
Most dentry allocations exceed 32 bytes.
Increase the object size by 192 bytes to accommodate larger allocation
requests. Since 192 is a multiple of 64, this still preserves 64-byte
cacheline alignment.
Signed-off-by: Park Ju Hyung <qkrwngud825@gmail.com>
Signed-off-by: Adam W. Willis <return.of.octobot@gmail.com>
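The alignment claim rests on 192 being a multiple of the 64-byte cacheline: adding it to any already-aligned object size leaves the size a multiple of 64. A quick check (illustrative only; the helper name is made up):

```c
/* True when a 64-byte-aligned size stays aligned after adding extra. */
static int stays_cacheline_aligned(unsigned size, unsigned extra)
{
    return (size % 64) == 0 && ((size + extra) % 64) == 0;
}
```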
These get allocated and freed millions of times on this kernel tree.
Use a dedicated kmem_cache pool and avoid costly dynamic memory allocations.
Signed-off-by: Park Ju Hyung <qkrwngud825@gmail.com>
[@0ctobot: Adapted for 4.19]
Signed-off-by: Adam W. Willis <return.of.octobot@gmail.com>
These get allocated and freed millions of times on this kernel tree.
Use a dedicated kmem_cache pool and avoid costly dynamic memory allocations.
Signed-off-by: Park Ju Hyung <qkrwngud825@gmail.com>
Signed-off-by: Adam W. Willis <return.of.octobot@gmail.com>
These get allocated and freed millions of times on this kernel tree.
Use a dedicated kmem_cache pool and avoid costly dynamic memory allocations.
Signed-off-by: Park Ju Hyung <qkrwngud825@gmail.com>
Signed-off-by: Adam W. Willis <return.of.octobot@gmail.com>
These get allocated and freed millions of times on Android.
Use a dedicated kmem_cache pool and avoid costly dynamic memory allocations.
Signed-off-by: Park Ju Hyung <qkrwngud825@gmail.com>
Signed-off-by: Adam W. Willis <return.of.octobot@gmail.com>
* Poorly maintained kernel trees often use trace_printk() without
properly guarding it with an #ifdef.
* Such usage of trace_printk() causes a warning at boot and additional
memory allocation.
This option disables all of them at once.
Change-Id: I3edd80bdc0cc6763c7184017f8c0a15de06952bb
Signed-off-by: starlight5234 <starlight5234@protonmail.ch>
https://android-review.googlesource.com/c/platform/system/core/+/938362
Hardcode this and make /proc/sys/vm/dirty_expire_centisecs a no-op.
Signed-off-by: Park Ju Hyung <qkrwngud825@gmail.com>
(cherry picked from commit b2c3e47759aade606691d5c67f250005a0f2fe1c)
(cherry picked from commit b95894f5fa7a8f8df1d14977b04ca8f255cfa373)
(cherry picked from commit bc361a6fd917533e6bb00eb8dc313d8d88aad044)
(cherry picked from commit 89385d7e3e4f1c141667ee9756f7b916c1548830)
(cherry picked from commit faa16d5fa01d646155ac8c2e749b746eca653dc2)
(cherry picked from commit 9d0a6443278bb12df5bd36dd77ca12cca50b992f)
(cherry picked from commit 6f18d30a6241074b0f4c0a602f74e9abc35898e8)
(cherry picked from commit 3a619a8c02a6b082f6b5a40de34b3c0a6a939b5c)
(cherry picked from commit a4ede9b9dc48c513858a3b465432891b4012b182)
(cherry picked from commit 18f18de6613a63b2ff2dc657fee8e472d090dca8)
(cherry picked from commit 684b4fa2175183a7424280f270aa4a30e5c818ad)
(cherry picked from commit 417a0b93c3616a26d5c4e5272b628c6e5860c53d)
Reduce lock contention by replacing the global lock with an atomic.
bug: 127722781
Change-Id: I08ed3d55bf6bf17f31f4017c82c998fb513bad3e
Signed-off-by: Kyle Lin <kylelin@google.com>
Signed-off-by: Danny Lin <danny@kdrag0n.dev>
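The pattern can be sketched with C11 atomics as a userspace analog of the kernel's atomic_t (the actual counter being converted is not named in this log, so the example is generic): a lock-protected increment becomes a single lock-free read-modify-write.

```c
#include <stdatomic.h>

/* Before: spin_lock(&lock); counter++; spin_unlock(&lock);
 * After:  one atomic RMW, no lock to contend on. */
static atomic_long counter;

/* Returns the new value, like atomic_inc_return() in the kernel. */
static long counter_inc(void)
{
    return atomic_fetch_add(&counter, 1) + 1;
}
```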
* This is not time-critical. The load on these workqueues can
sometimes be very high, especially when unbound workqueues are restricted
to the small cluster, bringing noticeable lag to userspace. Limit them
with max_active = 1 to reduce the instantaneous load.
* This fixes UI lag after running face unlock on OOS.
Signed-off-by: LibXZR <xzr467706992@163.com>
Add delay in smp2p_sleepstate suspend path to prevent wakeup loop with
non wakeup streaming from sensors. This delay should be the maximum
time for sensors samples in flight to be drained.
Change-Id: I79f944caa2ccdd65dc1649aef8d6359f44612479
Signed-off-by: Chris Lew <clew@codeaurora.org>
Signed-off-by: Ananth Raghavan Subramanian <sananth@codeaurora.org>
Signed-off-by: Kelly Rossmoyer <krossmo@google.com>
Bug: 123377615
Bug: 131260677
Sometimes rmnet_ipa fails to suspend with the following trace:
NETDEV WATCHDOG: rmnet_ipa0 (): transmit queue 0 timed out
Signed-off-by: celtare21 <celtare21@gmail.com>
Don't use an alarm timer for the dfc powersave check, eliminating the
need for a wakelock. This allows the AP to enter suspend sooner.
Change-Id: I7153055d0231a65125ad88808db9e1d0032f24d9
Signed-off-by: Weiyi Chen <quic_weiyic@quicinc.com>
Signed-off-by: Juhyung Park <qkrwngud825@gmail.com>
Signed-off-by: Adithya R <gh0strider.2k18.reborn@gmail.com>
* OnePlus uses a dirty list and userspace nodes to override restart_level
for some subsystems. Why not just ignore all RESET_SOC requests and use a
soft reset instead? Nobody wants to see their device randomly reboot.
* Note that commit 578bed2 is also needed to fix broken irq freeing when
soft resetting the modem.
Signed-off-by: LibXZR <xzr467706992@163.com>
These can sometimes get stuck and prevent the system from sleeping.
Signed-off-by: Yaroslav Furman <yaro330@gmail.com>
Signed-off-by: alk3pInjection <webmaster@raspii.tech>
Signed-off-by: Forenche <prahul2003@gmail.com>
Signed-off-by: Jebaitedneko <Jebaitedneko@gmail.com>
This is now the default for all connections in iOS 11+, and we have
RFC 3168 ECN fallback to detect and disable ECN for broken flows.
Signed-off-by: Danny Lin <danny@kdrag0n.dev>
This adds a great deal of latency to block requests because preemption
is temporarily disabled during statistics collection.
Signed-off-by: Tyler Nijmeh <tylernij@gmail.com>
* armv8.1-a: Includes armv8-a plus +crc, +lse, and +rdma by default.
* fp16: Enables the FP16 extension. This also enables floating-point
instructions.
* rcpc: Enables the RCpc extension. This is passed on to the assembler,
enabling inline asm statements to use instructions from the RCpc extension.
Reference: https://gcc.gnu.org/onlinedocs/gcc-10.1.0/gcc/AArch64-Options.html
Test: A successful build with both Clang & GCC. Kernel booted just fine.
Signed-off-by: Tashfin Shakeer Rhythm <tashfinshakeerrhythm@gmail.com>
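The resulting compiler flags look roughly like this (a sketch; the exact Makefile location and variable names vary by tree):

```make
# arch/arm64/Makefile (sketch): target armv8.1-a and enable the
# FP16 and RCpc extensions on top of it.
KBUILD_CFLAGS += -march=armv8.1-a+fp16+rcpc
KBUILD_AFLAGS += -march=armv8.1-a+fp16+rcpc
```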
Increased inlining can improve performance, but it also increases the
size of the final kernel image. This config provides a bit more control
over how much is inlined, to further optimise the kernel for certain
workloads.
Signed-off-by: Diab Neiroukh <lazerl0rd@thezest.dev>
* Ensure -O3 is always set regardless of linker (doesn't impact release
builds).
* Enable -fwhole-program-vtables with LTO for better inlining decisions
(0.00489045% binary size decrease).
* Set import-instr-limit to 40:
* Decreases output size by 10.0308%, and is also where measurable
performance changes stop occurring. Chromium found 10 was a good
limit for performance/binary size, and AOSP found 5 was a good
compromise. However, we're a kernel, and a bit different.
import-instr-limit tests (compared to no limit):
import-instr-limit=10: 15.1171% binary size decrease.
import-instr-limit=20: 15.1025% binary size decrease.
import-instr-limit=30: 10.0455% binary size decrease.
import-instr-limit=40: 10.0308% binary size decrease.
import-instr-limit=50: 5.01785% binary size decrease.
import-instr-limit=60: 5.01296% binary size decrease.
Makefile: re address lto tweaks
Subsequent to 28d40c3798
After additional clean testing, 20 was found to be the reasonable limit
before any measurable performance loss occurs.
Makefile: re address lto tweaks
All previous testing was embarrassingly flawed.
After further investigation, upstream determined 5 is a good fit.
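The limit is passed to the LTO backend through the linker, roughly like this (a sketch assuming ThinLTO with LLD; flag spelling per LLVM's internal option):

```make
# Makefile (sketch): cap cross-module inlining candidates at
# 5 instructions when linking with ThinLTO via ld.lld.
KBUILD_LDFLAGS += -mllvm -import-instr-limit=5
```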
Polly is able to optimize various loops throughout the kernel for cache
locality. A mathematical representation of the program, based on
polyhedra, is analysed to find opportunistic optimisations in memory
access patterns, which then lead to loop transformations.
I generally see static Kconfig entries being created to enable
these flags, which inevitably breaks builds and emits misleading
errors for those trying to compile the kernel with standard Clang
toolchains (AOSP, Snapdragon, etc.) which do not natively support
Polly.
Let's instead take advantage of Kconfig compile time checks and
determine compatibility dynamically, similarly to how RELR
relocations and other compiler specific features are handled on
4.19.
[0ctobot: Based on kdrag0n/proton_bluecross@0537f23]
Change-Id: I8c8e4e62f54dc4f84b043030b75d745039c786e8
Signed-off-by: Adam W. Willis <return.of.octobot@gmail.com>
Co-authored-by: Diab Neiroukh <lazerl0rd@thezest.dev>
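The flags gated by such a compile-time check typically look like this (a sketch; the exact Polly option set varies by Clang version, and cc-option is the kernel's existing support-probe helper):

```make
# Makefile (sketch): enable Polly only when the compiler supports it,
# falling back to no flags on toolchains built without Polly.
KBUILD_CFLAGS += $(call cc-option, -mllvm -polly)
```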
Makefile: Restore support for -polly-run-dce
This flag was initially omitted due to incompatibility with Clang
13 development builds, which has since been resolved.
Change-Id: I8f75c6498df1d3e2c7886da9d0c15446a971edc4
Signed-off-by: Adam W. Willis <return.of.octobot@gmail.com>
Signed-off-by: UtsavBalar1231 <utsavbalar1231@gmail.com>
This selection, previously the default behavior for Clang LTO, has been
reverted upstream over concerns that the use of -gc-sections
carries a greater potential risk of breakage.
That being said, as someone who has been using these features in
tandem for several years with no ill effects, this is a risk I
am willing to take in order to trim the fat from LTO's thick
kernel images and potentially reduce boot times.
Signed-off-by: Adam W. Willis <return.of.octobot@gmail.com>