Commit Graph

869148 Commits

Dave Rodgman
336c720bf4 lib/lzo: fix ambiguous encoding bug in lzo-rle
In some rare cases, for input data over 32 KB, lzo-rle could encode two
different inputs to the same compressed representation, so that
decompression is then ambiguous (i.e., data may be corrupted), although
zram is not affected because it operates over 4 KB pages.

This modifies the compressor without changing the decompressor or the
bitstream format, such that:

 - there is no change to how data produced by the old compressor is
   decompressed

 - an old decompressor will correctly decode data from the updated
   compressor

 - performance and compression ratio are not affected

 - we avoid introducing a new bitstream format

In testing over 12.8M real-world files totalling 903 GB, three files
were affected by this bug.  I also constructed 37M semi-random 64 KB
files totalling 2.27 TB, and saw no affected files.  Finally I tested
over files constructed to contain each of the ~1024 possible bad input
sequences; for all of these cases, updated lzo-rle worked correctly.

There is no significant impact to performance or compression ratio.

Signed-off-by: Dave Rodgman <dave.rodgman@arm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Dave Rodgman <dave.rodgman@arm.com>
Cc: Willy Tarreau <w@1wt.eu>
Cc: Sergey Senozhatsky <sergey.senozhatsky.work@gmail.com>
Cc: Markus F.X.J. Oberhumer <markus@oberhumer.com>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Nitin Gupta <ngupta@vflare.org>
Cc: Chao Yu <yuchao0@huawei.com>
Cc: <stable@vger.kernel.org>
Link: http://lkml.kernel.org/r/20200507100203.29785-1-dave.rodgman@arm.com
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: UtsavBalar1231 <utsavbalar1231@gmail.com>
2022-11-12 11:19:25 +00:00
Dave Rodgman
2300784f85 lib/lzo/lzo1x_compress.c: fix alignment bug in lzo-rle
Fix an unaligned access which breaks on platforms where this is not
permitted (e.g., Sparc).
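
As a hedged illustration (not the exact hunk in lzo1x_compress.c), the
general shape of such a fix is to go through the kernel's unaligned-access
helpers instead of dereferencing a possibly misaligned pointer:

    #include <asm/unaligned.h>

    /* sketch: read 32 bits safely on strict-alignment architectures */
    u32 dv = get_unaligned((const u32 *)ip);  /* not: *(const u32 *)ip */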

Link: http://lkml.kernel.org/r/20190912145502.35229-1-dave.rodgman@arm.com
Signed-off-by: Dave Rodgman <dave.rodgman@arm.com>
Cc: Dave Rodgman <dave.rodgman@arm.com>
Cc: Markus F.X.J. Oberhumer <markus@oberhumer.com>
Cc: Minchan Kim <minchan@kernel.org>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: UtsavBalar1231 <utsavbalar1231@gmail.com>
2022-11-12 11:19:24 +00:00
Dave Rodgman
2b22d2fa08 lib/lzo: fix bugs for very short or empty input
For very short input data (0 - 1 bytes), lzo-rle was not behaving
correctly.  Fix this behaviour and update documentation accordingly.

For zero-length input, lzo v0 outputs an end-of-stream marker only,
which was misinterpreted by lzo-rle as a bitstream version number.
Ensure bitstream versions > 0 require a minimum stream length of 5.

Also fixes a bug in handling the tail for very short inputs when a
bitstream version is present.
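
A minimal sketch of the length check described above (assuming, as in the
kernel's lzo, that a leading byte of 17 marks a versioned lzo-rle stream;
the real code lives in lzo1x_decompress_safe.c):

    /* sketch: only trust a bitstream-version byte if the stream is long
     * enough to also hold real data and the end-of-stream marker */
    if (in_len >= 5 && ip[0] == 17) {
        bitstream_version = ip[1];
        ip += 2;
    } else {
        bitstream_version = 0;
    }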

Link: http://lkml.kernel.org/r/20190326165857.34613-1-dave.rodgman@arm.com
Signed-off-by: Dave Rodgman <dave.rodgman@arm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: UtsavBalar1231 <utsavbalar1231@gmail.com>
2022-11-12 11:19:24 +00:00
Dave Rodgman
7f8ccaadd2 lib/lzo: separate lzo-rle from lzo
To prevent any issues with persistent data, separate lzo-rle from lzo so
that it is treated as a separate algorithm, and lzo is still available.

Link: http://lkml.kernel.org/r/20190205155944.16007-3-dave.rodgman@arm.com
Signed-off-by: Dave Rodgman <dave.rodgman@arm.com>
Cc: David S. Miller <davem@davemloft.net>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: Markus F.X.J. Oberhumer <markus@oberhumer.com>
Cc: Matt Sealey <matt.sealey@arm.com>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Nitin Gupta <nitingupta910@gmail.com>
Cc: Richard Purdie <rpurdie@openedhand.com>
Cc: Sergey Senozhatsky <sergey.senozhatsky.work@gmail.com>
Cc: Sonny Rao <sonnyrao@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: UtsavBalar1231 <utsavbalar1231@gmail.com>
Change-Id: I81e4c20bf8b5eced3de188fb994045161a8c75f8
2022-11-12 11:19:24 +00:00
Dave Rodgman
a12af21644 lib/lzo: implement run-length encoding
Patch series "lib/lzo: run-length encoding support", v5.

Following on from the previous lzo-rle patchset:

  https://lkml.org/lkml/2018/11/30/972

This patchset contains only the RLE patches, and should be applied on
top of the non-RLE patches ( https://lkml.org/lkml/2019/2/5/366 ).

Previously, some questions were raised around the RLE patches.  I've
done some additional benchmarking to answer these questions.  In short:

 - RLE offers significant additional performance (data-dependent)

 - I didn't measure any regressions that were clearly outside the noise

One concern with this patchset was around performance - specifically,
measuring RLE impact separately from Matt Sealey's patches (CTZ & fast
copy).  I have done some additional benchmarking which I hope clarifies
the benefits of each part of the patchset.

Firstly, I captured some memory via /dev/fmem from a Chromebook with
many tabs open that was starting to swap, and then split this into 4178
4k pages.  I've excluded the all-zero pages (as zram does), and also the
no-zero pages (which won't tell us anything about RLE performance).
This should give a realistic test dataset for zram.  What I found was
that the data is VERY bimodal: 44% of pages in this dataset contain 5%
or fewer zeros, and 44% contain over 90% zeros (30% if you include the
no-zero pages).  This supports the idea of special-casing zeros in zram.

Next, I've benchmarked four variants of lzo on these pages (on 64-bit
Arm at max frequency): baseline LZO; baseline + Matt Sealey's patches
(aka MS); baseline + RLE only; baseline + MS + RLE.  Numbers are for
weighted roundtrip throughput (the weighting reflects that zram does
more compression than decompression).

  https://drive.google.com/file/d/1VLtLjRVxgUNuWFOxaGPwJYhl_hMQXpHe/view?usp=sharing

Matt's patches help in all cases on Arm (and have no effect on Intel),
as expected.

RLE also behaves as expected: with few zeros present, it makes no
difference; above ~75%, it gives a good improvement (50 - 300 MB/s on
top of the benefit from Matt's patches).

Best performance is seen with both MS and RLE patches.

Finally, I have benchmarked the same dataset on an x86-64 device.  Here,
the MS patches make no difference (as expected); RLE helps, similarly as
on Arm.  There were no definite regressions; allowing for observational
error, 0.1% (3/4178) of cases had a regression > 1 standard deviation,
of which the largest was 4.6% (1.2 standard deviations).  I think this
is probably within the noise.

  https://drive.google.com/file/d/1xCUVwmiGD0heEMx5gcVEmLBI4eLaageV/view?usp=sharing

One point to note is that the graphs show RLE appears to help very
slightly with no zeros present! This is because the extra code causes
the clang optimiser to change code layout in a way that happens to have
a significant benefit.  Taking baseline LZO and adding a do-nothing line
like "__builtin_prefetch(out_len);" immediately before the "goto next"
has the same effect.  So this is a real, but basically spurious effect -
it's small enough not to upset the overall findings.

This patch (of 3):

When using zram, we frequently encounter long runs of zero bytes.  This
adds a special case which identifies runs of zeros and encodes them
using run-length encoding.

This is faster for both compression and decompression.  For high-entropy
data which doesn't hit this case, impact is minimal.

Compression ratio is within a few percent in all cases.

This modifies the bitstream in a way which is backwards compatible
(i.e., we can decompress old bitstreams, but old versions of lzo cannot
decompress new bitstreams).
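
As a hedged sketch of the zero-run special case described above
(illustration only; the real lzo-rle token emission is in
lzo1x_compress.c), the compressor first measures how long a run of zero
bytes is before emitting it as a single run-length token:

    /* sketch: count consecutive zero bytes starting at 'ip' */
    static size_t zero_run_length(const unsigned char *ip, size_t remaining)
    {
        size_t run = 0;

        while (run < remaining && ip[run] == 0)
            run++;

        return run;
    }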

Link: http://lkml.kernel.org/r/20190205155944.16007-2-dave.rodgman@arm.com
Signed-off-by: Dave Rodgman <dave.rodgman@arm.com>
Cc: David S. Miller <davem@davemloft.net>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: Markus F.X.J. Oberhumer <markus@oberhumer.com>
Cc: Matt Sealey <matt.sealey@arm.com>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Nitin Gupta <nitingupta910@gmail.com>
Cc: Richard Purdie <rpurdie@openedhand.com>
Cc: Sergey Senozhatsky <sergey.senozhatsky.work@gmail.com>
Cc: Sonny Rao <sonnyrao@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: UtsavBalar1231 <utsavbalar1231@gmail.com>
2022-11-12 11:19:24 +00:00
Matt Sealey
fed1fe5c52 lib/lzo: fast 8-byte copy on arm64
Enable faster 8-byte copies on arm64.
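
Sketch of the idea, assuming a COPY8 helper in lzodefs.h built on the
existing unaligned-access macros (illustrative only):

    /* sketch: copy 8 bytes at a time instead of 4 on 64-bit targets */
    #define COPY8(dst, src) \
        put_unaligned(get_unaligned((const u64 *)(src)), (u64 *)(dst))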

Link: http://lkml.kernel.org/r/20181127161913.23863-6-dave.rodgman@arm.com
Link: http://lkml.kernel.org/r/20190205141950.9058-4-dave.rodgman@arm.com
Signed-off-by: Matt Sealey <matt.sealey@arm.com>
Signed-off-by: Dave Rodgman <dave.rodgman@arm.com>
Cc: David S. Miller <davem@davemloft.net>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: Markus F.X.J. Oberhumer <markus@oberhumer.com>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Nitin Gupta <nitingupta910@gmail.com>
Cc: Richard Purdie <rpurdie@openedhand.com>
Cc: Sergey Senozhatsky <sergey.senozhatsky.work@gmail.com>
Cc: Sonny Rao <sonnyrao@google.com>
Cc: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: UtsavBalar1231 <utsavbalar1231@gmail.com>
2022-11-12 11:19:24 +00:00
Matt Sealey
cafded9c7f lib/lzo: 64-bit CTZ on arm64
LZO leaves some performance on the table by not realising that arm64 can
optimize count-trailing-zeros bit operations.

Add CONFIG_ARM64 to the checked definitions alongside CONFIG_X86_64 to
enable the use of rbit/clz instructions on full 64-bit quantities.
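
Illustrative sketch of the change (exact macro names in lzodefs.h may
differ):

    #if defined(CONFIG_X86_64) || defined(CONFIG_ARM64)
    #define LZO_USE_CTZ64 1                    /* 64-bit count-trailing-zeros */
    #define lzo_ctz64(v)  __builtin_ctzll(v)   /* rbit + clz on arm64 */
    #endif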

Link: http://lkml.kernel.org/r/20181127161913.23863-5-dave.rodgman@arm.com
Link: http://lkml.kernel.org/r/20190205141950.9058-3-dave.rodgman@arm.com
Signed-off-by: Matt Sealey <matt.sealey@arm.com>
Signed-off-by: Dave Rodgman <dave.rodgman@arm.com>
Cc: David S. Miller <davem@davemloft.net>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: Markus F.X.J. Oberhumer <markus@oberhumer.com>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Nitin Gupta <nitingupta910@gmail.com>
Cc: Richard Purdie <rpurdie@openedhand.com>
Cc: Sergey Senozhatsky <sergey.senozhatsky.work@gmail.com>
Cc: Sonny Rao <sonnyrao@google.com>
Cc: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: UtsavBalar1231 <utsavbalar1231@gmail.com>
2022-11-12 11:19:23 +00:00
Dave Rodgman
8cfd6c9aa4 lib/lzo: tidy-up ifdefs
Patch series "lib/lzo: performance improvements", v5.

This patch (of 3):

Modify the ifdefs in lzodefs.h to be more consistent with normal kernel
macros (e.g., change __aarch64__ to CONFIG_ARM64).
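
For illustration, the kind of rename involved (surrounding context elided
and approximate):

    -#if defined(__aarch64__)
    +#if defined(CONFIG_ARM64)
     /* ... arm64-specific definitions ... */
     #endif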

Link: http://lkml.kernel.org/r/20190205141950.9058-2-dave.rodgman@arm.com
Signed-off-by: Dave Rodgman <dave.rodgman@arm.com>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: David S. Miller <davem@davemloft.net>
Cc: Nitin Gupta <nitingupta910@gmail.com>
Cc: Richard Purdie <rpurdie@openedhand.com>
Cc: Markus F.X.J. Oberhumer <markus@oberhumer.com>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Sergey Senozhatsky <sergey.senozhatsky.work@gmail.com>
Cc: Sonny Rao <sonnyrao@google.com>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Matt Sealey <matt.sealey@arm.com>
Cc: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: UtsavBalar1231 <utsavbalar1231@gmail.com>
2022-11-12 11:19:23 +00:00
Gao Xiang
5a3019f979 lib/lz4: explicitly support in-place decompression
The LZ4 final literal copy can overlap its source when doing
in-place decompression, so it's unsafe to use memcpy() (or any
optimized memcpy variant); memmove() must be used instead.

Upstream LZ4 fixed this years ago [1] (the performance impact is
negligible [2], and only the last few bytes are involved); this commit
just synchronizes the upstream LZ4 code to the kernel side as well.

The bug can be observed as an EROFS in-place decompression failure
on specific files when X86_FEATURE_ERMS is unsupported, because the
memcpy() optimization from commit 59daa706fb ("x86, mem:
Optimize memcpy by avoiding memory false dependece") is then
enabled.

Most modern x86 CPUs support ERMS and simply use the "rep movsb"
approach, so they are not affected. However, the failure can still be
reproduced by forcibly disabling the ERMS feature:

arch/x86/lib/memcpy_64.S:
        ALTERNATIVE_2 "jmp memcpy_orig", "", X86_FEATURE_REP_GOOD, \
-                     "jmp memcpy_erms", X86_FEATURE_ERMS
+                     "jmp memcpy_orig", X86_FEATURE_ERMS

We didn't observe anything strange on arm64/arm/x86 platforms before,
since most memcpy() implementations copy in increasing address order
("copy upwards" [3]), which happens to be the correct order for
in-place decompression. Still, this needs to be switched to memmove(),
since overlapping memcpy() is undefined behaviour according to the
standard and some unusual optimizations already exist in the kernel.

[1] 33cb8518ac
[2] https://github.com/lz4/lz4/pull/717#issuecomment-497818921
[3] https://sourceware.org/bugzilla/show_bug.cgi?id=12518
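
A hedged illustration of the hazard (not the actual lz4_decompress.c
hunk): when input and output share one buffer, the source and destination
of the final literal copy can overlap, so the copy must be done with
memmove():

    /* sketch: in-place decompression reuses one buffer, so the last
     * literal run may overlap its destination; memcpy() would be UB */
    memmove(op, ip, length);    /* previously a plain memcpy(op, ip, length) */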

Link: https://lkml.kernel.org/r/20201122030749.2698994-1-hsiangkao@redhat.com
Signed-off-by: Gao Xiang <hsiangkao@redhat.com>
Reviewed-by: Nick Terrell <terrelln@fb.com>
Cc: Yann Collet <yann.collet.73@gmail.com>
Cc: Miao Xie <miaoxie@huawei.com>
Cc: Chao Yu <yuchao0@huawei.com>
Cc: Li Guifu <bluce.liguifu@huawei.com>
Cc: Guo Xuenan <guoxuenan@huawei.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: UtsavBalar1231 <utsavbalar1231@gmail.com>
2022-11-12 11:19:23 +00:00
Nick Terrell
1e3013fcb8 lz4: fix kernel decompression speed
This patch replaces all memcpy() calls with LZ4_memcpy() which calls
__builtin_memcpy() so the compiler can inline it.

LZ4 relies heavily on memcpy() with a constant size being inlined.  In x86
and i386 pre-boot environments memcpy() cannot be inlined because memcpy()
doesn't get defined as __builtin_memcpy().
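
The wrapper is essentially of this shape (sketch; the actual definition
lives in lib/lz4/lz4defs.h):

    /* always resolves to the compiler builtin, so constant-size copies can
     * be inlined even where memcpy() is only an out-of-line function */
    #define LZ4_memcpy(dst, src, size) __builtin_memcpy(dst, src, size)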

An equivalent patch has been applied upstream so that the next import
won't lose this change [1].

I've measured the kernel decompression speed using QEMU before and after
this patch for the x86_64 and i386 architectures.  The speed-up is about
10x as shown below.

Code	Arch	Kernel Size	Time	Speed
v5.8	x86_64	11504832 B	148 ms	 79 MB/s
patch	x86_64	11503872 B	 13 ms	885 MB/s
v5.8	i386	 9621216 B	 91 ms	106 MB/s
patch	i386	 9620224 B	 10 ms	962 MB/s

I also measured the time to decompress the initramfs on x86_64, i386, and
arm.  All three show the same decompression speed before and after, as
expected.

[1] https://github.com/lz4/lz4/pull/890

Signed-off-by: Nick Terrell <terrelln@fb.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Cc: Yann Collet <yann.collet.73@gmail.com>
Cc: Gao Xiang <gaoxiang25@huawei.com>
Cc: Sven Schmidt <4sschmid@informatik.uni-hamburg.de>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Arvind Sankar <nivedita@alum.mit.edu>
Link: http://lkml.kernel.org/r/20200803194022.2966806-1-nickrterrell@gmail.com
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: UtsavBalar1231 <utsavbalar1231@gmail.com>
2022-11-12 11:19:23 +00:00
Joe Perches
ce0fe0c93e lib/lz4/lz4_decompress.c: document deliberate use of `&'
This operation was intentional, but tools such as smatch will warn that it
might not have been.
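
For illustration only (not the exact hunk), the annotation is of this
kind:

    /* Intentional use of bitwise "&" rather than "&&": both operands are
     * cheap to evaluate and this avoids a branch. */
    if ((a >= limit_a) & (b >= limit_b))
        goto error;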

Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Cc: Yann Collet <cyan@fb.com>
Cc: Vasily Averin <vvs@virtuozzo.com>
Cc: Gao Xiang <hsiangkao@aol.com>
Link: http://lkml.kernel.org/r/3bf931c6ea0cae3e23f3485801986859851b4f04.camel@perches.com
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: UtsavBalar1231 <utsavbalar1231@gmail.com>
2022-11-12 11:19:23 +00:00
Linus Torvalds
edaee3eb51 lz4: do not export static symbol
Kbuild now complains (rightly) about it.

Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: UtsavBalar1231 <utsavbalar1231@gmail.com>
2022-11-12 11:19:22 +00:00
Gao Xiang
0e023812f1 lib/lz4: update LZ4 decompressor module
Update the LZ4 compression module to LZ4 v1.8.3 so that the erofs
file system can use the newest LZ4_decompress_safe_partial(), which can
now decode exactly the number of bytes requested [1], replacing the
open-coded hack in the erofs file system itself.

Currently, apart from the erofs file system, there are no other users of
LZ4_decompress_safe_partial(), so the interface change is not a concern.
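
A hedged usage sketch (parameter order as in the updated
include/linux/lz4.h; the variable names here are illustrative):

    /* decode only the bytes actually requested, as erofs needs */
    int out = LZ4_decompress_safe_partial(src, dst,
                          compressed_len,   /* input size */
                          bytes_requested,  /* exact output wanted */
                          dst_capacity);    /* output buffer size */
    if (out < 0)
        return -EIO;                        /* corrupted input */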

In addition, LZ4 v1.8.x improves decompression speed over the current
code (which is based on LZ4 v1.7.3), mainly due to a shortcut
optimization for common LZ4 sequences [2].

lzbench results (tested on kirin710, 8 cores, 4 big cores
at 2189 MHz, 2 GB DDR RAM at 1622 MHz, with the enwik8 test data [3]):

Compressor name         Compress. Decompress. Compr. size  Ratio Filename
memcpy                   5004 MB/s  4924 MB/s   100000000 100.00 enwik8
lz4hc 1.7.3 -9             12 MB/s   653 MB/s    42203253  42.20 enwik8
lz4hc 1.8.0 -9             12 MB/s   908 MB/s    42203096  42.20 enwik8
lz4hc 1.8.3 -9             11 MB/s   965 MB/s    42203094  42.20 enwik8

[1] https://github.com/lz4/lz4/issues/566
    08d347b5b2

[2] v1.8.1 perf: slightly faster compression and decompression speed
    a31b7058cb
    v1.8.2 perf: slightly faster HC compression and decompression speed
    45f8603aae
    1a191b3f8d

[3] http://mattmahoney.net/dc/textdata.html
    http://mattmahoney.net/dc/enwik8.zip

Link: http://lkml.kernel.org/r/1537181207-21932-1-git-send-email-gaoxiang25@huawei.com
Signed-off-by: Gao Xiang <gaoxiang25@huawei.com>
Tested-by: Guo Xuenan <guoxuenan@huawei.com>
Cc: Colin Ian King <colin.king@canonical.com>
Cc: Yann Collet <yann.collet.73@gmail.com>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Fang Wei <fangwei1@huawei.com>
Cc: Chao Yu <yuchao0@huawei.com>
Cc: Miao Xie <miaoxie@huawei.com>
Cc: Sven Schmidt <4sschmid@informatik.uni-hamburg.de>
Cc: Kyungsik Lee <kyungsik.lee@lge.com>
Cc: <weidu.du@huawei.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: UtsavBalar1231 <utsavbalar1231@gmail.com>
2022-11-12 11:19:22 +00:00
LibXZR
536bab8c2f drivers: gpu: msm: Only build adreno 6xx part
* Inspired by 5898ed5670
2022-11-12 11:19:22 +00:00
spakkkk
f559c16965 arm64: config: disable audit 2022-11-12 11:19:22 +00:00
Tyler Nijmeh
bb35218571 selinux: Allow audit to be disabled
SELinux doesn't actually need audit support. In fact, auditing is not
particularly useful on production kernels, where minimal debugging is
needed.

Signed-off-by: Tyler Nijmeh <tylernij@gmail.com>
2022-11-12 11:19:22 +00:00
Sultan Alsawaf
7a6b4caed5 scripts: Don't append "+" to localversion
Signed-off-by: Sultan Alsawaf <sultan@kerneltoast.com>
2022-11-12 11:19:21 +00:00
spakkkk
b9d2e2d755 arm64: Tune Clang for lito's CPU 2022-11-12 11:19:21 +00:00
Danny Lin
4a101f270c arm64: dts: lito: Revert to normal RCU grace periods after boot
Similar to what I've done on other devices, revert to normal grace
periods to reduce power usage. Expedited RCU hammers CPUs with IPIs to
reduce grace period durations and thus RCU latency, but that disrupts
busy CPUs and causes unnecessary wakeups while providing little to no
improvement in real-world performance.

Signed-off-by: Danny Lin <danny@kdrag0n.dev>
Co-authored-by: Panchajanya1999 <panchajanya@azure-dev.live>
Signed-off-by: Panchajanya1999 <panchajanya@azure-dev.live>
2022-11-12 11:19:21 +00:00
Adam W. Willis
30c44e9cd6 arm64: dts: lito: Disable debug monitoring 2022-11-12 11:19:21 +00:00
Danny Lin
b4c474bec2 arm64: dts: lito: Don't ratelimit userspace kmsg logging
Ratelimiting messages from init and fsck.f2fs makes early userspace boot
debugging much more difficult for no good reason.

Signed-off-by: Danny Lin <danny@kdrag0n.dev>
2022-11-12 11:19:21 +00:00
Danny Lin
dfc1ef88ce arm64: dts: lito: Allow big cluster to idle in USB perf mode
This will save a lot of power should the USB IRQ ever get affined to
cpu6 or cpu7, whether it's for testing or balancing. The extra 6 µs
for the big cluster's WFI state won't hurt.

Signed-off-by: Danny Lin <danny@kdrag0n.dev>
2022-11-12 11:19:20 +00:00
Danny Lin
a674d976d7 arm64: dts: lito: Remove useless 40 MiB memory dump region
This reserved memory dump region is intended to be used with the memory
dump v2 driver, but we've disabled that and we don't need this memory
dumping functionality. Remove the unused region and disable the
associated driver node to save 40 MiB of memory.

Signed-off-by: Danny Lin <danny@kdrag0n.dev>
2022-11-12 11:19:20 +00:00
spakkkk
8613152ddd arm64: dts: tweak boot arguments
f888b0ec51
2022-11-12 11:19:20 +00:00
Sultan Alsawaf
9e33a6782c kernel: Use the stock config for /proc/config.gz
Userspace reads /proc/config.gz and spits out an error message after boot
finishes when it doesn't like the kernel's configuration. In order to
preserve our freedom to customize the kernel however we'd like, show
userspace the stock config so that it never complains about our
kernel configuration.

Signed-off-by: Sultan Alsawaf <sultanxda@gmail.com>
2022-11-12 11:19:20 +00:00
spakkkk
eacac4427d arm64: config: Enable userspace CNTVCT_EL0 access for vDSO
72840e5fc7
2022-11-12 11:19:20 +00:00
spakkkk
ed6967e849 arm64: config: clean 2022-11-12 11:19:19 +00:00
spakkkk
2afe0cf63a arm64: config: skizo 2022-11-12 11:19:19 +00:00
spakkkk
cac2d64e12 arm64: config: import picasso-r-oss monet config 2022-11-12 11:19:19 +00:00
spakkkk
6b388e0ef2 coresight: fix when disabled 2022-11-12 11:19:19 +00:00
spakkkk
5e574f75da kbuild: silence pointer-to-int-cast warnings 2022-11-12 11:19:19 +00:00
spakkkk
94964998d1 kernel: sched: silence Wconstant-logical-operand warnings 2022-11-12 11:19:18 +00:00
spakkkk
eeb6fe835a drivers: soc: fix wmisleading-indentation warnings 2022-11-12 11:19:18 +00:00
spakkkk
add5a6f371 crypto: msm: fix -Wmisleading-indentation warnings 2022-11-12 11:19:18 +00:00
spakkkk
e76e02fa8d crypto: msm: fix -Wbool-operation warning 2022-11-12 11:19:18 +00:00
spakkkk
f9fc42fa93 kconfig: force INIT_STACK_NONE 2022-11-12 11:19:18 +00:00
spakkkk
94613113f5 drivers: batterydata: use generic node if no node is found 2022-11-12 11:19:17 +00:00
spakkkk
f05085d542 drivers: spi: increase transfer timeout 2022-11-12 11:19:17 +00:00
spakkkk
967c4f25eb drivers: usb-charger: import picasso-r-oss changes 2022-11-12 11:19:17 +00:00
spakkkk
c2749f7cb2 scsi: ufs: add delay fix for internal initialization of some vendors 2022-11-12 11:19:17 +00:00
spakkkk
d3e43a11f0 sound: soc: support host-less 24bit formats 2022-11-12 11:19:17 +00:00
spakkkk
077707ce4b drivers: socinfo: import picasso-r-oss functions to get the hw version 2022-11-12 11:19:16 +00:00
spakkkk
4e007d173f fs: pstore: import vangogh-r-oss support to get last_kmsg 2022-11-12 11:19:16 +00:00
spakkkk
6b146db8c9 drivers: power: import picasso-r-oss changes 2022-11-12 11:19:16 +00:00
spakkkk
309fbc5d39 Revert "power: step-chg-jeita: Add support to tune hysteresis for jeita-fcc-step"
This reverts commit c5c21901f3.
2022-11-12 11:19:16 +00:00
spakkkk
180a0ff03c Revert "qcom: step-chg-jeita: Add support for jeita fcc scaling"
This reverts commit 5385079a2a.
2022-11-12 11:19:15 +00:00
spakkkk
d882cc2dab driver: ir: import picasso-r-oss changes 2022-11-12 11:19:15 +00:00
spakkkk
2fb6b3e281 drivers: xiaomi_touch: import picasso-r-oss driver 2022-11-12 11:19:15 +00:00
spakkkk
c61615ae37 input: focaltech: remove usages of lpm_disable_for_input 2022-11-12 11:19:15 +00:00
spakkkk
13ac7bb26d drivers: focaltech_touch: nuke touch_irq_boost 2022-11-12 11:19:15 +00:00