android_kernel_xiaomi_sm7250/kernel/locking
Sultan Alsawaf 73a40230d3 Revert "mutex: Add a delay into the SPIN_ON_OWNER wait loop."
This reverts commit 1e5a5b5e00.

This doesn't make sense for a few reasons. Firstly, upstream uses this
mutex code and it works fine on all arches; why should arm be any
different?

Secondly, once the mutex owner starts to spin on `wait_lock`,
preemption is disabled and the owner will be in an actively-running
state. The optimistic mutex spinning occurs when the lock owner is
actively running on a CPU, and while the optimistic spinning takes
place, no attempt to acquire `wait_lock` is made by the new waiter.
Therefore, new mutex waiters which optimistically spin are guaranteed
not to contend for the `wait_lock` spinlock that the owner needs to
acquire in order to make forward progress.

Another potential source of `wait_lock` contention can come from tasks
that call mutex_trylock(), but this isn't actually problematic (and if
it were, it would affect the MUTEX_SPIN_ON_OWNER=n use-case too). This
won't introduce significant contention on `wait_lock` because the
trylock code exits before attempting to lock `wait_lock`, specifically
when the atomic mutex counter indicates that the mutex is already
locked. So in reality, the amount of `wait_lock` contention that can
come from mutex_trylock() amounts to at most one task at a time. And
once it finishes, `wait_lock` will no longer be contended and the
previous mutex owner can proceed with cleanup.

Signed-off-by: Sultan Alsawaf <sultan@kerneltoast.com>
2022-11-12 11:24:02 +00:00
lock_events_list.h UPSTREAM: locking/rwsem: Remove reader optimistic spinning 2022-11-12 11:23:52 +00:00
lock_events.c UPSTREAM: locking/lock_events: Don't show pvqspinlock events on bare metal 2022-11-12 11:23:34 +00:00
lock_events.h UPSTREAM: locking/lock_events: Use raw_cpu_{add,inc}() for stats 2022-11-12 11:23:38 +00:00
lockdep_internals.h UPSTREAM: locking/lockdep: Decrement IRQ context counters when removing lock chain 2022-11-12 11:23:50 +00:00
lockdep_proc.c UPSTREAM: locking/lockdep: Introduce lockdep_next_lockchain() and lock_chain_count() 2022-11-12 11:23:29 +00:00
lockdep_states.h
lockdep.c UPSTREAM: locking/lockdep: Decrement IRQ context counters when removing lock chain 2022-11-12 11:23:50 +00:00
locktorture.c
Makefile UPSTREAM: locking/rwsem: Merge rwsem.h and rwsem-xadd.c into rwsem.c 2022-11-12 11:23:44 +00:00
mcs_spinlock.h
mutex-debug.c UPSTREAM: locking/mutex: Replace spin_is_locked() with lockdep 2022-11-12 11:23:23 +00:00
mutex-debug.h locking/mutex: clear MUTEX_FLAGS if wait_list is empty due to signal 2021-05-26 11:48:32 +02:00
mutex.c Revert "mutex: Add a delay into the SPIN_ON_OWNER wait loop." 2022-11-12 11:24:02 +00:00
mutex.h UPSTREAM: mutex: Fix up mutex_waiter usage 2022-11-12 11:23:50 +00:00
osq_lock.c UPSTREAM: locking/osq: Use optimized spinning loop for arm64 2022-11-12 11:23:51 +00:00
percpu-rwsem.c BACKPORT: locking/rwsem: Remove arch specific rwsem files 2022-11-12 11:23:32 +00:00
qrwlock.c locking/qrwlock: Fix ordering in queued_write_lock_slowpath() 2021-04-28 13:16:52 +02:00
qspinlock_paravirt.h UPSTREAM: locking/qspinlock_stat: Introduce generic lockevent_*() counting APIs 2022-11-12 11:23:34 +00:00
qspinlock_stat.h BACKPORT: locking/lock_events: Make lock_events available for all archs & other locks 2022-11-12 11:23:34 +00:00
qspinlock.c UPSTREAM: locking/qspinlock_stat: Introduce generic lockevent_*() counting APIs 2022-11-12 11:23:34 +00:00
rtmutex_common.h
rtmutex-debug.c
rtmutex-debug.h
rtmutex.c UPSTREAM: locking/rtmutex: Fix the preprocessor logic with normal #ifdef #else #endif 2022-11-12 11:23:21 +00:00
rtmutex.h
rwsem.c UPSTREAM: locking/rwsem: Optimize down_read_trylock() under highly contended case 2022-11-12 11:23:53 +00:00
rwsem.h UPSTREAM: locking/rwsem: Merge rwsem.h and rwsem-xadd.c into rwsem.c 2022-11-12 11:23:44 +00:00
semaphore.c
spinlock_debug.c
spinlock.c
test-ww_mutex.c