android_kernel_xiaomi_sm7250/mm
Nikolay Borisov 3e8f399da4 writeback: rework wb_[dec|inc]_stat family of functions
Currently the writeback statistics code uses percpu counters to hold
various statistics.  Furthermore, we have 2 families of functions: those
which disable local irqs and those which don't, whose names begin with
a double underscore.  However, they both end up calling __add_wb_stat,
which in turn calls percpu_counter_add_batch, which is already
irq-safe.
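
For reference, a simplified sketch of the current layering, where the
irq-disabling wrapper only guards a call chain that is already irq-safe
(types and WB_STAT_BATCH as in the kernel's backing-dev.h; details
trimmed for illustration):

  static inline void __add_wb_stat(struct bdi_writeback *wb,
                                   enum wb_stat_item item, s64 amount)
  {
          percpu_counter_add_batch(&wb->stat[item], amount, WB_STAT_BATCH);
  }

  static inline void __inc_wb_stat(struct bdi_writeback *wb,
                                   enum wb_stat_item item)
  {
          __add_wb_stat(wb, item, 1);
  }

  static inline void inc_wb_stat(struct bdi_writeback *wb,
                                 enum wb_stat_item item)
  {
          unsigned long flags;

          /* redundant: percpu_counter_add_batch() is already irq-safe */
          local_irq_save(flags);
          __inc_wb_stat(wb, item);
          local_irq_restore(flags);
  }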

Exploiting this fact allows us to eliminate the __wb_* functions, since
they don't add any protection beyond what we already have.  Furthermore,
refactor the wb_* functions to call __add_wb_stat directly, without the
irq-disabling dance.  This will likely result in better runtime for code
which modifies the stat counters.
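
After the rework the wb_* helpers feed straight into __add_wb_stat; a
minimal sketch of the resulting helpers (simplified):

  static inline void inc_wb_stat(struct bdi_writeback *wb,
                                 enum wb_stat_item item)
  {
          __add_wb_stat(wb, item, 1);
  }

  static inline void dec_wb_stat(struct bdi_writeback *wb,
                                 enum wb_stat_item item)
  {
          __add_wb_stat(wb, item, -1);
  }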

While at it, also document why percpu_counter_add_batch is in fact
preempt- and irq-safe, since at least 3 people got confused.
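
The added documentation amounts to something like the following
(paraphrased sketch, not the verbatim comment):

  /*
   * percpu_counter_add_batch() is preempt safe because it disables
   * preemption explicitly, and irq safe because the fast path uses
   * this_cpu_add() (irq-safe by definition) while the slow path takes
   * fbc->lock with raw_spin_lock_irqsave().  Callers therefore do not
   * need to disable interrupts around it.
   */
  void percpu_counter_add_batch(struct percpu_counter *fbc, s64 amount, s32 batch);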

Link: http://lkml.kernel.org/r/1498029937-27293-1-git-send-email-nborisov@suse.com
Signed-off-by: Nikolay Borisov <nborisov@suse.com>
Acked-by: Tejun Heo <tj@kernel.org>
Reviewed-by: Jan Kara <jack@suse.cz>
Cc: Josef Bacik <jbacik@fb.com>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Jeff Layton <jlayton@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2017-07-12 16:26:05 -07:00
kasan kasan: make get_wild_bug_type() static 2017-07-10 16:32:33 -07:00
backing-dev.c
balloon_compaction.c mm/balloon_compaction.c: enqueue zero page to balloon device 2017-07-10 16:32:32 -07:00
bootmem.c
cleancache.c
cma_debug.c
cma.c cma: fix calculation of aligned offset 2017-07-10 16:32:32 -07:00
cma.h
compaction.c mm, compaction: skip over holes in __reset_isolation_suitable 2017-07-06 16:24:32 -07:00
debug_page_ref.c
debug.c
dmapool.c
early_ioremap.c
fadvise.c
failslab.c
filemap.c mm: hugetlb: return immediately for hugetlb page in __delete_from_page_cache() 2017-07-10 16:32:30 -07:00
frame_vector.c
frontswap.c
gup.c mm, gup: ensure real head page is ref-counted when using hugepages 2017-07-06 16:24:34 -07:00
highmem.c
huge_memory.c mm, THP, swap: check whether THP can be split firstly 2017-07-06 16:24:31 -07:00
hugetlb_cgroup.c
hugetlb.c mm, tree wide: replace __GFP_REPEAT by __GFP_RETRY_MAYFAIL with more useful semantic 2017-07-12 16:26:03 -07:00
hwpoison-inject.c
init-mm.c
internal.h mm, tree wide: replace __GFP_REPEAT by __GFP_RETRY_MAYFAIL with more useful semantic 2017-07-12 16:26:03 -07:00
interval_tree.c
Kconfig mm/kasan: add support for memory hotplug 2017-07-10 16:32:33 -07:00
Kconfig.debug
khugepaged.c mm: make PR_SET_THP_DISABLE immediately active 2017-07-10 16:32:31 -07:00
kmemcheck.c
kmemleak-test.c
kmemleak.c mm: kmemleak: treat vm_struct as alternative reference to vmalloc'ed objects 2017-07-06 16:24:34 -07:00
ksm.c ksm: optimize refile of stable_node_dup at the head of the chain 2017-07-06 16:24:31 -07:00
list_lru.c mm/list_lru.c: fix list_lru_count_node() to be race free 2017-07-10 16:32:33 -07:00
maccess.c
madvise.c userfaultfd: non-cooperative: add madvise() event for MADV_FREE request 2017-07-10 16:32:32 -07:00
Makefile
memblock.c mm, memory_hotplug: move movable_node to the hotplug proper 2017-07-06 16:24:35 -07:00
memcontrol.c mm, memcg: fix potential undefined behavior in mem_cgroup_event_ratelimit() 2017-07-10 16:32:32 -07:00
memory_hotplug.c mm/memory-hotplug: switch locking to a percpu rwsem 2017-07-10 16:32:33 -07:00
memory-failure.c mm, hugetlb, soft_offline: use new_page_nodemask for soft offline migration 2017-07-10 16:32:32 -07:00
memory.c mm/memory.c: mark create_huge_pmd() inline to prevent build failure 2017-07-12 16:25:59 -07:00
mempolicy.c mm, migration: do not trigger OOM killer when migrating memory 2017-07-12 16:26:04 -07:00
mempool.c
memtest.c
migrate.c mm/migrate.c: stabilise page count when migrating transparent hugepages 2017-07-10 16:32:31 -07:00
mincore.c
mlock.c
mm_init.c
mmap.c mm: use dedicated helper to access rlimit value 2017-07-10 16:32:33 -07:00
mmu_context.c
mmu_notifier.c
mmzone.c
mprotect.c mm: drop NULL return check of pte_offset_map_lock() 2017-07-06 16:24:33 -07:00
mremap.c
msync.c
nobootmem.c mm/nobootmem.c: return 0 when start_pfn equals end_pfn 2017-07-06 16:24:31 -07:00
nommu.c
oom_kill.c mm/oom_kill.c: add tracepoints for oom reaper-related events 2017-07-10 16:32:32 -07:00
page_alloc.c mm, tree wide: replace __GFP_REPEAT by __GFP_RETRY_MAYFAIL with more useful semantic 2017-07-12 16:26:03 -07:00
page_counter.c
page_ext.c
page_idle.c
page_io.c swap: add block io poll in swapin path 2017-07-10 16:32:30 -07:00
page_isolation.c mm: unify new_node_page and alloc_migrate_target 2017-07-10 16:32:31 -07:00
page_owner.c mm: avoid taking zone lock in pagetypeinfo_showmixed() 2017-07-10 16:32:32 -07:00
page_poison.c
page_vma_mapped.c mm/hugetlb: add size parameter to huge_pte_offset() 2017-07-06 16:24:34 -07:00
page-writeback.c writeback: rework wb_[dec|inc]_stat family of functions 2017-07-12 16:26:05 -07:00
pagewalk.c mm/hugetlb: add size parameter to huge_pte_offset() 2017-07-06 16:24:34 -07:00
percpu-internal.h
percpu-km.c
percpu-stats.c
percpu-vm.c
percpu.c
pgtable-generic.c
process_vm_access.c
quicklist.c
readahead.c
rmap.c mm: memcontrol: per-lruvec stats infrastructure 2017-07-06 16:24:35 -07:00
rodata_test.c
shmem.c mm: make PR_SET_THP_DISABLE immediately active 2017-07-10 16:32:31 -07:00
slab_common.c mm: allow slab_nomerge to be set at build time 2017-07-06 16:24:31 -07:00
slab.c mm: memcontrol: account slab stats per lruvec 2017-07-06 16:24:35 -07:00
slab.h mm: memcontrol: account slab stats per lruvec 2017-07-06 16:24:35 -07:00
slob.c
slub.c mm: memcontrol: account slab stats per lruvec 2017-07-06 16:24:35 -07:00
sparse-vmemmap.c mm, tree wide: replace __GFP_REPEAT by __GFP_RETRY_MAYFAIL with more useful semantic 2017-07-12 16:26:03 -07:00
sparse.c mm, memory_hotplug: do not associate hotadded memory to zones until online 2017-07-06 16:24:32 -07:00
swap_cgroup.c mm, THP, swap: delay splitting THP during swap out 2017-07-06 16:24:31 -07:00
swap_slots.c mm/swap_slots.c: don't disable preemption while taking the per-CPU cache 2017-07-10 16:32:32 -07:00
swap_state.c swap: add block io poll in swapin path 2017-07-10 16:32:30 -07:00
swap.c mm: swap: provide lru_add_drain_all_cpuslocked() 2017-07-10 16:32:33 -07:00
swapfile.c swap: add block io poll in swapin path 2017-07-10 16:32:30 -07:00
truncate.c mm/truncate.c: fix THP handling in invalidate_mapping_pages() 2017-07-10 16:32:32 -07:00
usercopy.c
userfaultfd.c
util.c mm: kvmalloc support __GFP_RETRY_MAYFAIL for all sizes 2017-07-12 16:26:03 -07:00
vmacache.c
vmalloc.c mm, tree wide: replace __GFP_REPEAT by __GFP_RETRY_MAYFAIL with more useful semantic 2017-07-12 16:26:03 -07:00
vmpressure.c mm, vmpressure: pass-through notification support 2017-07-10 16:32:31 -07:00
vmscan.c mm, tree wide: replace __GFP_REPEAT by __GFP_RETRY_MAYFAIL with more useful semantic 2017-07-12 16:26:03 -07:00
vmstat.c mm: avoid taking zone lock in pagetypeinfo_showmixed() 2017-07-10 16:32:32 -07:00
workingset.c mm: memcontrol: per-lruvec stats infrastructure 2017-07-06 16:24:35 -07:00
z3fold.c
zbud.c
zpool.c
zsmalloc.c mm/zsmalloc: simplify zs_max_alloc_size handling 2017-07-10 16:32:33 -07:00
zswap.c mm/zswap.c: delete an error message for a failed memory allocation in zswap_dstmem_prepare() 2017-07-06 16:24:35 -07:00