sched: reduce softirq conflicts with RT

This is a forward port of pa/890483. The original patch was modified
to account for changes in sched/softirq.c, but it applies the same logic.

We're seeing audio glitches caused by audio-producing RT tasks
that are either interrupted to handle softirqs or scheduled onto
CPUs that are already handling softirqs.
In a previous patch, we attempted to catch many cases of the
latter problem, but it's clear that we are still losing a
significant number of races in some apps.

This patch attempts to address that problem as follows:
   It reduces the most common windows in which we lose the race
   between scheduling an RT task on a remote core and the start of
   softirq handling on that core. We still lose some races, but
   significantly fewer, and we avoid introducing any heavyweight
   form of synchronization on these paths.

Bug: 64912585
Bug: 136771796
Bug: 144961676
Change-Id: Ida89a903be0f1965552dd0e84e67ef1d3158c7d8
Signed-off-by: Miguel de Dios <migueldedios@google.com>
Signed-off-by: Jimmy Shiu <jimmyshiu@google.com>
Author: Miguel de Dios, 2019-07-08 15:39:40 -07:00 (committed by spakkkk)
parent 060d70c387
commit ca3ca78250

@@ -1483,8 +1483,10 @@ static int find_lowest_rq(struct task_struct *task);
 /*
  * Return whether the task on the given cpu is currently non-preemptible
  * while handling a potentially long softint, or if the task is likely
- * to block preemptions soon because it is a ksoftirq thread that is
- * handling slow softints.
+ * to block preemptions soon because (a) it is a ksoftirq thread that is
+ * handling slow softints, (b) it is idle and therefore likely to start
+ * processing the irq's immediately, (c) the cpu is currently handling
+ * hard irq's and will soon move on to the softirq handler.
  */
 bool
 task_may_not_preempt(struct task_struct *task, int cpu)
@@ -1494,15 +1496,16 @@ task_may_not_preempt(struct task_struct *task, int cpu)
         struct task_struct *cpu_ksoftirqd = per_cpu(ksoftirqd, cpu);
 
         return ((softirqs & LONG_SOFTIRQ_MASK) &&
-                (task == cpu_ksoftirqd ||
-                 task_thread_info(task)->preempt_count & SOFTIRQ_MASK));
+                (task == cpu_ksoftirqd || is_idle_task(task) ||
+                 (task_thread_info(task)->preempt_count
+                        & (HARDIRQ_MASK | SOFTIRQ_MASK))));
 }
 
 static int
 select_task_rq_rt(struct task_struct *p, int cpu, int sd_flag, int flags,
                   int sibling_count_hint)
 {
-        struct task_struct *curr;
+        struct task_struct *curr, *tgt_task;
         struct rq *rq;
         bool may_not_preempt;
@@ -1554,6 +1557,18 @@ select_task_rq_rt(struct task_struct *p, int cpu, int sd_flag, int flags,
              curr->prio <= p->prio))) {
                 int target = find_lowest_rq(p);
 
+                /*
+                 * Check once for losing a race with the other core's irq
+                 * handler. This does not happen frequently, but it can avoid
+                 * delaying the execution of the RT task in those cases.
+                 */
+                if (target != -1) {
+                        tgt_task = READ_ONCE(cpu_rq(target)->curr);
+                        if (task_may_not_preempt(tgt_task, target))
+                                target = find_lowest_rq(p);
+                }
+
                 /*
                  * If cpu is non-preemptible, prefer remote cpu
                  * even if it's running a higher-prio task.