| author | Luca Abeni <luca.abeni@santannapisa.it> | 2020-05-20 15:42:41 +0200 |
|---|---|---|
| committer | Peter Zijlstra <peterz@infradead.org> | 2020-06-15 14:10:05 +0200 |
| commit | 60ffd5edc5e4fa69622c125c54ef8e7d5d894af8 (patch) | |
| tree | 28335309287723ee841f9713ddd520e17f078f89 /kernel/sched/sched.h | |
| parent | fc9dc698472aa460a8b3b036d9b1d0b751f12f58 (diff) | |
| download | linux-60ffd5edc5e4fa69622c125c54ef8e7d5d894af8.tar.gz | |
sched/deadline: Improve admission control for asymmetric CPU capacities
The current SCHED_DEADLINE (DL) admission control ensures that

    sum of reserved CPU bandwidth < x * M

where

    x = sched_rt_runtime_us / sched_rt_period_us (both under /proc/sys/kernel/)
    M = # of CPUs in the root domain.
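As a rough illustration of that global bound (not kernel code: the kernel compares fixed-point bandwidths; the sysctl defaults and the 4-CPU root domain below are assumed values):

```c
#include <stdio.h>

/* Simplified, floating-point sketch of the global DL admission bound
 * "sum of reserved CPU bandwidth < x * M". Values are illustrative.
 */
int main(void)
{
	double x = 950000.0 / 1000000.0;  /* sched_rt_runtime_us / sched_rt_period_us */
	int M = 4;                        /* CPUs in the root domain (assumed) */
	double reserved = 3.5;            /* sum of tasks' runtime/period ratios */

	if (reserved < x * M)
		printf("admit:  %.2f < %.2f\n", reserved, x * M);
	else
		printf("reject: %.2f >= %.2f\n", reserved, x * M);
	return 0;
}
```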
DL admission control works well for homogeneous systems, where the capacities of all CPUs are equal (1024): bounded tardiness for DL tasks and non-starvation of non-DL tasks are guaranteed.
But on heterogeneous systems, where CPU capacities differ, it can fail by over-allocating CPU time on the smaller-capacity CPUs. On an Arm big.LITTLE/DynamIQ system, DL tasks can easily starve other tasks, making the system unusable.
Fix this by explicitly considering CPU capacity in the DL admission test, replacing M with the root domain's CPU capacity sum.
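For a sense of the difference, consider a hypothetical big.LITTLE root domain with two 1024-capacity and two 512-capacity CPUs (assumed numbers, simplified floating-point arithmetic):

```c
#include <stdio.h>

/* Hypothetical comparison of the old CPU-count bound with the new
 * capacity-sum bound on an asymmetric 4-CPU system (assumed capacities).
 */
int main(void)
{
	double x = 950000.0 / 1000000.0;                 /* runtime/period ratio */
	int M = 4;                                       /* number of CPUs       */
	unsigned long cap_sum = 1024 + 1024 + 512 + 512; /* root-domain capacity */

	double old_bound = x * M;                        /* 3.80 CPUs' worth of bandwidth */
	double new_bound = x * (double)cap_sum / 1024.0; /* 2.85 CPUs' worth of bandwidth */

	printf("old bound: %.2f, capacity-aware bound: %.2f\n",
	       old_bound, new_bound);
	return 0;
}
```

The old test would admit up to 3.8 CPUs' worth of DL bandwidth even though the domain only provides the equivalent of 3 full-capacity CPUs, which is exactly the over-allocation described above.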
Signed-off-by: Luca Abeni <luca.abeni@santannapisa.it>
Signed-off-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Juri Lelli <juri.lelli@redhat.com>
Link: https://lkml.kernel.org/r/20200520134243.19352-4-dietmar.eggemann@arm.com
Diffstat (limited to 'kernel/sched/sched.h')
-rw-r--r-- | kernel/sched/sched.h | 6 |
1 file changed, 3 insertions(+), 3 deletions(-)
```diff
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 8d5d06881294..91b250f265c0 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -310,11 +310,11 @@ void __dl_add(struct dl_bw *dl_b, u64 tsk_bw, int cpus)
 	__dl_update(dl_b, -((s32)tsk_bw / cpus));
 }
 
-static inline
-bool __dl_overflow(struct dl_bw *dl_b, int cpus, u64 old_bw, u64 new_bw)
+static inline bool __dl_overflow(struct dl_bw *dl_b, unsigned long cap,
+				 u64 old_bw, u64 new_bw)
 {
 	return dl_b->bw != -1 &&
-	       dl_b->bw * cpus < dl_b->total_bw - old_bw + new_bw;
+	       cap_scale(dl_b->bw, cap) < dl_b->total_bw - old_bw + new_bw;
 }
 
 extern void init_dl_bw(struct dl_bw *dl_b);
```
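For context, cap_scale() scales a value by a capacity expressed relative to SCHED_CAPACITY_SCALE (1024), i.e. roughly (v * cap) >> SCHED_CAPACITY_SHIFT. A stand-alone sketch of the new comparison follows; the bandwidth numbers are invented for illustration (in the kernel, bandwidths are fractions scaled by 2^BW_SHIFT, and the dl_b->bw == -1 "unlimited" case is omitted here):

```c
#include <stdio.h>

/* Stand-alone sketch of the capacity-aware overflow check. The macro
 * mirrors the kernel's cap_scale(); all values below are assumptions.
 */
#define SCHED_CAPACITY_SHIFT	10
#define cap_scale(v, s)		((v) * (s) >> SCHED_CAPACITY_SHIFT)

int main(void)
{
	unsigned long long bw = 996147;              /* per-CPU limit, ~0.95 << 20     */
	unsigned long long total_bw = 3 << 20;       /* 3.0 CPUs already reserved      */
	unsigned long long new_bw = 1 << 19;         /* 0.5 CPU being admitted         */
	unsigned long cap = 1024 + 1024 + 512 + 512; /* root-domain capacity sum       */

	/* Overflow if the capacity-scaled limit is below the bandwidth that
	 * would be reserved after admission.
	 */
	int overflow = cap_scale(bw, cap) < total_bw + new_bw;

	printf("limit=%llu requested=%llu -> %s\n",
	       (unsigned long long)cap_scale(bw, cap), total_bw + new_bw,
	       overflow ? "reject" : "admit");
	return 0;
}
```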