On 2021-12-13 09:37:36 [-0300], Wander Lairson Costa wrote:
> ---
>  block/blk-cgroup.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c
> index 663aabfeba18..0a532bb3003c 100644
> --- a/block/blk-cgroup.c
> +++ b/block/blk-cgroup.c
> @@ -1911,7 +1911,7 @@ void blk_cgroup_bio_start(struct bio *bio)
>  	struct blkg_iostat_set *bis;
>  	unsigned long flags;
>
> -	cpu = get_cpu();
> +	cpu = get_cpu_light();
>  	bis = per_cpu_ptr(bio->bi_blkg->iostat_cpu, cpu);
>  	flags = u64_stats_update_begin_irqsave(&bis->sync);
>
> @@ -1928,7 +1928,7 @@ void blk_cgroup_bio_start(struct bio *bio)
>  	u64_stats_update_end_irqrestore(&bis->sync, flags);
>  	if (cgroup_subsys_on_dfl(io_cgrp_subsys))
>  		cgroup_rstat_updated(bio->bi_blkg->blkcg->css.cgroup, cpu);
> -	put_cpu();
> +	put_cpu_light();
>  }

Are you sure the patch and the backtrace match? There is also
u64_stats_update_begin_irqsave(), which disables preemption on RT. So
with the change you are suggesting, you only avoid disabling preemption
across cgroup_rstat_updated(), which acquires a raw_spinlock_t (and a
raw_spinlock_t may be taken with preemption disabled, even on RT).

Sebastian
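P.S.: For reference, a simplified sketch of the write side in play
here, loosely based on include/linux/u64_stats_sync.h around v5.16.
The BITS_PER_LONG == 32 guard around the seqcount path and the rest of
the #ifdef layering are omitted, so treat this as an illustration
rather than the verbatim source:

  /* Sketch only: on PREEMPT_RT the writer disables preemption (rather
   * than interrupts) for the duration of the seqcount write section,
   * so a reader on the same CPU cannot spin on a preempted writer.
   */
  static inline unsigned long
  u64_stats_update_begin_irqsave(struct u64_stats_sync *syncp)
  {
  	unsigned long flags = 0;

  #ifdef CONFIG_PREEMPT_RT
  	preempt_disable();
  #else
  	local_irq_save(flags);
  #endif
  	write_seqcount_begin(&syncp->seq);
  	return flags;
  }

  static inline void
  u64_stats_update_end_irqrestore(struct u64_stats_sync *syncp,
  				  unsigned long flags)
  {
  	write_seqcount_end(&syncp->seq);
  #ifdef CONFIG_PREEMPT_RT
  	preempt_enable();
  #else
  	local_irq_restore(flags);
  #endif
  }

Where that path is compiled in, the section between the begin/end pair
in blk_cgroup_bio_start() therefore stays non-preemptible on RT
regardless of whether get_cpu() or get_cpu_light() is used around it.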