Re: [BUG] cgroupv2/blk: inconsistent I/O behavior in Cgroup v2 with set device wbps and wiops

Hi,

On 2024/08/13 13:00, Lance Yang wrote:
Hi Kuai,

Thanks a lot for jumping in!

On Tue, Aug 13, 2024 at 9:37 AM Yu Kuai <yukuai1@xxxxxxxxxxxxxxx> wrote:

Hi,

On 2024/08/12 23:43, Michal Koutný wrote:
+Cc Kuai

On Mon, Aug 12, 2024 at 11:00:30PM GMT, Lance Yang <ioworker0@xxxxxxxxx> wrote:
Hi all,

I've run into a problem with Cgroup v2 where it doesn't seem to correctly limit
I/O operations when I set both wbps and wiops for a device. However, if I only
set wbps, then everything works as expected.

To reproduce the problem, we can follow these command-based steps:

1. **System Information:**
     - Kernel Version and OS Release:
       ```
       $ uname -r
       6.10.0-rc5+

       $ cat /etc/os-release
       PRETTY_NAME="Ubuntu 24.04 LTS"
       NAME="Ubuntu"
       VERSION_ID="24.04"
       VERSION="24.04 LTS (Noble Numbat)"
       VERSION_CODENAME=noble
       ID=ubuntu
       ID_LIKE=debian
       HOME_URL="https://www.ubuntu.com/"
       SUPPORT_URL="https://help.ubuntu.com/"
       BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
       PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
       UBUNTU_CODENAME=noble
       LOGO=ubuntu-logo
       ```

2. **Device Information and Settings:**
     - List Block Devices and Scheduler:
       ```
       $ lsblk
       NAME    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
       sda     8:0    0   4.4T  0 disk
       └─sda1  8:1    0   4.4T  0 part /data
       ...

       $ cat /sys/block/sda/queue/scheduler
       none [mq-deadline] kyber bfq

       $ cat /sys/block/sda/queue/rotational
       1
       ```

3. **Reproducing the problem:**
     - Navigate to the cgroup v2 filesystem and configure I/O settings:
       ```
       $ cd /sys/fs/cgroup/
       $ stat -fc %T /sys/fs/cgroup
       cgroup2fs
       $ mkdir test
       $ echo "8:0 wbps=10485760 wiops=100000" > io.max
       ```
       In this setup:
       - wbps=10485760 caps write bandwidth at 10 MiB/s.
       - wiops=100000 caps write I/O operations at 100,000 per second.
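
       As a quick sanity check (a sketch, not part of the original report;
       the exact formatting can vary by kernel version), the limits can be
       read back from the cgroup:
       ```
       $ cat io.max
       8:0 rbps=max wbps=10485760 riops=max wiops=100000
       ```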

     - Add process to the cgroup and verify:
       ```
       $ echo $$ > cgroup.procs
       $ cat cgroup.procs
       3826771
       3828513
       $ ps -ef|grep 3826771
       root     3826771 3826768  0 22:04 pts/1    00:00:00 -bash
       root     3828761 3826771  0 22:06 pts/1    00:00:00 ps -ef
       root     3828762 3826771  0 22:06 pts/1    00:00:00 grep --color=auto 3826771
       ```

     - Observe I/O performance using `dd` commands and `iostat`:
       ```
       $ dd if=/dev/zero of=/data/file1 bs=512M count=1 &
       $ dd if=/dev/zero of=/data/file1 bs=512M count=1 &
       ```

You're testing buffered IO here, and I don't see that cgroup writeback is
enabled. Is this test intentional? Why not test direct IO?

Yes, I was testing buffered I/O and can confirm that CONFIG_CGROUP_WRITEBACK
was enabled.

$ cat /boot/config-6.10.0-rc5+ |grep CONFIG_CGROUP_WRITEBACK
CONFIG_CGROUP_WRITEBACK=y
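
Besides CONFIG_CGROUP_WRITEBACK, cgroup writeback also needs both the io and
memory controllers enabled for the cgroup, and a filesystem that supports it
(e.g. ext4, xfs, btrfs). A quick check along these lines (a sketch; the "test"
cgroup is the one from the steps above):

$ cat /sys/fs/cgroup/cgroup.subtree_control   # should include "io" and "memory"
$ cat /sys/fs/cgroup/test/cgroup.controllers  # same controllers visible to "test"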

We intend to configure both wbps (write bytes per second) and wiops
(write I/O operations per second) for the containers. IIUC, this setup
will effectively restrict both their block device I/Os and buffered I/Os.
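
One way to observe whether the buffered writes of the cgroup are actually
being throttled (a sketch; the file name and paths are assumed) is to watch
the per-cgroup io.stat counters while a buffered write runs:

$ dd if=/dev/zero of=/data/testfile bs=1M count=1024 conv=fsync &
$ cat /sys/fs/cgroup/test/io.stat   # wbytes/wios for 8:0 should grow at roughly the throttled rate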

Why not test direct IO?

I was testing direct IO as well. However, it did not work as expected with
`echo "8:0 wbps=10485760 wiops=100000" > io.max`.

$ time dd if=/dev/zero of=/data/file7 bs=512M count=1 oflag=direct

So, you're issuing one huge IO of 512M.
1+0 records in
1+0 records out
536870912 bytes (537 MB, 512 MiB) copied, 51.5962 s, 10.4 MB/s

And this result looks correct. Please note that blk-throtl throttles IO
before it is submitted, while iostat reports IO that has completed. A huge
IO can be throttled for a long time.

real 0m51.637s
user 0m0.000s
sys 0m0.313s
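
The elapsed time matches the configured limit; as a rough check (simple
integer division):

$ echo $((536870912 / 10485760))   # 512 MiB at 10 MiB/s -> about 51 seconds
51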

$ iostat -d 1 -h -y -p sda
      tps    kB_read/s    kB_wrtn/s    kB_dscd/s    kB_read    kB_wrtn    kB_dscd Device
     9.00         0.0k         1.3M         0.0k       0.0k       1.3M       0.0k sda
     9.00         0.0k         1.3M         0.0k       0.0k       1.3M       0.0k sda1

What I don't understand yet is why there are a few IOs during the wait. Can
you test against a raw disk to bypass the filesystem?
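
One way to run such a test (a sketch only; /dev/sdX stands for a scratch disk
whose contents may be destroyed, and "test" is the cgroup configured above):

$ echo $$ > /sys/fs/cgroup/test/cgroup.procs
$ dd if=/dev/zero of=/dev/sdX bs=512M count=1 oflag=direct   # WARNING: overwrites data on /dev/sdX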

Thanks,
Kuai




