Re: Mapped rbd is very slow

hello

Just for the record, the NVMe disks are pretty fast:

dd if=/dev/zero of=test bs=8192k count=100 oflag=direct
100+0 records in
100+0 records out
838860800 bytes (839 MB, 800 MiB) copied, 0.49474 s, 1.7 GB/s
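
That dd run with 8 MB blocks and oflag=direct measures large sequential direct-write throughput; the rbd fio test quoted below is 4k random writes at iodepth=1, which mostly measures per-operation latency. For a closer comparison on the raw device, a 4k sync-write test at queue depth 1 with fio would look something like this (the job name, size and target file are just placeholders; point --filename at a file on the NVMe):

fio --ioengine=libaio --name=synctest --bs=4k --iodepth=1 --rw=randwrite \
    --direct=1 --fsync=1 --runtime=60 --size=1G --filename=/path/on/nvme/fio-test

With --fsync=1 every 4k write is followed by an fsync, which is roughly the kind of I/O a Ceph OSD has to do for each client write.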

oau

On Friday, 16 August 2019 at 13:31 +0200, Olivier AUDRY wrote:
> hello
> 
> here the result :
> 
> fio --ioengine=rbd --name=test --bs=4k --iodepth=1 --rw=randwrite --runtime=60 -pool=kube -rbdname=bench
> test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=rbd, iodepth=1
> fio-3.12
> Starting 1 process
> Jobs: 1 (f=1): [w(1)][100.0%][w=248KiB/s][w=62 IOPS][eta 00m:00s]
> test: (groupid=0, jobs=1): err= 0: pid=1903256: Fri Aug 16 13:22:59 2019
>   write: IOPS=58, BW=232KiB/s (238kB/s)(13.6MiB/60011msec); 0 zone resets
>     slat (usec): min=9, max=351, avg=52.82, stdev=23.02
>     clat (usec): min=1264, max=96970, avg=17156.70, stdev=6713.88
>      lat (usec): min=1276, max=97050, avg=17209.52, stdev=6715.06
>     clat percentiles (usec):
>      |  1.00th=[ 2933],  5.00th=[ 3884], 10.00th=[11863], 20.00th=[13304],
>      | 30.00th=[13960], 40.00th=[14484], 50.00th=[15008], 60.00th=[20579],
>      | 70.00th=[22152], 80.00th=[23987], 90.00th=[25297], 95.00th=[25822],
>      | 99.00th=[26346], 99.50th=[27395], 99.90th=[71828], 99.95th=[82314],
>      | 99.99th=[96994]
>    bw (  KiB/s): min=  104, max=  272, per=100.00%, avg=232.17, stdev=19.55, samples=120
>    iops        : min=   26, max=   68, avg=57.97, stdev= 4.88, samples=120
>   lat (msec)   : 2=0.06%, 4=5.51%, 10=3.41%, 20=50.22%, 50=40.69%
>   lat (msec)   : 100=0.11%
>   cpu          : usr=0.44%, sys=0.27%, ctx=3489, majf=0, minf=3582
>   IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
>      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
>      complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
>      issued rwts: total=0,3485,0,0 short=0,0,0,0 dropped=0,0,0,0
>      latency   : target=0, window=0, percentile=100.00%, depth=1
> 
> Run status group 0 (all jobs):
>   WRITE: bw=232KiB/s (238kB/s), 232KiB/s-232KiB/s (238kB/s-238kB/s), io=13.6MiB (14.3MB), run=60011-60011msec
> 
> Disk stats (read/write):
>     md2: ios=3/4611, merge=0/0, ticks=0/0, in_queue=0, util=0.00%, aggrios=3/8532, aggrmerge=0/2364, aggrticks=0/1346, aggrin_queue=52504, aggrutil=88.03%
>   nvme1n1: ios=3/8529, merge=0/2295, ticks=1/1347, in_queue=52932, util=88.03%
>   nvme0n1: ios=3/8535, merge=1/2434, ticks=0/1346, in_queue=52076, util=86.52%
> 
> For your information, my disk setup is:
> 
> 2x 500 GB NVMe disks with:
> 
> a 10 GB RAID1 partition for the OS
> a 160 GB RAID0 partition for local Docker data
> a 387 GB partition on each disk for the OSD
> 
> I have 5 physical servers, each with a 12-core CPU and 32 GB of RAM.
> Ceph runs on a dedicated 1 Gbps network.
> 
> I had roughly the same hardware setup with Ceph 10.2.11 and got much
> better performance.
> 
> 
> oau
> 
> lsblk 
> NAME                  MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
> rbd0                  252:0    0     1G  0 disk  /var/lib/kubelet/pods/db24eb3b-650c-42f3-bdf0-92ea0eaf37d8/volumes/kubernetes.io~csi/pvc-e70f3d74-c7bd-4652-983b-de3874d36117/mou
> rbd1                  252:16   0     1G  0 disk  /var/lib/kubelet/pods/a1b0b156-0bcc-4d0c-b2f8-26a74337baed/volumes/kubernetes.io~csi/pvc-14460f8a-da5a-44e3-a033-a3bf5054f967/mou
> rbd2                  252:32   0     8G  0 disk  /var/lib/kubelet/pods/0a5e3745-c6c2-49a9-971a-3ddac59af66c/volumes/kubernetes.io~csi/pvc-b8907f44-58d6-4599-b189-fafe65daed09/mou
> rbd3                  252:48   0    50G  0 disk  /var/lib/kubelet/pods/84df04af-a7dd-4035-a9a2-22d3d315fa60/volumes/kubernetes.io~csi/pvc-faf5cefa-ecce-450a-9f5c-42e7ca7d7fc2/mou
> rbd4                  252:64   0     1G  0 disk  /var/lib/kubelet/pods/872a3d7d-63d5-4567-86f6-bedab3fe0ad3/volumes/kubernetes.io~csi/pvc-83ea4013-b936-469b-b20f-7e703db2c871/mou
> nvme0n1               259:0    0   477G  0 disk  
> ├─nvme0n1p1           259:1    0   511M  0 part  /boot/efi
> ├─nvme0n1p2           259:2    0   9.8G  0 part  
> │ └─md2                 9:2    0   9.8G  0 raid1 /
> ├─nvme0n1p3           259:3    0  79.5G  0 part  
> │ └─md3                 9:3    0 158.9G  0 raid0 
> │   └─datavg-dockerlv 253:0    0    30G  0 lvm   /var/lib/docker
> └─nvme0n1p4           259:8    0 387.2G  0 part  
>   └─ceph--52ce0eb9--9e69--4f29--8b87--9ab3fbb5df3e-osd--block--c26bdb06--2325--4fcd--9b8c--77a93ab46de0
>                       253:1    0   387G  0 lvm   
> nvme1n1               259:4    0   477G  0 disk  
> ├─nvme1n1p1           259:5    0   511M  0 part  
> ├─nvme1n1p2           259:6    0   9.8G  0 part  
> │ └─md2                 9:2    0   9.8G  0 raid1 /
> ├─nvme1n1p3           259:7    0  79.5G  0 part  
> │ └─md3                 9:3    0 158.9G  0 raid0 
> │   └─datavg-dockerlv 253:0    0    30G  0 lvm   /var/lib/docker
> └─nvme1n1p4           259:9    0 387.2G  0 part  
>   └─ceph--4ef4eb77--4b23--4ffd--b7d2--0fd2cfa5e568-osd--block--28435fba--91c7--4424--99f2--ea86709a87ca
>                       253:2    0   387G  0 lvm   
> 
> On Friday, 16 August 2019 at 01:16 +0300, Vitaliy Filippov wrote:
> > > rbd -p kube bench kube/bench --io-type write --io-threads 1 --io-total 10G --io-pattern rand
> > > elapsed:    14  ops:   262144  ops/sec: 17818.16  bytes/sec: 72983201.32
> > 
> > That's a totally unrealistic number. Something is wrong with the test.
> > 
> > Test it with `fio` please:
> > 
> > fio -ioengine=rbd -name=test -bs=4k -iodepth=1 -rw=randwrite -runtime=60 -pool=kube -rbdname=bench
> > 
> > > Reads are very very slow:
> > > elapsed:   445  ops:    81216  ops/sec:   182.37  bytes/sec: 747006.15
> > > elapsed:    14  ops:    14153  ops/sec:   957.57  bytes/sec: 3922192.15
> > 
> > This is closer to reality.
> > 
> 
> _______________________________________________
> ceph-users mailing list -- ceph-users@xxxxxxx
> To unsubscribe send an email to ceph-users-leave@xxxxxxx
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



