Very low RBD and CephFS performance

Hello 

I have a 4-node Ceph cluster on Azure. Each node is an E32s_v4 VM, which has 32 vCPUs and 256 GB of memory. The network between nodes is 15 Gbit/s, measured with iperf.
The OS is CentOS 8.2. The Ceph version is Pacific, deployed with ceph-ansible.
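For reference, the bandwidth was measured with a plain iperf run between two of the nodes, something like the following (the address here is a placeholder, not the exact invocation):

# on one node
iperf -s
# on another node; -P 4 uses four parallel streams, -t 30 runs for 30 seconds
iperf -c 10.0.0.11 -P 4 -t 30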

Three nodes hold the OSDs and the fourth node acts as the RBD client.
In total there are 12 OSDs, four per node, with each disk rated at 5000 IOPS for 4K writes.
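As a back-of-the-envelope check, assuming a 3x replicated pool (the replication factor is an assumption here), the disks alone should allow far more than what follows:

# 12 disks x 5000 rated 4K write IOPS = 60000 raw IOPS
echo $(( 12 * 5000 ))
# with 3x replication each client write costs ~3 disk writes, so a rough
# upper bound before any Ceph overhead is ~20000 client write IOPS
echo $(( 12 * 5000 / 3 ))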

I have one pool with 512 PGs and one RBD image, created roughly as sketched below.
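The pool name, image name, and size here are placeholders; only the PG count (512) and the mapped device (/dev/rbd0) match the actual setup:

ceph osd pool create testpool 512 512
ceph osd pool application enable testpool rbd
rbd create testpool/testimage --size 100G
rbd map testpool/testimage    # shows up as /dev/rbd0 on the client node

Against the mapped device I am running the following fio command, and I get only 1433 IOPS: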


fio --filename=/dev/rbd0 --direct=1 --fsync=1 --rw=write --bs=4k --numjobs=16 --iodepth=8 --runtime=360 --time_based --group_reporting --name=4k-sync-write

4k-sync-write: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=8
...
fio-3.19
Starting 16 processes
Jobs: 16 (f=16): [W(16)][100.0%][w=5734KiB/s][w=1433 IOPS][eta 00m:00s]
4k-sync-write: (groupid=0, jobs=16): err= 0: pid=12427: Mon Aug  9 16:18:38 2021
  write: IOPS=1327, BW=5309KiB/s (5436kB/s)(1866MiB/360011msec); 0 zone resets
    clat (msec): min=2, max=365, avg=12.04, stdev= 7.79
     lat (msec): min=2, max=365, avg=12.04, stdev= 7.79
    clat percentiles (usec):
     |  1.00th=[ 3556],  5.00th=[ 4686], 10.00th=[ 5669], 20.00th=[ 6849],
     | 30.00th=[ 7767], 40.00th=[ 8717], 50.00th=[ 9896], 60.00th=[11338],
     | 70.00th=[13173], 80.00th=[15795], 90.00th=[20841], 95.00th=[26608],
     | 99.00th=[41157], 99.50th=[47449], 99.90th=[66323], 99.95th=[76022],
     | 99.99th=[96994]
   bw (  KiB/s): min= 1855, max=10240, per=100.00%, avg=5313.12, stdev=97.24, samples=11488
   iops        : min=  463, max= 2560, avg=1324.30, stdev=24.33, samples=11488
  lat (msec)   : 4=2.42%, 10=48.56%, 20=37.90%, 50=10.73%, 100=0.38%
  lat (msec)   : 250=0.01%, 500=0.01%
  fsync/fdatasync/sync_file_range:
    sync (nsec): min=1100, max=114600, avg=5610.37, stdev=3387.10
    sync percentiles (nsec):
     |  1.00th=[ 2192],  5.00th=[ 3312], 10.00th=[ 3408], 20.00th=[ 3408],
     | 30.00th=[ 3504], 40.00th=[ 3600], 50.00th=[ 3888], 60.00th=[ 6816],
     | 70.00th=[ 7712], 80.00th=[ 7776], 90.00th=[ 7904], 95.00th=[ 9408],
     | 99.00th=[18304], 99.50th=[23936], 99.90th=[41216], 99.95th=[45824],
     | 99.99th=[61696]
  cpu          : usr=0.30%, sys=0.53%, ctx=477856, majf=0, minf=203
  IO depths    : 1=200.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,477811,0,477795 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=8

Run status group 0 (all jobs):
  WRITE: bw=5309KiB/s (5436kB/s), 5309KiB/s-5309KiB/s (5436kB/s-5436kB/s), io=1866MiB (1957MB), run=360011-360011msec

Disk stats (read/write):
rbd0: ios=0/469238, merge=0/4868, ticks=0/5598109, in_queue=5363153, util=38.89%
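For what it's worth, the numbers are at least self-consistent: fio's psync engine is synchronous, so iodepth=8 has no effect and the real concurrency is the 16 jobs, each doing a 4K write followed by an fsync. At the ~12 ms average completion latency above:

echo "16 / 0.01204" | bc -l    # ~1329, matching the ~1327 IOPS reported

So the question is really why a single synchronous, fsync'ed 4K write takes ~12 ms on this cluster.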

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


