ceph bluestore speed

Hi,

I'm trying to set up a test lab with ceph (on proxmox).
I've got 3 nodes, but I figured I'd start with 1 to test out speeds and to
learn more about the setup of ceph. I will add the 2 extra nodes later.

One thing that was disappointing was the write speed.

In my setup I've got 14 * 300GB SAS hdds. When I do a write test
directly on one of them, I get around 130-140MB/sec write speed:
fio --filename=/dev/sdp1 -name=test -direct=1 -rw=write -bs=4M -iodepth=16
Run status group 0 (all jobs):
  WRITE: bw=132MiB/s (138MB/s), 132MiB/s-132MiB/s (138MB/s-138MB/s),
io=153GiB (164GB), run=1187061-1187061msec
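
In case it helps to reproduce, here is the same raw-disk test written as a
fio job file (just a restatement of the command above; the device path is
of course specific to my box):

[raw-disk-test]
; equivalent of: fio --filename=/dev/sdp1 -name=test -direct=1 -rw=write -bs=4M -iodepth=16
; note: fio's default ioengine is psync, so without an async engine the
; iodepth=16 may not actually keep 16 I/Os in flight
filename=/dev/sdp1
direct=1
rw=write
bs=4M
iodepth=16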

When I set up ceph osd.0 through osd.13 (hdd bluestore - DB/WAL on each
osd), create a pool that I can use, and then run a write test:
fio -ioengine=rbd -name=test -direct=1 -rw=write -bs=4M -iodepth=16
-pool=ceph -rbdname=vm-118-disk-0
I get only around 80MB/sec:
Run status group 0 (all jobs):
  WRITE: bw=82.4MiB/s (86.4MB/s), 82.4MiB/s-82.4MiB/s (86.4MB/s-86.4MB/s),
io=100GiB (107GB), run=1243230-1243230msec
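
For reference, I could also benchmark the pool from the node itself with
rados bench, to take librbd out of the picture (a sketch only, reusing the
pool name above with 4M writes and 16 concurrent ops to roughly match the
fio settings; I haven't recorded results for this yet):

rados bench -p ceph 60 write -b 4M -t 16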

Am I doing something wrong, or is the write speed supposed to drop when
clustering disks like this?

-Idar
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


