Re: Poor ceph cluster performance

> CPU: 2 x E5-2603 @1.8GHz
> RAM: 16GB
> Network: 1G port shared for Ceph public and cluster traffic
> Journaling device: 1 x 120GB SSD (SATA3, consumer grade)
> OSD device: 2 x 2TB 7200rpm spindle (SATA3, consumer grade)

0.84 MB/s sequential write is impossibly bad - that's not normal with any kind of devices, even over a 1G network. You probably have some problem in your setup: maybe the network RTT is very high, maybe the OSD or MON nodes are shared with other running tasks and overloaded, or maybe your disks are already dying... :))
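A few quick checks like these can narrow it down (the hostname below is just a placeholder, substitute one of your OSD hosts):

# ping -c 10 osd-node1
  (network RTT from the client to an OSD host; should be well under 1 ms on a LAN)
# iostat -x 1 5
  (run on the OSD host; look at %util and await for the spindles and the journal SSD)
# ceph osd perf
  (per-OSD commit/apply latency as reported by the cluster itself)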

> As I moved on to test block devices, I got the following error message:
>
> # rbd map image01 --pool testbench --name client.admin

You don't need to map it to run benchmarks - use `fio --ioengine=rbd` instead (though you'll still need /etc/ceph/ceph.client.admin.keyring).
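Something like this should work (a rough sketch reusing your pool and image names; adjust bs/iodepth to whatever workload you want to measure):

# fio --ioengine=rbd --clientname=admin --pool=testbench --rbdname=image01 \
      --name=seqwrite --rw=write --bs=4M --iodepth=16 --direct=1

fio's rbd engine talks to the cluster through librbd directly, so the kernel rbd client isn't involved at all.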

--
With best regards,
  Vitaliy Filippov
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


