Uneven data distribution on OSDs

Hello,

I ran some tests on a 3-node Ceph cluster: uploading 3 million 50 KiB objects to a single container. Speed and performance were okay, but the data is not distributed evenly. Every node has two 4 TB HDDs and one 2 TB HDD.

osd.0 41 GB (4 TB)
osd.1 47 GB (4 TB)
osd.3 16 GB (2 TB)
osd.4 40 GB (4 TB)
osd.5 49 GB (4 TB)
osd.6 17 GB (2 TB)
osd.7 48 GB (4 TB)
osd.8 42 GB (4 TB)
osd.9 18 GB (2 TB)

All of the 4 TB and 2 TB HDDs are from the same vendor and of the same type (WD RE SATA).
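
For comparison, here is a rough Python sanity check of what a strictly size-proportional (CRUSH weight) split would look like for the numbers above; the grouping of OSDs into hosts is only an assumption based on the list:

# Compare observed OSD usage against a size-proportional split per host.
# The GB figures and disk sizes are from the listing above; how the OSDs
# are grouped into hosts is an assumption.
nodes = {
    "node1": {"osd.0": (4, 41), "osd.1": (4, 47), "osd.3": (2, 16)},
    "node2": {"osd.4": (4, 40), "osd.5": (4, 49), "osd.6": (2, 17)},
    "node3": {"osd.7": (4, 48), "osd.8": (4, 42), "osd.9": (2, 18)},
}

for node, osds in nodes.items():
    total_gb = sum(gb for _, gb in osds.values())   # data currently on this host
    total_tb = sum(tb for tb, _ in osds.values())   # raw capacity of this host
    for osd, (tb, gb) in osds.items():
        expected = total_gb * tb / total_tb          # size-proportional share
        print(f"{node} {osd}: observed {gb} GB, expected ~{expected:.1f} GB")

On these numbers the 4 TB OSDs land close to a 4/10 share of each host's data, while the 2 TB OSDs come out somewhat below a strict 2/10 share.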

I monitored IOPS with Zabbix during the test; you can see the graph here: http://ctrlv.in/237368
(sda and sdb are the system HDDs.) The graph looks the same on all three nodes.

Does anyone have an idea what's wrong, or what I should look at?

I'm using ceph-0.67.3 on Ubuntu 12.04.3 x86_64.

Thank you,
Mihaly
