Uniform distribution

Hi all,

I'm working on filling a cluster to near capacity for testing purposes, but I'm noticing that the data isn't being stored uniformly across the OSDs as the cluster fills. The current per-OSD fill levels are:

Node 1:
Filesystem                      1K-blocks        Used   Available Use% Mounted on
/dev/sdb1                      3904027124  2884673100  1019354024  74% /var/lib/ceph/osd/ceph-0
/dev/sdc1                      3904027124  2306909388  1597117736  60% /var/lib/ceph/osd/ceph-1
/dev/sdd1                      3904027124  3296767276   607259848  85% /var/lib/ceph/osd/ceph-2
/dev/sde1                      3904027124  3670063612   233963512  95% /var/lib/ceph/osd/ceph-3

Node 2:
Filesystem                      1K-blocks        Used   Available Use% Mounted on
/dev/sdb1                      3904027124  3250627172   653399952  84% /var/lib/ceph/osd/ceph-4
/dev/sdc1                      3904027124  3611337492   292689632  93% /var/lib/ceph/osd/ceph-5
/dev/sdd1                      3904027124  2831199600  1072827524  73% /var/lib/ceph/osd/ceph-6
/dev/sde1                      3904027124  2466292856  1437734268  64% /var/lib/ceph/osd/ceph-7
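
(For reference, the figures above are plain df output in 1K blocks, collected on each node with something like the following.)

# Run on each node; the mount points match the listing above.
df /var/lib/ceph/osd/ceph-*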

I am using "rados put" to upload 100 GB files to the cluster, two at a time from two different locations. Is this expected behavior, or can someone shed light on why the usage is so uneven? We're running the open-source release 0.80.7 with the default CRUSH configuration.
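
Roughly, the fill loop running on each of the two clients looks like the sketch below (the pool name "data", the object names, the iteration count, and the local file path are just placeholders; each input file is ~100 GB):

# Run concurrently on both clients; pool, object, and file names are placeholders.
for i in $(seq 1 20); do
    rados -p data put fill-$(hostname)-$i /data/100g-testfile
done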

Regards,
MICHAEL J. BREWER


Phone: 1-512-286-5596 | Tie-Line: 363-5596
E-mail: mjbrewer@xxxxxxxxxx


11501 Burnet Rd
Austin, TX 78758-3400
United States

