Re: Uniform distribution

100GB objects (only ~40 of them fit on a single drive here!) are way
too large for you to get an effective random distribution.
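
To see why, here's a toy back-of-the-envelope simulation. It places
whole objects on OSDs uniformly at random, which is not what CRUSH
actually does (objects hash to PGs, and PGs map to OSDs), but the
sampling statistics are comparable; the sizes are rough guesses at the
cluster in the df output below:

    import random

    OSDS = 8        # 2 nodes x 4 OSDs, per the df output below
    OSD_GB = 3725   # ~3.9 TB drives
    OBJ_GB = 100    # one "rados put" object
    N_OBJ = 240     # fills the cluster to ~80% on average

    used = [0] * OSDS
    for _ in range(N_OBJ):
        used[random.randrange(OSDS)] += OBJ_GB  # drop object on a random OSD

    for osd, gb in enumerate(used):
        print("osd.%d  %6d GB  %3d%%" % (osd, gb, 100 * gb // OSD_GB))

Run it a few times: with only ~30 samples per OSD, a 60-95% spread like
yours is entirely plausible. The variation only settles down once each
OSD holds a great many objects, i.e. once objects are small relative to
the cluster.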
-Greg

On Thu, Jan 8, 2015 at 5:25 PM, Mark Nelson <mark.nelson@xxxxxxxxxxx> wrote:
> On 01/08/2015 03:35 PM, Michael J Brewer wrote:
>>
>> Hi all,
>>
>> I'm working on filling a cluster to near capacity for testing purposes,
>> but I'm noticing that it isn't storing the data uniformly across OSDs
>> during the filling process. I currently see the following usage levels:
>>
>> Node 1:
>> Filesystem   1K-blocks   Used        Available   Use%  Mounted on
>> /dev/sdb1    3904027124  2884673100  1019354024   74%  /var/lib/ceph/osd/ceph-0
>> /dev/sdc1    3904027124  2306909388  1597117736   60%  /var/lib/ceph/osd/ceph-1
>> /dev/sdd1    3904027124  3296767276   607259848   85%  /var/lib/ceph/osd/ceph-2
>> /dev/sde1    3904027124  3670063612   233963512   95%  /var/lib/ceph/osd/ceph-3
>>
>> Node 2:
>> Filesystem   1K-blocks   Used        Available   Use%  Mounted on
>> /dev/sdb1    3904027124  3250627172   653399952   84%  /var/lib/ceph/osd/ceph-4
>> /dev/sdc1    3904027124  3611337492   292689632   93%  /var/lib/ceph/osd/ceph-5
>> /dev/sdd1    3904027124  2831199600  1072827524   73%  /var/lib/ceph/osd/ceph-6
>> /dev/sde1    3904027124  2466292856  1437734268   64%  /var/lib/ceph/osd/ceph-7
>>
>> I am using "rados put" to upload 100GB files to the cluster, two at a
>> time from two different locations. Is this expected behavior, or can
>> someone shed light on why it is doing this? We're running the
>> open-source release 0.80.7 with the default CRUSH configuration.
>
>
> So CRUSH uses pseudo-random distributions, but unfortunately random
> distributions tend to be clumpy rather than perfectly uniform until you
> get to very high sample counts. The gist is that if you have a really
> low density of PGs per OSD and/or are unlucky, you can end up with a
> skewed distribution. If you are even more unlucky, you can compound
> that with a streak of objects landing on PGs associated with one
> specific OSD. This particular case looks rather bad. How many PGs and
> OSDs do you have?
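>
> To put rough numbers on that, here's a toy model. It assigns each PG
> to an OSD uniformly at random rather than via CRUSH, and it ignores
> replication, but it shows how the max-to-min spread shrinks roughly as
> 1/sqrt(PGs per OSD):
>
>     import random
>
>     OSDS = 8
>     for pg_num in (64, 256, 1024, 8192):
>         per_osd = [0] * OSDS
>         for _ in range(pg_num):
>             per_osd[random.randrange(OSDS)] += 1  # random OSD per PG
>         mean = pg_num / OSDS
>         spread = 100 * (max(per_osd) - min(per_osd)) / mean
>         print("pg_num=%5d  max-min spread = %3.0f%% of mean" % (pg_num, spread))
>
> At 64 PGs the spread is often comparable to the mean itself; at 8192
> it is usually under ten percent. That's why a low PG count per OSD
> makes this kind of skew much more likely.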
>
>>
>> Regards,
>> MICHAEL J. BREWER
>> Phone: 1-512-286-5596 | Tie-Line: 363-5596
>> E-mail: mjbrewer@xxxxxx.com
>>
>> 11501 Burnet Rd
>> Austin, TX 78758-3400
>> United States
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


