Re: Small objects in erasure-coded pool

On Tue, Nov 21, 2017 at 4:59 AM, Aleksei Gutikov
<aleksey.gutikov@xxxxxxxxxx> wrote:
>
>> What is your EC config on the pool?
> k=3 m=2, on ssds
>
>> What is the output you're using to
>> judge how much space they're taking up?
> ceph df showed max_avail ~= 3T for the data pool.
> After uploading 50M files of about 5k each, ceph was 100% full.
> So 3T/50M ~= 60k occupied per file, and that figure excludes the EC parity blocks.
>
> Probably this is because of bluestore_min_alloc_size_ssd=16k
> So the granularity of occupied space is 16k*3=48k.
> Plus some space occupied by xattrs (EC profile, etc.),
> probably 4k for each EC-coded object, i.e. 4k*3 = 12k.
>
> So with bluestore_min_alloc_size_ssd=4k
> I expect the minimal raw space occupied by a 5k file to be:
> 3+2 erasure coding: 4k*5 (shard data) + 4k*5 (xattrs) = 40k
> 3x replication: (8k + 4k)*3 = 36k
>
> I plan to test these cases, but theoretically,
> are these calculations correct?

That seems about right to me. It's possible the empty data shard(s)
will only take up metadata space in RocksDB, but the EC parity blocks
will always have to exist; those and the non-empty data blocks will
each take up at least one min-alloc block.
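
For reference, a minimal back-of-the-envelope sketch of the arithmetic
in this thread (Python; the function names and the flat 4k-per-shard
xattr overhead are illustrative assumptions, not measured BlueStore
internals):

import math

KiB = 1024

def ec_raw_usage(obj_size, k, m, min_alloc, per_shard_overhead):
    """Raw bytes one object occupies in a k+m EC pool, assuming each
    shard's data is rounded up to min_alloc and each shard carries
    roughly one extra allocation unit of metadata/xattrs."""
    shard_data = math.ceil(obj_size / k)  # data is striped across k shards
    data_bytes = math.ceil(shard_data / min_alloc) * min_alloc
    return (k + m) * (data_bytes + per_shard_overhead)

def replicated_raw_usage(obj_size, replicas, min_alloc, per_copy_overhead):
    """Raw bytes one object occupies in a replicated pool."""
    data_bytes = math.ceil(obj_size / min_alloc) * min_alloc
    return replicas * (data_bytes + per_copy_overhead)

# Predicted figures from the thread (min_alloc = 4k, 4k xattr overhead):
print(ec_raw_usage(5 * KiB, 3, 2, 4 * KiB, 4 * KiB))       # 40960 -> 40k
print(replicated_raw_usage(5 * KiB, 3, 4 * KiB, 4 * KiB))  # 36864 -> 36k

# Observed case (min_alloc = 16k): the ~60k/object figure excluded the
# m parity shards, so count only the k data shards by passing m=0:
print(ec_raw_usage(5 * KiB, 3, 0, 16 * KiB, 4 * KiB))      # 61440 -> 60k

One caveat when testing: bluestore_min_alloc_size_ssd is applied when an
OSD is created (at mkfs time), so changing it only affects newly
deployed OSDs.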