Re: Bluestore with so many small files

So you mean that RocksDB and the osdmaps filled about 40G of disk for only 800k files?
I don't think that's reasonable; it seems far too high.

On Mon, Feb 12, 2018 at 5:06 PM, David Turner <drakonstein@xxxxxxxxx> wrote:

Some of your overhead is the WAL and RocksDB on the OSDs. The WAL is fairly static in size, but RocksDB grows with the number of objects you store. Each OSD also keeps copies of the osdmap. All of that overhead adds up, and with this many objects RocksDB is going to be the biggest contributor.
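Beyond RocksDB, a likely contributor here is BlueStore's allocation unit: in Luminous the default bluestore_min_alloc_size_hdd is 64 KiB (an assumption worth verifying on your cluster with `ceph daemon osd.N config get bluestore_min_alloc_size_hdd`), so every tiny thumbnail still consumes a full 64 KiB unit. A back-of-the-envelope sketch, with the 800k-object and ~32G figures taken from this thread and an assumed ~40 KiB average object size:

```python
# Hypothetical estimate of BlueStore allocation overhead for small
# objects. 64 KiB is the Luminous default for
# bluestore_min_alloc_size_hdd (verify on your own cluster).
MIN_ALLOC = 64 * 1024  # bytes

def allocated_size(obj_size: int, min_alloc: int = MIN_ALLOC) -> int:
    """Round an object's size up to a whole number of allocation units."""
    units = max(1, -(-obj_size // min_alloc))  # ceiling division
    return units * min_alloc

# 800k thumbnails averaging ~40 KiB (~30.5 GiB of data) each still
# occupy a full 64 KiB allocation unit on every replica:
n_objects = 800_000
avg_size = 40 * 1024

actual_gib = n_objects * avg_size / 2**30
allocated_gib = n_objects * allocated_size(avg_size) / 2**30
print(f"actual ~{actual_gib:.1f} GiB, allocated ~{allocated_gib:.1f} GiB")
```

With cluster size 3 and only 3 OSDs, each OSD holds a full copy, so ~49 GiB of allocated space per OSD plus RocksDB metadata is in the same ballpark as the 70G being observed.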


On Mon, Feb 12, 2018, 8:06 AM Behnam Loghmani <behnam.loghmani@xxxxxxxxx> wrote:
Hi there,

I am using Ceph Luminous 12.2.2 with:

3 OSDs (100G each), no WAL/DB separation
3 mons
1 rgw
cluster size 3

I stored a large number of very small thumbnails on Ceph via radosgw.

The actual size of the files is about 32G, but they fill 70G on each OSD.

What is the reason for this high disk usage?
Should I change "bluestore_min_alloc_size_hdd"? If I set it to a smaller value, will it impact performance?

What is the best practice for storing small files on BlueStore?
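For reference, a sketch of how the setting in question would be changed; this is an assumption for illustration, not a recommendation, and note that bluestore_min_alloc_size is baked in when an OSD is created, so existing OSDs would need to be re-provisioned for it to take effect:

```ini
# ceph.conf sketch (illustrative value only; applies to newly
# created OSDs, existing OSDs must be redeployed to pick it up)
[osd]
bluestore_min_alloc_size_hdd = 4096
```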

Best regards,
Behnam Loghmani
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

