Re: in the same ceph cluster, why the object in the same osd some are 8M and some are 4M?

On 02/01/18 02:36, linghucongsong wrote:
> Hi, all!
> 
> I use Ceph RBD for OpenStack.
> 
> My Ceph version is 10.2.7.
> 
> I noticed something surprising: among the objects stored on the same OSD, those in some PGs are 8M and those in other PGs are 4M. Can someone tell me why? Thanks!
> 
> root@node04:/var/lib/ceph/osd/ceph-3/current/1.6e_head/DIR_E/DIR_6# ll -h
> -rw-r--r-- 1 ceph ceph 8.0M Dec 14 14:36 rbd\udata.0f5c1a238e1f29.000000000000012a__head_6967626E__1
> 
> root@node04:/var/lib/ceph/osd/ceph-3/current/3.13_head/DIR_3/DIR_1/DIR_3/DIR_6# ll -h
> -rw-r--r--  1 ceph ceph 4.0M Oct 24 17:39 rbd\udata.106f835ba64e8d.00000000000004dc__head_5B646313__3
By default, RBD images are striped across 4M objects, but that is a configurable value - you can make it larger or smaller if you like. I note that the PGs you are looking at belong to different pools (1.xx vs 3.xx), so I'm guessing you have multiple storage pools configured in your OpenStack cluster. Is it possible that, for the pool with the larger objects, the rbd_store_chunk_size parameter is being overridden?
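
A quick way to confirm is to look at the image headers with "rbd info" and then compare the chunk-size settings on the OpenStack side. The commands below are only a sketch: the pool names "images" and "volumes" and the defaults I quote for Glance and Cinder are assumptions from memory, not taken from your cluster.

# Match the hex ID in the on-disk object name (e.g. 0f5c1a238e1f29) against
# the block_name_prefix reported by "rbd info" to find the owning image.
rbd -p images ls
rbd -p images info <image-name>     # check the "order" line, e.g. "order 23 (8192 kB objects)"
rbd -p volumes info <volume-name>   # "order 22 (4096 kB objects)" is the rbd default

# On the OpenStack side, see whether rbd_store_chunk_size is set (value in MB).
# If I remember correctly, Glance defaults to 8 and Cinder to 4, which would
# explain seeing 8M objects in one pool and 4M objects in the other.
grep -r rbd_store_chunk_size /etc/glance/ /etc/cinder/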

Rich


