There are two types of "object": RBD-image-object and 8MiB-block-object.
When creating an RBD image, an RBD-image-object is created and 12800
8MiB-block-objects are allocated. That whole RBD-image-object is mapped
to a single PG, which is mapped to 3 OSDs (replica 3). That means all
user data on that RBD image is stored on those 3 OSDs. Is my
understanding correct?

I doubt it, because, for example, on a Ceph cluster with a bunch of 2TB
drives, the user wouldn't be able to create an RBD image bigger than
2TB. I don't believe that's true. So, what am I missing here?

Thanks!
Tony
________________________________________
From: Konstantin Shalygin <k0ste@xxxxxxxx>
Sent: August 7, 2021 11:35 AM
To: Tony Liu
Cc: ceph-users; dev@xxxxxxx
Subject: Re: rbd object mapping

Object map shows where an object with any given name will be placed in
the defined pool with your CRUSH map, and which OSDs will serve this PG.
You can type anything as the object name and see the future placement,
or the placement of an existing object - this is how the algorithm
works.

12800 means that your 100GiB image is 12800 objects of 8 MiB in pool
'vm'. All these objects are prefixed with the rbd header
(block_name_prefix seems to be the modern name for this).

Cheers,
k

On 7 Aug 2021, at 21:27, Tony Liu <tonyliu0592@xxxxxxxxxxx> wrote:

This shows one RBD image is treated as one object, and it's mapped to
one PG. "object" here means an RBD image.

# ceph osd map vm fcb09c9c-4cd9-44d8-a20b-8961c6eedf8e_disk
osdmap e18381 pool 'vm' (4) object 'fcb09c9c-4cd9-44d8-a20b-8961c6eedf8e_disk' -> pg 4.c7a78d40 (4.0) -> up ([4,17,6], p4) acting ([4,17,6], p4)

When showing the info of this image, what does "12800 objects" mean?
And what does "order 23 (8 MiB objects)" mean? What are "objects" here?
# rbd info vm/fcb09c9c-4cd9-44d8-a20b-8961c6eedf8e_disk
rbd image 'fcb09c9c-4cd9-44d8-a20b-8961c6eedf8e_disk':
        size 100 GiB in 12800 objects
        order 23 (8 MiB objects)
        snapshot_count: 0
        id: affa8fb94beb7e
        block_name_prefix: rbd_data.affa8fb94beb7e
        format: 2
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
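To make the arithmetic behind "order 23 (8 MiB objects)" and "12800 objects" concrete, here is a small sketch. It assumes the standard format-2 RBD data-object naming scheme (block_name_prefix followed by a dot and the object index as 16 hex digits); since each such object name hashes to its own PG via CRUSH, the image's data is spread across many PGs and OSDs, not pinned to one 3-OSD set.

```python
# "order 23" means each backing object is 2**23 bytes = 8 MiB.
order = 23
object_size = 2 ** order

# A 100 GiB image therefore needs 100 GiB / 8 MiB backing objects.
image_size = 100 * 2 ** 30
num_objects = image_size // object_size
print(num_objects)  # 12800

# Each data object is named <block_name_prefix>.<index as 16 hex digits>.
# Every name hashes independently to a PG, so objects land on many OSD sets.
prefix = "rbd_data.affa8fb94beb7e"
names = [f"{prefix}.{i:016x}" for i in range(num_objects)]
print(names[0])   # rbd_data.affa8fb94beb7e.0000000000000000
print(names[-1])  # rbd_data.affa8fb94beb7e.00000000000031ff
```

Feeding any of these generated names to `ceph osd map vm <name>` should show different PGs (and OSD sets) for different indices, which is what resolves the 2TB-drive concern above.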