Re: Consequence of maintaining hundreds of clones of a single RBD image snapshot

Hi,

The closest thing to your scenario that I see in a customer cluster is 186 RBD children of a single image, and nobody has complained yet. The pools are all-flash with 60 SSD OSDs across 5 nodes and are used for OpenStack.

Regarding consistency during flattening: I haven't done it very often, and never with heavy load on the clones, so I can't answer that properly, but my impression is that flattening is consistent. I'd leave that question to someone with more insight.
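For reference, the relevant rbd commands look roughly like this. This is only a sketch: the pool, image, snapshot, and clone names are placeholders, and exact behavior (e.g. whether the snapshot must be protected before cloning) depends on your Ceph release, so verify against your cluster's documentation first.

```shell
# Placeholder names: mypool, base-image, @gold, clone-001.
# Create and protect a snapshot of the base image, then clone it
# (clones are copy-on-write children of the snapshot):
rbd snap create mypool/base-image@gold
rbd snap protect mypool/base-image@gold
rbd clone mypool/base-image@gold mypool/clone-001

# List the snapshot's children to see how many clones depend on it:
rbd children mypool/base-image@gold

# Flatten a clone, i.e. copy all remaining parent data into it so it
# no longer depends on the snapshot:
rbd flatten mypool/clone-001

# Show watchers and basic status for an image:
rbd status mypool/clone-001
```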

Regards,
Eugen

Quoting Eyal Barlev <perspectivus@xxxxxxxxx>:

Hello,

My use-case involves creating hundreds of clones (~1,000) of a single RBD
image snapshot.

I assume watchers exist for each clone, due to the copy-on-write nature of
clones.

Should I expect a penalty for maintaining such a large number of clones
(CPU, memory, performance)?

If such a penalty does exist, we might opt to flatten some of the clones. Is
consistency guaranteed during the flattening process? In other words, can I
write to a clone while it is being flattened?

Perspectivus
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


