Re: Large rbd

Hi,

I think what's being suggested here is to create a good old LVM VG in a virtualized guest, from multiple RBDs, each accessed as a separate VirtIO SCSI device.

As each storage device in the LVM VG has its own queues at the VirtIO / QEMU / RBD interface levels, requests can be parallelized across the devices, which should give better performance.
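A rough sketch of what that could look like (the pool, image names, sizes and guest device names /dev/sdb and /dev/sdc are made up for illustration, and the two RBDs are assumed to already be attached to the guest as separate VirtIO SCSI disks):

  # on the Ceph side: several smaller images instead of one huge one
  rbd create rbd/vm-data-0 --size 10T
  rbd create rbd/vm-data-1 --size 10T

  # inside the guest, once the disks show up as /dev/sdb and /dev/sdc
  pvcreate /dev/sdb /dev/sdc
  vgcreate vg_data /dev/sdb /dev/sdc
  lvcreate -n lv_data -l 100%FREE -i 2 vg_data   # -i 2 stripes the LV across both PVs
  mkfs.xfs /dev/vg_data/lv_data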

Loris Cuoghi


On 21/01/21 15:15, huxiaoyu@xxxxxxxxxxxx wrote:
Does Ceph now support volume groups of RBDs? From which version, if any?

regards,

samuel



huxiaoyu@xxxxxxxxxxxx
From: Robert Sander
Date: 2021-01-21 10:57
To: ceph-users
Subject:  Re: Large rbd
Hi,
Am 21.01.21 um 05:42 schrieb Chris Dunlop:
Is there any particular reason for that MAX_OBJECT_MAP_OBJECT_COUNT, or
is it just "this is crazy large, if you're trying to go over this you're
doing something wrong, rethink your life..."?
IMHO the limit is there because of the way deletion of RBDs works. "rbd
rm" has to look for every object, not only the ones that were actually
created. This would make deleting a very, very large RBD take a very,
very long time.
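As a back-of-the-envelope example (assuming the default 4 MiB object size): a 1 PiB image maps onto

  1 PiB / 4 MiB = 2^50 / 2^22 = 268,435,456 potential backing objects

and a deletion that has to check, or issue a delete for, each of those objects adds up quickly, even if only a small fraction of them were ever written.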
Rather than a single large rbd, should I be looking at multiple smaller
rbds linked together using lvm or somesuch? What are the tradeoffs?
IMHO there are no tradeoffs; there could even be benefits to creating a
volume group with multiple physical volumes on RBD, as the requests can
be better parallelized (e.g. one VirtIO SCSI controller per disk in QEMU).
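For illustration, a sketch of the QEMU side with one iothread and one VirtIO SCSI controller per RBD-backed disk (pool/image names and ids are placeholders, and the usual machine, memory and network options are omitted):

  qemu-system-x86_64 \
    -object iothread,id=io0 -object iothread,id=io1 \
    -device virtio-scsi-pci,id=scsi0,iothread=io0 \
    -device virtio-scsi-pci,id=scsi1,iothread=io1 \
    -drive file=rbd:rbd/vm-data-0,format=raw,if=none,id=d0 \
    -device scsi-hd,drive=d0,bus=scsi0.0 \
    -drive file=rbd:rbd/vm-data-1,format=raw,if=none,id=d1 \
    -device scsi-hd,drive=d1,bus=scsi1.0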
Regards
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


