Re: How many rbds can you map?

We're actually pursuing a similar configuration where it's easily
conceivable that we would have 230+ block devices that we want to mount
on a server.

We are moving to a configuration where each user in our cluster has a
distinct Ceph block device for their storage.  We map the devices on our
NAS server and serve them to the other nodes via NFS.  (BTW, we grow
each user's storage using LVM abstractions, so a single logical volume
may have multiple RBD devices as physical volumes.)
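To make the layering concrete, here is a rough sketch of how a new RBD image might be mapped and folded into a user's volume group. The pool, image, VG/LV names, sizes, and the ext4 filesystem are illustrative placeholders, not our actual naming scheme:

```shell
# Create and map a new RBD image (pool/image names are examples)
rbd create rbd/user1-vol2 --size 102400   # 100 GiB image
rbd map rbd/user1-vol2                    # device appears as /dev/rbdN

# Turn it into an LVM physical volume and grow the user's VG and LV
pvcreate /dev/rbd1
vgextend vg_user1 /dev/rbd1
lvextend -l +100%FREE /dev/vg_user1/lv_home

# Grow the filesystem online (assuming ext4)
resize2fs /dev/vg_user1/lv_home
```

Each such step adds one more mapped RBD device on the NAS server, which is why the device count climbs quickly with per-user volumes.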

While 230 is not a big number in this scenario, it's not clear to me
whether there are other hard-coded or performance limitations that would
make reaching even 230 block devices unreasonable on a single server.

~jpr

On 10/08/2013 02:04 PM, Wido den Hollander wrote:
> On 10/08/2013 07:58 PM, Gaylord Holder wrote:
>> Always nice to see I've hit a real problem, and not just my being dumb.
>>
> 
> May I ask why you are even trying to map so many RBD devices? Do you
> need access to >230 all at the same time on each host?
> 
> Can't you map them when you need them and unmap them when they are no
> longer required?
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



