Re: Safe maximum number of RADOS namespaces


 



On Mon, Jan 9, 2017 at 8:00 AM, Samuel Just <sjust@xxxxxxxxxx> wrote:
> 30M namespaces is not a problem since they are pretty much just object
> name prefixes.  30M different cephx users probably would be a problem.
> -Sam

This is the interesting part of the problem. If you have a limited
number of users that collectively need to read 30M namespaces (without
granting wide permissions), then you need to *enumerate* those 30M
namespaces within their keyrings, and the OSDAuthCaps structure is
unlikely to handle that well (although we don't really know).
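To make that concern concrete, here is a rough sketch of what enumerating namespaces in a single client's OSD caps would mean. The cap grammar follows the usual `allow rw pool=P namespace=N` form; the pool and user names are made up for illustration, and the size estimate is back-of-the-envelope, not a measurement:

```python
# Sketch: a cephx OSD cap string when every namespace a client may
# touch is enumerated explicitly. Pool/namespace names are hypothetical.

def osd_caps_for(pool: str, namespaces: list) -> str:
    """Build one OSD cap string granting rw on each listed namespace."""
    return ", ".join(
        "allow rw pool={} namespace={}".format(pool, ns)
        for ns in namespaces
    )

# With a handful of namespaces this is fine:
print(osd_caps_for("userdata", ["alice", "bob"]))

# But the string (and the parsed OSDAuthCaps structure) grows linearly
# with the namespace count, so 30M entries means a cap string on the
# order of a gigabyte attached to a single keyring.
per_entry = len("allow rw pool=userdata namespace=user0000000, ")
print("~{:.0f} MB of caps for 30M namespaces".format(
    per_entry * 30_000_000 / 1e6))
```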

Otherwise, you need a different keyring for each namespace, and that
has two separate points of contention:
1) If they're all online frequently, that's a whole bunch of
independent connections to the OSDs. (I sort of assume this is not
likely.)
2) Storing each of those keyrings in the monitors' LevelDB instances. I
think the only analogue here is the limited throughput of very large
RGW bucket indices; I think the situation ought to be quite a bit
better for the monitors, but I don't have any real data about it.
-Greg
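For readers less familiar with the mechanics being discussed: as Sam says, nothing tracks namespaces; the namespace is effectively part of the object's key. A toy in-memory sketch (not the real OSD data structures) of both his point and Wido's "listing per user is slow" observation:

```python
# Toy model: a RADOS namespace is essentially part of the object's key,
# so creating a namespace costs nothing and 30M of them is fine. This is
# an illustrative in-memory sketch, not the real OSD code.

# Object store keyed by (namespace, object_name), mimicking how the
# namespace acts as a prefix on the full object key.
store = {}

def write(ns, name, data):
    store[(ns, name)] = data

def read(ns, name):
    return store[(ns, name)]

def list_namespace(ns):
    # Listing per namespace here is a filtered scan over all objects,
    # which is why per-"user" listing on a big pool is slow.
    return sorted(name for (n, name) in store if n == ns)

write("alice", "doc1", b"a")
write("alice", "doc2", b"b")
write("bob", "doc1", b"c")   # same object name, no conflict: different prefix

print(list_namespace("alice"))  # ['doc1', 'doc2']
print(read("bob", "doc1"))      # b'c'
```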

>
> On Mon, Jan 9, 2017 at 12:59 AM, Wido den Hollander <wido@xxxxxxxx> wrote:
>>
>>> Op 7 januari 2017 om 2:03 schreef Blair Bethwaite <blair.bethwaite@xxxxxxxxx>:
>>>
>>>
>>> On 7 January 2017 at 04:04, Samuel Just <sjust@xxxxxxxxxx> wrote:
>>> > Namespaces are pretty much just a prefix, nothing tracks them.  Are
>>> > you using them simply to avoid name conflicts?
>>
>> Yes, but also to be able to list objects (I know, slow!) per 'user'.
>>
>>>
>>> I assumed for some sort of data and/or security segregation using
>>> cephx? 30M pools is not exactly feasible...
>>>
>>
>> Eventually yes. We have 30M users in this system now and using namespaces seems like a clean way to separate their data inside RADOS.
>>
>> Might use different cephx users in the future if we want to.
>>
>> Wido
>>
>>> --
>>> Cheers,
>>> ~Blairo
>>> --
>>> To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
>>> the body of a message to majordomo@xxxxxxxxxxxxxxx
>>> More majordomo info at  http://vger.kernel.org/majordomo-info.html


