Re: Any experience with multiple cephfs instances in one ceph cluster? How experimental is this?

On Mon, Aug 21, 2017 at 3:03 PM, Bryan Banister
<bbanister@xxxxxxxxxxxxxxx> wrote:
> Thanks for the response John.
>
> Maybe I'm not understanding this correctly, but I thought clients could be restricted to specific file systems by limiting access to the underlying ceph pools used in each file system?
>
> client.cephfs.test1
>         key: AQDuQpdZp90MHhAAkYE6P5XYzsoswgEkZy6RLw==
>         caps: [mds] allow
>         caps: [mon] allow r
>         caps: [osd] allow rw pool cephfs01_data
>
> client.cephfs.test2
>         key: AQDuQpdZp90MHhAAkYE6P5XYzsoswgEkZy6RLw==
>         caps: [mds] allow
>         caps: [mon] allow r
>         caps: [osd] allow rw pool cephfs02_data
>
> Would these two client keys, which only have access to specific data pools, restrict their access?
>
> Or I guess that with mds allow on both, they could mount either file system, and only reading/writing the data in the file systems would be restricted?

Correct, although, critically, even if a client doesn't have caps for a
pool, it can still send *deletes* to the MDS and thereby attack the data.

In the long run, the caps should be improved so that we always grant a
client access to a specific filesystem, so that these sorts of fine
distinctions just go away.

John
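
For comparison, a rough sketch of per-filesystem caps using the "ceph fs
authorize" helper (which, as far as I know, arrived with Luminous); the
client and filesystem names below are just placeholders, and the exact
caps it generates vary by release:

    # Grant a (hypothetical) client read/write on the root of cephfs01 only
    ceph fs authorize cephfs01 client.fs1user / rw

    # And another client on cephfs02 only
    ceph fs authorize cephfs02 client.fs2user / rw

    # Inspect the mds/mon/osd caps that were generated
    ceph auth get client.fs1user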

>
> Thanks!
> -Bryan
>
> -----Original Message-----
> From: John Spray [mailto:jspray@xxxxxxxxxx]
> Sent: Monday, August 21, 2017 8:48 AM
> To: Bryan Banister <bbanister@xxxxxxxxxxxxxxx>
> Cc: ceph-users@xxxxxxxxxxxxxx
> Subject: Re:  Any experience with multiple cephfs instances in one ceph cluster? How experimental is this?
>
> On Mon, Aug 21, 2017 at 2:35 PM, Bryan Banister
> <bbanister@xxxxxxxxxxxxxxx> wrote:
>> Hi all,
>>
>>
>>
>> I’m very new to ceph and cephfs, so I’m just starting to play around with
>> the Luminous release.  There are some very concerning warnings about
>> deploying multiple cephfs instances in the same cluster:
>>
>> “There are no known bugs, but any failures which do result from having
>> multiple active filesystems in your cluster will require manual intervention
>> and, so far, will not have been experienced by anybody else – knowledgeable
>> help will be extremely limited. You also probably do not have the security
>> or isolation guarantees you want or think you have upon doing so.”
>
> The more or less literal translation of this is:
>  - the automated tests for systems with multiple filesystems are not
> very comprehensive
>  - a client that can access one filesystem can access all of them
>
> If you're adventurous enough to be running upstream Ceph packages, and
> you have at least some level of test/staging environment to try it in,
> then I'd not be too scared about trying it out.
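
For anyone who does want to try it out in a test cluster, the moving parts
look roughly like this (pool, filesystem and client names are placeholders;
check the documentation for your exact release):

    # Multiple filesystems are gated behind a cluster flag
    ceph fs flag set enable_multiple true --yes-i-really-mean-it

    # Create a second filesystem on its own metadata/data pools
    ceph osd pool create cephfs02_metadata 64
    ceph osd pool create cephfs02_data 64
    ceph fs new cephfs02 cephfs02_metadata cephfs02_data

    # Clients choose a filesystem at mount time, e.g. with ceph-fuse
    ceph-fuse --id fs2user --client_mds_namespace=cephfs02 /mnt/cephfs02
    # (the kernel client has an equivalent mds_namespace mount option)

Each filesystem also needs its own active MDS, so plan on running at least
one more MDS daemon than you would for a single filesystem.
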
>
>> And Red Hat says:
>>
>> “Creating multiple Ceph File Systems in one cluster is not fully supported
>> yet and can cause the MDS or client nodes to terminate unexpectedly.”
>
> I don't know who wrote that text, but I do not believe that there are
> any known issues involving MDS or client nodes terminating
> unexpectedly.
>
> John
>
>>
>>
>>
>> Is anybody deploying multiple cephfs instances, and have there been any
>> issues like those the warnings indicate?
>>
>>
>>
>> Thanks!
>>
>> -Bryan
>>
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



