Re: CephFS and clients [was: CephFS & Project Manila (OpenStack)]

On Wed, Oct 23, 2013 at 11:47 AM, Dimitri Maziuk <dmaziuk@xxxxxxxxxxxxx> wrote:
> On 10/23/2013 12:53 PM, Gregory Farnum wrote:
>> On Wed, Oct 23, 2013 at 7:43 AM, Dimitri Maziuk <dmaziuk@xxxxxxxxxxxxx> wrote:
>>> On 2013-10-22 22:41, Gregory Farnum wrote:
>>> ...
>>>
>>>> Right now, unsurprisingly, the focus of the existing Manila developers
>>>> is on Option 1: it's less work than the others and supports the most
>>>> common storage protocols very well. But as mentioned, it would be a
>>>> pretty poor fit for CephFS
>>>
>>>
>>> I must be missing something; I thought CephFS was supposed to be a
>>> distributed filesystem, which to me means option 1 was the point.
>>
>> It's a question of infrastructure and security.
>> 1) For a private cloud with flat networking this would probably be
>> fine, but if you're running per-tenant VLANs then you might not want
>> to plug all 1000 Ceph IPs into each tenant.
>
> What's a "tenant"? How does it differ from a "share" in the context of
> a filesystem?
>
> Why plug 1000 IPs into it? I thought you only needed an MDS or three to
> mount the filesystem. Now, exporting different filesystems via different
> MDSes on top of the same set of OSDs might be useful for spreading the
> load, too.

Ah, I see. No, each CephFS client needs to communicate with the whole
cluster: file data flows directly between the client and the OSDs, and
only the POSIX metadata operations go through the MDS.
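To make that concrete, here is a kernel-client mount sketch (the monitor
addresses and keyring path are hypothetical):

```shell
# Hypothetical monitor addresses and secret file. The mount command only
# names the monitors, but once mounted the client opens direct connections
# to the MDSes *and* to every OSD holding its file data -- so all of those
# IPs must be reachable from the tenant's network, not just an MDS or two.
mount -t ceph 10.0.0.1,10.0.0.2,10.0.0.3:/ /mnt/cephfs \
    -o name=admin,secretfile=/etc/ceph/admin.secret
```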

>
>> 2) Each running VM would need to have a good native CephFS client.
>> That means either a working FUSE install with ceph-fuse and its
>> dependencies installed, or a very new Linux kernel — quite different
>> from NFS or CIFS, which every OS in the world has a good client for.
>
> This is a problem you have to face anyway: right now CephFS is unusable
> on the RHEL family because ELRepo's kernel-ml isn't fit for stable
> deployments, and I have no idea whether their -lt kernel is, like the
> stock EL6 kernel, "too old".
>
> I doubt RHEL 7 will go any newer than 3.9, though with the number of
> patches they normally apply their version numbers don't mean much.
>
> No idea what SUSE & Ubuntu LTS do, but with the RHEL family you're
> looking at "maybe 3.9 by maybe next summer".

True. Still, the nature of the problem is different when supporting
one organization with one system versus supporting the public with
whatever install media they bring into your cloud.

>> 3) Even if we get our multi-tenancy as good as we can make it, a buggy
>> or malicious client will still be able to introduce service
>> interruptions for the tenant in question. (By disappearing without
>> notification, leaving leases to time out; by grabbing locks and
>> refusing to release them; etc.) Nobody wants their cloud to be blamed
>> for a tenant's software issues, and this would cause trouble.
>> 4) As a development shortcut, providing multitenancy via CephFS would
>> be a lot easier if we can trust the client (that is, the hypervisor
>> host) to provide some of the security we need.
>
> There's a fix for that, and it's called an EULA: "your client breaks
> our cloud, we sue the swift out of you". See e.g. http://aws.amazon.com/aup/
>
> You can't trust the client. All you can do is make sure that when,
> e.g., they kill their MDSes, other tenants' MDSes are not killed. The
> rest is essentially a non-technical problem; there's no software pill
> for those.

It is better to make such issues technically difficult or impossible
than to make them legal requirements: being able to sue the guy
running 3 VMs for his side project doesn't do much good if he's
managed to damage somebody else. We need to not *need* to trust the
clients; there are a lot of things we can do in CephFS to make the
attack surface smaller, but it is never going to be as small as
something served over the NFS protocol.
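As one example of shrinking that surface (the client name and pool here
are hypothetical), cephx capabilities can already scope what a tenant's
key is allowed to touch:

```shell
# Hypothetical tenant key: read-only monitor access, MDS access, and
# writes restricted to a single per-tenant data pool.
ceph auth get-or-create client.tenant42 \
    mon 'allow r' \
    mds 'allow' \
    osd 'allow rw pool=tenant42-data'
```

This limits the blast radius of a compromised key, but note it does not
stop a client from holding locks or letting leases time out, per the
caveats above.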
-Greg
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com