On Wed, Oct 23, 2013 at 1:28 PM, Dimitri Maziuk <dmaziuk@xxxxxxxxxxxxx> wrote:
> On 10/23/2013 02:46 PM, Gregory Farnum wrote:
>
>> Ah, I see. No, each CephFS client needs to communicate with the whole
>> cluster. Only the POSIX metadata changes flow through the MDS.
>
> Yeah, I thought you'd say that. Back in February I asked if I could get
> a cephfs client to read from a specific osd, localhost in my case, and
> was given to understand that the whole point of cephfs is that it won't.
>
>> It is better to make such issues technically difficult or impossible,
>> than to make them legal requirements -- being able to sue the guy
>> running 3 VMs for his side project doesn't do much good if he's
>> managed to damage somebody else.
>
> Well, you can't, can you? If every client is banging on every osd, the
> amount of damage it can potentially do is non-deterministic, with an
> upper bound of "the entire storage infrastructure". At which point suing
> anybody indeed won't help.
>
> All I need to do is subvert one "trusted" hypervisor, and then your "the
> entire storage infrastructure" is just as dead.

Actually, the OSDs are a pretty small attack vector. Buffer overflow
attacks or whatever aside, we have a rich enough capabilities system to
prevent anybody from accessing data not their own in the OSDs, and
although heavy users can increase the latency for everybody else, the op
processing is fair so they can't block access.

(If somebody manages to subvert an OpenStack hypervisor, I believe they
can do a lot worse than bang on the storage cluster!)
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
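
To make the capabilities point above concrete, here is a minimal sketch
using the python-rados bindings. The client name, pool names, and keyring
path are hypothetical, not taken from this thread, and it assumes a CephX
key created roughly with
    ceph auth get-or-create client.alice mon 'allow r' osd 'allow rw pool=alice-pool'
so that its OSD capability is scoped to a single pool.

    import rados

    # Connect as the restricted client (names and paths are hypothetical).
    cluster = rados.Rados(
        conffile='/etc/ceph/ceph.conf',
        rados_id='alice',  # i.e. client.alice
        conf=dict(keyring='/etc/ceph/ceph.client.alice.keyring'),
    )
    cluster.connect()

    # I/O in the pool the capability covers works normally.
    ioctx = cluster.open_ioctx('alice-pool')
    ioctx.write_full('greeting', b'hello from client.alice')
    print(ioctx.read('greeting'))
    ioctx.close()

    # The same client touching a pool outside its capability is refused.
    try:
        other = cluster.open_ioctx('bob-pool')
        other.read('secret')
        other.close()
    except rados.Error as e:
        print('rejected as expected:', e)

    cluster.shutdown()

The refusal comes from the cluster side rather than from the client
library, which is what makes the capability restriction meaningful even
when the client machine itself is not trusted.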