Thanks Wildo, I have to admit it's slightly disappointing (but completely
understandable), since it basically means it's not safe for us to use
CephFS :(

Without "user quotas", it would be enough for us to have multiple CephFS
filesystems and to be able to set the size of each one. Is it part of the
core design that there can only be one filesystem in the cluster? This
seems like a 'single point of failure'.

>> I have been testing CephFS on our computational cluster of about 30
>> computers. I've got 4 machines, 4 disks, 4 OSDs, 4 MONs and 1 MDS at
>> the moment for testing. The testing has been going very well apart
>> from one problem that needs to be resolved before we can use Ceph in
>> place of our existing 'system' of NFS exports.
>>
>> Our users run simulations that are easily capable of writing out data
>> at a rate limited only by the storage device. These jobs also often
>> run for days or weeks unattended. This unfortunately means that with
>> CephFS, if a user doesn't set up their simulation carefully enough, or
>> if their code has a bug, they can fill the entire filesystem (shared
>> by around 10 other users) in about a day, leaving no room for any
>> other users and potentially crashing the entire cluster. I've read the
>> FAQ entry about quotas but I'm not sure what to make of it. Is it
>> correct that you can only have one "CephFS" per cluster? I guess I was
>> imagining creating a separate filesystem of known size for each user.
>>
>
> The talk about quotas was indeed about user quotas, but nothing about
> enforcing them. The first step is to do accounting, and maybe at a
> later stage soft and hard enforcement can be added.
>
> I don't think it's on the roadmap currently.
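
In the meantime, here's a rough sketch of the kind of per-user accounting
we could run ourselves from cron. It just reads the recursive-size virtual
xattr (ceph.dir.rbytes) on each home directory and flags anyone over a
soft limit; the mount point and the limit below are made up, and it
assumes the kernel or FUSE client actually exposes that xattr:

#!/usr/bin/env python3
# Rough per-user space accounting for CephFS, assuming the mounted client
# exposes the recursive-size virtual xattr "ceph.dir.rbytes" on directories.
# The path and limit below are hypothetical.

import os
import sys

CEPHFS_HOME = "/mnt/cephfs/home"        # hypothetical CephFS mount point
SOFT_LIMIT_BYTES = 500 * 1024**3        # hypothetical 500 GiB per user


def dir_rbytes(path):
    """Return the recursive size of a CephFS directory from its xattr."""
    # getxattr returns the value as a decimal byte string, e.g. b"123456789"
    return int(os.getxattr(path, "ceph.dir.rbytes"))


def main():
    over = []
    for entry in sorted(os.listdir(CEPHFS_HOME)):
        path = os.path.join(CEPHFS_HOME, entry)
        if not os.path.isdir(path):
            continue
        try:
            used = dir_rbytes(path)
        except OSError as err:
            print("warning: could not read xattr on %s: %s" % (path, err),
                  file=sys.stderr)
            continue
        if used > SOFT_LIMIT_BYTES:
            over.append((entry, used))

    for user, used in over:
        print("%s is over the soft limit: %.1f GiB used"
              % (user, used / 1024**3))

    # Non-zero exit so cron mails us whenever someone is over the limit.
    sys.exit(1 if over else 0)


if __name__ == "__main__":
    main()

Obviously this only tells us after the fact that someone has gone over;
without server-side enforcement the best we can do is alert and kill the
offending jobs.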