Re: Encryption/Multi-tenancy

Thanks for the suggestion, Seth.  It's unfortunately not an option in our model.  We did consider it.

On 2014 Mar 10, at 02:30, Seth Mason (setmason) wrote:

Why not have the application encrypt the data, or encrypt at the compute server's file system? That way you don't have to manage keys.
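
For illustration, a minimal sketch of what application-side encryption could look like, assuming a Python client and the third-party "cryptography" package; the key-derivation parameters and how the resulting blob gets written to the cluster are assumptions, not anything prescribed in this thread:

# Hypothetical sketch: encrypt per-customer data in the application before
# it ever reaches the storage cluster, so cluster admins never hold a key.
import base64
import os
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

def key_from_passphrase(passphrase: bytes, salt: bytes) -> bytes:
    # Derive a symmetric key from the customer's passphrase; only the
    # customer knows the passphrase, so admins cannot reconstruct the key.
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32,
                     salt=salt, iterations=480_000)
    return base64.urlsafe_b64encode(kdf.derive(passphrase))

def encrypt_for_storage(passphrase: bytes, plaintext: bytes) -> bytes:
    salt = os.urandom(16)
    token = Fernet(key_from_passphrase(passphrase, salt)).encrypt(plaintext)
    return salt + token          # the salt is not secret; keep it with the data

def decrypt_from_storage(passphrase: bytes, blob: bytes) -> bytes:
    salt, token = blob[:16], blob[16:]
    return Fernet(key_from_passphrase(passphrase, salt)).decrypt(token)

The encrypted blob can then be stored however the application normally writes to the cluster; nothing server-side ever sees the plaintext or the passphrase.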



Seth 

On Mar 9, 2014, at 6:09 PM, "Mark s2c" <mark@xxxxxxxxxxxxxxx> wrote:

Ceph is seriously badass, but my requirement is to create a cluster in which I can host my customers' data in separate areas that are independently encrypted, with passphrases that we as cloud admins cannot access.

My current thoughts are:
1. Create an OSD per machine spanning all installed disks, then create a user-sized block device per customer.  Mount this block device on an access VM, create a LUKS container inside it, then a zpool on top of that, so each customer can carve out separate bins of data as separate ZFS filesystems inside the container, which is itself a block device striped across the OSDs (see the sketch after this list).
2. Create an OSD per customer and use dm-crypt, then store the dm-crypt key somewhere rendered inaccessible to us, for example in a PGP-encrypted file protected by a passphrase that only the customer knows.
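
To make option (1) concrete, a rough provisioning sketch follows (Python shelling out to the usual tools), assuming a pool called "rbd", an access VM with the rbd, cryptsetup and ZFS userland installed, and a passphrase supplied by the customer at provisioning time; the image/zpool naming is a placeholder, and this only illustrates the layering (RBD image -> LUKS -> zpool -> per-bin ZFS filesystems), not a hardened implementation:

# Rough per-customer provisioning sketch for option (1).
import subprocess

def run(cmd, **kw):
    print("+", " ".join(cmd))
    return subprocess.run(cmd, check=True, **kw)

def provision_customer(customer: str, size_mb: int, passphrase: str):
    image = f"rbd/{customer}-vol"                  # placeholder naming scheme
    run(["rbd", "create", image, "--size", str(size_mb)])  # striped across the OSDs
    run(["rbd", "map", image])                     # exposes the image as a block device
    dev = f"/dev/rbd/rbd/{customer}-vol"           # udev symlink to the mapped device
    # LUKS container keyed only by the customer's passphrase (read from stdin),
    # so the cloud admins never handle the key material in the clear.
    run(["cryptsetup", "luksFormat", "-q", "--key-file=-", dev],
        input=passphrase.encode())
    run(["cryptsetup", "open", "--key-file=-", dev, f"{customer}-crypt"],
        input=passphrase.encode())
    # zpool on the decrypted mapping; each "bin" of data becomes a ZFS filesystem.
    run(["zpool", "create", f"{customer}pool", f"/dev/mapper/{customer}-crypt"])
    run(["zfs", "create", f"{customer}pool/bin1"])

For option (2), the key escrow could in the same spirit be a dm-crypt keyfile wrapped with gpg --symmetric under a passphrase only the customer knows, though that is again just one assumption about how the key would be "rendered inaccessible".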

My questions are:
1. What are people's comments on this problem (irrespective of my thoughts above)?
2. Which of (1) and (2) above would be the more efficient?
3. As per (1), would it be easy to stretch a created block device over more OSDs dynamically, should we increase the size of one or more of them? Also, what if we had millions of customers/block devices?

Any advice on the above would be deluxe.

M


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

