Re: Encryption/Multi-tenancy

There could be millions of tenants. Looking deeper at the docs, it looks like Ceph prefers to have one OSD per disk.  We're aiming at Backblaze-style storage pods, so we'll be looking at 45 OSDs per machine, across many machines.  I want to separate the tenants and encrypt each tenant's data independently.  We will provide the encryption, but I was originally intending to use passphrase-based encryption, and to programmatically hash the passphrase and/or encrypt the per-tenant data key with that same passphrase.  That way we could store both the tenant's data and the wrapped key without being able to read either, since only the tenant's passphrase unlocks them.
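For illustration, here's a minimal sketch of that wrap-the-key idea, assuming Python with the pyca/cryptography package; every name here is mine, not anything Ceph provides:

import base64
import os

from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

def wrap_tenant_key(passphrase: bytes):
    # A random data key does the actual encryption of the tenant's
    # data; we only ever store it in wrapped form.
    data_key = Fernet.generate_key()
    # Derive a key-encrypting key from the tenant's passphrase.
    salt = os.urandom(16)
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32,
                     salt=salt, iterations=200_000)
    kek = base64.urlsafe_b64encode(kdf.derive(passphrase))
    # We can store salt + wrapped, but neither reveals data_key
    # without the tenant's passphrase.
    wrapped = Fernet(kek).encrypt(data_key)
    return salt, wrapped

A separate salted hash of the passphrase can be kept alongside for login verification, again without revealing the passphrase itself.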

I had originally intended to use ZFS to achieve this, but on Linux it's a fiddle.  We don't want to pay for any software or support, so Solaris is out (Oracle changed the licensing plans after it bought Sun).

On 2014 Mar 11, at 00:55, Seth Mason (setmason) wrote:

> Are you expecting the tenant to provide the key?  Also, how many tenants are you expecting to have? It seems like you're looking for per-object encryption, not per-OSD.
> 
> -Seth
> 
> -----Original Message-----
> From: ceph-users-bounces@xxxxxxxxxxxxxx [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of Mark s2c
> Sent: Monday, March 10, 2014 3:08 PM
> To: Kyle Bader
> Cc: ceph-users@xxxxxxxxxxxxxx
> Subject: Re: Encryption/Multi-tenancy
> 
> Thanks Kyle.
> I've deliberately not provided the entire picture.  I'm aware of memory residency and of in-flight encryption issues.  These are less of a problem for us.
> For me, it's a question of finding a reliably encrypted, open-source, at-rest setup which involves Ceph, and preferably ZFS for flexibility.
> M
> On 2014 Mar 10, at 21:04, Kyle Bader wrote:
> 
>>> Ceph is seriously badass, but my requirements are to create a cluster in which I can host my customers' data in separate areas which are independently encrypted, with passphrases which we as cloud admins do not have access to.
>>> 
>>> My current thoughts are:
>>> 1. Create an OSD per machine stretching over all installed disks, then create a user-sized block device per customer.  Mount this block device on an access VM and create a LUKS container in it, followed by a zpool; then I can allow the users to create separate bins of data as separate ZFS filesystems in the container, which is actually a block device striped across the OSDs (a rough sketch of this follows the list).
>>> 2. Create an OSD per customer and use dm-crypt, then store the dm-crypt key in a form we cannot access, such as a PGP-encrypted file using a passphrase which only the customer knows.
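>>> 
>>> For concreteness, option 1's provisioning might look something like this (a rough sketch in Python, assuming rbd, cryptsetup and ZFS-on-Linux are installed on the access VM; the names and sizes are illustrative):
>>> 
>>> import subprocess
>>> 
>>> def run(*cmd):
>>>     subprocess.check_call(list(cmd))
>>> 
>>> def provision(customer, size_gb=100):
>>>     image = "customer-%s" % customer
>>>     # Per-customer RBD image, striped across the OSDs by Ceph.
>>>     run("rbd", "create", image, "--size", str(size_gb * 1024))
>>>     run("rbd", "map", image)
>>>     dev = "/dev/rbd/rbd/%s" % image
>>>     # LUKS container inside the image; the customer types the
>>>     # passphrase at the prompts, so we never hold it.
>>>     run("cryptsetup", "luksFormat", dev)
>>>     run("cryptsetup", "luksOpen", dev, image)
>>>     # zpool inside the opened container; each "bin" of customer
>>>     # data becomes a separate ZFS filesystem in this pool.
>>>     run("zpool", "create", "tank-%s" % customer, "/dev/mapper/" + image)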
>> 
>>> My questions are:
>>> 1. What are people's comments regarding this problem (irrespective of 
>>> my thoughts)
>> 
>> What is the threat model that leads to these requirements? A guarantee
>> that "cloud admins do not have access" is not achievable through
>> technology alone.
>> 
>>> 2. Which would be the most efficient of (1) and (2) above?
>> 
>> In the case of #1 and #2, you are only protecting data at rest. With
>> #2 you would need to decrypt the key to open the block device, and the
>> key would remain in memory until the volume is unmounted (where the
>> cloud admin could access it). This means #2 is safe only so long as
>> you never mount the volume, which makes its utility rather limited
>> (archival, perhaps). Neither of these schemes buys you much more than
>> the encryption handling provided by ceph-disk-prepare (dm-crypted OSD
>> data/journal volumes), and the key-management problem becomes more
>> acute, e.g. per tenant.
>> 
>> --
>> 
>> Kyle
> 

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



