Re: size limitations?

On 2013-01-03 18:56:49, Warren H. Prince wrote:
> I've read some posts about efficiency dropping with large numbers of
> encrypted files.

A link to the specific post(s) would be helpful. I would be taking a
shot in the dark otherwise.

> Is there any way to determine when eCryptfs is no longer an
> appropriate solution?

Not really. Your workflow will differ from the next person's, so it's
impossible to give a general answer. You can compare your workflow on
eCryptfs against the same workflow on dm-crypt/LUKS and on an
unencrypted filesystem, but that can be a lot of work.
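
For reference, a minimal sketch of that kind of comparison is below.
The mount-point paths are assumptions; point them at an eCryptfs mount,
a dm-crypt/LUKS-backed mount, and a plain directory on your own system,
and treat the numbers as rough guidance rather than a proper benchmark.

#!/usr/bin/env python3
# Rough small-file workload timer. The MOUNTS paths below are placeholders --
# replace them with directories backed by eCryptfs, dm-crypt/LUKS, and no
# encryption, respectively.
import os
import time

MOUNTS = {
    "ecryptfs": "/mnt/ecryptfs-test",   # placeholder path
    "dm-crypt": "/mnt/luks-test",       # placeholder path
    "plain":    "/mnt/plain-test",      # placeholder path
}

NUM_FILES = 1000
FILE_SIZE = 64 * 1024  # 64 KiB per file

def run_workload(base):
    """Write, fsync, read back, and delete NUM_FILES small files under base."""
    payload = os.urandom(FILE_SIZE)
    start = time.monotonic()
    for i in range(NUM_FILES):
        path = os.path.join(base, "bench-%d.bin" % i)
        with open(path, "wb") as f:
            f.write(payload)
            f.flush()
            os.fsync(f.fileno())
    for i in range(NUM_FILES):
        path = os.path.join(base, "bench-%d.bin" % i)
        with open(path, "rb") as f:
            f.read()
        os.remove(path)
    return time.monotonic() - start

if __name__ == "__main__":
    for name, base in MOUNTS.items():
        if not os.path.isdir(base):
            print("%s: %s not mounted, skipping" % (name, base))
            continue
        print("%s: %.2fs for %d files" % (name, run_workload(base), NUM_FILES))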

If you're happy with the performance that you're seeing, then there's no
need to worry. If you're tired of waiting for filesystem operations to
complete, then you'll need to start looking around.

> For example, I have pretty well filled a 200 GB AWS volume with pdf
> files.  I have no idea how many are there, but I do know I have close
> to 20K subdirectories.  These files are all of a sensitive nature.
> My guess would be that this is way too big to be an eCryptfs volume
> without very major overhead.  Am I correct?

Since the files are sensitive in nature and stored in the cloud, I'll
assume that you have a requirement to keep the data encrypted. That will
always introduce a considerable overhead.

Now you'll need to determine which encryption solution best meets your
needs. You're already using eCryptfs, and migrating away will require
some effort, so factor that in.

The best-performing Linux-based solution, on average, is going to be
dm-crypt/LUKS, but it can be less flexible to deploy than eCryptfs.
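
To make the flexibility point concrete, here's a rough sketch of the two
deployment models. This isn't a recipe: the device and directory names
(/dev/xvdf, /mnt/secure, /srv/secure) are placeholders, and luksFormat
destroys whatever is already on the device.

#!/usr/bin/env python3
# Sketch only: contrasts how dm-crypt/LUKS and eCryptfs are typically deployed
# by shelling out to the standard tools. All device/path names are placeholders.
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

def deploy_luks(device="/dev/xvdf", name="secure-pdfs", mountpoint="/mnt/secure"):
    # dm-crypt/LUKS works at the block-device level: you format the whole
    # device, open it as a mapped device, then create a filesystem on top.
    # WARNING: luksFormat wipes the device.
    run(["cryptsetup", "luksFormat", device])
    run(["cryptsetup", "luksOpen", device, name])
    run(["mkfs.ext4", "/dev/mapper/%s" % name])
    run(["mount", "/dev/mapper/%s" % name, mountpoint])

def deploy_ecryptfs(directory="/srv/secure"):
    # eCryptfs is a stacked filesystem: it mounts over an existing directory,
    # so there's no reformatting and no dedicated block device required.
    run(["mount", "-t", "ecryptfs", directory, directory])

That's the trade-off in a nutshell: LUKS needs a dedicated block device
up front, while eCryptfs can be layered onto a directory you already have.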

Good luck with your decision!

Tyler
