On 21/03/16 06:40, Jan Schermer wrote:
> Compared to ceph-osd overhead, the dm-crypt overhead will be completely negligible for most scenarios.
> One exception could be sequential reads with a slow CPU (one without AES support), but I don't expect more than a few percent difference even then.
>
> Btw, a nicer solution is to use SED drives. Spindles are no problem (Hitachi makes them, for example); SSDs are trickier - DC-class Intels don't support it, for example.
>
> Jan
>
>
>> On 20 Mar 2016, at 17:53, Daniel Delin <lists@xxxxxxxxx> wrote:
>>
>> Hi,
>>
>> I'm looking into running a Ceph cluster with the OSDs encrypted with dm-crypt, both
>> spinning disks and cache-tier SSDs, and I wonder if there are any solid data on the
>> possible performance penalty this will incur, both bandwidth and latency.
>> I've done some googling, but can't find that much.
>>
>> The CPUs involved will have hardware AES-NI support, with 4 spinning disks and 2 cache-tier
>> SSDs per OSD node.

When we benchmarked this some months ago we found that it was important to choose the right algorithm, and ended up with AES in XTS mode [0]. On average that cost us around 2% in performance across various thread counts and IO sizes. The then-default (AES in CBC mode) gave us around a 15% drop, mostly because of small-IO performance.

The cluster this was tested on was three nodes, each with 8 spinners and two Intel S3700 SSDs for journals, CPUs with AES-NI support, running Giant RC1. We also had to make sure that the aesni_intel kernel module, and *not* the xts module (which seemed to override it), was loaded.

To use something other than the defaults you need to set parameters in ceph.conf to something like:

    osd_dmcrypt_type = luks
    osd_cryptsetup_parameters = --cipher aes-xts-plain64
    osd_dmcrypt_key_size = 512
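
If you want to sanity-check the raw cipher numbers on your own hardware before committing, cryptsetup ships an in-kernel benchmark that covers roughly this CBC vs XTS comparison. A rough sketch (assuming a reasonably recent cryptsetup, 1.6 or later I believe, for the benchmark subcommand):

    # Confirm the CPU advertises AES-NI and the accelerated module is loaded
    grep -m1 -o aes /proc/cpuinfo
    lsmod | grep aesni_intel

    # In-memory throughput for the common ciphers, including aes-cbc and
    # aes-xts at several key sizes
    cryptsetup benchmark

    # Or just the cipher/key size you intend to put in ceph.conf
    cryptsetup benchmark --cipher aes-xts-plain64 --key-size 512

Bear in mind these are in-memory numbers with no disk or ceph-osd in the path, so treat them as an upper bound on cipher throughput rather than what you will see end to end.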
[0] https://en.wikipedia.org/wiki/Disk_encryption_theory#XEX-based_tweaked-codebook_mode_with_ciphertext_stealing_.28XTS.29

--
David Clarke