Re: How OSD encryption affects latency/iops on NVMe, SSD and HDD

I also did some testing, but was more surprised by how much CPU time the 
kworker and dmcrypt-write(?) threads were consuming. Is there some way to 
get fio output in real time into InfluxDB or Prometheus, so you can view 
it together with the load?
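
One way this might work, sketched under some assumptions (fio >= 3.x, jq 
and curl available, an InfluxDB 1.x endpoint at localhost:8086 with a 
database named "fio" -- all of these are placeholders for your own setup): 
fio can re-emit its complete JSON status every interval with 
--status-interval, and jq can turn each status block into InfluxDB line 
protocol:

    fio --output-format=json --status-interval=1 job.fio |
    jq --unbuffered -r '
        # each interval fio prints one full JSON status block;
        # emit one line-protocol record per job (mean latency, in us)
        .jobs[] |
        "fio,job=\(.jobname) " +
        "write_iops=\(.write.iops),write_lat_us=\(.write.lat_ns.mean/1000)," +
        "read_iops=\(.read.iops),read_lat_us=\(.read.lat_ns.mean/1000)"' |
    while read -r line; do
        # InfluxDB assigns server-side timestamps when none is given
        curl -s -XPOST 'http://localhost:8086/write?db=fio' --data-binary "$line"
    done

From there, Grafana can chart the fio series next to the node's CPU 
metrics.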
 
 



-----Original Message-----
From: tri@xxxxxxxxxx [mailto:tri@xxxxxxxxxx] 
Sent: Monday, 28 September 2020 16:07
To: ceph-users@xxxxxxx
Subject:  Re: How OSD encryption affects latency/iops on 
NVMe, SSD and HDD

Some tests of dmcrypt (aes-xts-plain64, 512-bit keys) vs. no dmcrypt on a 
small SAS SSD. Latencies are reported at the 99.9th percentile.
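
For reproducibility, the fio invocations would be of roughly this shape 
(a sketch, which may differ from the exact commands used; /dev/sdX, the 
job name and the runtime are placeholders):

    # 4k random write, direct, sync, QD1; use --rw=randread for the READ
    # column, and vary --iodepth / --numjobs for the other tables
    fio --name=qd1 --filename=/dev/sdX --rw=randwrite --bs=4k \
        --ioengine=libaio --direct=1 --sync=1 --iodepth=1 --numjobs=1 \
        --runtime=60 --time_based --percentile_list=99.9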

fio 4k, direct, sync, QD1
=========================
                    WRITE                     READ
               IOPS    p99.9 LAT(us)    IOPS    p99.9 LAT(us)
  Base        17.5k         85         20.4k         79
  Encrypted    5.58k       685         10.2k        206

fio 4k, direct, sync, QD32
==========================
                    WRITE                     READ
               IOPS    p99.9 LAT(us)    IOPS    p99.9 LAT(us)
  Base        65.2k       1156         93.4k        742
  Encrypted   52.7k       2442         65.2k       1123

fio 4k, direct, sync, QD128
===========================
                    WRITE                     READ
               IOPS    p99.9 LAT(us)    IOPS    p99.9 LAT(us)
  Base        65.6k       4686         94.6k       2835
  Encrypted   55.9k      12780         74.7k       3687

fio 4k, direct, sync, QD1, jobs=8
=================================
                    WRITE                     READ
               IOPS    p99.9 LAT(us)    IOPS    p99.9 LAT(us)
  Base        51.6k       1336         53.7k        273
  Encrypted   24.8k       1205         43.7k        367


It looks like the biggest encryption penalty is at 4k, QD1, so the 
metadata devices (block.db and block.wal in BlueStore) are probably hit 
hardest. Since the IOPS of the DB/WAL device limit an OSD's overall IOPS, 
the real-world impact is best approximated by the 4k/QD1 case. I suspect 
that if one could tolerate an unencrypted DB/WAL with an encrypted data 
device, the overall penalty could be lower, perhaps around 20-30%. A 
rough device-level reproduction is sketched below.
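
A sketch of how one might reproduce the base-vs-encrypted comparison at 
the device level (device and mapping names are placeholders, and 
luksFormat destroys any data on the device):

    # baseline: run fio (as above) against the raw device
    fio --name=base --filename=/dev/sdX --rw=randwrite --bs=4k \
        --direct=1 --sync=1 --iodepth=1

    # encrypted: the same test behind a dm-crypt mapping (DESTROYS /dev/sdX!)
    cryptsetup luksFormat --cipher aes-xts-plain64 --key-size 512 /dev/sdX
    cryptsetup open /dev/sdX crypttest
    fio --name=enc --filename=/dev/mapper/crypttest --rw=randwrite --bs=4k \
        --direct=1 --sync=1 --iodepth=1
    cryptsetup close crypttest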

The penalty could be much larger with faster devices (RAM-backed or 
NVMe). Cloudflare did a similar test and found that the dm-crypt penalty 
could be as much as 7x.
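
The raw aes-xts throughput a given CPU can sustain (the figure quoted 
below) can be sanity-checked with cryptsetup's built-in microbenchmark; 
note that XTS splits the key, so 512-bit XTS uses two 256-bit AES keys:

    cryptsetup benchmark --cipher aes-xts-plain64 --key-size 512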


September 26, 2020 12:50 PM, tri@xxxxxxxxxx wrote:

> Hi all,
> 
> For those who use encryption on your OSDs, what effect do you see on 
> your NVMe, SSD and HDD vs non-encrypted OSDs? I tried to find some 
> info on this subject but there isn't much detail available.
> 
> From experience, dmcrypt is CPU-bound and becomes a bottleneck when 
> used on very fast NVMe. Using aes-xts, one can only expect around 
> 1600-2000 MB/s with 256/512 bit keys.
> 
> Best,
> 
> Tri Hoang

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


