Re: Encryption per user Howto

Hi Stefan,

sorry, I forgot to mention: the block device is almost certainly LVM with dmcrypt, unless you have another way of using encryption with Ceph OSDs.

I can compare LVM with LVM+dmcrypt (default and new queue settings) and possibly also raw /dev/sd? performance. If LVM+dmcrypt shows good results, I will also try it on our test cluster.
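For the comparison I would run something along these lines (only a sketch; the device paths are placeholders for whatever the host actually has, and the job is destructive, so only against empty devices):

  # Hypothetical targets: raw disk, plain LV, dm-crypt mapping on top of an LV.
  RAW=/dev/sdX
  LV=/dev/vg_test/lv_plain
  CRYPT=/dev/mapper/lv_crypt

  # Same 4k random-write job against each block device, direct I/O,
  # so the numbers are comparable.
  for DEV in "$RAW" "$LV" "$CRYPT"; do
    fio --name="bench-$(basename "$DEV")" --filename="$DEV" \
        --rw=randwrite --bs=4k --iodepth=32 --numjobs=1 \
        --direct=1 --ioengine=libaio --runtime=60 --time_based \
        --group_reporting
  done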

Best regards,
=================
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14

________________________________________
From: Frank Schilder <frans@xxxxxx>
Sent: Wednesday, June 7, 2023 10:03 AM
To: Stefan Kooman; Anthony D'Atri; ceph-users@xxxxxxx
Subject:  Re: Encryption per user Howto

Hi Stefan,

bare metal. I just need to know which kernel version is required and how to configure the new queue parameters (I guess it's kernel boot parameters). I will do an fio test against the raw block device first; I think that is what you posted? I can probably try these settings on our test cluster, which is entirely SAS HDD based.
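If it is the dm-crypt workqueue flags you mean: my current (unverified) understanding is that they are per-mapping dm-crypt options rather than boot parameters, settable with a recent cryptsetup (>= 2.3.4 on a >= 5.9 kernel). A rough sketch of what I would try, with "osd-block-xyz" as a placeholder for the actual mapping name:

  # Check that kernel and cryptsetup are new enough for the workqueue flags.
  uname -r
  cryptsetup --version

  # Reload the active mapping with the read/write workqueues bypassed;
  # --persistent should store the flags in the LUKS2 header so they
  # survive a reboot (this will prompt for the volume passphrase/key).
  cryptsetup refresh osd-block-xyz \
      --perf-no_read_workqueue --perf-no_write_workqueue --persistent

  # Verify the flags on the live device-mapper table.
  dmsetup table osd-block-xyz | grep -o 'no_[a-z]*_workqueue'

Please correct me if that is not what you meant.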

If there is anything I need to do when deploying an OSD, the bare-metal instructions are fine for me. I actually start all daemons myself; it's containerized, but with custom startup scripts.

Thanks and best regards,
=================
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14

________________________________________
From: Stefan Kooman <stefan@xxxxxx>
Sent: Wednesday, June 7, 2023 8:01 AM
To: Frank Schilder; Anthony D'Atri; ceph-users@xxxxxxx
Subject: Re:  Re: Encryption per user Howto

On 6/6/23 15:33, Frank Schilder wrote:
> Yes, that would be interesting. I understood that it mainly helps with buffered writes, but Ceph uses direct I/O for writes, and that's where bypassing the queues helps.

Yeah, that makes sense.


> Are there detailed instructions somewhere on how to set up a host to disable the queues? I don't have the time to figure this out myself. It should be detailed enough that I just need to edit some configs, reboot, et voilà.

Do you want instructions for a package-based Ceph install, or container
based? I tested it with both deployment types. Container based
(cephadm) is a little bit more involved, but certainly doable.

>
> I have a number of new hosts to deploy and I could use one of these to run a test. They have a mix of NVMe, SSD and HDD, and I can run fio benchmarks before deploying OSDs in the way you did.

That would be great,

Gr. Stefan

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx