Re: cephadm to setup wal/db on nvme

This should be possible by specifying "data_devices" and "db_devices"
fields in the OSD spec file, each with a different filter. There are some
examples in the docs
https://docs.ceph.com/en/latest/cephadm/services/osd/#the-simple-case that
show roughly how that's done, and another section (
https://docs.ceph.com/en/latest/cephadm/services/osd/#filters) that goes
more in depth on the different filtering options available, so you can find
one that works for your disks. You can check the output of "ceph orch
device ls --format json | jq" to see what cephadm considers the model,
size, etc. of each device to be, for use in the filtering.
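As a sketch, an OSD spec along these lines should put data on the SSDs and
the WAL/DB on the NVMe for a layout like yours; the service_id, host
pattern, and size cutoffs below are illustrative assumptions, not tested
values:

```yaml
# osd-spec.yaml -- illustrative sketch, adjust filters to your hardware
service_type: osd
service_id: ssd-data-nvme-db     # hypothetical name
placement:
  host_pattern: '*'              # or restrict to your 3 nodes
spec:
  data_devices:
    size: '2TB:'                 # 2 TB and larger: the 2.9 TB SSDs
  db_devices:
    size: ':2TB'                 # up to 2 TB: the 1 TB NVMe
```

Applied with something like "ceph orch apply -i osd-spec.yaml"; cephadm
(via ceph-volume) should carve the DB LVs on the NVMe itself, so you
shouldn't need to pre-partition it. Running the apply with --dry-run first
will show the planned OSD layout before anything is created.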

On Wed, Aug 23, 2023 at 1:13 PM Satish Patel <satish.txt@xxxxxxxxx> wrote:

> Folks,
>
> I have 3 nodes, each with 1x NVMe (1 TB) and 3x 2.9 TB SSDs. I'm trying
> to build Ceph storage using cephadm on Ubuntu 22.04.
>
> If I want to use the NVMe for journaling (WAL/DB) for my SSD-based OSDs,
> how does cephadm handle it?
>
> I'm trying to find a document on telling cephadm to deploy the WAL/DB on
> the NVMe to speed up writes. Do I need to create the partitions myself,
> or will cephadm create one for each OSD?
>
> Help me understand how this works, and is it worth doing?
> _______________________________________________
> ceph-users mailing list -- ceph-users@xxxxxxx
> To unsubscribe send an email to ceph-users-leave@xxxxxxx
>
>