Re: best use of NVMe drives

Hi Magnus,
I agree with your last suggestion: putting the OSD DB on NVMe would be
a good idea. I'm assuming you are referring to the BlueStore DB rather
than the Filestore journal, since you mentioned your cluster is Nautilus.
We have a CephFS cluster set up this way and it performs well. We
don't have the metadata on NVMe at this stage, but I would expect
that to improve performance further.
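For reference, creating an OSD with its DB on the NVMe looks roughly
like this on Nautilus (a sketch only; device names and the VG/LV names
are placeholders for your layout):

```
# Carve one LV per OSD out of the NVMe for the DB, then point
# ceph-volume at it when creating the OSD on the data disk.
vgcreate ceph-db /dev/nvme0n1
lvcreate -L 30G -n db-osd0 ceph-db
ceph-volume lvm create --bluestore --data /dev/sdb --block.db ceph-db/db-osd0
```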
For sizing your DB volumes, there are messages on the mailing list
about RocksDB and the 3GB/30GB/300GB level limits, so size your
volumes accordingly to make sure you're not wasting space.
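To illustrate the sizing point, here is a small helper (my own sketch,
not a Ceph tool) that rounds a candidate DB partition size down to the
nearest of those commonly cited RocksDB level boundaries; anything
between two boundaries is effectively wasted:

```python
# Commonly cited useful BlueStore DB sizes, in GB, from the
# RocksDB level-size discussion on this list.
TIERS_GB = [3, 30, 300]

def useful_db_size_gb(candidate_gb):
    """Return the largest tier <= candidate_gb, or 0 if below the smallest."""
    useful = 0
    for tier in TIERS_GB:
        if candidate_gb >= tier:
            useful = tier
    return useful

# Example: a 1.5 TB NVMe split across 12 OSDs gives ~125 GB per OSD,
# of which only about 30 GB would actually be used by RocksDB.
print(useful_db_size_gb(1500 / 12))  # -> 30
```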
However, if you are adding these new nodes to your existing cluster
with the DB on the data disks, then you likely won't see much
improvement, as you'll be limited by the slowest OSD.

Rich

On Tue, 16 Feb 2021 at 22:27, Magnus HAGDORN <Magnus.Hagdorn@xxxxxxxx> wrote:
>
> Hi there,
> we are in the process of growing our Nautilus ceph cluster. Currently,
> we have 6 nodes: 3 nodes with 2×5.5TB and 6×11TB disks plus 8×186GB SSDs,
> and 3 nodes with 6×5.5TB and 6×7.5TB disks. All with dual-link 10GE NICs.
> The SSDs are used for the CephFS metadata pool, the hard drives are
> used for the CephFS data pool. All OSD journals are kept on the drives
> themselves. Replication level is 3 for both data and metadata pools.
>
> The new servers have 12x12TB disks and 1 1.5TB NVMe drive. We expect to
> get another 3 similar nodes in the near future.
>
> My question is what is the most sensible thing to do with the NVMe
> drives. I would like to increase the replication level of the metadata
> pool. So my idea was to split the NVMes into say 4 partitions and add
> them to the metadata pool.
>
> Given the size of the drives and the metadata pool usage (~35GB), that
> seems overkill. Would it make sense to partition the drives further and
> put the OSD journals on the NVMes?
>
> Regards
> magnus
> The University of Edinburgh is a charitable body, registered in Scotland, with registration number SC005336.
> _______________________________________________
> ceph-users mailing list -- ceph-users@xxxxxxx
> To unsubscribe send an email to ceph-users-leave@xxxxxxx



