Re: Move block.db to new ssd


 



On 2024/11/12 04:54, Alwin Antreich wrote:
Hi Roland,

On Mon, Nov 11, 2024, 20:16 Roland Giesler <roland@xxxxxxxxxxxxxx> wrote:

I have ceph 17.2.6 on a Proxmox cluster and want to replace some SSDs
that are end of life.  I have some spinners that have their journals on
SSD.  Each spinner has a 50GB SSD LVM partition, and I want to move each
of those to a new corresponding partition.

I have split the new 4TB SSDs into volumes with:

# lvcreate -n NodeA-nvme-LV-RocksDB1 -L 47.69g NodeA-nvme0
# lvcreate -n NodeA-nvme-LV-RocksDB2 -L 47.69g NodeA-nvme0
# lvcreate -n NodeA-nvme-LV-RocksDB3 -L 47.69g NodeA-nvme0
# lvcreate -n NodeA-nvme-LV-RocksDB4 -L 47.69g NodeA-nvme0
# lvcreate -n NodeA-nvme-LV-data -l 100%FREE NodeA-nvme1
# lvcreate -n NodeA-nvme-LV-data -l 100%FREE NodeA-nvme0
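
To double-check the resulting layout (the VG names are the ones used above):

# lvs -o lv_name,vg_name,lv_size NodeA-nvme0 NodeA-nvme1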

I'd caution against mixing DB/WAL partitions with other applications. The
performance profile may not be suited for shared use. And depending on the
use case, ~48GB might not be big enough to prevent DB spillover. Check the
current size by querying the OSD.

I see a relatively small RocksDB and no WAL?

ceph daemon osd.4 perf dump
<snip>
    "bluefs": {
        "db_total_bytes": 45025845248,
        "db_used_bytes": 2131755008,
        "wal_total_bytes": 0,
        "wal_used_bytes": 0,
</snip>
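
A quick way to check for spillover (a sketch; the bluefs "slow" counters
report bytes that have spilled onto the main device, and spillover also
raises a BLUEFS_SPILLOVER health warning):

# ceph daemon osd.4 perf dump bluefs | grep -E 'slow_(total|used)_bytes'
# ceph health detail | grep -i spillover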

I have been led to understand that 4% is the high end, and that it is only reached on very busy systems, if ever?
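
(For scale: 4% of a hypothetical 4TB spinner would be ~160GB, so the 47.69GB LVs above are closer to ~1.2% -- though the perf dump shows only ~2GB of the 45GB DB actually in use.)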

What am I missing to get these changes to be permanent?

Likely just an issue with the order of execution. But there is an easier
way to do the move. See:
https://docs.ceph.com/en/quincy/ceph-volume/lvm/migrate/
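
For one OSD, that would look roughly like this (a sketch, assuming osd.4
and the first LV created above; the OSD's fsid comes from
`ceph-volume lvm list`, and the OSD must be stopped before migrating):

# systemctl stop ceph-osd@4
# ceph-volume lvm migrate --osd-id 4 --osd-fsid <osd-fsid> --from db --target NodeA-nvme0/NodeA-nvme-LV-RocksDB1
# systemctl start ceph-osd@4

This copies the existing block.db onto the new LV and updates the OSD's
LVM tags, so the change survives reboots.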

Ah, excellent!  I didn't find that in my searches.  Will try that now.

regards

Roland



Cheers,
Alwin

--

Alwin Antreich
Head of Training and Proxmox Services

croit GmbH, Freseniusstr. 31h, 81247 Munich
CEO: Martin Verges, Andy Muthmann - VAT-ID: DE310638492
Com. register: Amtsgericht Munich HRB 231263
Web: https://croit.io/
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



