Re: Best way to change disk position in the disk controller without affecting the cluster

Hello Anthony,

I need to do this because I can't add new SSD disks to the node: they are not detected by the disk controller. We have two disk controllers so that we can have 12 disks.

My idea is to move one drive and test it. If it doesn't work, I only lose one drive.

Ceph is installed directly on the machines and the OSDs are created as BlueStore. They are used for RBD; we use Proxmox to create KVM machines.

Regards.
________________________________
From: Anthony D'Atri <anthony.datri@xxxxxxxxx>
Sent: Wednesday, May 18, 2022 19:17
To: Jorge JP <jorgejp@xxxxxxxxxx>
Cc: ceph-users@xxxxxxx <ceph-users@xxxxxxx>
Subject: Re: Re: Best way to change disk position in the disk controller without affecting the cluster


First question:  why do you want to do this?

There are some deployment scenarios in which moving the drives will Just Work, and others in which it won't. If you try, I suggest shutting the system down all the way, exchanging just two drives, then powering back on, and checking that all is well before moving the rest.
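
A minimal sketch of that sequence, assuming the OSDs are managed by systemd on a non-containerized install (osd.12 is only a placeholder id):

    ceph osd set noout             # prevent OSDs from being marked out / data rebalancing while the host is down
    systemctl stop ceph-osd@12     # or simply shut the whole node down cleanly
    # ...swap the two drives, power the node back on...
    ceph -s                        # wait until the OSDs rejoin and PGs are active+clean again
    ceph osd unset noout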

On which Ceph release were these OSDs deployed? Containerized? Are you using ceph-disk or ceph-volume? LVM? Colocated journal/DB/WAL, or on a separate device?

Try `ls -l /var/lib/ceph/someosd` or whatever you have, and look for symlinks that reference device paths that may become stale if drives are swapped.
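
For example, on a ceph-volume/LVM BlueStore deployment (again using osd.12 only as a placeholder):

    ls -l /var/lib/ceph/osd/ceph-12/block   # the BlueStore 'block' symlink should point at the backing LV
    ceph-volume lvm list                     # shows which LV/PV/device backs each OSD

With LVM-backed OSDs the symlinks resolve through LVM metadata rather than fixed /dev paths, so a slot change is usually harmless; symlinks to raw /dev/sdX paths are the ones to worry about.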

>
> Hello,
>
> Do I have to set some global flag for this operation?
>
> Thanks!
> ________________________________
> From: Stefan Kooman <stefan@xxxxxx>
> Sent: Wednesday, May 18, 2022 14:13
> To: Jorge JP <jorgejp@xxxxxxxxxx>
> Subject: Re: Best way to change disk position in the disk controller without affecting the cluster
>
> On 5/18/22 13:06, Jorge JP wrote:
>> Hello!
>>
>> I have a Ceph cluster with 6 nodes, each with 6 HDD disks. The status of my cluster is OK and the pool is at 45.25% usage (95.55 TB of 211.14 TB). I don't have any problems.
>>
>> I want to change the position of various disks in the disk controllers of some nodes and I don't know what the best way is.
>>
>>  - Stop the OSD and move the disk to its new position (hot-plug).
>>
>>  - Reweight the OSD to 0, let the PGs move to other OSDs, then stop the OSD and change its position.
>>
>> I think the first option is OK: the data is not deleted, and once I have moved the disk the server will recognize it again and I will be able to start the OSD without problems.
>
> Order of the disks should not matter. First option is fine.
>
> Gr. Stefan
> _______________________________________________
> ceph-users mailing list -- ceph-users@xxxxxxx
> To unsubscribe send an email to ceph-users-leave@xxxxxxx

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



