Re: Best way to change disk position in the disk controller without affecting the cluster

Hello,

Simply put, this disk model is not detected by the disk controller. I tested it in other nodes and the model is not detected there either. So I need to change the position of a SATA disk to free a slot for the SSD. It is not a passthrough or config problem. Thanks!

Cheers
________________________________
From: Eneko Lacunza <elacunza@xxxxxxxxx>
Sent: Thursday, May 19, 2022 16:34
To: ceph-users@xxxxxxx <ceph-users@xxxxxxx>
Subject: Re: Best way to change disk position in the disk controller without affecting the cluster

Hi Jorge,

On 19/5/22 at 9:36, Jorge JP wrote:
> Hello Anthony,
>
> I need to do this because I can't add new SSD disks to the node; they are not detected by the disk controller. We have two disk controllers so we can have 12 disks.
>
> My idea is to change one drive and test it. If it doesn't work, I only lose 1 drive.
>
> Ceph is installed directly on the machine and the OSDs are created as BlueStore. They are used for RBD. We use Proxmox to create KVM machines.

So one of the controllers does not detect SSD disks, but the other does?

This might be a config problem; you may need to mark the disk as
passthrough in the controller's BIOS/CLI utility. Some controllers can't
do this; if that's the case you may have to create a single-drive RAID0
on that SSD.
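
For example, on a Broadcom/LSI MegaRAID controller something like this
would work with storcli (just a sketch; the controller, enclosure and
slot numbers 0/252/4 are placeholders, not taken from your setup):

    # expose the drive directly, if the controller supports JBOD:
    storcli /c0/e252/s4 set jbod

    # otherwise, wrap the single SSD in a one-drive RAID0 virtual drive:
    storcli /c0 add vd type=raid0 drives=252:4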

Which controller model(s) do you have?

Cheers

>
> A greeting.
> ________________________________
> From: Anthony D'Atri <anthony.datri@xxxxxxxxx>
> Sent: Wednesday, May 18, 2022 19:17
> To: Jorge JP <jorgejp@xxxxxxxxxx>
> Cc: ceph-users@xxxxxxx <ceph-users@xxxxxxx>
> Subject: Re: Re: Best way to change disk position in the disk controller without affecting the cluster
>
>
> First question:  why do you want to do this?
>
> There are some deployment scenarios in which moving the drives will Just Work, and others in which it won't.  If you try, I suggest shutting the system down all the way, exchanging just two drives, then powering back on, and checking that all is well before moving the rest.
>
> On which Ceph release were these OSDs deployed? Containerized? Are you using ceph-disk or ceph-volume? LVM? Colocated journal/DB/WAL, or on a separate device?
>
> Try `ls -l /var/lib/ceph/someosd` or whatever you have, and look for symlinks that reference device paths that may be stale if drives are swapped.
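>
> For instance, a ceph-volume LVM OSD typically looks something like this
> (a sketch; the OSD id and names are illustrative, not from your cluster):
>
>     $ ls -l /var/lib/ceph/osd/ceph-3/block
>     lrwxrwxrwx ... block -> /dev/ceph-<vg>/osd-block-<uuid>
>
> LVM resolves those mappings by UUID, so they survive a drive reordering;
> a symlink that points at a raw /dev/sdX path will not.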
>
>> Hello,
>>
>> Do I have to set any global flag for this operation?
>>
>> Thanks!
>> ________________________________
>> From: Stefan Kooman <stefan@xxxxxx>
>> Sent: Wednesday, May 18, 2022 14:13
>> To: Jorge JP <jorgejp@xxxxxxxxxx>
>> Subject: Re: Best way to change disk position in the disk controller without affecting the cluster
>>
>> On 5/18/22 13:06, Jorge JP wrote:
>>> Hello!
>>>
>>> I have a Ceph cluster with 6 nodes, each with 6 HDD disks. The status of my cluster is OK and the pool is at 45.25% (95.55 TB of 211.14 TB). I don't have any problems.
>>>
>>> I want to change the position of various disks in the disk controllers of some nodes and I don't know which way is best:
>>>
>>>   - Stop the OSD and move the disk to its new position (hotplug).
>>>
>>>   - Reweight the OSD to 0 so the PGs move to other OSDs, then stop the OSD and change its position.
>>>
>>> I think the first option is OK: the data is not deleted, and when I have moved the disk the server will recognise it again and I will be able to start the OSD without problems.
>> Order of the disks should not matter. First option is fine.
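>>
>> A minimal sketch of option 1, assuming non-containerized, systemd-managed
>> OSDs (replace N with the id of the OSD being moved):
>>
>>     ceph osd set noout            # don't rebalance while the OSD is down
>>     systemctl stop ceph-osd@N
>>     # physically move the drive, then:
>>     systemctl start ceph-osd@N
>>     ceph osd unset noout          # once the OSD is back up and in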
>>
>> Gr. Stefan
>> _______________________________________________
>> ceph-users mailing list -- ceph-users@xxxxxxx
>> To unsubscribe send an email to ceph-users-leave@xxxxxxx
> _______________________________________________
> ceph-users mailing list -- ceph-users@xxxxxxx
> To unsubscribe send an email to ceph-users-leave@xxxxxxx

Eneko Lacunza
Zuzendari teknikoa | Director técnico
Binovo IT Human Project

Tel. +34 943 569 206 | https://www.binovo.es
Astigarragako Bidea, 2 - 2º izda. Oficina 10-11, 20180 Oiartzun

https://www.youtube.com/user/CANALBINOVO
https://www.linkedin.com/company/37269706/
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



