Re: Unknown PGs after osd move

> > Is the crush map aware of that?
>
> Yes, it correctly shows the osds at serve8 (previously server15).
>
> > I didn't ever try that, but don't you need to crush move it?
>
> I originally imagined this, too. But as soon as the osd starts on a new
> server it is automatically put into the serve8 bucket.

It does not work like this, unfortunately. If you physically move disks to a new server without "informing ceph" in advance, that is, without crush moving the OSDs to their new host bucket while they are still up, ceph loses placement information. You can repair such a situation after the fact by temporarily "crush moving" (a software move, not a hardware move) the OSDs back to their previous host buckets, waiting for peering to complete, and then "crush moving" them to their new location again. Do not restart any OSDs during this process or while rebalancing of misplaced objects is going on; there is a long-standing issue that causes placement information to be lost again, and you would have to repeat the procedure.
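As a rough sketch (the OSD IDs 10-13 below are hypothetical, substitute your own; the host names are taken from your messages), the repair could look like this:

    # software move only: put the OSDs back into their old host bucket,
    # the disks stay in the new chassis
    for id in 10 11 12 13; do ceph osd crush move osd.$id host=server15; done

    # wait until peering has finished and no PGs are unknown any more
    ceph pg stat

    # then move them to their actual new location
    for id in 10 11 12 13; do ceph osd crush move osd.$id host=serve8; done

For background: the automatic move you observed on OSD start is the "osd crush update on start" behaviour, which defaults to true; an OSD updates its own crush location when it boots. If you prefer to crush move OSDs yourself before starting them on the new host, it can be disabled in ceph.conf:

    [osd]
    osd crush update on start = false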

Best regards,
=================
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14

________________________________________
From: Nico Schottelius <nico.schottelius@xxxxxxxxxxx>
Sent: 22 September 2020 21:14:07
To: Andreas John
Cc: ceph-users@xxxxxxx
Subject:  Re: Unknown PGs after osd move

Hey Andreas,

Andreas John <aj@xxxxxxxxxxx> writes:

> Hello,
>
> On 22.09.20 20:45, Nico Schottelius wrote:
>> Hello,
>>
>> after having moved 4 ssds to another host (+ the ceph tell hanging issue
>> - see previous mail), we ran into 241 unknown pgs:
>
> You mean that you re-seated the OSDs into another chassis/host?

That is correct.

> Is the crush map aware of that?

Yes, it correctly shows the osds at serve8 (previously server15).

> I didn't ever try that, but don't you need to crush move it?

I originally imagined this, too. But as soon as the osd starts on a new
server it is automatically put into the serve8 bucket.

Cheers,

Nico


--
Modern, affordable, Swiss Virtual Machines. Visit www.datacenterlight.ch
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx