Re: crushmap shows wrong OSDs for PGs (EC-Pool)

Hi again,

On 29.06.2018 17:37, ulembke@xxxxxxxxxxxx wrote:
> ...
> 24.cc crushmap: [8,111,12,88,128,44,56]
> real life:      [8,121, X,88,130,44,56] - due to the new osd-12 and the
> wrong search list (osd-121 + osd-130) the PG is undersized!
>
> /var/lib/ceph/osd/ceph-8/current/24.ccs0_head
> /var/lib/ceph/osd/ceph-44/current/24.ccs5_head
> /var/lib/ceph/osd/ceph-56/current/24.ccs6_head
> /var/lib/ceph/osd/ceph-88/current/24.ccs3_head
> /var/lib/ceph/osd/ceph-121/current/24.ccs1_head
> /var/lib/ceph/osd/ceph-130/current/24.ccs4_head
>
> ...
Unfortunately the PG shards on the "wrong" OSDs (121, 130) are empty, so
the data is gone...
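In case someone wants to reproduce the check: the mapping CRUSH computes
right now and the per-shard state can be queried directly, and the shard
directories can be inspected on the OSD hosts (PG 24.cc and shard s1 on
osd.121 from the example above):

  # up/acting set as CRUSH computes it at the moment
  ceph pg map 24.cc

  # detailed per-shard state of the PG
  ceph pg 24.cc query

  # on the OSD host: does the shard directory hold any data?
  du -sh /var/lib/ceph/osd/ceph-121/current/24.ccs1_head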

And now it is happening to more PGs... it looks as if deep-scrub removes
important data?! We have disabled scrubbing and deep-scrubbing for now.
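(The cluster-wide flags for that are:

  ceph osd set noscrub
  ceph osd set nodeep-scrub

which can be reverted later with "ceph osd unset noscrub" and
"ceph osd unset nodeep-scrub".)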

The cluster was installed a long time ago (Cuttlefish/Dumpling) and was
updated to 0.94.7 over time, but always online.
This year (February) there was a power outage; the whole cluster went
down and had to be restarted from scratch.
After that, some rbd devices which live on the EC-Pool were corrupt,
and the VM (an archive with a huge FS on LVM (80 TB)) needed a repair
of the LVM, which ended in a damaged FS without useful data.
It looks as if, after the cluster restart, Ceph uses a different
calculation for the PGs (EC-Pool) than before?! Is that possible?
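If someone wants to check this, the mapping should be reproducible
offline from the extracted crushmap (rule id 3 below is only a
placeholder for the EC pool's crush rule; num-rep 7 matches the seven
shards shown above):

  ceph osd getcrushmap -o crushmap.bin
  crushtool -i crushmap.bin --test --rule 3 --num-rep 7 \
      --min-x 0 --max-x 1023 --show-mappings

If the output differs from the mappings before the restart, something
in the crushmap or the tunables must have changed.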


Best regards

Udo

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



