Re: Nautilus pg autoscale, data lost?

On 10/1/19 12:16 PM, Raymond Berg Hansen wrote:
> Hi. I am new to Ceph but have set it up on my homelab and started using it. It seemed very good until I decided to try pg autoscale.
> After enabling autoscale on 3 of my pools, autoscale tried(?) to reduce the number of PGs and the pools are now inaccessible.
> I have tried to turn it off again, but no luck! Please help.
> 

Are you sure the data is actually unavailable? PGs can show up as
'unknown' when the Mgr isn't receiving PG statistics, even though the
underlying data is still fine.
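
If you want to rule out a reporting problem first, something along these
lines should tell you whether the objects are still readable (the pool
name below is just a placeholder, substitute one of your own pools):

  # show which Mgr is currently active
  ceph mgr dump | grep active_name

  # try to list objects directly from one of the affected pools
  rados -p <pool-name> ls | head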

Have you tried to restart the active Manager?
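
If not, failing over to a standby (or simply restarting the daemon on
the node that holds the active Mgr) is usually enough to get fresh PG
stats flowing again. A rough sketch, with placeholder names (take the
real ones from 'ceph -s'):

  # let a standby take over from the current active Mgr
  ceph mgr fail <active-mgr-name>

  # or restart the daemon on the node running the active Mgr
  systemctl restart ceph-mgr@<name>.service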

Wido
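
PS: since turning the autoscaler off didn't seem to stick, it may be
worth double-checking the per-pool setting as well, along these lines
(pool name is a placeholder):

  # show what the autoscaler thinks about each pool
  ceph osd pool autoscale-status

  # explicitly disable it per pool
  ceph osd pool set <pool-name> pg_autoscale_mode off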

> ceph status:
> https://pastebin.com/88qNivJi  (do not know why it lists 4 pools, I have 3. Maybe one of the pools I created later and deleted is in limbo?)
> 
> ceph osd pool ls detail:
> https://pastebin.com/HZLz6yHL
> 
> ceph health detail:
> https://pastebin.com/Kqd2YMtm
> 
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


