Re: Help - Multiple OSDs Down

I tried with disk-based swap on a SATA SSD.

I think that might be the last option. I have already exported all the down
PGs from the OSD that they are waiting for.

Kind Regards

Lee

On Thu, 6 Jan 2022 at 20:00, Alexander E. Patrakov <patrakov@xxxxxxxxx>
wrote:

> Fri, 7 Jan 2022 at 00:50, Alexander E. Patrakov <patrakov@xxxxxxxxx>:
>
>> Thu, 6 Jan 2022 at 12:21, Lee <lquince@xxxxxxxxx>:
>>
>>> I've tried adding swap and that fails also.
>>>
>>
>> How exactly did it fail? Did you put it on some disk, or in zram?
>>
>> In the past I had to help a customer who hit memory overuse when
>> upgrading Ceph (due to shallow_fsck), and we were able to fix it by adding
>> 64 GB of zram-based swap on each server (with 128 GB of physical RAM in
>> this type of server).
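>>
>> For reference, a minimal zram swap setup looks roughly like this (the
>> device name, size, and swap priority here are examples, not what we
>> used, so adjust them to your servers):
>>
>>     # create one zram device and give it a 64 GB uncompressed size
>>     modprobe zram num_devices=1
>>     echo 64G > /sys/block/zram0/disksize
>>     # format it as swap and enable it at high priority, so it is
>>     # used before any disk-based swap
>>     mkswap /dev/zram0
>>     swapon -p 100 /dev/zram0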
>>
>>
> On the other hand, if you have some spare disks for temporary storage and
> for new OSDs, and this failed OSD is not part of an erasure-coded pool,
> another approach might be to use ceph-objectstore-tool to export all PGs
> as files onto the temporary storage (in the hope that it doesn't suffer
> from the same memory explosion), and then import them all into a new
> temporary OSD.
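>
> A rough sketch of that workflow (the data paths, OSD IDs, and PG ID
> below are placeholders, not taken from your cluster):
>
>     # with the failed OSD stopped, list its PGs, then export each one
>     ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-12 --op list-pgs
>     ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-12 \
>         --pgid 2.7 --op export --file /mnt/temp/2.7.export
>     # with the new temporary OSD stopped, import each exported file
>     ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-99 \
>         --op import --file /mnt/temp/2.7.export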
>
> --
> Alexander E. Patrakov
>



