Re: [Octopus] Beware the on-disk conversion

Correct
On 4/2/20 3:17 PM, Igor Fedotov wrote:
> And the high memory usage is present for quick-fix after conversion as
> well, isn't it?
> 
> The same tens of GBs?
> 
> 
> On 4/2/2020 4:13 PM, Jack wrote:
>> (fsck / quick-fix, same story)
>>
>> On 4/2/20 3:12 PM, Jack wrote:
>>> Hi,
>>>
>>> A simple fsck eats the same amount of memory
>>>
>>> Cluster usage: rbd with a bit of rgw
>>>
>>> Here is the ceph df detail
>>> All OSDs are single rusty devices
>>>
>>> On 4/2/20 2:19 PM, Igor Fedotov wrote:
>>>> Hi Jack,
>>>>
>>>> could you please try the following: stop one of the already converted
>>>> OSDs and do a quick-fix/fsck/repair against it using ceph-bluestore-tool:
>>>>
>>>> ceph-bluestore-tool --path <path to osd> --command quick-fix|fsck|repair
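>>>>
>>>> For example, a sketch of the full sequence for a systemd-managed OSD (the
>>>> osd id 12 and its data path below are placeholders):
>>>>
>>>> # stop the OSD so BlueStore is not in use, run the offline check, restart
>>>> systemctl stop ceph-osd@12
>>>> ceph-bluestore-tool --path /var/lib/ceph/osd/ceph-12 --command fsck
>>>> systemctl start ceph-osd@12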
>>>>
>>>> Does it cause similar memory usage?
>>>>
>>>> You can stop experimenting if quick-fix reproduces the issue.
>>>>
>>>>
>>>> Also, could you please describe your cluster and its usage a bit:
>>>> rgw/rbd/cephfs? If possible, please share 'ceph df detail' output. Do you
>>>> have a standalone DB volume on SSD/NVMe?
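>>>>
>>>> (A quick way to check for a dedicated DB device, assuming the OSD metadata
>>>> exposes the bluefs_dedicated_db / bluefs_db_rotational fields as in recent
>>>> releases; the osd id 12 is a placeholder:
>>>> ceph osd metadata 12 | grep -E 'bluefs_(dedicated_db|db_rotational)')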
>>>>
>>>> Thanks,
>>>>
>>>> Igor
>>>>
>>>>
>>>> On 4/1/2020 6:28 PM, Jack wrote:
>>>>> Hi,
>>>>>
>>>>> As the upgrade documentation says:
>>>>>> Note that the first time each OSD starts, it will do a format
>>>>>> conversion to improve the accounting for “omap” data. This may
>>>>>> take a few minutes to as much as a few hours (for an HDD with lots
>>>>>> of omap data). You can disable this automatic conversion with:
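>>>>>
>>>>> (The setting referred to here is presumably bluestore_fsck_quick_fix_on_mount,
>>>>> i.e. something like: ceph config set osd bluestore_fsck_quick_fix_on_mount false)
>>>>>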
>>>>> What the documentation does not say is that this process takes a lot of
>>>>> memory.
>>>>>
>>>>> I am upgrading a rusty cluster from Nautilus; you can check out the RAM
>>>>> consumption in the attachment.
>>>>>
>>>>> First, we have a 3TB OSD conversion: it took ~15min and 19GB of memory.
>>>>>
>>>>> Then, we have a larger 6TB OSD conversion: it took more than 2 hours and
>>>>> 35GB of memory.
>>>>>
>>>>> Finally, we have the largest 10TB OSD: only 1h15, but 52GB of memory.
>>>>>
>>>>>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



