Re: Don't upgrade to 13.2.2 if you use cephfs

This is a question I have had since Zheng posted about the problem. I recently purged a brand-new cluster because I needed to change the default WAL/DB settings on all OSDs in a collocated scenario. I decided to deploy 13.2.2 directly rather than upgrade from 13.2.1. Now I wonder if I am still in trouble.

Also, shouldn't the message to users be less subtle ( ... a fix is coming ... ), as this seems to be a production-affecting issue for some?

On 10/8/2018 11:18 AM, Paul Emmerich wrote:
Does this only affect upgraded CephFS deployments? A fresh 13.2.2
deployment should work fine, if I'm interpreting this bug correctly?

Paul

On Mon., Oct 8, 2018 at 11:53 AM, Daniel Carrasco <d.carrasco@xxxxxxxxx> wrote:



On Mon., Oct 8, 2018 at 5:44, Yan, Zheng <ukernel@xxxxxxxxx> wrote:

On Mon, Oct 8, 2018 at 11:34 AM Daniel Carrasco <d.carrasco@xxxxxxxxx> wrote:

I've had several problems on 12.2.8 too. All of my standby MDS daemons use a lot of memory (while the active one uses a normal amount), and I'm receiving a lot of slow MDS messages (causing the webpage to freeze and fail until the MDS daemons are restarted)... In the end I had to copy the entire site to DRBD and serve it over NFS to solve all of the problems...


Was standby-replay enabled?


I've tried both and I've seen more or less the same behavior, though maybe a bit less when standby-replay is not enabled.

Anyway, we've deactivated CephFS there for now. I'll try older versions in a test environment.
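
(For reference, standby-replay in the 12.2.x/13.2.x releases is enabled per daemon in ceph.conf rather than per filesystem; a minimal sketch, where the daemon name "mds.b" and rank 0 are assumptions, not values from this thread:

    [mds.b]
    # follow the journal of rank 0 instead of sitting as a plain standby
    mds standby replay = true
    mds standby for rank = 0

A daemon configured this way continuously replays the active rank's journal so it can take over faster.)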


On Mon., Oct 8, 2018 at 5:21, Alex Litvak (<alexander.v.litvak@xxxxxxxxx>) wrote:

How is this not an emergency announcement? Also, I wonder if I can
downgrade at all. I am using Ceph with Docker, deployed with
ceph-ansible. I wonder if I should push a downgrade or simply wait for
the fix. I believe a fix needs to be provided.
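
(One way to pin a containerized ceph-ansible deployment to 13.2.1 would be through the image variables in group_vars and then re-running the playbook against the MDS hosts; a rough sketch, where the tag value is only a placeholder to be checked against the image repository, not a verified tag:

    # group_vars/all.yml
    ceph_docker_image: ceph/daemon
    ceph_docker_image_tag: <some 13.2.1-based mimic tag>   # placeholder, verify on the registry
)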

Thank you,

On 10/7/2018 9:30 PM, Yan, Zheng wrote:
There is a bug in the v13.2.2 MDS which causes decoding of the purge queue to
fail. If the MDS is already in the damaged state, please downgrade the MDS to
13.2.1, then run 'ceph mds repaired fs_name:damaged_rank'.

Sorry for all the trouble I caused.
Yan, Zheng
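
(For anyone in the damaged state, a rough sketch of that recovery sequence, assuming the filesystem is named "cephfs" and rank 0 is the damaged one; check your own cluster for the real names:

    # identify the damaged rank and filesystem
    ceph health detail
    ceph fs status

    # after downgrading the MDS daemons to 13.2.1 and restarting them,
    # clear the damaged flag for that rank
    ceph mds repaired cephfs:0

The rank is then no longer marked damaged and a 13.2.1 MDS can take it over.)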






--
_________________________________________

       Daniel Carrasco Marín
       Ingeniería para la Innovación i2TIC, S.L.
       Tlf:  +34 911 12 32 84 Ext: 223
       www.i2tic.com
_________________________________________



--
Paul Emmerich

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90



_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



