Re: Corrupted files on CephFS since Luminous upgrade

I don't know, I didn't touch that setting. Which one is recommended?
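
For reference, here is a rough sketch of how I plan to check the current value on one of my ceph-fuse clients through the client admin socket. The socket path is an assumption based on the default /var/run/ceph location, and the exact .asok name depends on the client id and pid:

# on a ceph-fuse client host, find the client admin socket
ls /var/run/ceph/ceph-client.*.asok

# ask the running ceph-fuse process for its current value
ceph daemon /var/run/ceph/ceph-client.<id>.<pid>.asok config get fuse_disable_pagecache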


On 08/12/2017 11:49, Alexandre DERUMIER wrote:
> have you disabled the fuse pagecache in your clients' ceph.conf?
>
>
> [client]
> fuse_disable_pagecache = true
>
> ----- Original Message -----
> From: "Florent Bautista" <florent@xxxxxxxxxxx>
> To: "ceph-users" <ceph-users@xxxxxxxxxxxxxx>
> Sent: Friday, 8 December 2017 10:54:59
> Subject: Re: Corrupted files on CephFS since Luminous upgrade
>
> On 08/12/2017 10:44, Wido den Hollander wrote: 
>>
>> On 12/08/2017 10:27 AM, Florent B wrote: 
>>> Hi everyone, 
>>>
>>> A few days ago I upgraded a cluster from Kraken to Luminous. 
>>>
>>> I have a few mail servers on it, running Ceph-Fuse & Dovecot. 
>>>
>>> And since the day of the upgrade, Dovecot has been reporting corrupted files 
>>> on a large account: 
>>>
>>> doveadm(myuser@xxxxxxxxxxxx): Error: Corrupted dbox file 
>>> /mnt/maildata1/mydomain.com/myuser//mdbox/storage/m.5808 (around 
>>> offset=79178): purging found mismatched offsets (79148 vs 72948, 
>>> 13/1313) 
>>> doveadm(myuser@xxxxxxxxxxxx): Warning: fscking index file 
>>> /mnt/maildata1/mydomain.com/myuser//mdbox/storage/dovecot.map.index 
>>> doveadm(myuser@xxxxxxxxxxxx): Warning: mdbox 
>>> /mnt/maildata1/mydomain.com/myuser//mdbox/storage: rebuilding indexes 
>>> doveadm(myuser@xxxxxxxxxxxx): Warning: Transaction log file 
>>> /mnt/maildata1/mydomain.com/myuser//mdbox/storage/dovecot.map.index.log 
>>> was locked for 1249 seconds (mdbox storage rebuild) 
>>> doveadm(myuser@xxxxxxxxxxxx): Error: Purging namespace '' failed: 
>>> Corrupted dbox file 
>>> /mnt/maildata1/mydomain.com/myuser//mdbox/storage/m.5808 (around 
>>> offset=79178): purging found mismatched offsets (79148 vs 72948, 
>>> 13/1313) 
>>>
>>> Even though Dovecot repairs these files, new files get corrupted every day. 
>>>
>>> I never had this problem before! And Ceph status is reporting some "MDS 
>>> slow requests"! 
>>>
>>> Do you have any idea? 
>>>
>> Not really, but could you share a bit more information: 
>>
>> - Which version of Luminous? 
>> - Running with BlueStore or FileStore? 
>> - Replication? 
>> - Cache tiering? 
>> - Which kernel version do the clients use? 
>>
>> Wido 
>>
> Luminous 12.2.1, upgraded to 12.2.2 yesterday, and still the same 
> problem today. 
>
> FileStore only (xfs). 
>
> Replication is 3 copies for these mail files. 
>
> No Cache Tiering. 
>
> The kernel on the clients is the default Debian Jessie one (3.16.43-2+deb8u5), 
> but I'm using ceph-fuse, not the kernel client. 

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



