Re: Help needed please! Filesystem became read-only!

Hi, 

Breaking into this thread for a moment, but I have some questions:
* How does it happen that the filesystem ends up in read-only mode?
* Is this avoidable?
* How can the issue be fixed? I didn't see a workaround in the mentioned tracker (or I missed it); see the untested sketch below.
* With this bug around, should you use CephFS with Reef?
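
Regarding the fix question: would failing the active MDS so a standby takes
over at least clear the read-only flag? Purely an untested sketch on my side,
and it would not address the underlying bug from the tracker:

# ceph mds fail <fsname>:0    (hand rank 0 over to a standby MDS)
# ceph status                 (confirm a standby took over and the warning clears)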

Kind regards, 
Sake 
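
PS: the log below reports errno -2 (ENOENT) while storing a backtrace, so the
MDS apparently could not write that object. Purely guessing at the usual
<inode-hex>.00000000 object naming (untested), would checking the object
directly tell us more?

# rados -p cephfs_metadata stat 1000000039c.00000000
# rados -p cephfs_data getxattr 1000000039c.00000000 parent    (the backtrace xattr)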

> On 04-06-2024 04:04 CEST, Xiubo Li <xiubli@xxxxxxxxxx> wrote:
> 
>  
> Hi Nicolas,
> 
> This is a known issue and Venky is working on it, please see 
> https://tracker.ceph.com/issues/63259.
> 
> Thanks
> - Xiubo
> 
> On 6/3/24 20:04, nbarbier@xxxxxxxxxxxxxxx wrote:
> > Hello,
> >
> > First of all, thanks for reading my message. I set up a Ceph 18.2.2 cluster with 4 nodes. Everything went fine for a while, but after copying some files the cluster went into a warning state with the following message: "HEALTH_WARN: 1 MDSs are read only mds.PVE-CZ235007SH(mds.0): MDS in read-only mode".
> >
> > The logs show:
> >
> > Jun 03 08:20:41 PVE-CZ235007SH ceph-mds[1329868]:  -9999> 2024-06-03T07:57:17.589+0200 77250fc006c0 -1 log_channel(cluster) log [ERR] : failed to store backtrace on ino 0x1000000039c object, pool 5, errno -2
> > Jun 03 08:20:41 PVE-CZ235007SH ceph-mds[1329868]:  -9998> 2024-06-03T07:57:17.589+0200 77250fc006c0 -1 mds.0.189541 unhandled write error (2) No such file or directory, force readonly...
> >
> > After googling for a while, I did not find anything that helps me understand the root cause more precisely. Any help would be greatly appreciated, or even a link to a better place to post this request if this is not it.
> >
> > Please find additional details below, if needed. Thanks a lot!
> >
> > Nicolas
> >
> > ---
> >
> > # ceph osd dump
> > [...]
> > pool 5 'cephfs_metadata' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 292 flags hashpspool stripe_width 0 pg_autoscale_bias 4 pg_num_min 16 recovery_priority 5 application cephfs read_balance_score 4.51
> > [...]
> >
> > # ceph osd lspools
> > 1 .mgr
> > 4 cephfs_data
> > 5 cephfs_metadata
> > 18 ec-pool-001-data
> > 19 ec-pool-001-metadata
> >
> >
> > # ceph df
> > --- RAW STORAGE ---
> > CLASS     SIZE    AVAIL    USED  RAW USED  %RAW USED
> > hdd    633 TiB  633 TiB  61 GiB    61 GiB          0
> > TOTAL  633 TiB  633 TiB  61 GiB    61 GiB          0
> >
> > --- POOLS ---
> > POOL                  ID  PGS   STORED  OBJECTS     USED  %USED  MAX AVAIL
> > .mgr                   1    1  119 MiB       31  357 MiB      0    200 TiB
> > cephfs_data            4   32   71 KiB    8.38k  240 KiB      0    200 TiB
> > cephfs_metadata        5   32  329 MiB    6.56k  987 MiB      0    200 TiB
> > ec-pool-001-data      18   32   42 GiB   15.99k   56 GiB      0    451 TiB
> > ec-pool-001-metadata  19   32      0 B        0      0 B      0    200 TiB
> >
> >
> >
> > # ceph status
> >    cluster:
> >      id:     f16f53e1-7028-440f-bf48-f99912619c33
> >      health: HEALTH_WARN
> >              1 MDSs are read only
> >
> >    services:
> >      mon: 4 daemons, quorum PVE-CZ235007SG,PVE-CZ2341016V,PVE-CZ235007SH,PVE-CZ2341016T (age 35h)
> >      mgr: PVE-CZ235007SG(active, since 2d), standbys: PVE-CZ235007SH, PVE-CZ2341016T, PVE-CZ2341016V
> >      mds: 1/1 daemons up, 3 standby
> >      osd: 48 osds: 48 up (since 2d), 48 in (since 3d)
> >
> >    data:
> >      volumes: 1/1 healthy
> >      pools:   5 pools, 129 pgs
> >      objects: 30.97k objects, 42 GiB
> >      usage:   61 GiB used, 633 TiB / 633 TiB avail
> >      pgs:     129 active+clean
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


