Re: CEPH FS is always showing the status as creating


You need to bring the out OSDs back first. The default pool size is very likely three, and you only have two OSDs up and in, which is why a third of your objects are degraded: one of the three replicas of every object is missing. I'm pretty sure your CephFS will become active once you fix that.
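Something along these lines should get you going. The OSD IDs and pool names below are guesses (check `ceph osd tree` and `ceph osd pool ls` first), and this assumes the OSD daemons are managed by systemd:

```shell
# See which OSDs are down/out and on which hosts
ceph osd tree

# If the daemons simply died, restarting them is often enough
# (osd.2 and osd.3 are placeholders for your down OSDs):
systemctl restart ceph-osd@2
systemctl restart ceph-osd@3

# Once they are up, mark them back in if they don't rejoin on their own:
ceph osd in 2
ceph osd in 3

# Watch recovery; the MDS should leave up:creating once the PGs are clean:
ceph -s

# The "application not enabled on 2 pool(s)" warning is separate and
# harmless here; silence it by tagging the pools (names are examples):
ceph osd pool application enable cephfs_data cephfs
ceph osd pool application enable cephfs_metadata cephfs
```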


Quoting Alokkumar Mahajan <alokkumar.mahajan@xxxxxxxxx>:

Hello Nathan,
Below is the output of ceph status:-

  cluster:
    id:     a3ede5f7-ade8-4bfd-91f4-568e19ca9e69
    health: HEALTH_WARN
            1 MDSs report slow metadata IOs
            Degraded data redundancy: 12563/37689 objects degraded (33.333%), 109 pgs degraded
            application not enabled on 2 pool(s)

  services:
    mon: 1 daemons, quorum vl-co-qbr
    mgr: node1(active)
    mds: cephfs-1/1/1 up  {0=vl-co-qbr=up:creating}
    osd: 4 osds: 2 up, 2 in

  data:
    pools:   4 pools, 248 pgs
    objects: 12.56 k objects, 37 GiB
    usage:   90 GiB used, 510 GiB / 600 GiB avail
    pgs:     12563/37689 objects degraded (33.333%)
             139 active+undersized
             109 active+undersized+degraded

  io:
    client:   1.4 KiB/s wr, 0 op/s rd, 5 op/s wr
    recovery: 12 B/s, 0 keys/s, 1 objects/s

On Wed, 19 Aug 2020 at 21:26, Nathan Fish <lordcirth@xxxxxxxxx> wrote:

Have you created any MDS daemons? Can you paste "ceph status"?

On Wed, Aug 19, 2020 at 11:52 AM Alokkumar Mahajan
<alokkumar.mahajan@xxxxxxxxx> wrote:
>
> Hello,
> We have created a CephFS, but it is always showing the status as creating.
>
> ceph fs get returns below output:-
>
> ===========================================
> Filesystem 'cephfs' (4)
> fs_name cephfs
> epoch   2865929
> flags   12
> created 2020-08-07 05:05:58.033824
> modified        2020-08-14 03:15:49.727680
> tableserver     0
> root    0
> session_timeout 60
> session_autoclose       300
> max_file_size   1099511627776
> min_compat_client       -1 (unspecified)
> last_failure    0
> last_failure_osd_epoch  652
> compat  compat={},rocompat={},incompat={1=base v0.20,2=client writeable
> ranges,3=default file layouts on dirs,4=dir inode in separate object,
> 5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor
> table,9=file layout v2,10=snaprealm v2}
> max_mds 1
> in      0
> up      {0=1494099}
> failed
> damaged
> stopped
> data_pools      [32]
> metadata_pool   33
> inline_data     disabled
> balancer
> standby_count_wanted    0
> 1494099:        10.18.97.47:6800/2041780514 'vl-pun-qa' mds.0.2748022
> up:creating seq 149289
> ====================================================
>
> We are on Ceph 13.2.6 (Mimic).
>
> I am new to Ceph, so I am really not sure where to start checking this;
> any help will be greatly appreciated.
>
> Thanks,
> -alok
> _______________________________________________
> ceph-users mailing list -- ceph-users@xxxxxxx
> To unsubscribe send an email to ceph-users-leave@xxxxxxx





