Re: Ceph mds is stuck in creating status

I had the same thing happen when I built a Ceph cluster on a single VM for testing. I wasn't concerned, though, because I knew the slow speed of that setup was the likely cause.
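
If you want to confirm whether the MDS is making slow progress or is truly stuck, it's worth watching its state and tailing its log for a minute or two (mds.hpc1 below matches the daemon name in your 'ceph -s' output):

ceph mds stat                             # current MDS state (up:creating -> up:active when done)
ceph fs status                            # per-filesystem view, including the metadata pool
tail -f /var/log/ceph/ceph-mds.hpc1.log   # shows what the MDS is actually waiting on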


On Mon, Oct 15, 2018 at 7:34 AM Kisik Jeong <kisik.jeong@xxxxxxxxxxxx> wrote:
Hello,

I previously deployed a Ceph cluster with 16 OSDs and created a CephFS on it successfully.
But after rebooting because of an MDS slow-request problem, the MDS enters the up:creating state when I create the CephFS again and never leaves it.
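
For reference, I created the filesystem with the usual commands; the pool names and PG counts below are from memory, so they may not match exactly:

ceph osd pool create cephfs_data 512
ceph osd pool create cephfs_metadata 128
ceph fs new cephfs cephfs_metadata cephfs_data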
Looking at the cluster status, I don't see any other problem. Here is the 'ceph -s' output:

csl@hpc1:~$ ceph -s
  cluster:
    id:     1a32c483-cb2e-4ab3-ac60-02966a8fd327
    health: HEALTH_OK
 
  services:
    mon: 1 daemons, quorum hpc1
    mgr: hpc1(active)
    mds: cephfs-1/1/1 up  {0=hpc1=up:creating}
    osd: 16 osds: 16 up, 16 in
 
  data:
    pools:   2 pools, 640 pgs
    objects: 7 objects, 124B
    usage:   34.3GiB used, 116TiB / 116TiB avail
    pgs:     640 active+clean

However, CephFS still works when I deploy the cluster with only 8 OSDs.
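
In case it helps, this is how I check which addresses each OSD registered with (the grep just filters the per-OSD lines out of the map dump):

ceph osd dump | grep '^osd'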

If anything about this looks suspicious, please let me know. Thank you.

PS. Here are my ceph.conf contents:

[global]
fsid = 1a32c483-cb2e-4ab3-ac60-02966a8fd327
mon_initial_members = hpc1
mon_host = 192.168.40.10
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx

public_network = 192.168.40.0/24
cluster_network = 192.168.40.0/24

[osd]
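# Note: these OSDs use an f2fs backend; the name/namespace length caps
# keep object names within f2fs's filename-length limit, and
# active_logs=2 is passed to f2fs at mount time.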
osd journal size = 1024
osd max object name len = 256
osd max object namespace len = 64
osd mount options f2fs = active_logs=2

[osd.0]
host = hpc9
public_addr = 192.168.40.18
cluster_addr = 192.168.40.18

[osd.1]
host = hpc10
public_addr = 192.168.40.19
cluster_addr = 192.168.40.19

[osd.2]
host = hpc9
public_addr = 192.168.40.18
cluster_addr = 192.168.40.18

[osd.3]
host = hpc10
public_addr = 192.168.40.19
cluster_addr = 192.168.40.19

[osd.4]
host = hpc9
public_addr = 192.168.40.18
cluster_addr = 192.168.40.18

[osd.5]
host = hpc10
public_addr = 192.168.40.19
cluster_addr = 192.168.40.19

[osd.6]
host = hpc9
public_addr = 192.168.40.18
cluster_addr = 192.168.40.18

[osd.7]
host = hpc10
public_addr = 192.168.40.19
cluster_addr = 192.168.40.19

[osd.8]
host = hpc9
public_addr = 192.168.40.18
cluster_addr = 192.168.40.18

[osd.9]
host = hpc10
public_addr = 192.168.40.19
cluster_addr = 192.168.40.19

[osd.10]
host = hpc9
public_addr = 192.168.10.18
cluster_addr = 192.168.40.18

[osd.11]
host = hpc10
public_addr = 192.168.10.19
cluster_addr = 192.168.40.19

[osd.12]
host = hpc9
public_addr = 192.168.10.18
cluster_addr = 192.168.40.18

[osd.13]
host = hpc10
public_addr = 192.168.10.19
cluster_addr = 192.168.40.19

[osd.14]
host = hpc9
public_addr = 192.168.10.18
cluster_addr = 192.168.40.18

[osd.15]
host = hpc10
public_addr = 192.168.10.19
cluster_addr = 192.168.40.19

--
Kisik Jeong
Ph.D. Student
Computer Systems Laboratory
Sungkyunkwan University
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
