Re: ceph health

Hi,

please show the output of:

ceph osd pool ls detail

and also of:

ceph health detail
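
For reference, both can be run from the admin node you already use for
ceph-deploy (a minimal sketch, assuming the client.admin keyring is readable
there for cephuser, as it appears to be in your session below):

[cephuser@admin ceph-deploy]$ ceph osd pool ls detail
[cephuser@admin ceph-deploy]$ ceph health detail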

-- 
Mit freundlichen Gruessen / Best regards

Oliver Dzombic
IP-Interactive

mailto:info@xxxxxxxxxxxxxxxxx

Address:

IP Interactive UG ( haftungsbeschraenkt )
Zum Sonnenberg 1-3
63571 Gelnhausen

Commercial register: HRB 93402, local court (Amtsgericht) Hanau
Managing director: Oliver Dzombic

Tax no.: 35 236 3622 1
VAT ID: DE274086107


On 18.07.2016 at 11:49, Ivan Koortzen wrote:
> Hi All,
>  
> I'm quite new to Ceph but did an initial setup on these virtual machines:
>  
> 1x Ceph admin
> 3x Ceph mons
> 3x Ceph OSD nodes
>  
> Each OSD node has 3x 100GB drives and 3x 20GB journals.
>  
> After the initial setup of Ceph, running # ceph health, I get the following
> error:
>  
>  
> Any help would be appreciated!
>  
>  
> [cephuser@admin ceph-deploy]$ ceph health
> HEALTH_ERR 320 pgs are stuck inactive for more than 300 seconds; 320 pgs
> stuck inactive
>  
>  
> [cephuser@admin ceph-deploy]$ ceph -s
>     cluster 2f5abfd7-b13e-4679-8142-78f58b0389ee
>      health HEALTH_ERR
>             320 pgs are stuck inactive for more than 300 seconds
>             320 pgs stuck inactive
>      monmap e2: 3 mons at
> {mon1=192.168.10.85:6789/0,mon2=192.168.10.87:6789/0,mon3=192.168.10.89:6789/0}
>             election epoch 10, quorum 0,1,2 mon1,mon2,mon3
>      osdmap e13: 9 osds: 0 up, 0 in
>             flags sortbitwise
>       pgmap v14: 320 pgs, 2 pools, 0 bytes data, 0 objects
>             0 kB used, 0 kB / 0 kB avail
>                  320 creating
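
The "0 up, 0 in" part is the real problem: with no OSDs up and in, the 320 PGs
have nowhere to be placed and stay in "creating". Two quick checks from the
admin node (a minimal sketch, nothing cluster-specific assumed):

[cephuser@admin ceph-deploy]$ ceph osd stat   # up/in counts should rise once the daemons register
[cephuser@admin ceph-deploy]$ ceph pg stat    # PGs should then move from creating to active+clean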
>  
>  
> [cephuser@admin ceph-deploy]$ ceph osd tree
> ID WEIGHT TYPE NAME    UP/DOWN REWEIGHT PRIMARY-AFFINITY
> -1      0 root default                                  
>  0      0 osd.0           down        0          1.00000
>  1      0 osd.1           down        0          1.00000
>  2      0 osd.2           down        0          1.00000
>  3      0 osd.3           down        0          1.00000
>  4      0 osd.4           down        0          1.00000
>  5      0 osd.5           down        0          1.00000
>  6      0 osd.6           down        0          1.00000
>  7      0 osd.7           down        0          1.00000
>  8      0 osd.8           down        0          1.00000
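
All nine OSDs are down with weight 0, so the OSD daemons never started or
registered with the monitors. On one of the OSD hosts it is worth checking
whether the services are running at all (a minimal sketch, assuming the
Jewel-era systemd units on CentOS 7; osd.0 is just the first ID from the tree
above):

[cephuser@osd1 ~]$ sudo systemctl status ceph-osd@0
[cephuser@osd1 ~]$ sudo systemctl list-units 'ceph-osd@*'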
>  
>  
> ceph-deploy disk list osd1
> [ceph_deploy.conf][DEBUG ] found configuration file at:
> /home/cephuser/.cephdeploy.conf
> [ceph_deploy.cli][INFO  ] Invoked (1.5.34): /usr/bin/ceph-deploy disk
> list osd1
> [ceph_deploy.cli][INFO  ] ceph-deploy options:
> [ceph_deploy.cli][INFO  ]  username                      : None
> [ceph_deploy.cli][INFO  ]  verbose                       : False
> [ceph_deploy.cli][INFO  ]  overwrite_conf                : False
> [ceph_deploy.cli][INFO  ]  subcommand                    : list
> [ceph_deploy.cli][INFO  ]  quiet                         : False
> [ceph_deploy.cli][INFO  ]  cd_conf                       :
> <ceph_deploy.conf.cephdeploy.Conf instance at 0xe33a28>
> [ceph_deploy.cli][INFO  ]  cluster                       : ceph
> [ceph_deploy.cli][INFO  ]  func                          : <function
> disk at 0xe27398>
> [ceph_deploy.cli][INFO  ]  ceph_conf                     : None
> [ceph_deploy.cli][INFO  ]  default_release               : False
> [ceph_deploy.cli][INFO  ]  disk                          : [('osd1',
> None, None)]
> [osd1][DEBUG ] connection detected need for sudo
> [osd1][DEBUG ] connected to host: osd1
> [osd1][DEBUG ] detect platform information from remote host
> [osd1][DEBUG ] detect machine type
> [osd1][DEBUG ] find the location of an executable
> [ceph_deploy.osd][INFO  ] Distro info: CentOS Linux 7.2.1511 Core
> [ceph_deploy.osd][DEBUG ] Listing disks on osd1...
> [osd1][DEBUG ] find the location of an executable
> [osd1][INFO  ] Running command: sudo /usr/sbin/ceph-disk list
> [osd1][DEBUG ] /dev/dm-0 other, xfs, mounted on /
> [osd1][DEBUG ] /dev/dm-1 swap, swap
> [osd1][DEBUG ] /dev/sda :
> [osd1][DEBUG ]  /dev/sda2 other, LVM2_member
> [osd1][DEBUG ]  /dev/sda1 other, xfs, mounted on /boot
> [osd1][DEBUG ] /dev/sdb :
> [osd1][DEBUG ]  /dev/sdb1 other, ebd0a0a2-b9e5-4433-87c0-68b6b72699c7
> [osd1][DEBUG ]  /dev/sdb2 other, ebd0a0a2-b9e5-4433-87c0-68b6b72699c7
> [osd1][DEBUG ]  /dev/sdb3 other, ebd0a0a2-b9e5-4433-87c0-68b6b72699c7
> [osd1][DEBUG ] /dev/sdc :
> [osd1][DEBUG ]  /dev/sdc1 ceph data, prepared, cluster ceph, osd.0
> [osd1][DEBUG ] /dev/sdd :
> [osd1][DEBUG ]  /dev/sdd1 ceph data, prepared, cluster ceph, osd.1
> [osd1][DEBUG ] /dev/sde :
> [osd1][DEBUG ]  /dev/sde1 ceph data, prepared, cluster ceph, osd.2
> [osd1][DEBUG ] /dev/sr0 other, unknown
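
The data partitions above are reported as "prepared" rather than "active",
which usually means ceph-disk prepare ran but the activate step never did (or
the udev trigger did not fire). A minimal sketch of activating them, reusing
the device names from your listing (repeat the same for osd2 and osd3, and
adjust device names as needed):

From the admin node:

[cephuser@admin ceph-deploy]$ ceph-deploy osd activate osd1:/dev/sdc1 osd1:/dev/sdd1 osd1:/dev/sde1

or directly on the OSD host:

[cephuser@osd1 ~]$ sudo ceph-disk activate /dev/sdc1

Once the OSDs are up and in, the 320 PGs should get created and the cluster
should move towards HEALTH_OK.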
> 
>  
>  
>  
> 
> Vrywaringsklousule / Disclaimer:
> http://www.nwu.ac.za/it/gov-man/disclaimer.html
> 
> 
> 
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



