Re: HEALTH_WARN 29 pgs degraded; 29 pgs stuck degraded; 133 pgs stuck unclean; 29 pgs stuck undersized;

Thanks for the tip. Will do.

Jiri

----- Reply message -----
From: "Nico Schottelius" <nico-ceph-users@xxxxxxxxxxxxxxx>
To: <ceph-users@xxxxxxxx>
Subject: [ceph-users] HEALTH_WARN 29 pgs degraded; 29 pgs stuck degraded; 133 pgs stuck unclean; 29 pgs stuck undersized;
Date: Sun, Dec 28, 2014 03:49

Hey Jiri,

also raise the pgp_num (pg_num != pgp_num - it's easy to miss the difference).
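
For example (assuming the pool is still named rbd and pg_num stays at
133, as in your output below):

    ceph osd pool set rbd pgp_num 133

pgp_num should normally be kept equal to pg_num; once the two match,
the "pg_num 133 > pgp_num 64" part of the warning should clear after
the PGs finish peering.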

Cheers,

Nico

Jiri Kanicky [Sun, Dec 28, 2014 at 01:52:39AM +1100]:
> Hi,
> 
> I just built my Ceph cluster but am having problems with the health
> of the cluster.
> 
> Here are a few details:
> - I followed the ceph documentation.
> - I used the btrfs filesystem for all OSDs.
> - I did not set "osd pool default size = 2", as I thought that with
> 2 nodes + 4 OSDs I could leave the default of 3. I am not sure if
> this was right (see the sketch after this list).
> - I noticed that the default pools "data" and "metadata" were not
> created; only the "rbd" pool was created.
> - As it was complaining that pg_num was too low, I increased pg_num
> for pool rbd to 133 (400/3) and ended up with "pool rbd pg_num
> 133 > pgp_num 64".
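
A minimal sketch of the size change mentioned above, assuming you keep
the existing rbd pool: with only two hosts, the default CRUSH rule
(one replica per host) cannot place three copies, which is likely why
those PGs stay undersized/degraded. Two-way replication would fit the
hardware:

    ceph osd pool set rbd size 2
    ceph osd pool set rbd min_size 1

Both are standard "ceph osd pool set" keys and take effect on the
existing pool immediately.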
> 
> Could you give me a hint as to where I made a mistake? (I can remove
> the OSDs and start over if needed.)
> 
> 
> cephadmin@ceph1:/etc/ceph$ sudo ceph health
> HEALTH_WARN 29 pgs degraded; 29 pgs stuck degraded; 133 pgs stuck
> unclean; 29 pgs stuck undersized; 29 pgs undersized; pool rbd pg_num
> 133 > pgp_num 64
> cephadmin@ceph1:/etc/ceph$ sudo ceph status
>     cluster bce2ff4d-e03b-4b75-9b17-8a48ee4d7788
>      health HEALTH_WARN 29 pgs degraded; 29 pgs stuck degraded; 133
> pgs stuck unclean; 29 pgs stuck undersized; 29 pgs undersized; pool
> rbd pg_num 133 > pgp_num 64
>      monmap e1: 2 mons at
> {ceph1=192.168.30.21:6789/0,ceph2=192.168.30.22:6789/0}, election
> epoch 8, quorum 0,1 ceph1,ceph2
>      osdmap e42: 4 osds: 4 up, 4 in
>       pgmap v77: 133 pgs, 1 pools, 0 bytes data, 0 objects
>             11704 kB used, 11154 GB / 11158 GB avail
>                   29 active+undersized+degraded
>                  104 active+remapped
> 
> 
> cephadmin@ceph1:/etc/ceph$ sudo ceph osd tree
> # id    weight  type name       up/down reweight
> -1      10.88   root default
> -2      5.44            host ceph1
> 0       2.72                    osd.0   up      1
> 1       2.72                    osd.1   up      1
> -3      5.44            host ceph2
> 2       2.72                    osd.2   up      1
> 3       2.72                    osd.3   up      1
> 
> 
> cephadmin@ceph1:/etc/ceph$ sudo ceph osd lspools
> 0 rbd,
> 
> cephadmin@ceph1:/etc/ceph$ cat ceph.conf
> [global]
> fsid = bce2ff4d-e03b-4b75-9b17-8a48ee4d7788
> public_network = 192.168.30.0/24
> cluster_network = 10.1.1.0/24
> mon_initial_members = ceph1, ceph2
> mon_host = 192.168.30.21,192.168.30.22
> auth_cluster_required = cephx
> auth_service_required = cephx
> auth_client_required = cephx
> filestore_xattr_use_omap = true
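
A sketch of the config-file counterpart, assuming two-way replication
is what you want: adding

    osd pool default size = 2
    osd pool default min size = 1

to this [global] section only affects pools created afterwards;
already-existing pools such as rbd keep their size until it is changed
with "ceph osd pool set".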
> 
> Thank you
> Jiri



-- 
New PGP key: 659B 0D91 E86E 7E24 FD15  69D0 C729 21A1 293F 2D24
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com