Re: 2 problems after upgrade to Emperor


 



I don’t think it helps if you keep sending the same e-mail over and over. Somebody will eventually reply - or not. If you keep sending out your e-mail regularly, you will only start to become annoying. 
-- 
http://www.wogri.at

On Nov 22, 2013, at 8:06 AM, Linke, Michael <m.linke@xxxxxxxxxx> wrote:

> Hi,
> maybe you can help us with the following problems:
> if you need more info about our cluster or any debug logs, I will be happy to provide them
>  
> Environment:
> -------------------
>  
> Small test cluster with 7 nodes, 1 OSD per node
> Upgraded from Dumpling to Emperor 0.72.1
>  
>  
> 2 Problems after upgrade:
> -----------------------------------
>  
> - ceph -s shows <health HEALTH_WARN pool images has too few pgs>
>   - changing pg_num / pgp_num from 1024 to 1600 has no effect on the warning
>   - after setting <mon_pg_warn_min_objects> to a larger value the cluster returns to HEALTH_OK
>     (a sketch of the commands is below this list)
>  
> - pool count mismatch: the cluster starts with 20 pools
>   create a new pool
>   wait until the pool is ready
>   delete the pool again
>   ceph -s still shows 21 pools
>  
>   stop the first mon, wait, start it again
>   ceph -s alternates 20 pools, 21 pools, 20 pools, ... changing every n (~ 2) seconds
>  
>   stop the second mon, wait, start it again
>   ceph -s alternates 20 pools, 21 pools, 20 pools, ... changing every m > n seconds
>  
>   stop the third mon, wait, start it again
>   ceph -s shows 20 pools ... stable
>   (a reproduction sketch follows below)
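>  
>   For reference, a sketch of the kind of commands involved in the first
>   problem (the threshold value of 100000 is only illustrative):
>  
>   # ceph osd pool set images pg_num 1600
>   # ceph osd pool set images pgp_num 1600
>   # ceph tell mon.host1 injectargs '--mon-pg-warn-min-objects 100000'   (repeated for host2 and host3)
>  
>   To make the threshold persistent, "mon pg warn min objects = 100000" can
>   be added to the [mon] section of ceph.conf.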
>  
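>   A minimal reproduction sketch for the second problem ("testpool" is just a
>   placeholder name and the pg count of 64 is arbitrary):
>  
>   # ceph osd pool create testpool 64 64
>   # ceph -s | grep pgmap                  (pool count goes from 20 to 21)
>   # ceph osd pool delete testpool testpool --yes-i-really-really-mean-it
>   # ceph -s | grep pgmap                  (still reports 21 pools)
>   # rados lspools | wc -l                 (reports 20)
>   # service ceph restart mon.host1        (or the upstart equivalent, depending on how the mons were deployed)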
>  
> Some info about the cluster:
> ----------------------------------------
>  
> # uname -a
> Linux host1 3.2.0-53-generic #81-Ubuntu SMP Thu Aug 22 21:01:03 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux
>  
>  
> # ceph --version
> ceph version 0.72.1 (4d923861868f6a15dcb33fef7f50f674997322de)
>  
> # ceph -s
>     cluster 3f4a289a-ad40-40e7-8204-ab38affb18f8
>      health HEALTH_WARN pool images has too few pgs
>      monmap e1: 3 mons at {host1=10.255.NNN.NN:6789/0,host2=10.255.NNN.NN:6789/0,host3=10.255.NNN.NN:6789/0},\
>                 election epoch 306, quorum 0,1,2 host1,host2,host3
>      mdsmap e21: 1/1/1 up {0=host4=up:active}
>      osdmap e419: 7 osds: 7 up, 7 in
>       pgmap v815042: 17152 pgs, 21 pools, 97795 MB data, 50433 objects
>             146 GB used, 693 GB / 839 GB avail
>                17152 active+clean
>   client io 25329 kB/s wr, 39 op/s
>  
>  
> # ceph osd dump
> epoch 419
> fsid 3f4a289a-ad40-40e7-8204-ab38affb18f8
> created 2013-09-27 14:37:13.601605
> modified 2013-11-20 16:55:35.791740
> flags
>  
> pool 0 'data' rep size 2 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 64 pgp_num 64 last_change 1 owner 0 crash_replay_interval 45
> pool 1 'metadata' rep size 2 min_size 1 crush_ruleset 1 object_hash rjenkins pg_num 64 pgp_num 64 last_change 1 owner 0
> pool 2 'rbd' rep size 2 min_size 1 crush_ruleset 2 object_hash rjenkins pg_num 64 pgp_num 64 last_change 1 owner 0
> pool 3 'images' rep size 2 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 1600 pgp_num 1600 last_change 367 owner 0
>   removed_snaps [1~1,3~2]
> ...
> ...
>  
> # rados lspools | wc -l
> 20
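>  
> For comparison, the pool count as seen in the osdmap can be cross-checked with something like:
>  
> # ceph osd dump | grep '^pool' | wc -l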
>  
>  
> Michael
>  

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




