Re: cancel or remove default pool rbd

Hi Michael,

ceph -s:

     cluster ea296c34-e9b0-4a53-a740-f0b472f0c81d
      health HEALTH_WARN
             44 pgs degraded
             64 pgs stale
             44 pgs stuck degraded
             64 pgs stuck inactive
             64 pgs stuck stale
             128 pgs stuck unclean
             44 pgs stuck undersized
             44 pgs undersized
             too many PGs per OSD (1246 > max 300)
             pool rbd pg_num 128 > pgp_num 64
      monmap e1: 1 mons at {vltiobjmonmi001=192.168.245.135:6789/0}
             election epoch 1, quorum 0 vltiobjmonmi001
      osdmap e560: 24 osds: 24 up, 24 in
       pgmap v35252: 15008 pgs, 28 pools, 38197 kB data, 112 objects
             1632 MB used, 22342 GB / 22344 GB avail
                14880 active+clean
                   64 creating
                   31 stale+active+undersized+degraded+remapped
                   14 stale+active
                   13 stale+active+undersized+degraded
                    6 stale+active+remapped


The problem appears when I try to change one of the cluster settings:
ceph osd pool set rbd pgp_num 128
Error EBUSY: currently creating pgs, wait
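
To see which PGs are stuck creating and where they map, commands along these lines should help (a rough sketch; flags and output format may differ between releases):

ceph health detail
ceph pg dump_stuck inactive
ceph pg dump_stuck stale
ceph pg map <pgid>     # for one of the PG ids reported as stuck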

I have set the replica count to 2 for each object:

# Choose reasonable numbers for replicas and placement groups.
osd pool default size = 2 # Write an object 2 times
osd pool default min size = 1 # Allow writing 1 copy in a degraded state
osd pool default pg num = 2048
osd pool default pgp num = 2048
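
If I do the arithmetic on the ceph -s output above (assuming size 2 for most pools), the warning roughly adds up, and per-pool defaults far below 2048 would be needed to stay under the limit (my rough numbers, not values from this cluster):

# 15008 PGs across 28 pools, replicated twice, spread over 24 OSDs:
# 15008 * 2 / 24 ≈ 1250 PG copies per OSD (the monitor warns above 300)
# With 24 OSDs, size 2 and ~28 pools, something closer to this would fit:
osd pool default pg num = 64
osd pool default pgp num = 64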


To me the problem seems to be the 64 PGs stuck in the creating state.
I tried to reduce pgp_num from 2048 to 128, without success.
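
Since pg_num apparently cannot be reduced on an existing pool, the alternative I am considering is to drop and recreate the rbd pool with a smaller PG count, roughly like this (untested here, and only safe if nothing stored in the rbd pool is needed):

ceph osd pool delete rbd rbd --yes-i-really-really-mean-it   # destroys the pool and its PGs
ceph osd pool create rbd 64 64                               # recreate with smaller pg_num/pgp_num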


Thanks,
Andrea.

-----Original Message-----
From: Michael Hackett [mailto:mhackett@xxxxxxxxxx]
Sent: Thursday, February 11, 2016 11:26 PM
To: Andrea Annoè
Cc: ceph-users@xxxxxxxxxxxxxx
Subject: Re: cancel or remove default pool rbd

Hello Andrea,

The question is why won't your PGs go into an active+clean state on the cluster? Are all of your OSDs up/in? Are you satisfying your CRUSH ruleset?

Can you provide the output of 'ceph osd tree', 'ceph -s', 'ceph osd crush show-tunables' and your ceph.conf file?

Thank you,


----- Original Message -----
From: "Andrea Annoè" <Andrea.Annoe@xxxxxx>
To: ceph-users@xxxxxxxxxxxxxx
Sent: Thursday, February 11, 2016 4:53:13 PM
Subject:  cancel or remove default pool rbd



Hi to all, 

has anyone tried to delete the default rbd pool? 



My Ceph cluster is in a warning state with stale/creating PGs. 



Is it possible to delete the default rbd pool and remove all the stale PGs? 



Thanks to all for your reply. 

Andrea. 




-- 
Michael Hackett 
Senior Software Maintenance Engineer CEPH Storage 
Phone: 1-978-399-2196 
Westford, MA 

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



