R: cancel or remove default pool rbd


 



Hi Michael,
I have resolved it by deleting and recreating the rbd pool.

Thanks for your help.
Andrea.
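Andrea does not show the exact commands used; a minimal sketch of deleting and recreating the default rbd pool looks like the following (the PG count of 64 is illustrative and should be sized for your cluster; the pool name must be given twice as a safety check):

```shell
# Delete the default rbd pool, including all of its (stale) PGs.
# The pool name is repeated and the flag is mandatory to prevent accidents.
ceph osd pool delete rbd rbd --yes-i-really-really-mean-it

# Recreate the pool with a fresh set of placement groups
# (pg_num / pgp_num of 64 here is only an example value).
ceph osd pool create rbd 64 64
```

Note that deleting a pool destroys all data in it, so this is only appropriate when the pool is empty or its contents are disposable.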

-----Original Message-----
From: Michael Hackett [mailto:mhackett@xxxxxxxxxx]
Sent: Thursday, 11 February 2016 23:26
To: Andrea Annoè
Cc: ceph-users@xxxxxxxxxxxxxx
Subject: Re:  cancel or remove default pool rbd

Hello Andrea,

The question is why your PGs won't go into an active+clean state on the cluster. Are all of your OSDs up and in? Are you satisfying your CRUSH ruleset?

Can you provide the output of 'ceph osd tree', 'ceph -s', and 'ceph osd crush show-tunables', along with your ceph.conf file?

Thank you,


----- Original Message -----
From: "Andrea Annoè" <Andrea.Annoe@xxxxxx>
To: ceph-users@xxxxxxxxxxxxxx
Sent: Thursday, February 11, 2016 4:53:13 PM
Subject:  cancel or remove default pool rbd



Hi to all,

has anyone tried to delete the default rbd pool?

My Ceph cluster is in a warning state, with PGs stuck in the stale+creating state.

Is it possible to delete the default rbd pool and remove all the stale PGs?

Thanks to all for your replies.

Andrea.



_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

-- 
Michael Hackett 
Senior Software Maintenance Engineer CEPH Storage 
Phone: 1-978-399-2196 
Westford, MA 

