Hi John Wilkins,
I use RAID 6, divided into 4 partitions, with each partition backing one OSD (formatted ext4). Because the RAID layer already provides redundancy, I used "ceph osd pool set data size 1".
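For reference, the current pool and OSD layout can be double-checked with the standard commands below (a quick sketch; it assumes the default "data" and "metadata" pool names and that these subcommands are available on 0.56.x):

ceph osd lspools                  # list the pools
ceph osd pool get data size       # current replication factor of the data pool
ceph osd pool get metadata size   # current replication factor of the metadata pool
ceph osd tree                     # how the OSDs (one per RAID 6 partition) are laid out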
Thank you very much.
TienBM
On Tue, Apr 23, 2013 at 10:12 AM, John Wilkins <john.wilkins@xxxxxxxxxxx> wrote:
This may be related to having your pool size = 1. See http://ceph.com/docs/master/rados/operations/troubleshooting-osd/#placement-groups-never-get-clean

Try setting your data size to 2: "ceph osd pool set data size 2"

On Mon, Apr 22, 2013 at 7:07 AM, MinhTien MinhTien <tientienminh080590@xxxxxxxxx> wrote:

Dear all,

- I use CentOS 6.3 with kernel 3.8.6-1.el6.elrepo.x86_64 and Ceph storage (version 0.56.4). For the data pool (which contains all the data) I set: ceph osd pool set data size 1
- For the metadata pool: ceph osd pool set data size 2

Each OSD is 14 TB (formatted ext4).

There is one permanent error in the system:

2013-04-22 20:24:20.942457 mon.0 [INF] pgmap v313221: 640 pgs: 638 active+clean, 2 active+clean+scrubbing+deep; 17915 GB data, 17947 GB used, 86469 GB / 107 TB avail
2013-04-22 20:24:12.256632 osd.1 [INF] 1.2e scrub ok
2013-04-22 20:24:23.348560 mon.0 [INF] pgmap v313222: 640 pgs: 638 active+clean, 2 active+clean+scrubbing+deep; 17915 GB data, 17947 GB used, 86469 GB / 107 TB avail
2013-04-22 20:24:21.551528 osd.1 [INF] 1.3f scrub ok
2013-04-22 20:24:52.009562 mon.0 [INF] pgmap v313223: 640 pgs: 638 active+clean, 2 active+clean+scrubbing+deep; 17915 GB data, 17947 GB used, 86469 GB / 107 TB avail

This prevents me from accessing some data. I tried restarting and running "ceph pg repair ", but the error still exists.
I need some advice.
Thanks
--
TienBM
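For the two PGs that stay in active+clean+scrubbing+deep above, a rough sketch of how they could be identified and a repair re-issued (the PG id 2.1a is only a placeholder, not one from the log):

ceph pg dump | grep scrubbing+deep   # list the PGs that are still in deep scrub
ceph pg 2.1a query                   # inspect one of them (placeholder id)
ceph pg repair 2.1a                  # re-issue a repair on that PG
ceph pg scrub 2.1a                   # or re-trigger a scrub on it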
--
John Wilkins
Senior Technical Writer
Inktank
john.wilkins@xxxxxxxxxxx
(415) 425-9599
http://inktank.com
Bui Minh Tien
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com