(I found no response on the original list, so I am forwarding to ceph-users@xxxxxxxx.) Sorry if it's duplicated.

-------- Original Message --------
Hi,

I found there are 128 scrub errors in my Ceph system. Checking with ceph health detail, I also found many PGs with a stuck unclean issue. Should I repair all of them? Or what should I do?

[root@gcloudnet ~]# ceph -s
    cluster a4d0879f-abdc-4f9d-8a4b-53ce57d822f1
     health HEALTH_ERR 128 pgs inconsistent; 128 scrub errors; mds1: Client HTRC:cephfs_data failing to respond to cache pressure; mds0: Client physics-007:cephfs_data failing to respond to cache pressure; pool 'cephfs_data' is full
     monmap e3: 3 mons at {gcloudnet=xxx.xxx.xxx.xxx:6789/0,gcloudsrv1=xxx.xxx.xxx.xxx:6789/0,gcloudsrv2=xxx.xxx.xxx.xxx:6789/0}, election epoch 178, quorum 0,1,2 gcloudnet,gcloudsrv1,gcloudsrv2
     mdsmap e51000: 2/2/2 up {0=gcloudsrv1=up:active,1=gcloudnet=up:active}
     osdmap e2821: 18 osds: 18 up, 18 in
      pgmap v10457877: 3648 pgs, 23 pools, 10501 GB data, 38688 kobjects
            14097 GB used, 117 TB / 130 TB avail
                   6 active+clean+scrubbing+deep
                3513 active+clean
                 128 active+clean+inconsistent
                   1 active+clean+scrubbing
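In case it clarifies what I mean by "repair all of them", this is roughly what I had in mind but have not run yet (just a sketch; I am assuming pg repair is the right tool for these inconsistent PGs):

    # list the PGs reported as inconsistent
    ceph health detail | grep inconsistent
    # then repair them one at a time, substituting each PG id reported above
    ceph pg repair <pgid>

I was planning to let each repair finish and rescrub before moving on to the next PG, but please correct me if that is the wrong approach.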
P.S. I am increasing the pg and pgp numbers for the cephfs_data pool, roughly as in the commands below.
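(The target value here is only an example, not the number I have settled on:)

    ceph osd pool set cephfs_data pg_num 2048
    ceph osd pool set cephfs_data pgp_num 2048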
Thanks,
Erming
----------------------------------------------------
Erming Pei, Ph.D, Senior System Analyst
HPC Grid/Cloud Specialist, ComputeCanada/WestGrid
Research Computing Group, IST
University of Alberta, Canada T6G 2H1
Email: Erming@xxxxxxxxxxx  Erming.Pei@xxxxxxx
Tel.: +1 7804929914   Fax: +1 7804921729