ceph mds error

Hi,

We have configured Ceph RBD alongside a CephFS filesystem, and we are seeing the warnings below on the MDS. In addition, the CephFS mount is reporting roughly double our actual data size: we have about 500 GB of data, but the mount shows 1.1 TB used. Is this because of replication? Our pools use replica size 2. Kindly let us know if there is a fix for this.
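Our own rough math (unconfirmed) suggests the used figure is simply the raw, replicated total:

    493 GB data x 2 replicas ~= 986 GB
    pgmap reports 1082 GB used; the remainder is presumably journal/filesystem overhead

We assume the per-pool replica count can be double-checked with something like the following (cephfs_data / cephfs_metadata are placeholders for our actual pool names):

    ceph osd pool get cephfs_data size
    ceph osd pool get cephfs_metadata size

Current cluster status: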

cluster a8c92ae6-6842-4fa2-bfc9-8cdefd28df5c
     health HEALTH_WARN
            too many PGs per OSD (384 > max 300)
            mds0: Client ceph-zclient failing to respond to cache pressure
            mds0: Client 192.168.107.242 failing to respond to cache pressure
            mds0: Client ceph-zclient1.labs.com failing to respond to cache pressure
     monmap e1: 3 mons at {ceph-zadmin=192.168.107.155:6789/0,ceph-zmonitor=192.168.107.247:6789/0,ceph-zmonitor1=192.168.107.246:6789/0}
            election epoch 6, quorum 0,1,2 ceph-zadmin,ceph-zmonitor1,ceph-zmonitor
     mdsmap e820: 1/1/1 up {0=ceph-zstorage1=up:active}
     osdmap e1339: 3 osds: 2 up, 2 in
      pgmap v3048828: 384 pgs, 3 pools, 493 GB data, 6515 kobjects
            1082 GB used, 3252 GB / 4335 GB avail
                 384 active+clean
  client io 21501 B/s rd, 33173 B/s wr, 20 op/s
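For the cache-pressure warnings, we are considering inspecting client sessions on the active MDS and, if needed, raising the MDS inode cache limit. A sketch of what we have in mind, run on the MDS host (the 400000 value is a guess for our setup, not tested):

    ceph daemon mds.ceph-zstorage1 session ls
    ceph daemon mds.ceph-zstorage1 config set mds_cache_size 400000

Likewise, if the PG-per-OSD warning is only cosmetic for now, we assume the threshold could be relaxed with something like:

    ceph tell mon.* injectargs '--mon-pg-warn-max-per-osd 400'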

Mounted CephFS (df -hT output):

192.168.107.155:6789,192.168.107.247:6789,192.168.107.246:6789:/ ceph      4.3T  1.1T  3.2T  25% /home/side
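As far as we understand, df on a CephFS mount reports cluster-wide raw figures, which would explain the 4.3T total (the sum of our OSD capacity, 4335 GB) and the 1.1T used (raw used including both replicas, 1082 GB). We plan to cross-check per-pool versus raw usage with:

    ceph df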
