Re: ceph mds error

On Tue, Apr 5, 2016 at 7:58 PM, gjprabu <gjprabu@xxxxxxxxxxxx> wrote:
> Hi John,
>
> Thanks for your reply. We are currently using kernel
> 3.10.0-229.20.1.el7.x86_64 and Ceph version 0.94.5. Please advise
> which version we should use.
>

3.10.0-229.20.1.el7 is about a year old. Please use the newest el7 kernel.
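For reference, a minimal sketch of checking the running kernel and pulling in the latest el7 kernel on a CentOS 7 client (standard yum workflow; adjust for your environment):

    # Show the kernel the client is currently running
    uname -r

    # Update to the newest el7 kernel packaged in the distro repos, then reboot into it
    sudo yum update kernel
    sudo reboot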

Regards
Yan, Zheng

> Regards
> Prabu GJ
>
>
> ---- On Tue, 05 Apr 2016 16:43:27 +0530 John Spray <jspray@xxxxxxxxxx> wrote ----
>
>
> Usually we see those warnings from older clients which have some bugs.
> You should use the most recent client version you can (or the most
> recent kernel you can if it's the kernel client).
>
> John
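As a rough way to see which clients the MDS is tracking (and to cross-check which machines are behind the warnings), the MDS admin socket can be queried on the MDS host. This is a sketch only: the daemon name ceph-zstorage1 is taken from the mdsmap in the status output further down, and it assumes the default admin socket location.

    # On the active MDS host: list client sessions, including client IDs,
    # addresses, and any client metadata newer clients report
    ceph daemon mds.ceph-zstorage1 session ls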
>
> On Tue, Apr 5, 2016 at 7:00 AM, gjprabu <gjprabu@xxxxxxxxxxxx> wrote:
>> Hi,
>>
>> We have configured Ceph RBD along with a CephFS filesystem, and we are
>> getting the errors below on the MDS. Also, the mounted CephFS partition is
>> reporting roughly double the actual data size: around 500 GB of data shows
>> as 1.1 TB used. Is this because of replication? If so, we have replica 2.
>> Kindly let us know if there is a fix for this.
>>
>>     cluster a8c92ae6-6842-4fa2-bfc9-8cdefd28df5c
>>      health HEALTH_WARN
>>             too many PGs per OSD (384 > max 300)
>>             mds0: Client ceph-zclient failing to respond to cache pressure
>>             mds0: Client 192.168.107.242 failing to respond to cache pressure
>>             mds0: Client ceph-zclient1.labs.com failing to respond to cache pressure
>>      monmap e1: 3 mons at {ceph-zadmin=192.168.107.155:6789/0,ceph-zmonitor=192.168.107.247:6789/0,ceph-zmonitor1=192.168.107.246:6789/0}
>>             election epoch 6, quorum 0,1,2 ceph-zadmin,ceph-zmonitor1,ceph-zmonitor
>>      mdsmap e820: 1/1/1 up {0=ceph-zstorage1=up:active}
>>      osdmap e1339: 3 osds: 2 up, 2 in
>>       pgmap v3048828: 384 pgs, 3 pools, 493 GB data, 6515 kobjects
>>             1082 GB used, 3252 GB / 4335 GB avail
>>                  384 active+clean
>>   client io 21501 B/s rd, 33173 B/s wr, 20 op/s
>>
>> Mounted:
>>
>> 192.168.107.155:6789,192.168.107.247:6789,192.168.107.246:6789:/  ceph  4.3T  1.1T  3.2T  25%  /home/side
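On the doubled usage in the quoted output: df against CephFS reports raw cluster capacity and raw usage, so with 2x replication the used figure is roughly twice the logical data (493 GB * 2 = 986 GB, which together with journal and filesystem overhead is in line with the 1082 GB used shown above). A quick, hedged way to confirm the replication factor; the pool name below is a placeholder for whatever the CephFS data pool is actually called:

    # List the pools, then check the replication factor ("size") of the CephFS data pool
    ceph osd lspools
    ceph osd pool get cephfs_data size   # "cephfs_data" is a placeholder name

    # Per-pool logical usage vs. raw cluster usage
    ceph df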
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


