failing to respond to cache pressure


 



So this ‘production ready’ CephFS in Jewel seems a little not quite there yet…

 

Currently I have a single system mounting CephFS and merely scp-ing data to it.

The CephFS mount shows 168 TB used, with 345 TB of 514 TB available.
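For reference, the mount itself is nothing exotic; roughly the following (assuming the kernel client here — the monitor name, mount point, and secret file below are placeholders, not my actual values):

    # Plain kernel-client mount of CephFS (placeholder host/paths)
    mount -t ceph mon1.example.com:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret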

 

Every so often, I get a HEALTH_WARN message: “mds0: Client failing to respond to cache pressure”.

Even if I stop the scp, it will not go away until I umount/remount the filesystem.
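Is something along these lines the right way to chase it down, rather than remounting? (“mds0”, the mount point, and the cache value below are placeholders/guesses on my part.)

    # On the MDS host: see which client session the warning is about.
    ceph health detail
    ceph daemon mds.mds0 session ls

    # On the client (kernel mount assumed): flush, then drop dentries/inodes,
    # which should hand caps back to the MDS without a umount/remount.
    sync
    echo 2 > /proc/sys/vm/drop_caches

    # Or give the MDS more room; Jewel sizes the cache by inode count
    # (mds_cache_size, default 100000). The value here is just a guess.
    ceph daemon mds.mds0 config set mds_cache_size 400000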

 

For testing, I had CephFS mounted on about 50 systems, and when updatedb started running on them, I got all kinds of issues.

I figured having updatedb run on a few systems would be a good ‘see what happens’ test of how it behaves when there is a fair amount of access to it.
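My working guess is that the nightly updatedb crawl walking the whole CephFS tree is what blows out the client caches. Would excluding the mount in /etc/updatedb.conf on the clients be the sane mitigation? Something like the below, appended to the existing lists (the mount point is a placeholder):

    # /etc/updatedb.conf (mlocate) on each client: keep updatedb from
    # crawling the CephFS mount. Append to the existing values rather
    # than replacing them; "/mnt/cephfs" is a placeholder.
    PRUNEFS = "ceph fuse.ceph"
    PRUNEPATHS = "/mnt/cephfs"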

 

So, should I even be considering CephFS as a large storage mount for a compute cluster? Is there a sweet spot for what CephFS is good at?

 

 

Brian Andrus

ITACS/Research Computing

Naval Postgraduate School

Monterey, California

voice: 831-656-6238

 

 

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
