Re: cephfs monitor I/O and throughput

We have graphs for network usage in Grafana.  We even have aggregate graphs for projects.  For my team, we specifically have graphs for the Ceph cluster OSD public network, OSD private network, RGW network, and MON network.  You should be able to do something similar for each of the servers in your project to see how much network traffic they are using, especially if they use a VLAN to access the Ceph cluster and nothing else uses that interface on the server; that interface would then only carry CephFS traffic on each client node.
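As a sketch, those graphs are usually built from the per-interface byte
counters on each client node; a quick way to look at the same numbers by
hand (eth1 here is just a placeholder for the interface/VLAN dedicated to
Ceph traffic):

    # raw counters the graphs are typically fed from
    cat /sys/class/net/eth1/statistics/rx_bytes
    cat /sys/class/net/eth1/statistics/tx_bytes

    # or a quick live view of per-interface throughput (sysstat)
    sar -n DEV 5 1

Any collector that scrapes these counters (collectd, node_exporter, etc.)
can feed per-server and per-project aggregate graphs in Grafana.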

I agree that a pool per project is a poor design when you don't have a finite and limited number of projects.

On Fri, Dec 8, 2017 at 9:44 AM Martin Dojcak <dojcak@xxxxxxxxxxxxxxx> wrote:
Hi

we are planning to replace our NFS infra with CephFS (Luminous). Our use
case for CephFS is mounting directories via the kernel client (not FUSE,
for performance reasons). The CephFS root directory is logically split
into subdirs where each represents a separate project with its own source
code and application data, for example:
.
├── project-1
├── project-2
├── project-3
├── project-4
├── project-5
└── project-N

A project should be mountable only by a client that has the right
permissions. There are a few options to do that (fuller cap examples are
sketched after the list):
1. all projects are in the default data pool with the default namespace
and clients get permissions via an MDS cap path restriction (ceph auth
get-or-create client.project-1 mds 'allow rw path=/project-1' ...)
2. all projects are in the default data pool with a custom namespace per
project and clients get permissions via an OSD cap namespace restriction
(ceph auth get-or-create client.project-1 namespace=project-1 ...)
3. each project is in a separate data pool and clients get permissions
via an OSD cap pool restriction (ceph auth get-or-create client.project-1
pool=project-1)
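For reference, fuller versions of those caps might look like the
following. This is only a sketch: it assumes a filesystem named cephfs
whose default data pool is cephfs_data, and a pool named project-1 for
the pool-per-project case; adjust the names to your setup.

    # option 1: MDS path restriction
    ceph auth get-or-create client.project-1 \
        mon 'allow r' \
        mds 'allow rw path=/project-1' \
        osd 'allow rw pool=cephfs_data'
    # Luminous also has a helper that builds equivalent caps:
    ceph fs authorize cephfs client.project-1 /project-1 rw

    # option 2: OSD namespace restriction (the directory layout must
    # point at the namespace, e.g. via the
    # ceph.dir.layout.pool_namespace xattr on the project dir)
    ceph auth get-or-create client.project-1 \
        mon 'allow r' \
        mds 'allow rw path=/project-1' \
        osd 'allow rw pool=cephfs_data namespace=project-1'

    # option 3: OSD pool restriction (pool added as a CephFS data pool
    # and selected via the ceph.dir.layout.pool xattr on the project dir)
    ceph auth get-or-create client.project-1 \
        mon 'allow r' \
        mds 'allow rw path=/project-1' \
        osd 'allow rw pool=project-1'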

One `must have` for us is the ability to monitor per-project I/O
activity and throughput. Sometimes you need to know which project is
generating high load on the storage and fix it.
With the first and second options there is no way to accomplish that. I
searched around for a tool like iostat for the cephfs kernel client, but
for now it doesn't exist.
The third option can be monitored with the `ceph osd pool stats` command,
but there are some disadvantages when you have many pools:
- recovery and backfilling problems
- problems staying within a predefined number of PGs per OSD
- problems choosing a good number of PGs per pool
- many more
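As a minimal sketch of what that per-pool monitoring looks like (the pool
name project-1 is just an example; the JSON form is convenient to feed
into Grafana or a collector script):

    # human-readable, refreshed every 5 seconds
    watch -n 5 'ceph osd pool stats project-1'

    # machine-readable for scraping
    ceph osd pool stats project-1 -f json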

Any ideas how we can monitor load per project on CephFS, or per client?

Martin Dojcak

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
