Hi,
we are planning to replace our NFS infrastructure with CephFS (Luminous). Our
use case is mounting project directories via the kernel client (not FUSE, for
performance reasons). The CephFS root directory is logically split into
subdirectories, each representing a separate project with its own source code
and application data, for example:
.
├── project-1
├── project-2
├── project-3
├── project-4
├── project-5
└── project-N
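For reference, the per-project kernel mounts we have in mind would look
something like this (the monitor address, mount point and secret file below
are just placeholders, not our real values):

  mount -t ceph 192.168.0.1:6789:/project-1 /mnt/project-1 \
      -o name=project-1,secretfile=/etc/ceph/project-1.secret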
A project must only be mountable by a client that has the right permissions.
There are a few options for that (rough command sketches follow the list):
1. all projects in the default data pool with the default namespace; the
client gets permissions via an MDS cap path restriction (ceph auth
get-or-create client.project-1 mds 'allow rw path=/project-1' ...)
2. all projects in the default data pool with a custom RADOS namespace per
project; the client gets permissions via an OSD cap namespace restriction
(ceph auth get-or-create client.project-1 namespace=project-1 ...)
3. each project in its own data pool; the client gets permissions via an
OSD cap pool restriction (ceph auth get-or-create client.project-1
pool=project-1)
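To make the options concrete, the fuller commands would look roughly like
this. This is only a sketch: the filesystem name (cephfs), default data pool
name (cephfs_data), admin mount point (/mnt/cephfs) and PG count are
assumptions, not our real values.

  # Option 1: MDS path restriction, everything in the default data pool
  ceph auth get-or-create client.project-1 \
      mon 'allow r' \
      mds 'allow rw path=/project-1' \
      osd 'allow rw pool=cephfs_data'

  # Option 2: per-project RADOS namespace inside the default data pool
  setfattr -n ceph.dir.layout.pool_namespace -v project-1 /mnt/cephfs/project-1
  ceph auth get-or-create client.project-1 \
      mon 'allow r' \
      mds 'allow rw path=/project-1' \
      osd 'allow rw pool=cephfs_data namespace=project-1'

  # Option 3: dedicated data pool per project
  ceph osd pool create project-1 64
  ceph fs add_data_pool cephfs project-1
  setfattr -n ceph.dir.layout.pool -v project-1 /mnt/cephfs/project-1
  ceph auth get-or-create client.project-1 \
      mon 'allow r' \
      mds 'allow rw path=/project-1' \
      osd 'allow rw pool=project-1'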
One `must have` for us is the ability to monitor per-project I/O activity and
throughput. Sometimes you need to know which project is generating high load
on the storage and fix it.
With the first and second options there is no way to accomplish that. I
searched around for an iostat-like tool for the CephFS kernel client, but for
now it does not exist.
The third option can be monitored with the `ceph osd pool stats` command (a
sketch follows the list below), but there are some disadvantages when you
have many pools:
- recovery and backfilling problems
- it is hard to stay within the intended number of PGs per OSD
- it is hard to pick a good number of PGs per pool
- and more
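For completeness, the per-pool monitoring we have in mind with the third
option is simply (pool name assumed to match the project name):

  # live client I/O rates for one project pool
  ceph osd pool stats project-1
  # JSON output, e.g. for feeding an external monitoring system
  ceph osd pool stats project-1 -f json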
Any ideas on how we can monitor load per project or per client on CephFS?
Martin Dojcak