OSD caching on EC pools (heavy cross-OSD communication on cached reads)

Hi.

I just moved some of my data on CephFS from the 3x replicated pool to
an EC pool. The data is "write rare / read heavy" and is being served
to an HPC cluster.
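
For reference, this is roughly what I did to point the directory at
the EC pool (pool and path names below are simplified placeholders):

  ceph osd pool set ecpool allow_ec_overwrites true  # required for CephFS on EC
  ceph fs add_data_pool cephfs ecpool                # register as a data pool
  setfattr -n ceph.dir.layout.pool -v ecpool /mnt/cephfs/hpc-data
  # new files created under this directory now land in the EC pool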

To my surprise, it looks like OSD memory caching is done at the chunk
level (the individual erasure-coded shards each OSD stores), not at
the assembled-object level. As a consequence, even though the dataset
is fully cached in memory, every read still generates very heavy
cross-OSD network traffic to reassemble the objects.
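
To put numbers on it: with a hypothetical k=4, m=2 profile, a single
4 MB object is stored as four 1 MB data chunks plus two 1 MB coding
chunks, each on a different OSD, so the primary has to pull chunks
from several other OSDs on every read, cached or not. The chunk
placement is easy to see (pool and object names are just examples):

  ceph osd map ecpool 10000000000.00000000
  # prints the object's PG and its acting set -- one OSD per
  # chunk, with the primary marked as pN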

Since (as far as I understand) no write can reach the underlying
object without going through the primary OSD of its PG, caching could
be done more effectively at that level: the primary could cache the
assembled object and serve reads without contacting the other shards.
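
(The primary for a given PG can be identified like this, with a
made-up PG id:)

  ceph pg 5.1f query | grep -i primary
  # the acting_primary / up_primary fields name the OSD that
  # all client I/O for this PG goes through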

Caching on the 3x replicated pool does not behave this way: a read
request does not retrieve all three copies to compare and verify
them (or at least I cannot see any network traffic suggesting that
it does).
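
(I was simply watching the NICs on the OSD hosts while re-reading
already-cached files, along these lines; the interface name is a
placeholder:)

  iftop -i eth0
  # replicated reads show essentially no OSD-to-OSD traffic,
  # while EC reads move chunk-sized amounts between OSDs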

Is the above configurable? Or would this be a feature/performance
request?

Jesper

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


