Hammer cache behavior

We just enabled a small cache pool on one of our clusters (v0.94.1) and have run into some issues.
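For context, the tier was set up along these lines (pool names here are placeholders, not our real ones):

   ceph osd tier add cold-storage cache
   ceph osd tier cache-mode cache readonly
   ceph osd tier set-overlay cold-storage cache

The issues: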

1) Cache population appears to happen over the public network rather than the cluster network. We're seeing essentially no traffic on the cluster network and multiple gigabits per second inbound to our cache OSDs. Normal rebuild/recovery does go over the cluster network, so I don't believe this is just a configuration issue.
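For reference, both networks are defined in our ceph.conf, something like this (subnets are examples):

   [global]
       public network  = 192.168.100.0/24
       cluster network = 192.168.200.0/24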

2) Similar to #1, I was expecting cache-population traffic to show up as repair/recovery traffic in 'ceph status'. Instead, it appears as client traffic.
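That is, while the cache warms, 'ceph -s' shows it on the client io line, e.g. (numbers illustrative):

   client io 2037 MB/s rd, 0 B/s wr, 509 op/s

rather than on a 'recovery io' line.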

3) We're using the cache pool in readonly mode (we only really write to our pools once). I noticed that if all the OSDs hosting the cache pool go down, all reads stop until they're restored. I would have expected reads to fall back to the backing pool when the cache pool is unavailable. Is this how it's supposed to work?
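Presumably the manual workaround is to drop the overlay so clients go straight to the backing pool (placeholder name again):

   ceph osd tier remove-overlay cold-storage

but that's an admin action after the fact, not the automatic fallback I was hoping for.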

Any thoughts on these? Are my expectations just wrong here? The documentation is fairly sparse, so I'm not quite sure what to expect.