Hi Mark,

I've just watched the first part, regarding cache tiering, and found it very interesting. I think you guys have hit the nail on the head about the unnecessary promotions: as well as hurting performance for the current in-flight I/Os, they also have an impact on future I/Os that need to be re-promoted due to cache pollution. At the moment I find that a larger cache tier of slower disks gives better overall performance than a smaller SSD cache tier.

I don't know if you saw my post from a couple of months ago, but I also found that dropping the RBD object size had a very positive effect on latency and made the cache much more effective:

http://permalink.gmane.org/gmane.comp.file-systems.ceph.user/17965

I'm also wondering whether some sort of full-block proxy write would be worth implementing for erasure-coded pools, similar to how full-stripe writes work for RAID 5/6. This would most likely help during sequential writes onto a cached erasure pool. RBD caching in front of the cache tier could assemble the writes ready for this.

Nick

> -----Original Message-----
> From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of
> Mark Nelson
> Sent: 10 June 2015 19:51
> To: ceph-devel; ceph-users@xxxxxxxxxxxxxx
> Subject: 6/10/2015 performance meeting recording
>
> Hi All,
>
> A couple of folks have asked for a recording of the performance meeting
> this week, as there was an excellent discussion today regarding
> simplemessenger optimization with Sage.
>
> Here's a link to the recording: https://bluejeans.com/s/8knV/
>
> You can access this recording and all previous performance meeting
> recordings, along with meeting notes, on the performance etherpad here:
>
> http://pad.ceph.com/p/performance_weekly
>
> Thanks,
> Mark
> _______________________________________________
> ceph-users mailing list
> ceph-users@xxxxxxxxxxxxxx
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
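
For reference, the two tunables touched on above (RBD object size and promotion behaviour) can both be experimented with from the CLI. A minimal sketch, assuming Hammer-era (v0.94) tooling; the pool and image names are placeholders:

    # 1) Smaller RBD objects. Images default to 4 MB objects (order 22,
    #    i.e. 2^22 bytes); order 20 gives 1 MB objects, so each promotion
    #    into the cache tier moves a quarter as much data per object.
    #    Object size is fixed at creation time; --size here is in MB.
    rbd create --size 102400 --order 20 rbd/small-object-image

    # 2) Gating promotions on read recency, to cut down on unnecessary
    #    promotions (min_read_recency_for_promote was added in Hammer).
    #    An object is only promoted on read if it appears in the last
    #    N HitSets, so one-off reads get proxied instead of promoted.
    ceph osd pool set hot-pool hit_set_type bloom
    ceph osd pool set hot-pool hit_set_count 4
    ceph osd pool set hot-pool hit_set_period 1200
    ceph osd pool set hot-pool min_read_recency_for_promote 2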