Awesome! I have yet to hear any ZFS-in-Ceph chatter, nor have I seen it on the mailing lists that I've caught. I would assume it would function pretty well, considering how long it has been in use alongside some of the production systems I have seen, but I have little to no experience with it personally.

I thought the rados issue was weird as well. Even with a degraded cluster I feel like I should be getting better throughput, unless I'm hitting an object with a bunch of bad PGs or something; a couple of quick checks I'd run to rule that out are sketched below. We are using 2x 2x10G cards in LACP to get over 10G on average, and we have separate gateway nodes (went with the Supermicro kit after all), so CPU on those nodes shouldn't be an issue. It is extremely low as it stands, which is again surprising.

I honestly think this is some kind of radosgw bug in Giant, as I have another Giant cluster with the exact same config that is performing much better on much less hardware. Hopefully it is indeed a bug of some sort and not yet another screw-up on my end. Furthermore, hopefully I find the bug and fix it for others to find and profit from ^_^.

Thanks for all of your help!
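For reference, the quick checks I mentioned above would look something like the following. The pool name, bond interface, and bench parameters are only placeholders for whatever the setup actually uses; this is just a rough sketch against the stock ceph/rados CLI and Linux bonding stats:

    # confirm the LACP bond is actually aggregating both links (bond0 is an assumption)
    cat /proc/net/bonding/bond0

    # list any PGs that are stuck unclean and which OSDs they map to
    ceph health detail
    ceph pg dump_stuck unclean

    # raw RADOS write throughput, bypassing radosgw entirely
    # (.rgw.buckets is a guess -- use whichever pool radosgw writes object data to)
    rados bench -p .rgw.buckets 60 write -t 32

If the raw rados bench numbers look healthy but large PUTs/GETs through radosgw are still slow, that would point at the gateway rather than the cluster itself.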
On 12/22/2014 05:26 PM, Craig Lewis wrote: