On Tue, Jun 6, 2017 at 05:26, TYLin <wooertim@xxxxxxxxx> wrote:
On Jun 6, 2017, at 11:18 AM, jiajia zhong <zhong2plus@xxxxxxxxx> wrote:

    It's very similar to ours, but is there any need to separate the OSDs for different pools? Why?

    Below is our crushmap:

     -98 6.29997 root tier_cache
     -94 1.39999     host cephn1-ssd
      95 0.70000         osd.95       up 1.00000 1.00000
     101 0.34999         osd.101      up 1.00000 1.00000
     102 0.34999         osd.102      up 1.00000 1.00000
     -95 1.39999     host cephn2-ssd
      94 0.70000         osd.94       up 1.00000 1.00000
     103 0.34999         osd.103      up 1.00000 1.00000
     104 0.34999         osd.104      up 1.00000 1.00000
     -96 1.39999     host cephn3-ssd
     105 0.34999         osd.105      up 1.00000 1.00000
     106 0.34999         osd.106      up 1.00000 1.00000
      93 0.70000         osd.93       up 1.00000 1.00000
     -93 0.70000     host cephn4-ssd
      97 0.34999         osd.97       up 1.00000 1.00000
      98 0.34999         osd.98       up 1.00000 1.00000
     -97 1.39999     host cephn5-ssd
      96 0.70000         osd.96       up 1.00000 1.00000
      99 0.34999         osd.99       up 1.00000 1.00000
     100 0.34999         osd.100      up 1.00000 1.00000

Because Ceph cannot distinguish metadata requests from data requests. If we used the same set of OSDs for both the metadata cache and the data cache, the bandwidth needed by metadata requests could be consumed by data requests, leading to long response times.

Thanks,
Ting Yi Lin
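For illustration, one way to carve out a dedicated set of cache OSDs for the metadata pool is a separate CRUSH root plus a rule that draws only from it. This is only a minimal sketch; the bucket names (tier_cache_meta, cephn*-ssd-meta), the ruleset number, and the weights are hypothetical, not taken from the cluster above:

    # Hypothetical extra root grouping the SSD OSDs reserved for the
    # metadata cache; the existing tier_cache root would keep the
    # data-cache OSDs.
    root tier_cache_meta {
            id -99                          # pick an unused negative id
            alg straw
            hash 0                          # rjenkins1
            item cephn1-ssd-meta weight 0.350
            item cephn2-ssd-meta weight 0.350
            item cephn3-ssd-meta weight 0.350
    }

    # Rule that places replicas only on hosts under that root.
    rule tier_cache_meta {
            ruleset 5
            type replicated
            min_size 1
            max_size 10
            step take tier_cache_meta
            step chooseleaf firstn 0 type host
            step emit
    }

The host buckets referenced above would also have to be defined in the same decompiled map (ceph osd getcrushmap -o map.bin; crushtool -d map.bin -o map.txt; edit; crushtool -c map.txt -o map.new; ceph osd setcrushmap -i map.new). The metadata cache pool can then be pointed at the rule with something like "ceph osd pool set <metadata-cache-pool> crush_ruleset 5" (pool name hypothetical; on Luminous and later the option is crush_rule).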
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com