On 29.05.2012 15:39, Yann Dupont wrote:
> On 29/05/2012 11:46, Stefan Priebe - Profihost AG wrote:
>> It would be really nice if somebody from Inktank could comment on this
>> whole situation.
>>
> Hello.
> I think I have the same bug:
>
> My setup is 8 OSD nodes, 3 MDS (1 active) & 3 MON.
> All my machines are Debian, using a custom 3.4.0 kernel. Ceph is
> 0.47.2-1~bpo60+1 (Debian package).

That sounds exactly like the same issue. Sadly, nobody from Inktank has
replied to this problem over the last few days.

> As you can see, much more stable bandwidth with this pool.

That's pretty strange...

> I understand data & rbd pool probably don't use the same internals, but
> is this difference expected?

There must be some difference in how the two pools are handled. One quick
way to check for a configuration difference is sketched below.

Stefan
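PS: Here is a minimal sketch for comparing the definitions of the data and
rbd pools (replication size, crush ruleset, pg_num, pgp_num). It assumes
the ceph CLI is on the PATH; the exact output format of "ceph osd dump"
varies between versions, so treat this as a starting point, not a stable
interface:

#!/usr/bin/env python
# Hedged sketch: compare pool definitions by parsing `ceph osd dump`.
# The dump prints one "pool N '<name>' ..." line per pool, including
# replication size, crush ruleset, pg_num and pgp_num.
import subprocess

def pool_lines():
    out = subprocess.check_output(["ceph", "osd", "dump"])
    return [l for l in out.decode().splitlines() if l.startswith("pool ")]

if __name__ == "__main__":
    for line in pool_lines():
        # Only the two pools under discussion here.
        if "'data'" in line or "'rbd'" in line:
            print(line)

If the two lines differ only in pool name and id, then the pool parameters
are not the explanation and the difference must sit deeper, in how the two
pools are actually used.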