Jim Schutt wrote:
> Sage Weil wrote:
>> I guess the other thing that would help to confirm this is to just
>> halve the number of OSDs on your machines in a test and see if the
>> problem goes away.
> I was going to try this first, exactly because it seems like
> a definitive test.
FWIW, I've done some testing on a file system using 48 OSDs
rather than 96.
With the 96-OSD version of this test (12 servers, 8 OSDs/server),
where 64 clients write a total of 128 GiB of data, I usually see
multiple instances (5-6 or more is common) of OSDs getting
marked down, noticing they were wrongly marked down, and coming back.
With the 48-OSD version of the file system (12 servers, 4 OSDs/server)
I ran multiple tests, totaling several TiB of data, and experienced
exactly one instance of an OSD being wrongly marked down.
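
In case it helps anyone reproduce the comparison, here's a minimal
sketch of how one might tally these events per OSD by scanning the
OSD logs. It assumes the default log location
(/var/log/ceph/ceph-osd.*.log) and matches on the "wrongly marked
me down" message; adjust both for your deployment:

#!/usr/bin/env python3
# Sketch: count "wrongly marked me down" occurrences per OSD log.
# Assumptions: default log path and stock message text; both may
# differ on your deployment.
import glob
import re
from collections import Counter

LOG_GLOB = "/var/log/ceph/ceph-osd.*.log"  # assumed default location
MSG = re.compile(r"wrongly marked me down")

counts = Counter()
for path in glob.glob(LOG_GLOB):
    with open(path, errors="replace") as f:
        for line in f:
            if MSG.search(line):
                counts[path] += 1

for path, n in sorted(counts.items()):
    print("%4d  %s" % (n, path))
print("total: %d" % sum(counts.values()))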
-- Jim