Lots of misdirected client requests with 73984 PGs/pool

Hi,

I was testing on 288 OSDs with pg_bits=8, for 73984 PGs/pool,
221952 total PGs.

Writing from CephFS clients generates lots of messages like this:

2012-08-28 14:53:33.772344 osd.235 [WRN] client.4533 172.17.135.45:0/1432642641 misdirected client.4533.1:124 pg 0.8b9d12d4 to osd.235 in e7, client e7 pg 0.112d4 features 262282
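
Folding that raw hash onto pg_num the way the kernel's libceph does
reproduces the "client e7" pg above, but only when the full 32-bit
seed is used; keeping just the low 16 bits lands in a different PG.
Here's a minimal standalone sketch -- ceph_stable_mod() is adapted
(with unsigned types) from include/linux/ceph/libceph.h, and the
0x1ffff mask is my guess at what pg_num_mask works out to for
pg_num=73984:

#include <stdio.h>
#include <stdint.h>

/* Adapted from the kernel's include/linux/ceph/libceph.h: fold a
 * placement seed x onto b PGs, where bm is the next power of two
 * above b, minus one. */
static uint32_t ceph_stable_mod(uint32_t x, uint32_t b, uint32_t bm)
{
	if ((x & bm) < b)
		return x & bm;
	else
		return x & (bm >> 1);
}

int main(void)
{
	uint32_t hash = 0x8b9d12d4;  /* raw pg seed from the warning */
	uint32_t pg_num = 73984;     /* pg_bits=8 on 288 OSDs */
	uint32_t mask = 0x1ffff;     /* assumed pg_num_mask for 73984 PGs */

	/* Full 32-bit seed: prints pg 0.112d4, matching the warning. */
	printf("32-bit seed -> pg 0.%x\n",
	       ceph_stable_mod(hash, pg_num, mask));

	/* Seed truncated to 16 bits: prints pg 0.12d4 instead. */
	printf("16-bit seed -> pg 0.%x\n",
	       ceph_stable_mod(hash & 0xffff, pg_num, mask));
	return 0;
}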

There's no trouble with 288 OSDs and pg_bits=7 (36992 PGs/pool).

This is all with current Linus master branch + ceph-client testing
branch, and master branch on the server side.

Does this have anything to do with old_pg_t AKA struct ceph_pg
only using 16 bits for the placement seed?  73984 PGs/pool is the
first of these counts above 2^16 = 65536, which would explain why
36992 is fine.
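
For reference, this is roughly the old wire struct I mean, from
include/linux/ceph/rados.h -- note that ps, the placement seed, is
only a __le16, so a 32-bit object hash gets truncated when the
request is encoded:

struct ceph_pg {
	__le16 preferred;  /* preferred primary osd */
	__le16 ps;         /* placement seed */
	__le32 pool;       /* object pool */
} __attribute__ ((packed));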

I'm about to start testing on 576 OSDs, and was hoping to use
pg_bits=7 to get more uniform data placement across OSDs.
My first step on that path was 288 OSDs with pg_bits=8.

Maybe I can't, or I'll just have to ignore the warnings, until
the kernel's libceph learns about 32-bit placement seeds?

Or maybe this 16-bit seed theory is all wrong, and something
else is going on?  If so, what more information is needed
to help sort this out?

Thanks -- Jim
