Hi, thanks for the reply. Here are the answers to your questions; hopefully they're helpful.
On 04/08/2015 12:36 PM, Lionel Bouton wrote:
> I probably won't be able to help much, but people knowing more will
> need at least:
> - your Ceph version,
> - the kernel version of the host on which you are trying to format /dev/rbd1,
> - which hardware and network you are using for this cluster (CPU, RAM,
>   HDD or SSD models, network cards, jumbo frames, ...).
ceph version 0.87 (c51c8f9d80fa4e0168aa52685b8de40e42758578)
Linux 3.18.4pl2 #3 SMP Thu Jan 29 21:11:23 CET 2015 x86_64 GNU/Linux
The hardware is an Amazon AWS c3.large. So, a (virtual) Xeon(R) CPU
E5-2680 v2 @ 2.80GHz, 3845992 kB RAM, plus whatever other virtual
hardware Amazon provides.
> There's only one thing surprising me here: you have only 6 OSDs, 1504GB
> (~ 250G / osd) and a total of 4400 pgs? With a replication of 3 this is
> 2200 pgs / OSD, which might be too much and unnecessarily increase the
> load on your OSDs.
>
> Best regards,
> Lionel Bouton
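For what it's worth, that arithmetic checks out. A quick sketch, using the numbers from this thread (not read from a live cluster):

```python
# PG-copies-per-OSD arithmetic from the thread above.
total_pgs = 4400    # sum of pg_num across all pools
replication = 3     # replica count (copies per object)
num_osds = 6        # OSDs in the cluster

pg_copies = total_pgs * replication    # 13200 PG copies cluster-wide
pgs_per_osd = pg_copies // num_osds    # 2200 PG copies per OSD
print(pgs_per_osd)
```

That is well above the ~100 PG copies per OSD usually aimed for, so each OSD is tracking far more placement groups than it needs to.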
Our workload involves creating and destroying a lot of pools. Each pool
has 100 pgs, so it adds up. Could this be causing the problem? What
would you suggest instead?
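As a point of comparison, the rule of thumb in the Ceph documentation of this era is roughly total_pgs ≈ (num_osds × 100) / replication, rounded up to a power of two. A sketch of that calculation (the helper name and the target of 100 PG copies per OSD are my own choices, not from the docs verbatim):

```python
# Hedged sketch of the common Ceph pg_num rule of thumb:
# total PGs ~= (num_osds * target_per_osd) / replication,
# rounded up to the next power of two.
def recommended_total_pgs(num_osds, replication, target_per_osd=100):
    raw = num_osds * target_per_osd / replication
    power = 1
    while power < raw:
        power *= 2          # round up to a power of two
    return power

print(recommended_total_pgs(6, 3))  # 6*100/3 = 200 -> 256
```

So for a 6-OSD, 3-replica cluster the guideline suggests on the order of 256 PGs in total, versus the 4400 currently allocated across all the short-lived pools.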
Jeff
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com