On 2013/9/10 6:38, Gaylord Holder wrote:
Indeed, that pool was created with the default 8 pg_nums.
8 pg_num * 2 TB/OSD / 2 replicas ~ 8 TB, which is about how far I got.
I bumped up the pg_num to 600 for that pool and nothing happened.
I bumped up the pgp_num to 600 for that pool and ceph started shifting
things around.
Can you explain the difference between pg_num and pgp_num to me?
I can't understand the distinction.
Thank you for your help!
-Gaylord
On 09/09/2013 04:58 PM, Samuel Just wrote:
This is usually caused by having too few PGs. Each pool with a
significant amount of data needs at least around 100 PGs/OSD.
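A rough sizing sketch based on Sam's ~100 PGs/OSD rule of thumb. The numbers below (10 OSDs in, replication size 2) are taken from this thread; the variable names are just placeholders, and the usual advice is to round the result up to the next power of two:

```shell
# Rule-of-thumb PG count for a pool: (OSDs * target PGs per OSD) / replicas.
OSDS=10            # OSDs that are up and in
SIZE=2             # pool replication factor ("size")
TARGET_PER_OSD=100 # ~100 PGs per OSD, per Sam's guideline

PGS=$(( OSDS * TARGET_PER_OSD / SIZE ))
echo "suggested pg_num: ${PGS}"
# Round up to a power of two before setting it, e.g. 500 -> 512.
```

With these inputs the suggestion comes out near the 600 Gaylord used, so the change above is consistent with the guideline.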
-Sam
On Mon, Sep 9, 2013 at 10:32 AM, Gaylord Holder
<gholder@xxxxxxxxxxxxx> wrote:
I'm starting to load up my ceph cluster.
I currently have 12 2TB drives (10 up and in, 2 defined but down and
out).
rados df
says I have 8TB free, but I have 2 nearly full OSDs.
I don't understand how/why these two disks are filled while the
others are relatively empty.
How do I tell ceph to spread the data around more, and why isn't it
already doing it?
Thank you for helping me understand this system better.
Cheers,
-Gaylord
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
Well, pg_num is the total number of placement groups in the pool, while
pgp_num is the number of placement groups actually used when calculating
data placement. Raising pg_num splits existing PGs, but data only starts
moving once pgp_num is raised to match.
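For reference, the sequence Gaylord described can be run with the standard pool-set commands. The pool name "rbd" below is only a placeholder; substitute the pool that was created with the default 8 PGs. This is a sketch of the procedure, not output from this cluster:

```shell
# Step 1: split the PGs. No data moves yet, because placement is still
# computed over the old pgp_num.
ceph osd pool set rbd pg_num 600

# Step 2: tell CRUSH to place data across all 600 PGs. This is what
# triggers the rebalancing ("ceph started shifting things around").
ceph osd pool set rbd pgp_num 600
```

Doing step 1 without step 2 is exactly the "nothing happened" state in the thread: the PGs exist but are still mapped as if there were only 8 of them.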