Re: too many PGs per OSD when pg_num = 256??

Hmm. Something happened then. I only have 20 OSDs. What might cause that?

 

Brian Andrus

ITACS/Research Computing

Naval Postgraduate School

Monterey, California

voice: 831-656-6238

 

 

 

From: David Turner [mailto:david.turner@xxxxxxxxxxxxxxxx]
Sent: Thursday, September 22, 2016 10:04 AM
To: Andrus, Brian Contractor <bdandrus@xxxxxxx>; ceph-users@xxxxxxxxxxxxxx
Subject: RE: too many PGs per OSD when pg_num = 256??

 

So you have 3,520 PGs. Assuming all of your pools use 3 replicas, and using the 377 PGs/OSD figure from your HEALTH_WARN state, that would put your cluster at roughly 28 OSDs.
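A rough sketch of that arithmetic (still assuming size 3 everywhere):

# echo $((3520 * 3 / 377))    # 10,560 PG copies spread at 377 per OSD
28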

When you calculate how many PGs a pool should have, you need to account for how many OSDs you have and what percentage of your cluster's total data each pool will hold, and go from there. The Ceph PG Calc tool is an excellent resource for figuring out how many PGs each pool should have; it takes all of those factors into account.  http://ceph.com/pgcalc/
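As a rough sketch of the rule of thumb behind the calculator (not an exact quote of the tool): pg_num for a pool is roughly (target PGs per OSD x number of OSDs x that pool's share of the data) / replica count, rounded to a nearby power of two. For example, a pool expected to hold about 30% of the data on 20 OSDs at size 3, targeting ~100 PGs per OSD (the 30% share and the 100 target are only illustrative numbers):

# echo $((100 * 20 * 30 / 100 / 3))    # target x OSDs x %data / replicas
200

...which you would round up to 256, the nearest power of two.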


David Turner | Cloud Operations Engineer | StorageCraft Technology Corporation
380 Data Drive Suite 300 | Draper | Utah | 84020
Office: 801.871.2760 | Mobile: 385.224.2943


If you are not the intended recipient of this message or received it erroneously, please notify the sender and delete it, together with any attachments, and be advised that any dissemination or copying of this message is prohibited.



From: Andrus, Brian Contractor [bdandrus@xxxxxxx]
Sent: Thursday, September 22, 2016 10:41 AM
To: David Turner; ceph-users@xxxxxxxxxxxxxx
Subject: RE: too many PGs per OSD when pg_num = 256??

David,

I have 15 pools:

# ceph osd lspools|sed 's/,/\n/g'
0 rbd
1 cephfs_data
2 cephfs_metadata
3 vmimages
14 .rgw.root
15 default.rgw.control
16 default.rgw.data.root
17 default.rgw.gc
18 default.rgw.log
19 default.rgw.users.uid
20 default.rgw.users.keys
21 default.rgw.users.email
22 default.rgw.meta
23 default.rgw.buckets.index
24 default.rgw.buckets.data

# ceph -s | grep -Eo '[0-9]+ pgs'
3520 pgs
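
For reference (these commands are not in the thread, but any Jewel-era release should have them), two quick ways to see where those 3,520 PGs come from and how many land on each OSD:

# ceph osd dump | grep pg_num    # pg_num and replica size for every pool
# ceph osd df                    # the PGS column is the per-OSD PG count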

 

 

 

Brian Andrus

ITACS/Research Computing

Naval Postgraduate School

Monterey, California

voice: 831-656-6238

 

 

 

From: David Turner [mailto:david.turner@xxxxxxxxxxxxxxxx]
Sent: Thursday, September 22, 2016 8:57 AM
To: Andrus, Brian Contractor <bdandrus@xxxxxxx>; ceph-users@xxxxxxxxxxxxxx
Subject: RE: too many PGs per OSD when pg_num = 256??

 

Forgot the + for the regex.

ceph -s | grep -Eo '[0-9]+ pgs'


David Turner | Cloud Operations Engineer | StorageCraft Technology Corporation
380 Data Drive Suite 300 | Draper | Utah | 84020
Office: 801.871.2760 | Mobile: 385.224.2943


If you are not the intended recipient of this message or received it erroneously, please notify the sender and delete it, together with any attachments, and be advised that any dissemination or copying of this message is prohibited.



From: David Turner
Sent: Thursday, September 22, 2016 9:53 AM
To: Andrus, Brian Contractor; ceph-users@xxxxxxxxxxxxxx
Subject: RE: too many PGs per OSD when pg_num = 256??

How many pools do you have?  How many pgs does your total cluster have, not just your rbd pool?

ceph osd lspools
ceph -s | grep -Eo '[0-9] pgs'

My guess is that you have other pools with PGs, and the cumulative total of PGs per OSD is too high.
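
Roughly (a sketch of how that figure comes about, not the exact health-check code): it is the sum over all pools of pg_num x replica size, divided by the number of OSDs, and the warning fires once that passes mon_pg_warn_max_per_osd (300 by default). For example, a single size-3 pool with pg_num 256 spread over 20 OSDs (the OSD count mentioned elsewhere in the thread) contributes:

# echo $((256 * 3 / 20))    # PG copies from one pool, per OSD
38

...so fifteen such pools add up quickly.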


From: ceph-users [ceph-users-bounces@xxxxxxxxxxxxxx] on behalf of Andrus, Brian Contractor [bdandrus@xxxxxxx]
Sent: Thursday, September 22, 2016 9:33 AM
To: ceph-users@xxxxxxxxxxxxxx
Subject: [ceph-users] too many PGs per OSD when pg_num = 256??

All,

 

I am getting a warning:

 

     health HEALTH_WARN
            too many PGs per OSD (377 > max 300)
            pool cephfs_data has many more objects per pg than average (too few pgs?)

yet, when I check the settings:

# ceph osd pool get rbd pg_num
pg_num: 256
# ceph osd pool get rbd pgp_num
pgp_num: 256

 

How does something like this happen?

I did create a radosgw several weeks ago and have put a single file in it for testing, but that is it. It only started giving the warning a couple of days ago.
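
For what it's worth (not part of the original mail), the pg_num check above only covers the rbd pool; the same lookup can be looped over every pool, e.g.:

# for p in $(rados lspools); do echo -n "$p: "; ceph osd pool get "$p" pg_num; done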

 

Brian Andrus

ITACS/Research Computing

Naval Postgraduate School

Monterey, California

voice: 831-656-6238

 

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
