ceph -w output

Hi!

I am trying to understand the values reported by ceph -w, especially the ones at the end that look like throughput figures:

2015-05-15 00:54:33.333500 mon.0 [INF] pgmap v26048646: 17344 pgs: 17344 active+clean; 6296 GB data, 19597 GB used, 155 TB / 174 TB avail; 6023 kB/s rd, 549 kB/s wr, 7564 op/s
2015-05-15 00:54:34.339739 mon.0 [INF] pgmap v26048647: 17344 pgs: 17344 active+clean; 6296 GB data, 19597 GB used, 155 TB / 174 TB avail; 1853 kB/s rd, 1014 kB/s wr, 2015 op/s
2015-05-15 00:54:35.353621 mon.0 [INF] pgmap v26048648: 17344 pgs: 17344 active+clean; 6296 GB data, 19597 GB used, 155 TB / 174 TB avail; 2101 kB/s rd, 1680 kB/s wr, 1950 op/s
2015-05-15 00:54:36.375887 mon.0 [INF] pgmap v26048649: 17344 pgs: 17344 active+clean; 6296 GB data, 19597 GB used, 155 TB / 174 TB avail; 1641 kB/s rd, 1266 kB/s wr, 1710 op/s
2015-05-15 00:54:37.399647 mon.0 [INF] pgmap v26048650: 17344 pgs: 17344 active+clean; 6296 GB data, 19597 GB used, 155 TB / 174 TB avail; 4735 kB/s rd, 777 kB/s wr, 7088 op/s
2015-05-15 00:54:38.453922 mon.0 [INF] pgmap v26048651: 17344 pgs: 17344 active+clean; 6296 GB data, 19597 GB used, 155 TB / 174 TB avail; 5176 kB/s rd, 942 kB/s wr, 7779 op/s
2015-05-15 00:54:39.462838 mon.0 [INF] pgmap v26048652: 17344 pgs: 17344 active+clean; 6296 GB data, 19597 GB used, 155 TB / 174 TB avail; 3407 kB/s rd, 768 kB/s wr, 2131 op/s
2015-05-15 00:54:40.488387 mon.0 [INF] pgmap v26048653: 17344 pgs: 17344 active+clean; 6296 GB data, 19597 GB used, 155 TB / 174 TB avail; 3343 kB/s rd, 518 kB/s wr, 1881 op/s
2015-05-15 00:54:41.512540 mon.0 [INF] pgmap v26048654: 17344 pgs: 17344 active+clean; 6296 GB data, 19597 GB used, 155 TB / 174 TB avail; 1221 kB/s rd, 2385 kB/s wr, 1686 op/s
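To make the trend easier to watch, I have been pulling the three rate fields out of each line with a quick script. Just a sketch; the regex simply matches the summary fields exactly as they appear above:

```python
import re

# Matches the throughput summary at the end of each pgmap line,
# e.g. "6023 kB/s rd, 549 kB/s wr, 7564 op/s".
PATTERN = re.compile(r"(\d+) kB/s rd, (\d+) kB/s wr, (\d+) op/s")

def parse_rates(line):
    """Return (read_kBps, write_kBps, ops) from one 'ceph -w' pgmap line, or None."""
    m = PATTERN.search(line)
    if m is None:
        return None
    return tuple(int(x) for x in m.groups())

line = ("2015-05-15 00:54:33.333500 mon.0 [INF] pgmap v26048646: 17344 pgs: "
        "17344 active+clean; 6296 GB data, 19597 GB used, 155 TB / 174 TB avail; "
        "6023 kB/s rd, 549 kB/s wr, 7564 op/s")
print(parse_rates(line))  # (6023, 549, 7564)
```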

Am I right to assume that "kB/s rd" and "kB/s wr" give the amount of data clients have read/written since the previous line, summed over all OSDs?

As for the op/s I am a little more uncertain. What kind of operations does this count?
Assuming it is also reads and writes aggregated, what counts as an operation?
For example, when I request data via the Rados Gateway, do I see one "op" here for the request from RGW's perspective, or multiple, depending on how many "low-level" objects a big RGW upload was striped across? What about non-RGW objects that get striped: are reads/writes on those counted once, or once per stripe? Does anything other than reads/writes of object data count towards this number, e.g. key/value-level accesses?

Is it possible for someone to come up with a theoretical estimate of the maximum values achievable with a given set of hardware?
This is a cluster of 4 nodes with 48 OSDs, 4TB each, all spinners.
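For reference, here is my naive back-of-envelope attempt. Both the per-disk IOPS figure and the replica count are assumptions on my part, not measured or configured values, so please correct me if this is the wrong way to think about it:

```python
# Back-of-envelope IOPS ceiling -- every number here is an assumption, not a spec.
osds = 48                 # total OSDs in the cluster (4 nodes x 12)
iops_per_spinner = 100    # rough figure for a 7.2k RPM disk (assumed)
replication = 3           # assumed replica count; each client write hits 3 OSDs

raw_iops = osds * iops_per_spinner          # aggregate across all disks
max_client_write_iops = raw_iops / replication
print(raw_iops, max_client_write_iops)      # 4800 1600.0
```

If that reasoning is roughly right, the ~7500 op/s peaks above would already exceed the raw spindle budget, which makes me suspect caching, or that an "op" is something cheaper than a random disk I/O.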
Are these values good, bad, critical?

Can I somehow deduce, even if only as a rough estimate, how "loaded" my cluster is? I am not after precision monitoring, just some kind of traffic-light system (e.g. up to X% of the theoretical maximum is fine, up to Y% indicates a very busy cluster, and anything above Y% means we might be in for trouble).

Any pointers to documentation or other material would be appreciated if this has been discussed in some detail before. The only thing I found was a post on this list from 2013 which said no more than "ops are reads, writes, anything", without going into detail about the "anything".

Thanks a lot!

Daniel


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



