[ceph-users] "ceph pg dump summary -f json" question


 



Weird. Maybe you can check the source code (src/mon/PGMonitor.cc,
around L1434).
But it looks like there is another command, "ceph pg dump_json {all |
summary | sum | pools | ...}", which you can try.

On Fri, May 16, 2014 at 2:56 PM, Cao, Buddy <buddy.cao at intel.com> wrote:
> In my env, "ceph pg dump all -f json" only returns result below,
>
> {"version":45685,"stamp":"2014-05-15 23:50:27.773608","last_osdmap_epoch":13875,"last_pg_scan":13840,"full_ratio":"0.950000","near_full_ratio":"0.850000","pg_stats_sum":{"stat_sum":{"num_bytes":151487109145,"num_objects":36186,"num_object_clones":0,"num_object_copies":72372,"num_objects_missing_on_primary":0,"num_objects_degraded":5716,"num_objects_unfound":0,"num_read":8502912,"num_read_kb":611729970,"num_write":2737247,"num_write_kb":340122861,"num_scrub_errors":39,"num_shallow_scrub_errors":39,"num_deep_scrub_errors":0,"num_objects_recovered":267486,"num_bytes_recovered":1120311874505,"num_keys_recovered":94},"stat_cat_sum":{},"log_size":952236,"ondisk_log_size":952236},"osd_stats_sum":{"kb":19626562368,"kb_used":296596996,"kb_avail":19329965372,"hb_in":[],"hb_out":[],"snap_trim_queue_len":0,"num_snap_trimming":0,"op_queue_age_hist":{"histogram":[1,2,0,1,1,2,56,5,20,18],"upper_bound":1024},"fs_perf_stat":{"commit_latency_ms":55456,"apply_latency_ms":408}},"pg_stats_delta":{"stat_sum":{"num_bytes":45867008,"num_objects":10,"num_object_clones":0,"num_object_copies":20,"num_objects_missing_on_primary":0,"num_objects_degraded":0,"num_objects_unfound":0,"num_read":385,"num_read_kb":29360,"num_write":296,"num_write_kb":61320,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0},"stat_cat_sum":{},"log_size":0,"ondisk_log_size":0}}
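> The JSON blob above is one long line, but it is easy to pick apart with a short script. This is just a sketch against a trimmed copy of the output above, in case it helps; the field names are taken directly from that dump:
>
```python
import json

# Trimmed sample of the "ceph pg dump all -f json" output quoted above.
dump = json.loads("""
{"version": 45685,
 "pg_stats_sum": {"stat_sum": {"num_objects": 36186,
                               "num_object_copies": 72372,
                               "num_objects_degraded": 5716,
                               "num_scrub_errors": 39}}}
""")

# Pull the cluster-wide stat_sum and derive a degraded-object percentage.
s = dump["pg_stats_sum"]["stat_sum"]
degraded_pct = 100.0 * s["num_objects_degraded"] / s["num_object_copies"]
print(f"degraded: {s['num_objects_degraded']}/{s['num_object_copies']} "
      f"({degraded_pct:.1f}%), scrub errors: {s['num_scrub_errors']}")
```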
>
> But "ceph pg dump summary" returns much more info than this, and even includes a lot of pg summaries like the ones below. Any idea? Thanks
> 3.3     30      0       0       0       125829120       77      77      active+clean    2014-05-15 23:41:47.950780      13875'77        13875:129       [11,4]  [11,4]  0'0     2014-05-15 20:02:58.937561      0'0     2014-05-13 20:02:53.011326
> 4.4     49      0       0       0       205520896       3001    3001    active+clean    2014-05-15 19:52:41.534026      13875'5784      13875:29766     [41,26] [41,26] 13244'5438      2014-05-15 09:47:09.501469      0'0     2014-05-14 09:45:29.872229
> 5.5     0       0       0       0       0       0       0       active+clean    2014-05-15 19:52:40.344205      0'0     13875:1702      [25,33] [25,33] 0'0     2014-05-15 09:48:33.067498    0'0       2014-05-14 09:47:40.406650
> 6.6     0       0       0       0       0       0       0       active+clean    2014-05-15 09:49:45.360999      0'0     13875:1678      [24,9]  [24,9]  0'0     2014-05-15 09:49:45.360951    0'0       2014-05-14 09:47:42.871263
> 7.7     0       0       0       0       0       0       0       active+clean    2014-05-15 20:12:03.812881      0'0     13875:58        [33,25] [33,25] 0'0     2014-05-15 20:12:02.859031    0'0       2014-05-15 20:12:02.859031
> pool 0  0       0       0       0       0       0       0
> pool 1  21      0       0       0       9470    21      21
> pool 2  0       0       0       0       0       0       0
> pool 3  13006   0       2032    0       54429261968     39905   39905
> pool 4  22725   0       3616    0       95251489157     909458  909458
> pool 5  3       0       0       0       262     4       4
> pool 6  0       0       0       0       0       0       0
> pool 7  0       0       0       0       0       0       0
> pool 8  0       0       0       0       0       0       0
>  sum    35755   0       5648    0       149680760857    949388  949388
> osdstat kbused  kbavail kb      hb in   hb out
> 0       79744   486985484       487065228       [1,8,9,16,17,32,33,34,35,36,37,47]      []
> 1       80068   486985160       487065228       [0,2,8,9,16,17,32,33,34,35,36,37]       []
> 2       9602980 477462248       487065228       [1,3,12,13,18,19,38,39,40,41]   []
> 3       18896156        468169072       487065228       [2,4,11,12,13,18,19,20,38,39,40,41]     []
> 4       15758800        471306428       487065228       [3,5,12,13,18,19,20,38,39,40,41]        []
> 5       13118956        473946272       487065228       [4,6,7,11,12,18,20,38,39,40]    []
> 6       83000   486982228       487065228       [5,7,14,15,22,23,42,43,44,45,46,47]     []
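> The plain-text osdstat rows above can be reduced to per-OSD utilization with a couple of lines; a sketch, assuming the column layout shown (id, kbused, kbavail, kb, then the hb lists):
>
```python
# A few osdstat rows copied from the plain "ceph pg dump summary" output above.
rows = """
0       79744   486985484       487065228
2       9602980 477462248       487065228
3       18896156        468169072       487065228
""".strip().splitlines()

# Columns: osd id, kbused, kbavail, kb total; compute percent used per OSD.
for line in rows:
    osd, kbused, kbavail, kb = (int(x) for x in line.split()[:4])
    print(f"osd.{osd}: {100.0 * kbused / kb:.1f}% used")
```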
>
>
> Wei Cao (Buddy)
>
> -----Original Message-----
> From: xan.peng [mailto:xanpeng at gmail.com]
> Sent: Friday, May 16, 2014 2:20 PM
> To: Cao, Buddy
> Cc: ceph-users at ceph.com
> Subject: Re: "ceph pg dump summary -f json" question
>
> Looks like "ceph pg dump all -f json" = "ceph pg dump summary".
>
> On Fri, May 16, 2014 at 1:54 PM, Cao, Buddy <buddy.cao at intel.com> wrote:
>> Hi there,
>>
>> "ceph pg dump summary -f json" does not return as much data as "ceph
>> pg dump summary".  Are there any ways to get the full JSON-format
>> data for "ceph pg dump summary"?
>>
>>
>>
>>
>>
>> Wei Cao (Buddy)
>>
>>
>>
>>
>> _______________________________________________
>> ceph-users mailing list
>> ceph-users at lists.ceph.com
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>

