`ceph pg stat` might be cleaner to watch than `ceph status | grep pgs`. I also like watching `ceph osd pool stats`, which breaks down all I/O by pool. You also have the option of the dashboard mgr module, which exposes a lot of useful information, including the per-pool I/O breakdown.
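For anyone following along, here is a rough sketch of those suggestions on a Luminous (12.2.x) cluster; the 5-second watch interval is just a convenient choice, and the dashboard module ships with ceph-mgr at that release:

  # one-line PG summary, refreshed every 5 seconds
  watch -n 5 ceph pg stat

  # per-pool client and recovery I/O rates
  watch -n 5 ceph osd pool stats

  # enable the dashboard module, then look up the URL it serves on
  ceph mgr module enable dashboard
  ceph mgr services

The Luminous dashboard is read-only, but it does include the per-pool I/O breakdown.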
On Thu, Mar 1, 2018 at 7:22 AM Georgios Dimitrakakis <giorgis@xxxxxxxxxxxx> wrote:
Excellent! Good to know that the behavior is intentional!
Thanks a lot John for the feedback!
Best regards,
G.
> On Thu, Mar 1, 2018 at 12:03 PM, Georgios Dimitrakakis
> <giorgis@xxxxxxxxxxxx> wrote:
>> I have recently updated to Luminous (12.2.4) and I have noticed that
>> "ceph -w" only produces an initial output like the one below and never
>> gets updated afterwards. Is this intentional? I was used to the old
>> behaviour, which constantly produced info.
>
> It's intentional. "ceph -w" is the command that follows the Ceph
> cluster log. The monitor used to dump the pg status into the cluster
> log every 5 seconds, which was useful sometimes, but also made the log
> pretty unreadable for anything else, because other output was quickly
> swamped with the pg status spam.
>
> To replicate the de facto old behaviour (print the pg status every
> 5 seconds), you can always do something like
> `watch -n1 "ceph status | grep pgs"`
>
> There's work ongoing to create a nicer replacement that provides a
> status stream without spamming the cluster log to do it:
> https://github.com/ceph/ceph/pull/20100
>
> Cheers,
> John
>
>>
>> Here is what I get as initial output which is not updated:
>>
>> $ ceph -w
>>   cluster:
>>     id:     d357a551-5b7a-4501-8d8f-009c63b2c972
>>     health: HEALTH_OK
>>
>>   services:
>>     mon: 1 daemons, quorum node1
>>     mgr: node1(active)
>>     osd: 2 osds: 2 up, 2 in
>>     rgw: 1 daemon active
>>
>>   data:
>>     pools:   11 pools, 152 pgs
>>     objects: 9786 objects, 33754 MB
>>     usage:   67494 MB used, 3648 GB / 3714 GB avail
>>     pgs:     152 active+clean
>>
>>
>>
>> Even when I create a new volume in my OpenStack installation, attach
>> it to a VM, mount it, and format it, I have to stop and re-run the
>> "ceph -w" command to see the following line:
>>
>>
>>   io:
>>     client: 767 B/s rd, 511 B/s wr, 0 op/s rd, 0 op/s wr
>>
>> which also stops updating after the first display.
>>
>>
>> Kind regards,
>>
>>
>> G.
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com