One way might be to have a nag system: a global flag that can turn nags off at the cluster level (for production deployments), while the nags are appended to the cluster-status messages on a regular basis to remind operators that there is something to investigate. Having an indexed list of nags would allow them to be turned off individually:

  ceph --nagoff 1

A list of example nags:

 1: There are too few placement groups (PGs) for the number of nodes
    and OSDs; adjust with:
      ceph osd pool set data pg_num XXX
    and:
      ceph osd pool set data pgp_num XXX
 2: There are too few MONs deployed for the number of nodes.
 3: An even number of MONs is configured; consider removing or adding
    one for better efficiency.
 4: The journal size is too small for the write traffic; consider
    increasing it to ...
...
14: /sys/block/.../queue/nr_requests is too low; adjust it higher with:
      echo 512 > /sys/block/sdb/queue/nr_requests
    and observe with:
      iostat -x /dev/sdX
15: The target transaction size is too small for this CPU; adjust it
    higher with:
      ceph osd set_target_transaction_size = 50
...
99: I am nagging too much, aren't I?
...etc...

  ceph --nagon 3

Nag me about #3 as a reminder.

Definition of "nag": v. nagged, nag·ging, nags. v.tr. 1. To annoy by constant scolding, complaining, or urging. 2. To torment persistently, as with anxiety or pain.

This subsystem could be called ceph-wife, but I might get into trouble for that suggestion.

On Sat, Sep 7, 2013 at 9:29 AM, Mark Nelson <mark.nelson@xxxxxxxxxxx> wrote:
> At one point Sam and I were discussing some kind of message that
> wouldn't be a health warning, but something kind of similar to what
> you are discussing here. The idea is this would be for when Ceph
> thinks something is configured sub-optimally, but the issue doesn't
> necessarily affect the health of the cluster (at least in so much as
> everything is functioning as defined). We were concerned that people
> might not want more things causing health warnings.
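To make the idea concrete, the indexed on/off mechanism could look something like the sketch below. This is purely illustrative Python; the `NagRegistry` class, its method names, and the nag IDs are hypothetical and not part of any real Ceph API or CLI.

```python
# Hypothetical sketch of the proposed nag registry. Nothing here is
# real Ceph code; names and IDs are illustrative only.

class NagRegistry:
    def __init__(self):
        self.global_enabled = True   # cluster-level switch for production
        self.nags = {}               # nag id -> message text
        self.muted = set()           # individually silenced nag ids

    def register(self, nag_id, message):
        """Record a nag that should appear in cluster-status output."""
        self.nags[nag_id] = message

    def nag_off(self, nag_id):
        """Silence one nag (the proposed 'ceph --nagoff <id>')."""
        self.muted.add(nag_id)

    def nag_on(self, nag_id):
        """Re-enable one nag (the proposed 'ceph --nagon <id>')."""
        self.muted.discard(nag_id)

    def active_nags(self):
        """Nags to append to the periodic cluster-status message."""
        if not self.global_enabled:
            return []
        return [(i, m) for i, m in sorted(self.nags.items())
                if i not in self.muted]

registry = NagRegistry()
registry.register(1, "Too few PGs for the number of nodes and OSDs")
registry.register(3, "Even number of MONs configured")
registry.nag_off(1)            # operator silences nag #1
print(registry.active_nags())  # only nag #3 remains active
```

The point of the indexed dictionary is that each nag can be muted on its own while the `global_enabled` flag gives production deployments a single switch to silence everything.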