Re: v0.38 released

On Fri, Nov 18, 2011 at 07:01, Andre Noll <maan@xxxxxxxxxxxxxxx> wrote:
> For starters, it would be nice to include the ceph osd subcommands
> in the man pages. To my knowledge they are only documented on the
> (old) wiki
>
>        http://ceph.newdream.net/wiki/Monitor_commands
>
> at the moment. Would a patch that adds the subcommands and descriptions
> to the man pages be accepted?

I'm not sure the man pages are the best place for that; there are a
lot of subcommands, and man forces them into one big list. I'd
personally go for putting a reference under
http://ceph.newdream.net/docs/latest/ops/ and using the structure
there to separate osd/mon/mds etc. into slightly more manageable
chunks; a possible layout is sketched below.
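
Purely as an illustration, and assuming the docs stay docutils-based,
that could look something like this (all file names here are
hypothetical, not an existing layout):

    doc/ops/
        monitor-commands/
            index.rst   # overview, ties the pages below together
            osd.rst     # ceph osd ... subcommands
            mon.rst     # ceph mon ... subcommands
            mds.rst     # ceph mds ... subcommands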

> If so, I'd be willing to do this work. However, the files in man/
> of the ceph git repo seem to be generated by docutils, so I suspect
> they are not meant to be edited directly. What's the preferred way
> to patch the man pages?

The content comes from doc/man/ and is built with ./admin/build-doc

That puts the whole HTML into build-doc/output/html/ and the *roff
into build-doc/output/man/, and from there it is migrated to man/ "by
need" (there's too much noise in the changes to keep doing that all
the time, and there are too many toolchain dependencies to generate
docs on every build).
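
Concretely, a minimal sketch of that workflow (the ceph.8 name is just
an example of one generated page):

    ./admin/build-doc                     # run the doc toolchain
    ls build-doc/output/html/             # rendered HTML tree
    ls build-doc/output/man/              # generated *roff pages
    cp build-doc/output/man/ceph.8 man/   # migrate a page "by need"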

>> Step 2: have a monitoring system be able to feed back information to
>> use as osd weights, with admin customizability
> How could such a monitoring system be implemented? In particular if
> abstract criteria like "future extension plans" have to be considered.

Going back to my initial list: storage size, disk IO speed, network
link bandwidth, heat in that part of the data center, future
expansion plans, ..

That divides into three groups:
- things that are more about the capability of the hardware (= change
very seldom)
- things that are monitored outside of ceph
- plans

Hence, it seems to me that a sysadmin would look at the node data
gathered by e.g. Ohai/Chef, combine that with collectd/munin-style
monitoring of the data center, optionally apply a policy like
"increase weights of rack 7 by 40%", and then spit out a mapping of
osd id -> weight.
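
For illustration only, here's a minimal sketch of those last two
steps, assuming the earlier tooling has produced a weights.txt of
"<osd-id> <weight>" lines and a hypothetical rack7-osds.txt listing
the osd ids in rack 7, and applying the result with the "osd
reweight" monitor command from the wiki page above:

    # Bump rack 7 by 40% and spit out the final osd id -> weight map.
    # File names and the 40% figure are assumptions for this example.
    awk 'FNR == NR { rack7[$1] = 1; next }
         rack7[$1] { $2 = $2 * 1.4 }
         { print }' rack7-osds.txt weights.txt > final-weights.txt

    # Feed the mapping to the cluster, one osd at a time.
    while read id weight; do
        ceph osd reweight "$id" "$weight"
    done < final-weights.txt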

Our chef cookbooks will probably provide a skeleton for that in the
future, but that's not a short-term need; most installations will
probably set the weights once when the hardware is new, and I'd expect
practically all clusters <6 months old to have fairly homogeneous
hardware, and thus identical weights.