Re: v0.38 released

On Fri, Nov 18, 2011 at 10:47, Tommi Virtanen wrote:
> On Fri, Nov 18, 2011 at 07:01, Andre Noll <maan@xxxxxxxxxxxxxxx> wrote:
> > For starters, it would be nice to include the ceph osd subcommands
> > in the man pages. To my knowledge they are only documented on the
> > (old) wiki
> >
> >        http://ceph.newdream.net/wiki/Monitor_commands
> >
> > at the moment. Would a patch that adds the subcommands and descriptions
> > to the man pages be accepted?
> 
> I'm not sure if the man pages are the best place for that; there are a
> lot of subcommands, and man forces them into one big list. I'd
> personally go for putting a reference under
> http://ceph.newdream.net/docs/latest/ops/ and using that structure to
> separate osd/mon/mds etc. into slightly more manageable chunks.

I believe that code and documentation should be kept as close to each
other as possible, and I'd also prefer to edit and access the
documentation locally via command line tools rather than through a
browser. But I don't have a strong opinion on this, so let's go for the
web documentation.

Should I prepare something and post it to this mailing list with a
request for inclusion in the web pages, or do you want me to edit the
web documentation directly?

> > If so, I'd be willing to do this work. However, the files in man/
> > of the ceph git repo seem to be generated by docutils, so I suspect
> > they are not meant to be edited directly. What's the preferred way
> > to patch the man pages?
> 
> The content comes from doc/man/ and is built with ./admin/build-doc
> 
> That puts the whole HTML into build-doc/output/html/ and the *roff
> into build-doc/output/man/, and from there it is migrated to man/ "by
> need" (there's too much noise in the changes to keep doing that all
> the time, and there are too many toolchain dependencies to generate
> docs on every build).

I see, thanks for explaining. The ./admin/build-doc command worked
for me out of the box on an Ubuntu Lucid system, by the way.
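
Just to make sure I understand the "by need" step: is something like the
little helper below what you have in mind? (The script itself is only my
guess; the paths are the ones from your description.)

    #!/usr/bin/env python
    # Hypothetical sketch of the "by need" migration: rebuild the docs,
    # then copy the generated *roff pages from build-doc/output/man/
    # into man/ in the source tree.
    import os
    import shutil
    import subprocess

    # Writes build-doc/output/html/ and build-doc/output/man/
    subprocess.check_call(['./admin/build-doc'])

    src = 'build-doc/output/man'
    dst = 'man'
    for name in os.listdir(src):
        path = os.path.join(src, name)
        if os.path.isfile(path):
            shutil.copy2(path, os.path.join(dst, name))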

> >> Step 2: have a monitoring system be able to feed back information to
> >> use as osd weights, with admin customizability
> > How could such a monitoring system be implemented? In particular if
> > abstract criteria like "future extension plans" have to be considered.
> 
> Going back to my initial list: storage size, disk IO speed, network
> link bandwidth, heat in that part of the data center, future
> expansion plans, ...
> 
> That divides into three groups:
> - things that are more about the capability of the hardware (= change
>   very seldom)
> - things that are monitored outside of ceph
> - plans
> 
> Hence, it seems to me that a sysadmin would look at the node data
> gathered by something like Ohai/Chef, combine that with
> collectd/munin-style monitoring of the data center, optionally apply a
> policy such as "increase weights of rack 7 by 40%", and then spit out
> a mapping of osd id -> weight.

OK, got the idea. However, in this example the difficult thing is
the decision "increase weights of rack 7 by 40%", which is made by a
human. Recomputing the osd weights accordingly should be fairly simple.
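
To illustrate what I mean by "fairly simple", here is a toy sketch. The
inventory and policy formats are made up purely for the example; only
the "rack 7 by 40%" adjustment and the osd id -> weight mapping come
from your description.

    #!/usr/bin/env python
    # Toy example: combine per-osd inventory data with an admin policy
    # like "increase weights of rack 7 by 40%" and emit an
    # osd id -> weight mapping. The inventory and policy below are
    # invented for illustration.

    # osd id -> (rack, base weight, e.g. derived from disk size in TB)
    inventory = {
        0: ('rack7', 1.0),
        1: ('rack7', 2.0),
        2: ('rack3', 1.0),
    }

    # admin-supplied adjustment: rack -> multiplicative factor
    policy = {'rack7': 1.4}  # "increase weights of rack 7 by 40%"

    weights = dict((osd_id, base * policy.get(rack, 1.0))
                   for osd_id, (rack, base) in inventory.items())

    for osd_id, weight in sorted(weights.items()):
        print('osd.%d -> %.2f' % (osd_id, weight))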

> Our chef cookbooks will probably provide a skeleton for that in the
> future, but that's not a short-term need; most installations will
> probably set the weights once when the hardware is new, and I'd expect
> practically all clusters <6 months old to have fairly homogeneous
> hardware, and thus identical weights.

Are you implying that ceph is only suitable for new clusters with
homogeneous hardware? I'm asking because our cluster is far from
homogeneous: there are 8-year-old 2-core nodes with small SCSI disks
as well as 64-core boxes with much larger SATA disks.

Thanks
Andre
-- 
The only person who always got his work done by Friday was Robinson Crusoe
