Re: Ceph community - how to make it even stronger

Hi.

What makes us struggle / wonder again and again is the absence of Ceph __man pages__. On *NIX systems, man pages are always the first place to look for help, right? Or is this considered "old school" by the Ceph makers / community? :O

And as many people have complained again and again, the same applies here as well: the Ceph documentation on docs.ceph.com lacks a lot of useful / needed things. If you really want to work with Ceph, you need to read and track many different sources all the time - the community news posts, docs.ceph.com, the Red Hat Storage material, and sometimes even the GitHub source code. All together this is very time consuming and error prone. From my point of view this is the biggest drawback of the whole (and overall GREAT!) "storage solution"!

While we are on that topic... THANKS for all the great help and posts to YOU / the Ceph community! You guys are great and really "make the difference"!

 
---------

Hi All.

I was reading up, and especially the thread on upgrading to Mimic and
stable releases caused me to reflect a bit on our Ceph journey so far.

We started approximately 6 months ago, with CephFS as the dominant
use case in our HPC setup - starting at 400TB usable capacity and,
as it matures, growing towards 1PB of mixed slow (HDD) and SSD storage.

Some of our first points of confusion were:
- BlueStore vs. FileStore - what was the actual recommendation?
- Figuring out which kernel clients are usable with CephFS - and which
kernels to use on the other end?
- Tuning of the MDS?
- Imbalance of OSD nodes rendering the cluster down - how to balance?
(see the command sketch right after this list)
- Triggering kernel bugs in the kernel client during OSD_FULL?
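
For what it's worth, here is a minimal sketch of the checks we ended up
running for the imbalance / OSD_FULL items above - assuming a
Luminous-or-later cluster where the mgr balancer module is available;
treat it as an illustration, not official guidance:

  # Per-OSD utilization - a large spread in the %USE column means imbalance
  ceph osd df tree

  # The balancer module can even out PG placement; upmap mode requires
  # all clients to report Luminous-or-newer feature bits
  ceph features
  ceph mgr module enable balancer   # needed on Luminous; on by default later
  ceph osd set-require-min-compat-client luminous
  ceph balancer mode upmap
  ceph balancer on

  # Headroom before OSD_FULL bites: the nearfull/backfillfull/full ratios
  ceph osd dump | grep full_ratio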

This mailing list has been very responsive to the questions, thanks for
that.

But - compared to other open source projects - we're lacking a bit of
infrastructure and guidance here.

I did check:
- http://tracker.ceph.com/projects/ceph/wiki/Wiki => which does not seem
to be operational.
- http://docs.ceph.com/docs/mimic/start/get-involved/
Gmane is probably not coming back - we have been waiting 2 years now; can
we easily get the mailing list archives indexed otherwise?

I feel that the wealth of knowledge being built up around operating Ceph
is not really being captured to make the next user's journey better and
easier.

I would love to help out - hey - I end up spending the time anyway, but
some guidance on how to do it may help.

I would suggest:

1) Send a 1-3 monthly status email on the project to the respective
mailing lists => major releases, conferences, etc.
2) Get the wiki active - one of the main things I want to know about when
messing with storage is what is working for other people - just a page
where people can dump an aggregated output of their Ceph cluster and
write 2-5 lines about the use case for it (a sketch of what such a dump
could look like follows below).
3) Either get the community more active on the documentation - advocate
for it - or start up more documentation on the wiki => a FAQ would be a
nice first place to start.
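
To make 2) concrete, here is a rough sketch of what such an aggregated
dump could contain - just the output of the standard status commands;
the selection is purely an illustration, not an agreed-upon format:

  # Cluster-wide summary: health, capacity, client IO
  ceph -s

  # Daemon software versions across the cluster
  ceph versions

  # Pools, replication / EC settings and usage
  ceph df
  ceph osd pool ls detail

  # Hardware / CRUSH layout at a glance
  ceph osd tree

Plus the 2-5 lines of prose about the workload, and the next user could
quickly see whether a setup resembles their own.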

There may be an awful lot of things I've missed in this write-up - but
please follow up.

If some of the core Ceph people already have thoughts / ideas / guidance,
please share, so we can collaboratively make it better.

Lastly - thanks for the great support on the mailing list so far - the
intent is only to try to make Ceph even better.