Hi all,

Reading the recent thread on upgrading to Mimic and stable releases caused me to reflect a bit on our Ceph journey so far. We started approximately 6 months ago, with CephFS as the dominant use case in our HPC setup - starting at 400 TB usable capacity and, as it matures, growing towards 1 PB of mixed spinning disk and SSD.

Some of the first points of confusion were:

- BlueStore vs. FileStore - what is the actual recommendation?
- Figuring out which kernel clients are usable with CephFS, and which kernels to use on the other end.
- Tuning of the MDS.
- Imbalance of OSD nodes taking the cluster down - how to rebalance? (see the first sketch at the end of this mail)
- Triggering bugs in the kernel client during OSD_FULL.

This mailing list has been very responsive to these questions - thanks for that. But compared to other open source projects we're lacking a bit of infrastructure and guidance here. I did check:

- http://tracker.ceph.com/projects/ceph/wiki/Wiki => which does not seem to be operational.
- http://docs.ceph.com/docs/mimic/start/get-involved/

Gmane is probably not coming back - we've been waiting 2 years now - so can we easily get the mailing list archives indexed some other way?

I feel that the wealth of knowledge being built up around operating Ceph is not really being captured to make the next user's journey better and easier. I would love to help out - hey, I end up spending the time anyway - but some guidance on how to do it would help. I would suggest:

1) Send a status email on the project to the respective mailing lists every 1-3 months => major releases, conferences, etc.
2) Get the wiki active. One of the main things I want to know when messing with storage is what is working for other people - just a page where people can dump an aggregated output of their Ceph cluster and write 2-5 lines about the use case for it (the second sketch at the end of this mail shows the kind of output I mean).
3) Either get the community more active on the documentation - advocate for it - or start up more documentation on the wiki => a FAQ would be a nice first place to start.

There may be an awful lot of things I've missed in this write-up - please follow up. If some of the core Ceph people already have thoughts / ideas / guidance, please share them so we can collaboratively make this better.

Lastly, thanks for the great support on the mailing list so far - the intent here is only to try to make Ceph even better.
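
P.S. On the OSD imbalance point above, here is a sketch of the kind of approach I mean - not presented as the definitive answer, and assuming a Luminous/Mimic-era cluster where the mgr balancer module is available and all clients are new enough for upmap, so please check the documentation for your release first:

    # inspect per-OSD utilisation to see how skewed the cluster is
    ceph osd df tree

    # upmap balancing requires all clients to speak the Luminous protocol
    ceph osd set-require-min-compat-client luminous

    # enable the mgr balancer module and let it even out PG placement
    ceph mgr module enable balancer
    ceph balancer mode upmap
    ceph balancer on
    ceph balancer status

    # older fallback if upmap is not an option for your clients
    ceph osd reweight-by-utilization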
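
P.P.S. For suggestion 2), the "aggregated output" I have in mind is nothing fancier than the standard status commands - assuming a handful of them is enough to describe a cluster for such a wiki page:

    ceph -s          # overall health and mon/mgr/osd/mds counts
    ceph versions    # which daemon versions are actually running
    ceph df          # pools and raw vs. usable capacity
    ceph osd tree    # failure domains and device classes
    ceph fs status   # CephFS filesystems, MDS state and clients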