Re: installation docs

Hi Manuel,
I am Goncalo Borges (Portuguese) and I work at the University of Sydney. We have been using Ceph and CephFS for almost two years. If you think it worthwhile, we can talk and discuss our experiences. There is a good Ceph community in Melbourne, but you are actually the first person in Sydney that I am aware of.
Cheers 
Goncalo
________________________________________
From: ceph-users [ceph-users-bounces@xxxxxxxxxxxxxx] on behalf of Manuel Sopena Ballesteros [manuel.sb@xxxxxxxxxxxxx]
Sent: 30 December 2016 19:02
To: ceph-users@xxxxxxxxxxxxxx
Subject:  installation docs

Hi,

I would just like to point out a couple of issues I ran into while following the INSTALLATION (QUICK) document.


1. The order to clean a Ceph deployment is:

   a. ceph-deploy purge {ceph-node} [{ceph-node}]

   b. ceph-deploy purgedata {ceph-node} [{ceph-node}]
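For reference, the full cleanup sequence from the quick-start guide, as I understand it, looks roughly like this (a sketch; node1/node2 are placeholder node names):

    # remove the Ceph packages from the nodes
    ceph-deploy purge node1 node2
    # wipe the remaining data and configuration
    ceph-deploy purgedata node1 node2
    # discard the locally cached authentication keys
    ceph-deploy forgetkeys
    # clean the generated files from the admin working directory
    rm ceph.*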



2. I run Ceph Jewel 10.2.5 and "ceph-deploy osd prepare" also activates the OSD. This confused me because I could not understand why there are three commands (prepare + activate, or create) for this when only one is needed (right now I don't know the difference between prepare and create, as for me both do the same).
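To illustrate what I mean (a sketch; node1 and the disk sdb are placeholders, using the Jewel-era node:disk syntax):

    # prepare formats the disk, and on my setup the OSD came up activated too
    ceph-deploy osd prepare node1:sdb
    # activate is documented as the separate second step
    ceph-deploy osd activate node1:/dev/sdb1
    # create is documented as prepare + activate in one command
    ceph-deploy osd create node1:sdb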


3. Right after the installation, the status of the Ceph cluster is "HEALTH_WARN too few PGs per OSD (10 < min 30)" and not "active+clean" as the documentation says.
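For anyone else hitting this, my understanding is that the usual fix is to raise the placement group count on the pool; a sketch, assuming the default rbd pool and that 128 PGs suits the cluster size:

    # check the current PG count
    ceph osd pool get rbd pg_num
    # raise pg_num first, then pgp_num to match
    ceph osd pool set rbd pg_num 128
    ceph osd pool set rbd pgp_num 128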



4. It would be good to add a small troubleshooting guide: how to check whether the monitors are working, how to restart the monitor processes, how to check communication between the OSD processes and the monitors, which commands to run on the local nodes to see in more detail what is failing, etc.
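As a starting point, I mean commands along these lines (a sketch; assumes a systemd-based distro and that the mon id is the short hostname):

    # overall cluster health and status
    ceph -s
    # is the monitor daemon running on this node?
    systemctl status ceph-mon@$(hostname -s)
    # restart it if needed
    systemctl restart ceph-mon@$(hostname -s)
    # ask the local monitor for its view of the quorum (via the admin socket)
    ceph daemon mon.$(hostname -s) mon_status
    # which OSDs do the monitors consider up/in?
    ceph osd tree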



5. Also, I spent a lot of time on the IRC channel trying to understand why ceph-deploy was failing; the problem was that the disks were already mounted. I could not see this with "df -h", only with "lsblk"; in my opinion a note about this would also be good to have. Special thanks to Ivve and badone, who helped me find out what the issue was.
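For the record, this is roughly how it looked and how it was resolved (a sketch; /dev/sdb stands in for the OSD disk and node1 for the node):

    # lsblk showed the old ceph data partitions still mounted,
    # which df -h did not make obvious to me
    lsblk
    # unmount the stale partition, then wipe the disk before re-deploying
    umount /dev/sdb1
    ceph-deploy disk zap node1:sdb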



6. Last thing: it would be good to mention that installing Ceph through Ansible is also an option.
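For completeness, the rough shape of that route as I understand it (a sketch based on the ceph-ansible project; the inventory file and its contents are placeholders):

    # fetch the ceph-ansible playbooks
    git clone https://github.com/ceph/ceph-ansible.git
    cd ceph-ansible
    # start from the sample site playbook
    cp site.yml.sample site.yml
    # edit group_vars/ and write an inventory listing the mon/osd hosts, then:
    ansible-playbook -i inventory site.yml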


Other than that, congratulations to the community for your effort, and keep it going!


Manuel Sopena Ballesteros | Big Data Engineer
Garvan Institute of Medical Research
The Kinghorn Cancer Centre, 370 Victoria Street, Darlinghurst, NSW 2010
T: +61 (0)2 9355 5760 | F: +61 (0)2 9295 8507 | E: manuel.sb@xxxxxxxxxxxxx

NOTICE
Please consider the environment before printing this email. This message and any attachments are intended for the addressee named and may contain legally privileged/confidential/copyright information. If you are not the intended recipient, you should not read, use, disclose, copy or distribute this communication. If you have received this message in error please notify us at once by return email and then delete both messages. We accept no liability for the distribution of viruses or similar in electronic communications. This notice should not be removed.

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



