Re: [ceph-users] installation docs


 



+ceph-devel

On Fri, Dec 30, 2016 at 6:02 PM, Manuel Sopena Ballesteros
<manuel.sb@xxxxxxxxxxxxx> wrote:
> Hi,
>
>
>
> I just would like to point a couple of issues I have following the
> INSTALLATION (QUICK) document.
>
>
>
> 1.       The order to clean up a ceph deployment is:
>
> a.       ceph-deploy purge {ceph-node} [{ceph-node}]
>
> b.      ceph-deploy purgedata {ceph-node} [{ceph-node}]
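For reference, the full teardown sequence from the Quick Start also includes forgetkeys and removing the generated files; a sketch, where "node1 node2" are placeholder hostnames (guarded so it is a no-op on a host without ceph-deploy):

```shell
# Sketch of a full cluster teardown with ceph-deploy.
# "node1 node2" are placeholder hostnames, not real nodes.
if command -v ceph-deploy >/dev/null 2>&1; then
    ceph-deploy purge node1 node2       # remove the ceph packages
    ceph-deploy purgedata node1 node2   # remove data under /var/lib/ceph
    ceph-deploy forgetkeys              # drop the locally cached keyrings
    rm -f ceph.*                        # remove generated config/keys in cwd
else
    echo "ceph-deploy not available on this host"
fi
```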
>
>
>
> 2.       I run ceph jewel 10.2.5 and “ceph-deploy prepare” also activates the
> OSD. This confused me because I could not understand why there are three
> commands (prepare + activate, or create) to do this when only one is
> needed (right now I don’t know the difference between prepare and create,
> as for me both do the same).
>
>
>
> 3.       Right after installation the status of the ceph cluster is
> “HEALTH_WARN too few PGs per OSD (10 < min 30)”, not “active + clean” as
> the documentation says.
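For what it's worth, one way to clear that warning on a fresh cluster is to raise the placement-group count on the default pool; the pool name (`rbd`, the Jewel default) and the value 128 below are illustrative assumptions, not prescriptions:

```shell
# Sketch: raise pg_num/pgp_num on the default "rbd" pool to clear the
# "too few PGs per OSD" warning. 128 is only an example value; size it
# for your OSD count. Guarded so it degrades on a host without ceph.
if command -v ceph >/dev/null 2>&1; then
    ceph osd pool set rbd pg_num 128
    ceph osd pool set rbd pgp_num 128
    ceph -s    # should eventually settle on HEALTH_OK / active+clean
else
    echo "ceph CLI not available on this host"
fi
```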
>
>
>
> 4.       It would be good to add a small troubleshooting guide: how to
> check whether the monitors are working, how to restart the monitor
> processes, how to check communication between the OSD processes and the
> monitors, how to run commands on the local nodes to see in more detail
> what is failing, etc.
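Along the lines of that suggestion, a sketch of first-line checks; the service name assumes systemd packaging (`ceph-mon@<hostname>`) and port 6789 is the default monitor port, so adjust both for your deployment:

```shell
# Sketch of first-line monitor checks, guarded so it degrades gracefully
# on a non-ceph host. Assumes systemd naming (ceph-mon@<hostname>) and
# the default monitor port 6789.
if command -v ceph >/dev/null 2>&1; then
    ceph -s                                    # overall cluster health
    ceph daemon mon.$(hostname -s) mon_status  # ask the local mon directly
    systemctl restart ceph-mon@$(hostname -s)  # restart the local monitor
    ss -tlnp | grep 6789                       # is the monitor listening?
else
    echo "ceph CLI not available on this host"
fi
```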
>
>
>
> 5.       Also, I spent a lot of time on the IRC channel trying to
> understand why ceph-deploy was failing; the problem was that the disks
> were already mounted. I could not see this by running df -h, but I could
> with lsblk, and in my opinion this would also be good to have in the docs.
> Special thanks to Ivve and badone, who helped me find out what the issue
> was.
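The df-vs-lsblk point can be shown concretely: df -h lists only mounted filesystems by mount point, while lsblk lists every block device with its mount point, so a leftover ceph data partition stands out immediately (the device name in the comment is an example, not a real path):

```shell
# lsblk shows every block device and its mount point; a stale ceph
# partition that is still mounted will have a MOUNTPOINT entry here,
# even when it is easy to miss in df -h output.
if command -v lsblk >/dev/null 2>&1; then
    lsblk -o NAME,SIZE,TYPE,MOUNTPOINT
else
    echo "lsblk not available on this host"
fi
# Once spotted, unmount the partition before re-running ceph-deploy, e.g.:
#   umount /dev/sdb1    # device name is an example only
```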
>
>
>
> 6.       Last thing: it would be good to mention that installing ceph
> through Ansible is also an option.
>
>
>
> Other than that, congratulations to the community on your effort, and keep
> it going!
>
>
>
>
>
> Manuel Sopena Ballesteros | Big data Engineer
> Garvan Institute of Medical Research
> The Kinghorn Cancer Centre, 370 Victoria Street, Darlinghurst, NSW 2010
> T: + 61 (0)2 9355 5760 | F: +61 (0)2 9295 8507 | E: manuel.sb@xxxxxxxxxxxxx
>
>
>
> NOTICE
> Please consider the environment before printing this email. This message and
> any attachments are intended for the addressee named and may contain legally
> privileged/confidential/copyright information. If you are not the intended
> recipient, you should not read, use, disclose, copy or distribute this
> communication. If you have received this message in error please notify us
> at once by return email and then delete both messages. We accept no
> liability for the distribution of viruses or similar in electronic
> communications. This notice should not be removed.
>
> _______________________________________________
> ceph-users mailing list
> ceph-users@xxxxxxxxxxxxxx
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>



-- 
Cheers,
Brad
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html


