Re: Learning Ceph - Workshop ideas for entry level


 



I've found that having more than one drive controller in Vagrant is problematic, although I agree it would be logically preferable for running more than one OSD per node. That being said, have you all run into the problems I've seen with a second HDD controller? Namely, the inability to use vagrant up after a machine has been provisioned and halted. At least from my perspective, once you've provisioned you can't use Vagrant for up/halt any more and have to fall back to VirtualBox commands; VirtualBox doesn't check whether the drive controller already exists and gets confused. Is that consistent with what you've found? I've spent some time on this issue, so I'm curious whether you've found any "magic grits" to solve it. This happens on both *nix and Windows, by the way (it's certainly a VirtualBox issue, not Vagrant's fault, in my opinion).
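
For reference, here's roughly the kind of Vagrantfile snippet I've been playing with (the box name, controller name, disk filename, and size are just placeholders, so treat it as a sketch rather than a working recipe):

    Vagrant.configure("2") do |config|
      config.vm.box = "centos/7"   # whatever box the workshop uses

      config.vm.provider "virtualbox" do |vb|
        osd_disk = "osd1-disk0.vdi"   # hypothetical extra OSD disk

        # Only create/attach the second controller the first time around;
        # re-running storagectl against a VM that already has the controller
        # is exactly where VirtualBox gets confused for me.
        unless File.exist?(osd_disk)
          vb.customize ["createmedium", "disk", "--filename", osd_disk, "--size", "10240"]
          vb.customize ["storagectl", :id, "--name", "OSD Controller", "--add", "sata", "--portcount", "1"]
          vb.customize ["storageattach", :id, "--storagectl", "OSD Controller",
                        "--port", "0", "--device", "0", "--type", "hdd", "--medium", osd_disk]
        end
      end
    end

The File.exist? guard is the closest thing to a workaround I've tried, since it skips the customize calls on later runs, but I haven't convinced myself it makes the up/halt cycle reliable, which is why I'm asking.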


Thank you in advance for your thoughts; anything that jumpstarts newbies past a Ceph installation is a great thing!

Bob





Bob Wassell

Solutions Architect | SoftIron <http://www.softiron.com/>
+1 610 505 9861

bob@xxxxxxxxxxxx <mailto:bob@xxxxxxxxxxxx>

> On Feb 14, 2020, at 11:50 AM, Ignacio Ocampo <nafiux@xxxxxxxxx> wrote:
> 
> Hi all,
> 
> A group of friends and I are documenting a hands-on workshop about
> Ceph https://github.com/Nafiux/ceph-workshop, for learning purposes.
> 
> The idea is to provide step-by-step visibility into common scenarios, from
> basic usage to disaster and recovery.
> 
> We will hold a workshop next weekend, and some of the ideas for learning
> are:
> 
>   - Configure and learn how to consume Block Storage Devices
>   - Configure and learn how to consume File Systems Storage
>   - Configure and learn how to consume Object Storage
>   - Simulate a disaster and recovery event by killing a node and setting
>   up a new one
>   - Simulate a node migration
> 
> Any ideas or feedback are welcome: the way we've decided to install Ceph
> (ceph-ansible), the way we're configuring the cluster, suggestions on basic
> day-to-day operations we should learn, etc.
> 
> Thanks for your support!
> 
> -- 
> Ignacio Ocampo

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



