Re: ISCSI Setup

On 06/19/2019 12:34 AM, Brent Kennedy wrote:
> Recently upgraded a ceph cluster to Nautilus 14.2.1 from Luminous with
> no issues.  One of the reasons for doing so was to take advantage of
> some of the new iSCSI updates that were added in Nautilus.  I installed
> CentOS 7.6 and did all the basic stuff to get the server online.  I
> then tried to follow the
> http://docs.ceph.com/docs/nautilus/rbd/iscsi-target-cli/ document and
> hit a hard stop.  Apparently, neither the required package versions
> listed at the top nor ceph-iscsi itself exist yet in any repositories.

I am in the process of updating the upstream docs (Aaron wrote up the
changes for the RHCS docs, and I am converting them for the upstream
docs and turning them into patches for a PR) and ceph-ansible
(https://github.com/ceph/ceph-ansible/pull/3977) for the transition from
ceph-iscsi-cli/ceph-iscsi-config to ceph-iscsi.
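
For what it's worth, the doc you linked has you create
/etc/ceph/iscsi-gateway.cfg on each gateway node before starting the
rbd-target-api service. A rough sketch of that file (the IPs, keyring
name, and secure/insecure choice below are placeholders; check the doc
for the exact fields your version expects):

[config]
# Name of the Ceph cluster; a matching /etc/ceph/<name>.conf must exist.
cluster_name = ceph
# Keyring the gateway uses to talk to the cluster (placeholder name).
gateway_keyring = ceph.client.admin.keyring
# Whether the REST API between gateways uses TLS.
api_secure = false
# IPs of all gateway nodes, comma separated (placeholder addresses).
trusted_ip_list = 192.168.0.10,192.168.0.11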

The upstream GH for ceph-iscsi is here
https://github.com/ceph/ceph-iscsi

and it is built here:
https://shaman.ceph.com/repos/ceph-iscsi/
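
If you want to try the shaman builds before a release lands, a minimal
sketch for pointing yum at them (the repo URL pattern here is an
assumption based on how shaman serves repo files for other ceph
projects; adjust the branch/distro values for your setup):

```shell
#!/bin/sh
# Sketch: construct the URL of a yum .repo file for ceph-iscsi on shaman.
# ASSUMPTION: shaman serves repo files at this URL pattern, as it does
# for other ceph projects; branch/distro/release may need adjusting.
BRANCH=master
DISTRO=centos
RELEASE=7
REPO_URL="https://shaman.ceph.com/api/repos/ceph-iscsi/${BRANCH}/latest/${DISTRO}/${RELEASE}/repo"
echo "$REPO_URL"
# Then, roughly (run as root on the gateway node):
#   curl -L "$REPO_URL" -o /etc/yum.repos.d/ceph-iscsi.repo
#   yum install ceph-iscsi
```

These are development builds, of course, so treat them accordingly.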

I think we are just waiting on one last patch for FQDN support from SUSE
so we can make a new ceph-iscsi release.


> Reminds me of when I first tried to set up RGWs.  Is there a hidden
> repository somewhere that hosts these required packages?  Also, I found
> a thread saying those packages and the instructions were off, which
> concerns me.  Is there a good tutorial online somewhere?  I saw the
> ceph-ansible bits, but wasn't sure if that would even work because of
> the package issue.  I use ansible to deploy machines all the time.  I
> also wonder if the iSCSI bits are considered production-ready or
> test-only (I see Red Hat has a bunch of docs talking about using iSCSI,
> so I would think production).
> 
>  
> 
> Thoughts anyone?
> 
>  
> 
> Regards,
> 
> -Brent
> 
>  
> 
> Existing Clusters:
> 
> Test: Nautilus 14.2.1 with 3 osd servers, 1 mon/man, 1 gateway ( all
> virtual on SSD )
> 
> US Production(HDD): Nautilus 14.2.1 with 11 osd servers, 3 mons, 4
> gateways behind haproxy LB
> 
> UK Production(HDD): Luminous 12.2.11 with 25 osd servers, 3 mons/man, 3
> gateways behind haproxy LB
> 
> US Production(SSD): Luminous 12.2.11 with 6 osd servers, 3 mons/man, 3
> gateways behind haproxy LB
> 
>  
> 
>  
> 
> 
> 
> _______________________________________________
> ceph-users mailing list
> ceph-users@xxxxxxxxxxxxxx
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> 
