Re: RBD as backend for iSCSI SAN Targets

On Sat, 15 Mar 2014, Karol Kozubal wrote:
> I just re-read the documentation… It looks like it's a proposed feature
> that is still in development. In that case I will have to adjust my tests
> accordingly.
> 
> Anyone out there have any idea when this will be implemented? Or what
> the plans look like as of right now?

This will appear in 0.78, which will be out in the next week.

sage

> 
> 
> 
> On 2014-03-15, 1:17 PM, "Karol Kozubal" <Karol.Kozubal@xxxxxxxxx> wrote:
> 
> >How are the SSDs going to be in writeback? Is that the new caching pool
> >feature?
> >
> >I am not sure which version implemented this, but it is documented here
> >(https://ceph.com/docs/master/dev/cache-pool/).
> >I will be using the latest stable release for my next batch of testing;
> >right now I am on 0.67.4 and will be moving toward the 0.72.x branch.
> >
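For reference, a rough sketch of the cache-tier setup that document describes, using the tiering commands that ship with the feature (pool names, PG count, and sizes below are placeholders, not a tested recipe):

    # hypothetical pool names; size the PG count for your cluster
    ceph osd pool create ssd-cache 1024
    ceph osd tier add rbd ssd-cache
    ceph osd tier cache-mode ssd-cache writeback
    ceph osd tier set-overlay rbd ssd-cache
    ceph osd pool set ssd-cache hit_set_type bloom
    ceph osd pool set ssd-cache target_max_bytes 1099511627776  # ~1 TB
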
> >As for the IOPS, it would be a total cluster I/O throughput estimate based
> >on an application that would be reading/writing to more than 60 rbd
> >volumes.
> >
> >
> >
> >
> >
> >On 2014-03-15, 1:11 PM, "Wido den Hollander" <wido@xxxxxxxx> wrote:
> >
> >>On 03/15/2014 05:40 PM, Karol Kozubal wrote:
> >>> Hi Wido,
> >>>
> >>> I will have some new hardware for running tests in the next two weeks
> >>> or so and will report my findings once I get a chance to run them. I
> >>> will disable writeback on the target side, as I will be attempting to
> >>> configure an SSD caching pool of 24 SSDs with writeback in front of
> >>> the main pool of 360 disks, at a ratio of 5 spinner OSDs to 1 SSD
> >>> journal. I will be
> >>
> >>How are the SSDs going to be in writeback? Is that the new caching pool
> >>feature?
> >>
> >>> running everything through 10 Gb SFP+ Ethernet interfaces, with a
> >>> dedicated cluster network interface, a dedicated public Ceph
> >>> interface, and a separate iSCSI network, also on 10 Gb interfaces,
> >>> for the target machines.
> >>>
> >>
> >>That seems like a good network.
> >>
> >>> I am ideally looking for 20,000 to 60,000 IOPS from this system if I
> >>> can get the caching pool configuration right. The application has a
> >>> 30 ms maximum latency requirement for the storage.
> >>>
> >>
> >>20,000 to 60,000 is a big difference. But the only way you are going to
> >>achieve that is by doing a lot of parallel I/O. Ceph doesn't excel at
> >>single threads doing a lot of I/O.
> >>
> >>So if you have multiple RBD devices on which you are doing the I/O, it
> >>shouldn't be that much of a problem.
> >>
> >>Just spread out the I/O. Scale horizontally instead of vertically.
> >>
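A minimal fio sketch of that kind of horizontal spread, assuming a fio build with its rbd engine (pool and image names are placeholders), one job section per RBD image:

    [global]
    ioengine=rbd
    clientname=admin
    pool=rbd
    rw=randwrite
    bs=4k
    iodepth=32
    runtime=60
    time_based

    [vol01]
    rbdname=testvol01

    [vol02]
    rbdname=testvol02

Aggregate IOPS here comes from adding more images and jobs, not from pushing a single volume harder.
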
> >>> In my current tests I have only spinners: SAS 10K disks with a 4.2 ms
> >>> write latency, with separate journaling on SAS 15K disks with a 3.3 ms
> >>> write latency. With 20 OSDs and 4 journals, my only concern is the
> >>> overall operation apply latency I have been seeing (1-6 ms idle is
> >>> normal, but up to 60-170 ms for a moderate workload using rbd
> >>> bench-write). However, I am on a network where I am bound to a 1500
> >>> MTU, and I will get to test jumbo frames with the next setup, in
> >>> addition to the SSDs. I suspect the overall performance will be good
> >>> in the new test setup, and I am curious to see what my tests will
> >>> yield.
> >>>
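For reference, a bench-write invocation along these lines (image name is a placeholder) is roughly the kind of run those numbers come from, with ceph osd perf showing per-OSD latency while it runs:

    # 4 KB writes, 16 threads, 1 GB total against a hypothetical image
    rbd bench-write rbd/testvol01 --io-size 4096 --io-threads 16 --io-total 1073741824
    # in another shell: per-OSD fs_apply_latency / fs_commit_latency
    ceph osd perf
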
> >>> Thanks for the response!
> >>>
> >>> Karol
> >>>
> >>>
> >>>
> >>> On 2014-03-15, 12:18 PM, "Wido den Hollander" <wido@xxxxxxxx> wrote:
> >>>
> >>>> On 03/15/2014 04:11 PM, Karol Kozubal wrote:
> >>>>> Hi Everyone,
> >>>>>
> >>>>> I am just wondering if any of you are running a Ceph cluster with an
> >>>>> iSCSI target front end? I know this isn't available out of the box;
> >>>>> unfortunately, in one particular use case we are looking at providing
> >>>>> iSCSI access, and it's a necessity. I like the idea of having rbd
> >>>>> devices serve block-level storage to the iSCSI target servers while
> >>>>> providing a unified backend for native rbd access by OpenStack and
> >>>>> various application servers. On multiple levels this would reduce the
> >>>>> complexity of our SAN environment and move us away from expensive
> >>>>> proprietary solutions that don't scale out.
> >>>>>
> >>>>> If any of you have deployed HA iSCSI targets backed by rbd, I would
> >>>>> really appreciate your feedback and any thoughts.
> >>>>>
> >>>>
> >>>> I haven't used it in production, but a couple of things come to
> >>>> mind:
> >>>>
> >>>> - Use TGT so you can run it all in userspace backed by librbd
> >>>> - Do not use writeback caching on the targets
> >>>>
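For reference, with a tgt build that includes the rbd backing store, a target stanza along these lines in targets.conf is a minimal sketch (IQN and image name are placeholders):

    <target iqn.2014-03.com.example:rbd.lun0>
        driver iscsi
        # hand the LUN to librbd in userspace; no kernel rbd mapping needed
        bs-type rbd
        backing-store rbd/iscsi-vol01
        initiator-address ALL
    </target>
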
> >>>> You could use multipathing if you don't use writeback caching. Using
> >>>> writeback would also cause data loss/corruption in the case of
> >>>> multiple targets.
> >>>>
> >>>> It will probably just work with TGT, but I don't know anything about
> >>>> the performance.
> >>>>
> >>>>> Karol
> >>>>>
> >>>>>
> >>>>
> >>>>
> >>>> --
> >>>> Wido den Hollander
> >>>> 42on B.V.
> >>>>
> >>>> Phone: +31 (0)20 700 9902
> >>>> Skype: contact42on
> >>>
> >>
> >>
> >>-- 
> >>Wido den Hollander
> >>42on B.V.
> >>
> >>Phone: +31 (0)20 700 9902
> >>Skype: contact42on
> >
> 
> 
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
