RBD as backend for iSCSI SAN Targets

Hi everyone,

      I would like to know whether anyone is running Cassandra databases 
on top of a Ceph cluster, and if so, what the setup looks like; it would 
be great if you could also share some performance numbers. I am also 
interested in whether anyone has a Linux server (Ubuntu or CentOS) on an 
HP blade server using a Ceph cluster as the backend over 10GbE ports. If 
that is the case, could you please run the following dd commands and send 
me the output so I can see how it performs:

dd if=/dev/zero of=./$RANDOM bs=4k count=220000 oflag=direct
dd if=/dev/zero of=./$RANDOM bs=8k count=140000 oflag=direct
dd if=/dev/zero of=./$RANDOM bs=16k count=90000 oflag=direct
dd if=/dev/zero of=./$RANDOM bs=32k count=40000 oflag=direct
dd if=/dev/zero of=./$RANDOM bs=64k count=20000 oflag=direct
dd if=/dev/zero of=./$RANDOM bs=128k count=10000 oflag=direct
dd if=/dev/zero of=./$RANDOM bs=256k count=4000 oflag=direct
dd if=/dev/zero of=./$RANDOM bs=512k count=3000 oflag=direct
dd if=/dev/zero of=./$RANDOM bs=1024k count=3000 oflag=direct
dd if=/dev/zero of=./$RANDOM bs=2048k count=1000 oflag=direct
dd if=/dev/zero of=./$RANDOM bs=4096k count=800 oflag=direct
dd if=/dev/zero of=./$RANDOM bs=8192k count=400 oflag=direct
dd if=/dev/zero of=./$RANDOM bs=16384k count=200 oflag=direct
dd if=/dev/zero of=./$RANDOM bs=32768k count=300 oflag=direct
dd if=/dev/zero of=./$RANDOM bs=65536k count=260 oflag=direct
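
If it is easier, the same sweep can be run in one go with the small wrapper 
below; this is just a convenience sketch of mine (the dd-results.log file 
name is arbitrary). It runs exactly the block-size/count pairs listed above 
and appends each dd summary to a log you can paste back:

# Hypothetical wrapper around the same sweep: each entry is "blocksize:count",
# matching the list above; dd prints its throughput summary on stderr, which
# is appended to dd-results.log.
for t in 4k:220000 8k:140000 16k:90000 32k:40000 64k:20000 128k:10000 \
         256k:4000 512k:3000 1024k:3000 2048k:1000 4096k:800 8192k:400 \
         16384k:200 32768k:300 65536k:260; do
    dd if=/dev/zero of=./$RANDOM bs=${t%:*} count=${t#*:} oflag=direct 2>> dd-results.log
done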

And, if possible, also run a sysbench fileio test:

sysbench --num-threads=16 --test=fileio --file-total-size=3G --file-test-mode=rndrw prepare
sysbench --num-threads=16 --test=fileio --file-total-size=3G --file-test-mode=rndrw run
sysbench --num-threads=16 --test=fileio --file-total-size=3G --file-test-mode=rndrw cleanup
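
Since the dd runs above use oflag=direct, you may also want the sysbench run 
to bypass the page cache; a possible variant for the run step (assuming your 
sysbench build supports --file-extra-flags) would be:

sysbench --num-threads=16 --test=fileio --file-total-size=3G --file-test-mode=rndrw --file-extra-flags=direct run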

Thanks in advance,

Best regards,



German Anders

> --- Original message ---
> Subject: Re: RBD as backend for iSCSI SAN Targets
> From: Jianing Yang <jianingy.yang at gmail.com>
> To: Karol Kozubal <Karol.Kozubal at elits.com>
> Cc: ceph-users at lists.ceph.com <ceph-users at lists.ceph.com>
> Date: Tuesday, 01/04/2014 07:39
>
>
> On Fri 28 Mar 2014 08:55:30 AM CST, Karol Kozubal wrote:
>
>> Hi Jianing,
>>
>> Sorry for the late reply, I missed your contribution to the thread.
>>
>> Thank you for your response. I am still waiting for some of my hardware
>> and will begin testing the new setup with firefly once it is available
>> as a long-term support release. I am looking forward to testing the new
>> setup.
>>
>> Could you share some more details on your proxy node configuration for
>> the tgt daemons? I am interested in whether your setup tolerates node
>> failure on the iSCSI end of things, and if so, how it is configured.
>
>
> Actually, the fault tolerance is provided by the MS Exchange servers. We
> just set up two proxy nodes (servers with the tgt daemon): one for the
> master database and the other for the backup database. The Exchange
> servers handle the switchover on failure.
>
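
For context, a proxy node of this kind is typically just tgt exporting an RBD 
image over iSCSI. The snippet below is a minimal, hypothetical 
/etc/tgt/targets.conf sketch, not the poster's actual configuration; the IQN, 
pool, image name, and initiator subnet are invented, and it assumes a tgt 
build with rbd (librbd) support:

# Hypothetical /etc/tgt/targets.conf entry: export the RBD image
# "rbd/exchange-db01" as an iSCSI LUN through tgt's userspace rbd backend.
<target iqn.2014-04.com.example:exchange-db01>
    driver iscsi
    bs-type rbd
    backing-store rbd/exchange-db01
    initiator-address 10.0.0.0/24
</target>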
>> Thanks,
>
>> Karol
>
>> On 2014-03-19, 6:58 AM, "Jianing Yang" <jianingy.yang at gmail.com> wrote:
>
>> > Hi, Karol
>> >
>> > Here is something that I can share. We are running Ceph as an Exchange
>> > backend via iSCSI. We currently host about 2000 mailboxes, which is
>> > about 7 TB of data overall. Our configuration is:
>> >
>> > - Proxy Node (with tgt daemon) x 2
>> > - Ceph Monitor x 3 (virtual machines)
>> > - Ceph OSD x 50 (SATA 7200rpm 2T), Replica = 2, Journal on OSD (I know
>> >   it is bad, but ...)
>> >
>> > We tested RBD using fio and got around 1500 iops for random writes. On
>> > the live system, I have seen op/s peak at around 3.1k.
>> >
>> > I've benchmarked "tgt with librbd" vs "tgt with kernel rbd" using my
>> > virtual machines. It seems that "tgt with librbd" doesn't perform
>> > well; it gets only about 1/5 of the iops of kernel rbd.
>> >
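
As a point of reference, a 4k random-write test of that sort could be run with 
fio roughly as follows; this is only an illustrative sketch (the device path 
/dev/rbd0, queue depth, and runtime are assumptions, and writing to the raw 
device destroys its contents):

# Hypothetical fio run against a kernel-mapped RBD device (destructive!):
fio --name=rbd-randwrite --filename=/dev/rbd0 --rw=randwrite --bs=4k \
    --ioengine=libaio --iodepth=32 --direct=1 --runtime=60 --time_based \
    --group_reporting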
>> > We are new to Ceph and still finding ways to improve the performance. I
>> > am really looking forward to your benchmark.
>> >
>> > On Sun 16 Mar 2014 12:40:53 AM CST, Karol Kozubal wrote:
>> >
>> > > Hi Wido,
>> > >
>> > > I will have some new hardware for running tests in the next two weeks
>> > > or so and will report my findings once I get a chance to run some
>> > > tests. I will disable writeback on the target side, as I will be
>> > > attempting to configure an SSD caching pool of 24 SSDs with writeback
>> > > in front of the main pool of 360 disks, with a ratio of 5 OSD spinners
>> > > to 1 SSD journal. I will be running everything through 10Gig SFP+
>> > > Ethernet interfaces, with a dedicated cluster network interface, a
>> > > dedicated public ceph interface, and a separate iSCSI network, also
>> > > with 10 gig interfaces, for the target machines.
>> > >
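
For anyone following along, a firefly cache tier of that shape is usually 
attached with commands along these lines; this is only a hedged sketch with 
placeholder pool names ("ssd-cache" layered over "rbd"), not the 
configuration described above:

# Hypothetical cache-tier wiring (pool names are placeholders):
ceph osd tier add rbd ssd-cache                  # layer the SSD pool on the base pool
ceph osd tier cache-mode ssd-cache writeback     # run the cache in writeback mode
ceph osd tier set-overlay rbd ssd-cache          # route client I/O through the cache
ceph osd pool set ssd-cache hit_set_type bloom   # track object hits for flush/evict decisions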
>> > > I am ideally looking for 20,000 to 60,000 IOPS from this system if I
>> > > can get the caching pool configuration right. The application has a
>> > > 30ms max latency requirement for the storage.
>> > >
>> > > In my current tests I have only spinners: SAS 10K disks with a 4.2ms
>> > > write latency, with separate journaling on SAS 15K disks with a 3.3ms
>> > > write latency. With 20 OSDs and 4 journals, my only concern is the
>> > > overall operation apply latency I have been seeing (1-6ms when idle is
>> > > normal, but up to 60-170ms for a moderate workload using rbd
>> > > bench-write). However, I am on a network where I am bound to a 1500
>> > > MTU, and I will get to test jumbo frames with the next setup, in
>> > > addition to the SSDs. I suspect the overall performance will be good
>> > > in the new test setup, and I am curious to see what my tests will
>> > > yield.
>> >
>> > > Thanks for the response!
>> >
>> > > Karol
>> >
>> >
>> >
>> > > On 2014-03-15, 12:18 PM, "Wido den Hollander" <wido at 42on.com> wrote:
>> >
>> > > >On 03/15/2014 04:11 PM, Karol Kozubal wrote:
>> > > >> Hi Everyone,
>> > > >>
>> > > >> I am just wondering if any of you are running a ceph cluster with
>> > > >> an iSCSI target front end? I know this isn't available out of the
>> > > >> box; unfortunately, in one particular use case we are looking at
>> > > >> providing iSCSI access and it's a necessity. I like the idea of
>> > > >> having rbd devices serving block-level storage to the iSCSI target
>> > > >> servers while providing a unified backend for native rbd access by
>> > > >> openstack and various application servers. On multiple levels this
>> > > >> would reduce the complexity of our SAN environment and move us away
>> > > >> from expensive proprietary solutions that don't scale out.
>> > > >>
>> > > >> If any of you have deployed any HA iSCSI targets backed by rbd, I
>> > > >> would really appreciate your feedback and any thoughts.
>> > > >>
>> > > >
>> > > > I haven't used it in production, but a couple of things come to
>> > > > mind:
>> > > >
>> > > > - Use TGT so you can run it all in userspace, backed by librbd
>> > > > - Do not use writeback caching on the targets
>> > > >
>> > > > You could use multipathing if you don't use writeback caching. Using
>> > > > writeback would also cause data loss/corruption in the case of
>> > > > multiple targets.
>> > > >
>> > > > It will probably just work with TGT, but I don't know anything about
>> > > > the performance.
>> > > >
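
For what it's worth, such a userspace librbd-backed target can also be created 
at runtime with tgtadm; the following is only an illustrative sketch (the IQN, 
pool, and image names are invented, and it assumes tgt was built with rbd 
support):

# Hypothetical runtime setup of a librbd-backed target with tgtadm:
tgtadm --lld iscsi --mode target --op new --tid 1 \
       --targetname iqn.2014-04.com.example:rbd-lun0
tgtadm --lld iscsi --mode logicalunit --op new --tid 1 --lun 1 \
       --bstype rbd --backing-store rbd/myimage
tgtadm --lld iscsi --mode target --op bind --tid 1 --initiator-address ALL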
>> > > >> Karol
>> > > >>
>> > > >>
>> > > >
>> > > >
>> > > >--
>> > > >Wido den Hollander
>> > > >42on B.V.
>> > > >
>> > > >Phone: +31 (0)20 700 9902
>> > > >Skype: contact42on
>> >
>> >
>> >
>
>
>
> _______________________________________________
> ceph-users mailing list
> ceph-users at lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


