Re: Fast Ceph Cluster with PB storage

Hello Vladimir,

On Wed, 10 Aug 2016 09:12:39 +0500 Дробышевский, Владимир wrote:

> Christian,
> 
>   I have to say that OpenNebula 5 doesn't need any additional hacks (well,
> just two lines of code to support rescheduling in case the original node
> fails, and even that patch is scheduled to be added in 5.2 after my question
> a couple of weeks ago; but it isn't about 'live') or an additional shared
> fs to support live migration with Ceph. It works like a charm. I have an
> installation I just finished with OpenNebula 5.0.1 + Ceph with a dual root
> (HDD + SSD journal, and pure SSD), so this is first-hand information.
>
Thanks for bringing that to my attention.
I was of course referring to 4.14; thanks to the way their repository (the
apt sources lines) works, I wasn't aware that 5 had been released.
 
>   In ONE 5 it's possible to use Ceph as a system datastore, which
> eliminates any problems with live migration. For the file-based datastore
> (which is recommended for custom kernels and configs) it's possible to
> use CephFS (but that doesn't belong to ONE, of course).
> 
Right.
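
For anyone else following along, a Ceph system datastore in ONE 5 boils down
to a plain datastore template, something along these lines as far as I can
tell from their docs (the pool, monitor hosts, user, secret UUID and bridge
hosts below are of course placeholders for site-specific values):

    NAME        = "ceph_system"
    TYPE        = SYSTEM_DS
    TM_MAD      = ceph
    POOL_NAME   = one
    CEPH_HOST   = "mon1 mon2 mon3"
    CEPH_USER   = libvirt
    CEPH_SECRET = "00000000-0000-0000-0000-000000000000"
    BRIDGE_LIST = "kvm-host1 kvm-host2"

fed to "onedatastore create"; the matching image datastore is the same idea
with DS_MAD = ceph and DISK_TYPE = RBD on top.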

>   P.S. If somebody needs to reschedule (restore) a VM from a host in the
> ERROR state, here is the patch for the Ceph driver:
> https://github.com/OpenNebula/one/pull/106
> This patch doesn't require rebuilding ONE from source; it can be applied
> to a working system (since ONE drivers are mostly a set of shell scripts).
> 
Thanks, I'll give that a spin next week.
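
And since FC gateways vs. native RBD access came up again further down in the
thread: the whole point of staying with plain KVM/qemu is that qemu can open
RBD images directly via librbd, no gateway or extra protocol conversion
needed. With libvirt that ends up as just a network disk definition, roughly
like the sketch below (monitor names, pool/image name and the secret UUID are
placeholders):

    <disk type='network' device='disk'>
      <driver name='qemu' type='raw'/>
      <auth username='libvirt'>
        <secret type='ceph' uuid='00000000-0000-0000-0000-000000000000'/>
      </auth>
      <source protocol='rbd' name='one/vm-disk-0'>
        <host name='mon1' port='6789'/>
        <host name='mon2' port='6789'/>
      </source>
      <target dev='vda' bus='virtio'/>
    </disk>

This is exactly the kind of thing a Ceph-aware CMS like OpenNebula or
OpenStack generates for you, and what oVirt (without the Cinder integration)
can't.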

Christian

> Best regards,
> Vladimir
> 
> 
> With respect,
> Дробышевский Владимир
> "АйТи Город" company
> +7 343 2222192
> 
> Hardware and software: IBM, Microsoft, Eset
> Turnkey project delivery
> IT services outsourcing
> 
> 2016-08-10 3:26 GMT+05:00 Christian Balzer <chibi@xxxxxxx>:
> 
> >
> > Hello,
> >
> > On Tue, 9 Aug 2016 14:15:59 -0400 Jeff Bailey wrote:
> >
> > >
> > >
> > > On 8/9/2016 10:43 AM, Wido den Hollander wrote:
> > > >
> > > >> On 9 August 2016 at 16:36, Александр Пивушков <pivu@xxxxxxx> wrote:
> > > >>
> > > >>
> > > >>>>>> Hello dear community!
> > > >>>>>> I'm new to Ceph and only recently took up the topic of building
> > > >>>>>> clusters, so your opinion is very important to me.
> > > >>>>>> We need to build a cluster with 1.2 PB of storage and very fast
> > > >>>>>> access to the data. Previously, "Intel® SSD DC P3608 Series 1.6TB
> > > >>>>>> NVMe PCIe 3.0 x4 Solid State Drive" disks were used and their speed
> > > >>>>>> is entirely satisfactory, but as the storage volume grows the price
> > > >>>>>> of such a cluster grows very steeply, hence the idea to use Ceph.
> > > >>>>>
> > > >>>>> You may want to tell us more about your environment, use case and
> > > >>>>> in particular what your clients are.
> > > >>>>> Large amounts of data usually means graphical or scientific data,
> > > >>>>> extremely high speed (IOPS) requirements usually mean database
> > > >>>>> like applications, which one is it, or is it a mix?
> > > >>>>
> > > >>>> This is a mixed project, combining graphics and science. The project
> > > >>>> links a vast array of image data, like Google Maps :)
> > > >>>> Previously, the clients were Windows machines connected directly to
> > > >>>> powerful servers.
> > > >>>> A Ceph cluster connected over FC to the virtual machine servers is
> > > >>>> now planned. Virtualization - oVirt.
> > > >>>
> > > >>> Stop right there. oVirt, despite being from RedHat, doesn't really
> > > >>> support Ceph directly all that well, last I checked.
> > > >>> That is probably where you get the idea/need for FC from.
> > > >>>
> > > >>> If anyhow possible, you do NOT want another layer and protocol
> > > >>> conversion between Ceph and the VMs, like a FC gateway or iSCSI or
> > > >>> NFS.
> > > >>>
> > > >>> So if you're free to choose your virtualization platform, use
> > > >>> KVM/qemu at the bottom and something like OpenStack, OpenNebula,
> > > >>> ganeti, or Pacemaker with KVM resource agents on top.
> > > >> oh, that's too bad ...
> > > >> I do not understand something...
> > > >>
> > > >> oVirt is built on KVM:
> > > >> https://www.ovirt.org/documentation/introduction/about-ovirt/
> > > >>
> > > >> and Ceph supports KVM:
> > > >> http://docs.ceph.com/docs/master/architecture/
> > > >>
> > > >
> > > > KVM is just the hypervisor. oVirt is a tool which controls KVM and it
> > > > doesn't have support for Ceph. That means that it can't pass down the
> > > > proper arguments to KVM to talk to RBD.
> > > >
> > > >> What could the overhead costs be, and how big are they?
> > > >>
> > > >>
> > > >> I do not understand why oVirt is bad while qemu under OpenStack is
> > > >> good. What can I read about this?
> > > >>
> > > >
> > > > Like I said above, oVirt and OpenStack both control KVM. OpenStack
> > > > also knows how to 'configure' KVM to use RBD, oVirt doesn't.
> > > >
> > > > Maybe Proxmox is a better solution in your case.
> > > >
> > >
> > > oVirt can use Ceph through Cinder. It doesn't currently provide all the
> > > functionality of other oVirt storage domains, but it does work.
> > >
> > Well, I saw this before I gave my answer:
> > http://www.ovirt.org/develop/release-management/features/storage/cinder-integration/
> >
> > And based on that I would say oVirt is not a good fit for Ceph at this
> > time.
> >
> > Even less so than OpenNebula, which currently needs an additional shared
> > network FS or hacks to allow live migration with RBD.
> >
> > Christian
> >
> > > > Wido
> > > >
> > > >>
> > > >> --
> > > >> Александр Пивушков
> >
> >
> > --
> > Christian Balzer        Network/Systems Engineer
> > chibi@xxxxxxx           Global OnLine Japan/Rakuten Communications
> > http://www.gol.com/
> >


-- 
Christian Balzer        Network/Systems Engineer                
chibi@xxxxxxx   	Global OnLine Japan/Rakuten Communications
http://www.gol.com/
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



