Re: RBD as a boot image [was: libceph: get_reply osd2 tid 1459933 data 3248128 > preallocated 131072, skipping]

Markus Kienast <mark@xxxxxxxxxxxxx> writes:

> Hi Nico,
>
> we are already doing exactly that:
>
> Loading initrd via iPXE
> which contains the necessary modules and scripts to boot an RBD boot dev.
> Works just fine.

Interesting and very good to hear. How do you handle kernel differences
(loaded kernel vs. modules in the RBD image)?

> And Ilya just helped to work out the last show stopper, thanks again for
> that!
>
> We are using a modified LTSP system for this.
>
> We have proposed some patches to LTSP to get the necessary facilities
> upstream, but Alkis Georgopoulos first wants to see that there is enough
> interest before he considers merging our patch or making the necessary
> changes himself.

I think seeing LTSP booting on RBD is a great move forward, also for
other projects.

> However the necessary initrd code is already available in this merge
> request:
> https://github.com/trickkiste/ltsp/blob/feature-boot_method-rbd/debian/ltsp-rbd.initramfs-script
>
> I see you are from Switzerland - neighbors!

We might actually meet at a Linuxtag - but we should probably take this
off-list :-)
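For anyone curious what such an initramfs hook boils down to, here is a
minimal, runnable sketch of the kernel rbd sysfs mapping step. The monitor
addresses, pool, image and credentials below are made-up placeholders, not
values from the linked script:

```shell
# Placeholder values -- a real initramfs script would read these from
# the kernel command line or an embedded ceph.conf/keyring.
MONS="10.0.0.1:6789,10.0.0.2:6789"
OPTS="name=admin,secret=PLACEHOLDER"
POOL="rbd"
IMAGE="client-root"

# The kernel rbd driver's sysfs interface takes a single line:
#   "<mon_addrs> <options> <pool> <image> <snap>"  ('-' = no snapshot)
RBD_ADD="$MONS $OPTS $POOL $IMAGE -"

# In the initramfs this would be:  echo "$RBD_ADD" > /sys/bus/rbd/add
# Printing it instead keeps the sketch runnable without a cluster.
echo "$RBD_ADD"
```

After a successful write to /sys/bus/rbd/add, the image shows up as
/dev/rbd0 and can be mounted as the root device.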

> Out of interest, what are you planning to use this for? Servers, Thin/Fat
> Clients?

Our objective in the end is to boot servers and VMs from possibly the
same RBD pool/images.

The problem, though, is that we don't know what is inside the RBD
image, so we don't know which kernel to load, unless we do some kind of
kexec magic that passes on the RBD parameters.
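In shell terms, that kexec idea might look roughly like the sketch below.
The paths and kernel command line are hypothetical placeholders; a real
script would also have to detect the kernel version inside the image:

```shell
# Placeholder paths: assumes the mapped image is already mounted at
# /mnt/rbd and carries its own kernel and initrd under /boot.
BOOT="/mnt/rbd/boot"
CMDLINE="root=/dev/rbd0 ro"

# Load the kernel found inside the image, forwarding the RBD root device.
KEXEC_CMD="kexec -l $BOOT/vmlinuz --initrd=$BOOT/initrd.img --command-line='$CMDLINE'"

# Real use (needs root):  eval "$KEXEC_CMD" && kexec -e
echo "$KEXEC_CMD"
```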

Or in other words, we have this use case:

- a customer books a VM and needs more performance
- the customer decides to go with *a* server, but not necessarily a
  specific server
- the customer's VM should be shut down and the RBD image should boot on a server

If the server crashes, the OS should be booted on a different server.

We can obviously work around this by *always* running a VM, but this is
not exactly what our customers want. At the moment they use local disks
+ NFS shares to achieve something similar, but it is far from perfect.

Cheers,

Nico

--
Sustainable and modern Infrastructures by ungleich.ch
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


