Re: Bluestore deploys to tmpfs?

On Fri, Feb 1, 2019 at 3:08 PM Stuart Longland
<stuartl@xxxxxxxxxxxxxxxxxx> wrote:
>
> On 1/2/19 10:43 pm, Alfredo Deza wrote:
> >>> I think mounting tmpfs for something that should be persistent is highly
> >>> dangerous.  Is there some flag I should be using when creating the
> >>> BlueStore OSD to avoid that issue?
> >>
> >> The tmpfs setup is expected. All persistent data for bluestore OSDs
> >> set up with LVM is stored in LVM metadata. The LVM/udev handler for
> >> bluestore volumes creates these tmpfs filesystems on the fly and
> >> populates them with the information from the metadata.
> > That is mostly what happens. There isn't a dependency on udev anymore
> > (yay), but the files are mounted on tmpfs because *bluestore* spits
> > them out on activation, which makes the path fully ephemeral (a great
> > thing!)
> >
> > The step-by-step is documented in the summary section of 'activate':
> > http://docs.ceph.com/docs/master/ceph-volume/lvm/activate/#summary
> >
> > Filestore doesn't have any of these capabilities, which is why it has
> > an actual on-disk path (vs. tmpfs): the files come from the data
> > partition that gets mounted.
> >
>
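As a side note, the "LVM metadata" mentioned above is just a set of
LVM tags on the OSD's logical volume, so it can be inspected directly.
Tag names like ceph.osd_id and ceph.osd_fsid are what ceph-volume uses
(the exact set varies by release):

    # show the bluestore metadata stored as LVM tags
    lvs -o lv_name,lv_tags

    # ceph-volume's own view of the same information
    ceph-volume lvm list
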
> Well, for whatever reason, ceph-osd isn't calling the activate script
> before it starts up.

ceph-osd doesn't call the activate script; systemd is what calls
ceph-volume to activate OSDs.
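
If memory serves, each of those units ends up being the equivalent of
running (the <id>-<uuid> pair comes from the systemd instance name, so
treat this as a sketch):

    # one invocation per OSD, driven by the unit's instance name
    ceph-volume lvm trigger <id>-<uuid>
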
>
> It is worth noting that, for simplicity, the systems I'm using do not
> run systemd.  I might need to write an init script to do that.  It
> wasn't clear last weekend what commands I needed to run to activate a
> BlueStore OSD.

If deployed with ceph-volume, you can just do:

    ceph-volume lvm activate --all
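
On a non-systemd box, wrapping that in an init script should be
enough. A minimal sketch, with a hypothetical script name; the
runlevel wiring depends on your init system, and it must run after
LVM has activated the volume groups:

    #!/bin/sh
    # /etc/init.d/ceph-volume-activate (hypothetical)
    case "$1" in
      start)
        # repopulate the tmpfs data dirs for all ceph-volume OSDs
        ceph-volume lvm activate --all
        ;;
      stop)
        # nothing to undo; stopping OSDs is the OSD service's job
        ;;
    esac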

>
> For now though, it sounds like tarring up the data directory,
> unmounting the tmpfs, then unpacking the tar is a good-enough
> work-around.  That's what I've done for my second node (now that I
> know of the problem), so it should survive a reboot.

There is no need to tar anything. Calling out to ceph-volume to
activate everything should just work.
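
If you want to convince yourself after a reboot, check that the data
dir is a tmpfs again and was repopulated (osd.0 here is just an
example):

    mount | grep ceph-0            # should show a tmpfs mount
    ls /var/lib/ceph/osd/ceph-0    # keyring, block symlink, etc.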

>
> The only other two steps were to ensure `lvm` was marked to start at
> boot (so it would bring up all the volume groups) and that there was a
> UDEV rule in place to set the ownership on the LVM VGs for Ceph.

Right, you do need to ensure LVM is installed/enabled. But there is
*for sure* no need for udev rules to set any ownership for Ceph; that
is a task that ceph-volume handles.
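
An easy way to verify: after activation, the block device the OSD uses
should already be owned by ceph:ceph, no udev rule involved (paths
here are illustrative):

    # the data dir has a 'block' symlink pointing at the LV
    ls -l /var/lib/ceph/osd/ceph-0/block
    # follow the symlink; the device node should show ceph:ceph
    ls -lL /var/lib/ceph/osd/ceph-0/block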

> --
> Stuart Longland (aka Redhatter, VK4MSL)
>
> I haven't lost my mind...
>   ...it's backed up on a tape somewhere.
>
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


