Re: [PULL] Re: bcache stability patches

On Sat, Jan 02, 2016 at 10:48:04AM -0500, Denis Bychkov wrote:
> On Sat, Jan 2, 2016 at 6:50 AM, Vojtech Pavlik <vojtech@xxxxxxxx> wrote:
> > On the contrary, all modern filesystems cope with endianness
> > portability. The only major filesystem in use where endianness is not
> > handled is, as far as I know, UFS.
> >
> > At the same time, I don't see endianness portability (the ability to
> > create a cache on a machine of one endianness and then mount it on a
> > machine of the opposite endianness) as a real use case.
> >
> > Unlike filesystems, which can be used to transfer valuable data between
> > machines, the cache only contains ephemeral data, which can easily be
> > recreated from the backing device.
> >
> > Hence I believe that it is reasonable to require the user to nuke the
> > contents of the cache when moving the cache set between machines of
> > different endianness.
> >
> > Ideally this would happen automatically and error out if the cache isn't
> > clean.
> >
> > Actually, the same would be fine for format version changes.
> 
> Yeah, I totally agree with you here. I just think that a dirty-cache
> situation might be much more common and less avoidable, which means it
> requires a lot of dancing around in terms of tooling, documentation,
> testing, etc. But it can easily be solved; it's not a hard problem,
> just a time-consuming one, and this is something that Kent might use
> some help with.

The bcache2 on-disk format is the same as the bcachefs on-disk format - so I do
want to get endianness portability done right.

It shouldn't be an outrageous amount of work, though; the biggest hassle is just
going to be getting a test environment set up.
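The policy Vojtech proposes (refuse a dirty cache from a foreign byte order,
automatically reinitialize a clean one) could be sketched roughly as below.
Note this is purely illustrative: the superblock layout, field names, and
magic value are invented here and are not the actual bcache/bcachefs on-disk
format.

```c
/*
 * Hypothetical sketch of an endianness check at cache-attach time.
 * Struct layout, field names, and magic value are invented for
 * illustration; this is not actual bcache/bcachefs code.
 */
#include <stdint.h>

#define CACHE_MAGIC 0x42434845u /* invented magic, stored in the creator's byte order */

enum attach_result {
	ATTACH_OK,           /* native byte order: use as-is */
	ATTACH_REINIT,       /* foreign byte order, clean: nuke and recreate */
	ATTACH_ERR_DIRTY,    /* foreign byte order, dirty: refuse to attach */
	ATTACH_ERR_BAD_MAGIC /* not a recognizable cache superblock */
};

static uint32_t swab32(uint32_t x)
{
	return ((x & 0x000000ffu) << 24) |
	       ((x & 0x0000ff00u) <<  8) |
	       ((x & 0x00ff0000u) >>  8) |
	       ((x & 0xff000000u) >> 24);
}

struct cache_sb {
	uint32_t magic; /* written in the creating host's byte order */
	uint32_t dirty; /* nonzero if the cache holds data not yet on the backing device */
};

static enum attach_result check_cache_sb(const struct cache_sb *sb)
{
	if (sb->magic == CACHE_MAGIC)
		return ATTACH_OK;
	/* Magic matches only after byte-swapping: created on the other endianness. */
	if (swab32(sb->magic) == CACHE_MAGIC)
		return sb->dirty ? ATTACH_ERR_DIRTY : ATTACH_REINIT;
	return ATTACH_ERR_BAD_MAGIC;
}
```

The same shape would cover the format-version case mentioned above: a clean
cache with an older version can be reinitialized automatically, while a dirty
one errors out and asks the user to flush or nuke it first.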

> >> > And this isn't a trivial amount of work - and besides finishing the on-disk
> >> > format, there's a fair amount of work on tooling and related stuff to make sure
> >> > everything is ready for the switch.
> >> >
> >> > And, I can't work for free, so somehow funding has to be secured. Given the
> >> > number of companies that are using bcache, and the fact that Canonical and SUSE
> >> > are both apparently putting in at least a little bit of engineering time into
> >> > supporting bcache, you'd think it should be possible, but offers have not been
> >> > forthcoming.
> >>
> >> I don't know. IMHO, bcache was hurt a lot by a host of small
> >> problems that nobody was able to address for quite some time. It
> >> gained a bad reputation as a production system, unfortunately, which
> >> meant not much interest from the enterprise world, which meant
> >> Canonical & co. did not want to invest in it. Don't get me wrong, I
> >> am not blaming you. Of all people, I might understand pretty well what
> >> was going on; I am just explaining why RH or Canonical or SUSE did not
> >> fight for the privilege to financially support this project.
> >
> > SUSE had plans for bcache; however, since upstream stable branch
> > maintenance has been more than unreliable, we postponed most of them and
> > are building knowledge in-house to be able to fully support it before we
> > deploy.

The biggest reason for maintenance dropping off was me going off to a certain
startup that shall not be named, which ended up being fairly all-consuming, and
left me pretty burned out in the end. I'm not going to revisit that topic right
now, except to say that upstream maintenance is not the only reason I have
mixed feelings about that decision...

I do want to say, though, that I never knew SUSE or Canonical engineers were
ever looking at the code, or that either company was ever considering supporting
it - if I had, I would've certainly made an effort to work with your engineers
on getting them up to speed.

Anyway, what's done is done, but if the demand is there I'd really like to see
the codebase live a long, happy life, and to figure out whether we can make that
happen now.
--
To unsubscribe from this list: send the line "unsubscribe linux-bcache" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html


