Re: [GIT PULL] bcachefs

On Thu, Jul 06, 2023 at 01:38:19PM -0400, Kent Overstreet wrote:
> You, the btrfs developers, got started when Linux filesystem teams were
> quite a bit bigger than they are now: I was at Google when Google had a
> bunch of people working on ext4, and that was when ZFS had recently come
> out and there was recognition that Linux needed an answer to ZFS and you
> were able to ride that excitement. It's been a bit harder for me to get
> something equally ambitious going, to be honest.

Just to set the historical record straight, I think you're mixing up
two stories here.

*Btrfs* was started while I was at the IBM Linux Technology Center,
and it was because there were folks from more than one company who
were concerned that there needed to be an answer to ZFS.  IBM hosted
that meeting, but ultimately, never did contribute any developers to
the btrfs effort.  That's because IBM had a fairly cold, hard
examination of what their enterprise customers really wanted, and
would be willing to pay $$$, and the decision was made at a corporate
level (higher up than the Linux Technology Center, although I
participated in the company-wide investigation) that *none* of the
OSes that IBM supported (AIX, z/OS, Linux, etc.) needed ZFS-like features,
because IBM's customers didn't need them.  The vast majority of
paying customers' workloads at the time were running things like
WebSphere, Oracle, and DB2, and these did not need fancy
snapshots.  And things like integrity could be provided at other
layers of the storage stack.

As far as Google was concerned, yes, we had several software engineers
working on ext4, but it had nothing to do with ZFS.  We had a solid
business case for how replacing ext2 with ext4 (in nojournal mode,
since the cluster file system handled data integrity and crash
recovery) would save the company $XXX million per year in storage
TCO (total cost of ownership).

In any case, at neither company was a "sense of excitement" something
which drove the technical decisions.  It was all about Return on
Investment (ROI).  As such, that's driven my bias towards ext4
maintenance.

I view part of my job as finding matches between file system
features that I would find technically interesting, and which
would benefit the general ext4 user base, and specific business cases
that would encourage the investment of several developers in file
system technologies.

Things like case-insensitive file names, fscrypt, fsverity, etc.,
were all started *after* I had found a business case that would
interest one or more companies or divisions inside Google to put
people on the project.  Smaller projects can get funded on the
margins, sure.  But for anything big, that might require the focused
attention of one or more developers for a quarter or more, I generally
find the business case first, and often, that will inform the
requirements for the feature.  In other words, not only am I ext4's
maintainer, I'm also its product manager.

Of course, this is not the only way you can drive technology forward.
For example, at Sun Microsystems, ZFS was driven just by the techies,
and initially, they hid the fact that the project was taking place,
not asking the opinion of the finance and sales teams.  And so ZFS had
quite a lot of very innovative technologies that pushed the industry
forward, including inspiring btrfs.  Of course, Sun Microsystems
didn't do all that well financially, and was eventually forced to
sell itself to the highest bidder.  So perhaps this particular
model is one that other companies, including IBM, Red Hat,
Microsoft, Oracle, Facebook, etc., might choose to avoid emulating.

Cheers,

					- Ted
