Re: [LSF/MM/BPF TOPIC] Lustre filesystem upstreaming

On 1/30/25, 11:57 AM, "Theodore Ts'o" <tytso@xxxxxxx> wrote:
> On Thu, Jan 30, 2025 at 04:18:29PM +0000, Day, Timothy wrote:
> >
> > Lustre has a lot of usage and development outside of DDN/Whamcloud
> > [1][2]. HPE, AWS, SuSe, Azure, etc. And at least at AWS, we use
> > Lustre on fairly up-to-date kernels [3][4]. And I think this is
> > becoming more common - although I don't have solid data on that.
>
>
> I agree that I am seeing more use/interest of Lustre in various Cloud
> deployments, and to the extent that Cloud clients tend to use newer
> Linux kernels (e.g., commonly, the LTS from the year before) that
> certainly does make them use kernels newer than a typical RHEL kernel.
>
>
> It's probably inherent in the nature of cluster file systems that they
> won't be of interest for home users who aren't going to be paying the
> cost of a dozen or so Cloud VM's being up on a more-or-less continuous
> basis. However, the reality is that more likely than not, developers
> who are most likely to be using the latest upstream kernel, or maybe
> even Linux-next, are not going to be using cloud VM's.

I don't have a good sense of who's running the absolute latest mainline or
linux-next. But agreed, I doubt there will be tons of home users of Lustre
post-upstreaming. That said, you can definitely play Counter-Strike on
a home Lustre setup. I've personally validated that. :)

> > And if you have dedicated hardware - setting up a small filesystem over
> > TCP/IP isn't much harder than an NFS server IMHO. Just a mkfs and
> > mount per storage target. With a single MDS and OSS, you only need two
> > disks. So I think we have everything we need to enable upstream
> > users/devs to use Lustre without too much hassle. I think it's mostly a
> > matter of documentation and scripting.
>
> Hmm... would it be possible to set up a simple toy Lustre file system
> using a single system running in qemu --- i.e., using something like a
> kvm-xfstests[1] test appliance? TCP/IP over loopback might be
> interesting, if it's possible to run the Lustre MDS, OSS, and client
> on the same kernel. This would make repro testing a whole lot easier,
> if all someone had to do was run the command "kvm-xfstests -c lustre smoke".
>
> [1] https://github.com/tytso/xfstests-bld/blob/master/Documentation/kvm-quickstart.md

Definitely possible. You can run all of the Lustre services on the same kernel.
I have Lustre working on a similar QEMU setup as part of Kent's ktest repo [1].
I use it to test/develop Lustre patches against mainline kernels - mostly for the
Lustre in-memory OSD (i.e. storage backend) [2]. So I think we can get a
Lustre development workflow that's pretty similar to the existing workflow
for in-tree filesystems.
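For reference, the "mkfs and mount per storage target" setup I mentioned looks
roughly like the sketch below on a single node (all services on one kernel, TCP
over loopback). Device names, mount points, and the fsname are placeholders for
illustration; the actual NID and options will depend on your lnet config:

```shell
#!/bin/sh
# Rough sketch: single-node Lustre with one MDS and one OSS over tcp.
# /dev/vdb and /dev/vdc are placeholder disks; "testfs" is an arbitrary fsname.

# Format and mount the combined MGS/MDT on the first disk.
mkfs.lustre --mgs --mdt --fsname=testfs --index=0 /dev/vdb
mkdir -p /mnt/mdt && mount -t lustre /dev/vdb /mnt/mdt

# Format and mount one OST on the second disk, pointing at the local MGS NID.
mkfs.lustre --ost --fsname=testfs --index=0 --mgsnode=127.0.0.1@tcp /dev/vdc
mkdir -p /mnt/ost0 && mount -t lustre /dev/vdc /mnt/ost0

# Mount the client against the same node.
mkdir -p /mnt/lustre && mount -t lustre 127.0.0.1@tcp:/testfs /mnt/lustre
```

That's the whole thing for a toy filesystem, which is why I think the remaining
gap is mostly documentation and scripting rather than anything fundamental.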

Tim Day

[1] https://github.com/koverstreet/ktest
[2] https://review.whamcloud.com/c/fs/lustre-release/+/55594
