Re: ceph fs mv does copy, not move

Hi Stefan,

> Isn't that what LazyIO is for? See ...

Yes, it is, to some extent. However, many large HPC applications will not start using exotic libraries for IO. A parallel file system offers everything that is needed through standard OS library calls, so this is better solved on the FS side than on the client side. We put the link to LazyIO in our cluster documentation over a year ago, but I cannot imagine any of our users investing in porting massive applications, even though we have Ceph. So far, nobody has.
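For context, LazyIO has to be requested explicitly per file descriptor, which is exactly the kind of change that is hard to retrofit into large codes. A rough sketch using the libcephfs C API (the file path and config defaults here are assumptions, and this obviously needs a live CephFS to run against):

```c
#include <stdio.h>
#include <fcntl.h>
#include <cephfs/libcephfs.h>

int main(void) {
    struct ceph_mount_info *cmount;

    /* Connect using the default ceph.conf; cluster details are assumptions. */
    if (ceph_create(&cmount, NULL) < 0 ||
        ceph_conf_read_file(cmount, NULL) < 0 ||
        ceph_mount(cmount, "/") < 0) {
        fprintf(stderr, "failed to mount CephFS\n");
        return 1;
    }

    /* Open a (hypothetical) shared data file and relax its consistency
     * semantics: with LazyIO enabled, coherence between clients becomes
     * the application's responsibility. */
    int fd = ceph_open(cmount, "/shared/output.dat", O_RDWR | O_CREAT, 0644);
    if (fd >= 0) {
        ceph_lazyio(cmount, fd, 1);  /* 1 = enable LazyIO on this fd */
        /* ... application IO ... */
        ceph_close(cmount, fd);
    }

    ceph_unmount(cmount);
    ceph_release(cmount);
    return 0;
}
```

Every open in every IO path of an application (or of the MPI-IO library underneath it) would need a call like this, which is why "just use LazyIO" is not a realistic ask for existing HPC codes.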

It's also that HPC codes use MPI, which comes with IO libraries users have no influence on. I don't see this becoming a relevant alternative to a parallel file system any time soon. Sorry.

Best regards,
=================
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14

________________________________________
From: Stefan Kooman <stefan@xxxxxx>
Sent: 24 June 2021 20:01:16
To: Frank Schilder; Patrick Donnelly
Cc: ceph-users@xxxxxxx
Subject: Re: ceph fs mv does copy, not move

On 6/24/21 5:34 PM, Frank Schilder wrote:

> Please, in such situations where developers seem to have to make a definite choice, consider offering operators a choice of the alternative that suits their use case best. Adding further options seems far better than limiting functionality in a way that becomes a terrible burden in certain, if not many, use cases.

Yeah, I agree.
>
> In ceph fs there have been many such decisions that allow for different answers from a user/operator perspective. For example, I would prefer to be able to get rid of the attempted higher POSIX compliance level of CephFS compared with Lustre, disable all the client caps and cache-coherence management, and turn it into an awesome scale-out parallel file system. The attempt at POSIX-compliant handling of simultaneous writes to files offers nothing to us, but costs hugely in performance and forces users to move away from perfectly reasonable HPC workflows. Also, that it takes a TTL to expire before changes on one client become visible on another (unless direct_io is used for all IO) would be perfectly acceptable for us, given the potential performance gain from simpler client-MDS communication.

Isn't that what LazyIO is for? See
https://docs.ceph.com/en/latest/cephfs/lazyio/

Gr. Stefan
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



