On (22/11/03 11:18), Johannes Weiner wrote:
> > > I'm not in love with this, to be honest. One big pool lock instead
> > > of 255 per-class locks doesn't look attractive, as one big pool lock
> > > is going to be hammered quite a lot when zram is used, e.g. as a regular
> > > block device with a file system and is under heavy parallel writes/reads.
>
> TBH the class always struck me as an odd scope to split the lock. Lock
> contention depends on how variable the compression rate is of the
> hottest incoming data, which is unpredictable from a user POV.
>
> My understanding is that the primary usecase for zram is swapping, and
> the pool lock is the same granularity as the swap locking.

That's what we thought, until a couple of merge windows ago we learned
(the hard way) that SUSE uses ZRAM as a normal block device with a real
file system on it. And they use it often enough to immediately spot the
regression that we landed.

> Do you have a particular one in mind? (I'm thinking journaled ones are
> not of much interest, since their IO tends to be fairly serialized.)
>
> btrfs?

Probably some parallel fio workloads? Sequential and random reads/writes
from numerous workers.

I personally sometimes use ZRAM when I want to compile something and I
care only about the package; I don't need the .o files for recompilation
or anything, just the final package.
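
For reference, a rough sketch of the kind of fio jobfile I have in mind:
sequential writers plus random mixed read/write workers hammering the
device in parallel. The device path /dev/zram0 and all sizes/job counts
here are just illustrative placeholders, not a tuned reproducer:

```ini
; Hypothetical jobfile: parallel seq + random I/O against a zram device.
; Assumes /dev/zram0 has already been configured (e.g. via zramctl).
[global]
filename=/dev/zram0
direct=1
size=1g
time_based
runtime=30

[seq-writers]
rw=write
bs=128k
numjobs=4

[rand-rw]
rw=randrw
bs=4k
numjobs=8
```

Something along those lines should keep many contexts in the allocator
hot path at once, which is where per-class vs. pool-wide locking would
show up.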