Re: About BFQ interaction with git filter-branch


 




> On 8 Apr 2019, at 14:40, Pacho Ramos <pachoramos@xxxxxxxxx> wrote:
> 
> (Resending as text plain as it seems HTML is tagged as SPAM, sorry for
> the noise)
> 
> ---------- Forwarded message ---------
> From: Pacho Ramos <pachoramos@xxxxxxxxx>
> Date: Mon, 8 Apr 2019 at 14:38
> Subject: About BFQ interaction with git filter-branch
> To: Paolo VALENTE <paolo.valente@xxxxxxxxxx>
> Cc: <bfq-iosched@xxxxxxxxxxxxxxxx>, <linux-block@xxxxxxxxxxxxxxx>
> 
> 
> I simply wanted to thank you and the BFQ team.

You're very welcome.

> It's the only
> scheduler that lets me keep working on my computer even while running
> git filter-branch --force --tree-filter... at the same time.
> 

Great!  This seems like a demanding instance of the personal use cases
that I had in mind while designing BFQ's heuristics.

> And that huge difference is with an SSD, even though some people on
> the internet suggest that noop or mq-deadline would perform better
> for them. Maybe for throughput it could perform a bit worse...

Fortunately not.  Thanks to the latest developments [1], even the last
throughput gaps seem to have been closed.  With commodity CPUs, BFQ
reaches the same or higher throughput than the other I/O schedulers,
with SSDs peaking at around 500 KIOPS.

[1] https://lwn.net/Articles/784267/ (wait for one more week if you
				      are not an LWN subscriber, or
				      subscribe! :) )


> I don't
> know, but for being able to keep working on the system, rather than
> having to move to another computer, BFQ clearly wins.
> 
> Thanks for your work; hopefully it will be chosen by default, so
> there is no need to play with udev rules or echo ... >
> 

That's hard to say.  So far, all the numbers have been in favor of
BFQ, but this hasn't convinced, e.g., Jens himself.  IIRC, Jens's last
request was:

"I'm mostly just interested in plain fast NVMe device, and a big box
hardware raid setup with a ton of drives."

Now we have plenty of results on NVMe drives, but I still don't know
how to proceed with RAIDs with a ton of drives.  The problem is not so
much that "a ton of drives" is not a precise number, but that it may
hint at configurations so large that I'm unlikely to ever own them.
If Jens, or anybody else with access to such hardware, could give me
access too for some tests, I'd be very happy to compare all the I/O
schedulers on their systems of interest.
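Until that happens, BFQ can be selected per device through sysfs; a
minimal sketch, where the device name "sda" and the udev rule file
path are illustrative, not prescriptive:

```shell
# The sysfs file lists the available schedulers, with the active one
# in brackets, e.g. the output of: cat /sys/block/sda/queue/scheduler
line="mq-deadline kyber [bfq] none"   # sample sysfs content
active=$(echo "$line" | grep -o '\[[^]]*\]' | tr -d '[]')
echo "$active"                        # the scheduler currently in use

# Switch at runtime (requires root):
#   echo bfq > /sys/block/sda/queue/scheduler
# Persist across reboots with a udev rule, e.g. in
# /etc/udev/rules.d/60-ioscheduler.rules:
#   ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/scheduler}="bfq"
```

The runtime change is lost on reboot, which is why distributions that
want BFQ without a kernel default resort to udev rules like the one
sketched above.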

> Maybe you could try to add this test with git filter-branch to your
> benchmarks.

I'll consider also adding this test to my kern-dev set of tests (it
currently includes only make, git checkout and git merge).

> In my case I have tested with: none and mq-deadline
> (kernel 5.0.7), and deadline, cfq and noop (kernel 4.19.32), on a
> Gentoo Linux system running the gentoo-sources kernel (mostly a
> vanilla kernel), while trying to work with a GNOME 3.24 desktop,
> Chrome, a terminal...
> 
> The hard disk is a SanDisk X400 2.5 7MM 512GB (X4152012) and the CPU
> is an Intel(R) Core(TM) i5-6600 CPU @ 3.30GHz
> 

A rather thorough test.  From kernel 5.2 on, you should get even
better performance with BFQ for such a test, compared to 4.19.

Thanks,
Paolo

> Best regards!




