Hi all,

I am working on some experiments where I need 12 jobs. Each job reads from a different file (job1 -> file1, job2 -> file2, etc.), and some files should receive more I/O than others. To implement this bias between files, I opted for flows: I start from 100 and assign a percentage to each file. For example, here is a snippet of my fio job file:

[private-0]
flow=37
filename=/srv/dax/private/private-0

[private-1]
flow=23
filename=/srv/dax/private/private-1

[private-2]
flow=10
filename=/srv/dax/private/private-2

[private-3]
flow=9
filename=/srv/dax/private/private-3

[private-4]
flow=7
filename=/srv/dax/private/private-4

[private-5]
flow=3
filename=/srv/dax/private/private-5

[private-6]
flow=3
filename=/srv/dax/private/private-6

I am not sure whether this actually achieves what I want. My worry is that having this many files and flows will reduce scheduling precision or introduce scheduling overhead, and I don't know how the feature is implemented internally, hence my question.

Could you briefly explain how flows work under the hood? Do you think this is a sane approach? If not, what better alternative can I use?

--
Best,
Karim
Edinburgh University
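P.S. To make concrete what I am trying to achieve, here is a rough model of the bias I want. This is only my mental model of the target proportions, not a claim about fio's internals: each job should be selected for I/O in proportion to its flow weight (the weights below are the ones from my snippet; the remaining jobs are omitted, so they do not sum to 100 here).

```python
import random
from collections import Counter

# Flow weights from the job file snippet above (7 of the 12 jobs).
weights = {
    "private-0": 37, "private-1": 23, "private-2": 10,
    "private-3": 9, "private-4": 7, "private-5": 3, "private-6": 3,
}

n_ios = 100_000  # number of simulated I/O operations

# Pick a job for each I/O, weighted by its flow value.
picks = Counter(random.choices(list(weights),
                               weights=list(weights.values()),
                               k=n_ios))

total = sum(weights.values())
for job, w in sorted(weights.items()):
    print(f"{job}: target {w / total:.3f}, observed {picks[job] / n_ios:.3f}")
```

If flows behave like this weighted selection, the observed share of each file should converge to its weight divided by the sum of all weights; my question is essentially whether fio's implementation gives me that, and at what cost.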