[User_question] Unexpected behaviour with data=journal and a large commit= mount option

I used an ext4 filesystem with an external journal device, mounted with data=journal,commit=3600. My goal was to improve read I/O from the backing device while simultaneously writing to it (concurrent read/write I/O).

The behaviour I expected was this: while I write to the filesystem, the nr_dirty counter in /proc/vmstat grows, and once it exceeds the nr_dirty_background_threshold counter the kernel writes the dirty pages out to the external ext4 journal, then syncs them back to the backing device in the background at some later time.

Instead, all the dirty pages sat in RAM for a whole hour, even though dirty_expire_centisecs expired many times over. My machine swapped and thrashed all its pages, so it was not great for performance...

I then set the commit time to 5, which was a lot better (I suppose it is the default for a reason :D), but performance was sluggish/interrupted - I'm not sure how to put it - it writes at full speed in a burst for 5 seconds, and then, until the commit to the journal device completes, both write and read I/O are blocked...

How can I arrange for the dirty pages to be synced to the journal roughly every 5 seconds without blocking I/O (keeping them in RAM in the page cache), and have the journal synced back to the backing device roughly every 3600 seconds?

Or how can I manually trigger the dirty pages to be written to the journal device from the terminal - without sync, because that blocks write I/O too? fsync is a forced, blocking sync syscall; isn't there a "soft" sync somewhere?

Thanks to everyone for the help!
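
For reference, this is roughly how I set things up; the device names /dev/sdb1 (journal) and /dev/sdc1 (data) and the mount point are placeholders, not my real devices:

```
# Create the external journal device and the ext4 filesystem that uses it
# (placeholder device names - substitute the real block devices).
mke2fs -O journal_dev /dev/sdb1
mkfs.ext4 -J device=/dev/sdb1 /dev/sdc1

# Mount with full data journalling and a one-hour commit interval.
mount -t ext4 -o data=journal,commit=3600 /dev/sdc1 /mnt/backend
```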
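These are the writeback counters and sysctl knobs I was watching while testing; the values in the last two lines are only examples of what I experimented with, not a recommendation:

```
# Current dirty/writeback page counters and thresholds
grep -E 'nr_dirty|nr_writeback' /proc/vmstat

# Writeback threshold and expiry settings
sysctl vm.dirty_background_ratio vm.dirty_ratio \
       vm.dirty_expire_centisecs vm.dirty_writeback_centisecs

# Example values: start background writeback earlier, expire dirty pages after 5 s
sysctl -w vm.dirty_background_ratio=5
sysctl -w vm.dirty_expire_centisecs=500
```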
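For completeness, these are the manual flushing commands I am aware of from the terminal (the path is a placeholder); as far as I understand both still block the calling shell until the data reaches the journal, which is exactly what I am trying to avoid:

```
# Flush a single file's dirty pages (fsync(2) on just that file)
sync /mnt/backend/some-file

# Flush only the filesystem containing the given path (syncfs(2)),
# rather than every mounted filesystem (coreutils 8.24 or newer)
sync --file-system /mnt/backend
```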