Additional info:

Linux nas 2.6.31-ARCH #1 SMP PREEMPT Fri Oct 23 10:03:24 CEST 2009 x86_64 Intel(R) Xeon(R) CPU E5405 @ 2.00GHz GenuineIntel GNU/Linux
kernel26 2.6.31.5-1
Cachefiles 0.9
nfs-utils 1.2.0-4

iostat makes it obvious that writes to the SSD cache cannot keep up with the reads, and when the write backlog grows too large the hang below occurs. 20 parallel reads of 100 MB files work, but 40 do not. (A sketch of the cache setup and of the benchmark follows after the trace.)

Kind regards,
Fredrik Widlund

-----Original message-----
From: linux-cachefs-bounces@xxxxxxxxxx [mailto:linux-cachefs-bounces@xxxxxxxxxx] On behalf of Fredrik Widlund
Sent: 2 November 2009 14:57
To: linux-cachefs@xxxxxxxxxx
Subject: cachefiles_

Hi,

I'm evaluating cachefiles for use in a production environment. Is this considered a no-no?

I tried benchmarking it quickly but ran into the hang below when mounting an NFS share through localhost. Below, 100 processes are reading 100 different files with dd.

Kind regards,
Fredrik Widlund

Nov 2 14:37:49 nas kernel: INFO: task dd:1927 blocked for more than 120 seconds.
Nov 2 14:37:49 nas kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Nov 2 14:37:49 nas kernel: dd D 0000000000000000 0 1927 1 0x00000004
Nov 2 14:37:49 nas kernel: ffff88012ee79a80 0000000000000082 ffff88012ee79a80 000000001dde118e
Nov 2 14:37:49 nas kernel: 00000000000138c0 ffff88012f84cf80 ffff88012ee79d20 00000000000138c0
Nov 2 14:37:49 nas kernel: 000000000000e168 ffff88012ee79d20 ffff88012ee79a80 0000000000011280
Nov 2 14:37:49 nas kernel: Call Trace:
Nov 2 14:37:49 nas kernel: [<ffffffffa041adb5>] ? __fscache_wait_on_page_write+0x75/0xc0 [fscache]
Nov 2 14:37:49 nas kernel: [<ffffffff8107a130>] ? autoremove_wake_function+0x0/0x60
Nov 2 14:37:49 nas kernel: [<ffffffffa0477b7f>] ? nfs_fscache_release_page+0x6f/0x160 [nfs]
Nov 2 14:37:49 nas kernel: [<ffffffff810ee0d2>] ? shrink_page_list+0x512/0x850
Nov 2 14:37:49 nas kernel: [<ffffffff811cb90a>] ? generic_make_request+0x1fa/0x500
Nov 2 14:37:49 nas kernel: [<ffffffff810ec8ce>] ? isolate_pages_global+0xce/0x240
Nov 2 14:37:49 nas kernel: [<ffffffff811cbcc5>] ? submit_bio+0xb5/0x170
Nov 2 14:37:49 nas kernel: [<ffffffff810ee726>] ? shrink_list+0x316/0x710
Nov 2 14:37:49 nas kernel: [<ffffffff810eed9d>] ? shrink_zone+0x27d/0x3a0
Nov 2 14:37:49 nas kernel: [<ffffffff810efe61>] ? try_to_free_pages+0x271/0x410
Nov 2 14:37:49 nas kernel: [<ffffffff810ec800>] ? isolate_pages_global+0x0/0x240
Nov 2 14:37:49 nas kernel: [<ffffffff810e7359>] ? __alloc_pages_nodemask+0x429/0x640
Nov 2 14:37:49 nas kernel: [<ffffffff8138545c>] ? io_schedule+0x8c/0xd0
Nov 2 14:37:49 nas kernel: [<ffffffff810ea213>] ? __do_page_cache_readahead+0x123/0x2b0
Nov 2 14:37:49 nas kernel: [<ffffffff810ea3cc>] ? ra_submit+0x2c/0x50
Nov 2 14:37:49 nas kernel: [<ffffffff810e23e9>] ? generic_file_aio_read+0x369/0x640
Nov 2 14:37:49 nas kernel: [<ffffffff8111f502>] ? do_sync_read+0xf2/0x150
Nov 2 14:37:49 nas kernel: [<ffffffff8107a130>] ? autoremove_wake_function+0x0/0x60
Nov 2 14:37:49 nas kernel: [<ffffffff81120541>] ? vfs_read+0xe1/0x1d0
Nov 2 14:37:49 nas kernel: [<ffffffff8112074e>] ? sys_read+0x5e/0xb0
Nov 2 14:37:49 nas kernel: [<ffffffff8100c382>] ? system_call_fastpath+0x16/0x1b
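For reference, the cache setup behind this test looks roughly like the following. This is a minimal sketch, not the exact configuration used here: the cache directory, tag, export path and mount point (/var/cache/fscache, mycache, /export, /mnt/nfs) are illustrative names.

    # /etc/cachefilesd.conf (excerpt); "dir" must point at the SSD
    dir /var/cache/fscache
    tag mycache

    # start the cache daemon, then mount the NFS share through
    # localhost with the "fsc" option so the mount uses FS-Cache
    cachefilesd
    mount -t nfs -o fsc localhost:/export /mnt/nfs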
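The benchmark itself is just parallel sequential reads through the cached mount. A minimal sketch, assuming the test files are named file1..fileN under the mount point (names are illustrative):

    # spawn N parallel readers of ~100 MB files;
    # N=20 completes, N=40 triggers the hung-task trace above
    N=40
    for i in $(seq 1 $N); do
        dd if=/mnt/nfs/file$i of=/dev/null bs=1M &
    done
    wait

    # meanwhile, watch the SSD with iostat; the write side
    # falls steadily behind the reads before the hang
    iostat -x 1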