----- Original Message -----
> From: "Dave Chinner" <david@xxxxxxxxxxxxx>
> To: "Thomas Klaube" <thomas@xxxxxxxxxx>
> CC: xfs@xxxxxxxxxxx
> Sent: Wednesday, 20 August 2014 00:55:07
> Subject: Re: xlog_write: reservation ran out. Need to up reservation

Hi,

> Can you post the fio job configuration?

First I ran this job for 600 seconds:

wtk@ubuntu ~ $ cat write.fio
[rnd]
rw=randwrite
ramp_time=30
runtime=600
time_based
gtod_reduce=1
size=100g
refill_buffers=1
directory=.
iodepth=64
direct=1
blocksize=16k
numjobs=64
nrfiles=1
group_reporting
ioengine=libaio
loops=1

Then I ran this job for 2 hours:

wtk@ubuntu ~ $ cat random.fio
[rnd]
rw=randrw
ramp_time=30
runtime=7200
time_based
rwmixread=30
size=100g
refill_buffers=1
directory=.
iodepth=64
direct=1
blocksize=4k
numjobs=64
group_reporting
ioengine=libaio
loops=1

I run this workload on two devices in parallel. One is the bcache device
(with XFS), the other is a non-cached device. The random.fio job causes
the problem on the bcache device after ~30-75 minutes.

> http://xfs.org/index.php/XFS_FAQ#Q:_What_information_should_I_include_when_reporting_a_problem.3F

I have sent a mail with all the collected data to Dave.

> Run the workload directly on the SSD rather than with bcache. Use
> mkfs parameters to give you 8 ags and the same size log, and see
> if you get the same problem.

I created an XFS filesystem directly on the SSD:

mkfs.xfs -f -d agcount=8 -l size=521728b /dev/sdc1

Then I started the fio jobs as described above for 10 hours. I could not
reproduce the problem.

I will send a mail to the bcache mailing list as well...

Thanks and regards
Thomas

_______________________________________________
xfs mailing list
xfs@xxxxxxxxxxx
http://oss.sgi.com/mailman/listinfo/xfs