Hi,

I was doing some tests in my cluster with the fio tool: one fio instance with 70 jobs, each job writing 1GB of random data with a 4K block size. I ran this test with 3 variations:

1- Creating 70 images, 60GB each, in the pool. Using the rbd kernel module, format and mount each image as ext4. Each fio job writes to a separate image/directory. (ioengine=libaio, iodepth=4, direct=1)
IOPS: 6542  AVG LAT: 41ms

2- Creating 1 large image, 4.2TB, in the pool. Using the rbd kernel module, format and mount the image as ext4. Each fio job writes to a separate file in the same directory. (ioengine=libaio, iodepth=4, direct=1)
IOPS: 5899  AVG LAT: 47ms

3- Creating 1 large image, 4.2TB, in the pool. Using the rbd ioengine in fio to access the image through librados. (ioengine=rbd, iodepth=4, direct=1)
IOPS: 2638  AVG LAT: 96ms

Do these results make sense? From Ceph's perspective, is it better to have many small images than one larger one? What is the best approach to simulating the workload of 70 VMs?

Thanks in advance for any help,
Xabier
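For reference, the per-image test (70 images, one fio job each) can be expressed as a single fio job file. This is only a minimal sketch of how such a run might be set up, not the exact file used; the mount points /mnt/rbd0 through /mnt/rbd69 are assumptions:

```ini
; Sketch of a job file for variation 1: one job per mounted ext4 image.
; Assumes each rbd image is mapped and mounted at /mnt/rbdN (hypothetical paths).
[global]
ioengine=libaio
iodepth=4
direct=1
rw=randwrite
bs=4k
size=1g

[vm0]
directory=/mnt/rbd0

[vm1]
directory=/mnt/rbd1

; ... one [vmN] section per image, up to [vm69]
```

A section per job keeps each "VM" writing to its own image, which is closer to real VM behavior than 70 files inside one filesystem.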