Hi,
There were some discussions about this on the mailing list before, but I
am still confused by it. I thought Ceph flushed data from the journal to
the disk only when the journal is full or when a sync is due.

In my experiment, I used 24 OSDs (one OSD per disk), each with a 10 GB
tmpfs file as its journal. To delay synchronization between the journal
and the disk on purpose, I increased 'filestore min sync interval' to
60 s and 'filestore max sync interval' to 300 s. Then I created an RBD
image and ran a 4M sequential write workload against it with fio for 30
seconds.
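For reference, the fio job was roughly like the one below. This is only
illustrative: the job name and the /dev/rbd0 path are assumptions (they
suppose the image is mapped with the kernel RBD client), so adjust as
needed.
-----------------
[seq-write]
; sequential 4M writes for 30 seconds
; /dev/rbd0 is an assumption: the image mapped via the kernel RBD client
filename=/dev/rbd0
rw=write
bs=4M
direct=1
ioengine=libaio
iodepth=16
runtime=30
time_based
-------------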
I was expecting no IO to reach the disks until we had written 240 GB of
data (10 GB * 24). However, iostat showed data being written to the
disks, at about 20 MB/s per disk, as soon as I started the sequential
workload.
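The per-disk throughput above came from plain iostat, run with something
like (the interval and flags here are just an example):
-----------------
iostat -xm 5
-------------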
Could someone help explain this behavior? Thanks,
I am running Ceph 0.48.2. The relevant configuration is as follows.
-----------------
[osd]
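# journal size is in MB, so 10000 is ~10 GB per OSD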
osd journal size = 10000
osd journal = /dev/shm/journal/$name-journal
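# dio is disabled because tmpfs does not support O_DIRECT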
journal dio = false
filestore xattr use omap = true
# Min and max intervals in seconds between filestore syncs.
filestore min sync interval = 60
filestore max sync interval = 300
-------------
Xing