Kurt,
When you had the OS and OSD journals co-located, how many OSD journals
were on the SSD containing the OS?
You mention you now use a 5:1 ratio. Was the ratio something like 11:1
before (one SSD for the OS plus 11 OSD journals for 11 OSDs in a 12-disk
chassis)?
Also, what throughput per drive were you seeing on the cluster during
the periods where things got laggy due to backfills, etc.?
Lastly, did you attempt to throttle using Ceph config settings in the
old setup? Do you need to throttle in your current setup?
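(By throttling I mean the recovery/backfill knobs in ceph.conf, roughly
along these lines; the values below are placeholders for illustration,
not a recommendation:

    [osd]
        osd max backfills = 2
        osd recovery max active = 2
        osd recovery op priority = 1
)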
Thanks,
Mike Dawson
On 10/24/2013 10:40 AM, Kurt Bauer wrote:
Hi,
we had a setup like this and ran into trouble, so I would strongly
discourage you from it. Under normal circumstances there's no problem,
but when the cluster is under heavy load, for example when a lot of PGs
are backfilling for whatever reason (increasing the number of PGs,
adding OSDs, ...), there are obviously a lot of entries written to the
journals.
What we saw then was extremely laggy behavior of the cluster, and when
looking at iostat for the SSDs, they were at 100% utilization most of
the time. I don't know exactly what causes this or why the SSDs can't
cope with the amount of I/O, but separating OS and journals did the
trick. We now have fast 15k HDDs in RAID 1 for the OS and monitor
journal, and one SSD per 5 OSD journals, with one partition per journal
(used as a raw partition).
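For reference, a minimal sketch of how each OSD can be pointed at its
raw journal partition in ceph.conf (device and OSD names are examples
only, not our actual layout):

    [osd.0]
        osd journal = /dev/sdg1
    [osd.1]
        osd journal = /dev/sdg2
    # ... and so on, one raw partition per OSD, five per SSD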
Hope that helps,
best regards,
Kurt
Martin Catudal wrote:
Hi,
Here is my scenario:
I will have a small cluster (4 nodes) with four 4 TB OSDs per node.
I will have the OS installed on two SSDs in a RAID 1 configuration.
Has any of you successfully and efficiently run a Ceph cluster built
with the journals on a separate partition of the OS SSDs?
I know that the journal SSD may see a lot of I/O, and I'm worried my OS
will suffer from too much I/O.
Any background experience?
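(For clarity, the per-node layout I have in mind is roughly the
following; device names are just examples:

    /dev/sda, /dev/sdb   SSDs in RAID 1: OS plus one small journal
                         partition per OSD (4 in total)
    /dev/sdc - /dev/sdf  4 TB HDDs, one OSD each, data only
)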
Martin
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com