Re: Infiniband 40GB

Hi again,
I have run some tests with the journals on a real disk, and I see the same behaviour.

iostat shows constant writes to the journal and to the data disks at the same time, from the very beginning of the benchmark.
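As a quick sanity check on the sizing mentioned earlier in this thread (a 20 GB journal intended to absorb roughly 30 s of writes), the implied sustained rate is easy to compute; a minimal sketch:

```shell
# Back-of-envelope: sustained write rate a 20 GB journal can absorb
# over a ~30 s window (figures quoted elsewhere in this thread).
journal_mb=$((20 * 1024))   # 20 GB expressed in MB
window_s=30
echo "$((journal_mb / window_s)) MB/s"
```

If the data disks are written constantly from t=0, that buffering headroom is evidently not being used to defer flushes, which matches the iostat observation above.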


Maybe I could try using a separate partition for each journal? (Currently I have one partition holding 5 journal files, one per OSD.)
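For that experiment, each OSD's journal can be pointed at its own raw partition directly in ceph.conf; a hypothetical sketch (the device names are placeholders, not taken from this thread):

```ini
; Hypothetical ceph.conf fragment: one dedicated journal partition per OSD.
; /dev/sdc1..3 are example device names only.
[osd.0]
        osd journal = /dev/sdc1    ; raw partition instead of a file on a shared partition
[osd.1]
        osd journal = /dev/sdc2
[osd.2]
        osd journal = /dev/sdc3
```

This removes any contention between journals sharing one filesystem, so it should help isolate whether the shared journal partition contributes to the constant writes.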

-Alexandre



----- Original Message ----- 

From: "Alexandre DERUMIER" <aderumier@xxxxxxxxx> 
To: "Mark Nelson" <mark.nelson@xxxxxxxxxxx> 
Cc: "Amon Ott" <a.ott@xxxxxxxxxxxx>, ceph-devel@xxxxxxxxxxxxxxx, "Yann Dupont" <Yann.Dupont@xxxxxxxxxxxxxx> 
Sent: Thursday, June 7, 2012, 05:11:15 
Subject: Re: Infiniband 40GB 

Hi Mark, 
I have attached a blktrace of /dev/sdb1 on node1 (osd.0), 

and also the iostat output (showing constant writes). 

benchmark used: 

rados -p pool3 bench 60 write -t 16 


kernel used: 3.4 from Inktank 

I'll run tests with the journal on an XFS partition today. 

----- Original Message ----- 

From: "Mark Nelson" <mark.nelson@xxxxxxxxxxx> 
To: "Alexandre DERUMIER" <aderumier@xxxxxxxxx> 
Cc: "Amon Ott" <a.ott@xxxxxxxxxxxx>, ceph-devel@xxxxxxxxxxxxxxx, "Yann Dupont" <Yann.Dupont@xxxxxxxxxxxxxx> 
Sent: Wednesday, June 6, 2012, 18:43:50 
Subject: Re: Infiniband 40GB 

Hi Alexandre, 

If you can run blktrace during your test on one of the OSD data disks 
and send me the results, I can take a look at them. The rados 
bench settings and output would be useful too. 

Thanks, 
Mark 

On 6/6/12 11:05 AM, Alexandre DERUMIER wrote: 
> Hi, I have rebuilt my cluster with Ubuntu precise: 
> 
> - kernel 3.2 
> - ceph 0.47.2 
> - libc6 2.15 
> - 3 nodes, 5 OSDs (XFS) per node, and 1 tmpfs with 5 journal files. 
> 
> I launched rados bench, 
> and again I see constant writes to XFS... 
> 
> Maybe this is related to tmpfs? 
> 
> 
> I'll retry with kernel 3.4 from Inktank tomorrow. 
> I'll also try with the journal on a physical disk with an XFS partition. 
> 
> I'll keep you posted. 
> 
> 
> ----- Original Message ----- 
> 
> From: "Mark Nelson" <mark.nelson@xxxxxxxxxxx> 
> To: "Alexandre DERUMIER" <aderumier@xxxxxxxxx> 
> Cc: "Amon Ott" <a.ott@xxxxxxxxxxxx>, ceph-devel@xxxxxxxxxxxxxxx, "Yann Dupont" <Yann.Dupont@xxxxxxxxxxxxxx> 
> Sent: Monday, June 4, 2012, 14:59:58 
> Subject: Re: Infiniband 40GB 
> 
> On 6/4/12 6:40 AM, Alexandre DERUMIER wrote: 
>> Hi, 
>> 
>> I'm currently running some tests with XFS on Debian wheezy, with the standard libc6 (2.11.3-3) and the 3.2 kernel. 
>> 
>> I'm watching iostat (3 nodes with 5 OSDs each), and I see constant writes to the disks (as if the data were flushed from journal to disk every second). 
>> 
>> The journal is big enough (20 GB tmpfs) to handle 30 s of writes. 
>> 
>> Do you think it's related to the missing syncfs() support? 
>> 
>> -Alexandre 
> 
> Hi Alexandre, 
> 
> I've included some seekwatcher results for rados bench tests using 16 
> concurrent 4MB writes on an XFS OSD. One shows Ubuntu oneiric and the 
> other precise (i.e. no syncfs support vs. syncfs support in libc). 
> Unfortunately the original test was on 0.46 and the second test was on 
> 0.47.2, so multiple things changed between the tests. Both were tested 
> with kernel 3.4. Interestingly, the seeks/second don't seem to drop much, 
> but the overall performance has roughly doubled. This was using a single 
> 7200rpm disk for the OSD data disk and a separate 7200rpm disk for the 
> journal in both cases. I'd definitely try 0.47.2 with a new libc, though, 
> and see how that works for you. 
> 
> ceph 0.46/oneiric: 
> http://nhm.ceph.com/movies/mailinglist-tests/xfs-osd0-oneiric-3.4.mpg 
> 
> ceph 0.47.2/precise: 
> http://nhm.ceph.com/movies/mailinglist-tests/xfs-osd0-precise-3.4.mpg 
> 
> Mark 
> 
> 
> 




-- 

Alexandre Derumier 
Systems Engineer 
Phone: 03 20 68 88 90 
Fax: 03 20 68 90 81 
45 Bvd du Général Leclerc, 59100 Roubaix - France 
12 rue Marivaux, 75002 Paris - France 
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html

