That's the way I did it.

Sent from my Verizon Wireless 4G LTE smartphone

----- Reply message -----
From: "Shawn Welter" <swelter at pnca.edu>
To: "gluster-users at gluster.org" <gluster-users at gluster.org>
Subject: 2 nodes replication upgrade to 4 nodes
Date: Tue, Jun 19, 2012 7:15 pm

I am wondering about almost the exact same thing. I don't know about add-brick. I do know from the manual that when you list s1,s2,s3,s4 in the volume-create command, it will distribute across s1 and s2 and then replicate to s3 and s4. That is the layout I want, but currently s1 is replicated to s2. So what would happen in the process of switching s1 and s2 from replicate to distribute? Maybe replace-brick s2 with s3, then add-brick s2 and s4, followed by a rebalance?

Shawn

On Tue, Jun 19, 2012 at 2:56 PM, Ran <smtp.test61 at gmail.com> wrote:

We are building a storage system that will start as 2 servers in replica mode. I was wondering whether it is possible to add another replica set to that volume later, and by doing so make it distributed. Say we have server1 & server2 as replica set 1, created by running:

gluster volume create test-volume replica 2 transport tcp server1:/exp1 server2:/exp2

Later on we will need to add server3 & server4 as replica set 2, so I guess my question is whether at that point we can run:

gluster peer probe server3
gluster peer probe server4
gluster volume add-brick test-volume server3:/exp3 server4:/exp4

It would also be great if someone here has any experience with the ext4 filesystem in production.
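For reference, the full expansion sequence the thread is asking about might look like the sketch below. This is a hedged outline, not a tested procedure: it assumes a GlusterFS release of that era, a healthy trusted storage pool, and the volume/brick names from the thread; run it only against a live cluster after reading the release's admin guide.

```
# Sketch: grow a 1x2 replica volume into a 2x2 distributed-replicate volume.
# Bring the new servers into the trusted pool:
gluster peer probe server3
gluster peer probe server4

# Add the second replica pair (bricks are paired in the order given):
gluster volume add-brick test-volume server3:/exp3 server4:/exp4

# Spread existing files across both replica pairs:
gluster volume rebalance test-volume start
gluster volume rebalance test-volume status

# Sanity check: the volume type should now show as Distributed-Replicate.
gluster volume info test-volume
```

With replica 2, add-brick treats the new bricks as one replica pair, so existing data stays on the first pair until the rebalance runs.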
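To make the "distribute across replica sets" idea concrete, here is a toy sketch in Python. It is NOT Gluster's actual elastic-hash (DHT) algorithm, just an illustration of the placement model: each file hashes to exactly one replica pair, and both bricks in that pair hold a full copy. The server/brick names are taken from the thread; the hash choice is an assumption for illustration.

```python
# Toy model of distribute-over-replica-pairs placement.
# NOT GlusterFS's real DHT -- illustration only.
import hashlib

def replica_pair(filename, pairs):
    """Deterministically pick one replica pair for a file by hashing its name."""
    h = int(hashlib.md5(filename.encode()).hexdigest(), 16)
    return pairs[h % len(pairs)]

pairs = [
    ("server1:/exp1", "server2:/exp2"),  # replica set 1
    ("server3:/exp3", "server4:/exp4"),  # replica set 2
]

for name in ["a.txt", "b.txt", "c.txt"]:
    # Each file lands on exactly one pair; both bricks in it store the file.
    print(name, "->", replica_pair(name, pairs))
```

The point of the toy: adding a second pair does not move old files by itself, which is why the real system needs a rebalance after add-brick.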
Ronald,

_______________________________________________
Gluster-users mailing list
Gluster-users at gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users

--
Systems Administrator
503-821-8957 | Office
503-226-3587 | Fax
PNCA | Pacific Northwest College of Art
1241 NW Johnson St | Portland | Oregon | 97209
swelter at pnca.edu
www.pnca.edu