Re: Looking for use cases / opinions

Disks are SAS disks attached directly to the servers. No hardware RAID (JBOD),
no SSDs, and XFS for the brick filesystem.

On Wed, Nov 9, 2016 at 8:28 PM, Alastair Neil <ajneil.tech@xxxxxxxxx> wrote:
> Serkan
>
> I'd be interested to know how your disks are attached (SAS?). Do you use
> any hardware RAID or ZFS, and do you have any SSDs in there?
>
> On 9 November 2016 at 06:17, Serkan Çoban <cobanserkan@xxxxxxxxx> wrote:
>>
>> Hi, I am using 26x 8TB disks per server, and there are 60 servers in the
>> Gluster cluster. Each disk is a brick, the configuration is 16+4 EC, and
>> it is a single 9PB volume. Clients use FUSE mounts.
>> Even with only 1-2K files in a directory, an ls from the clients takes
>> ~60 seconds. So if you are sensitive to metadata operations, I suggest
>> another approach...
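
For concreteness: a 16+4 dispersed volume of that shape is created with
Gluster's disperse-data/redundancy options. The sketch below is illustrative
only - the volume name, hostnames and brick paths are made up - and it merely
assembles the ~1560-brick create command rather than running anything.

    # Illustrative only: assemble (do not run) the "gluster volume create"
    # command for a 16+4 dispersed volume across 60 servers x 26 bricks each.
    # Hostnames, brick paths and the volume name are hypothetical.
    servers = [f"server{i:02d}" for i in range(1, 61)]   # server01..server60
    disks_per_server = 26

    # Interleave bricks across servers so each consecutive 20-brick (16+4)
    # disperse set spans 20 different hosts.
    bricks = [f"{srv}:/bricks/disk{d:02d}/brick"
              for d in range(1, disks_per_server + 1)
              for srv in servers]

    cmd = ["gluster", "volume", "create", "bigvol",
           "disperse-data", "16", "redundancy", "4",
           "transport", "tcp"] + bricks
    print(" ".join(cmd))   # 1560 bricks -> 78 disperse subvolumes of 20

The interleaving is just one possible placement policy; the point is that at
this scale the brick list has to be generated, not typed.
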
>>
>>
>> On Wed, Nov 9, 2016 at 1:05 PM, Frank Rothenstein
>> <f.rothenstein@xxxxxxxxxxxxxxxxxx> wrote:
>> > Since you said you want to have 3 or 4 replicas, I would use that ZFS
>> > knowledge and build one zpool per node, with whatever config you know is
>> > fastest on this kind of hardware and as safe as you need (stripe,
>> > mirror, raidz1..3 - resilvering ZFS is faster than healing Gluster, I
>> > think). One node -> one brick (per Gluster volume).
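
A minimal sketch of that layout, assuming three hypothetical nodes
(node1..node3), made-up device and pool names, and taking the "3 or 4
replicas" as replica 3: one pool per node, one dataset under it holding the
brick directory, and exactly one brick per node in the volume.

    # Hypothetical sketch of "one zpool per node -> one brick per node".
    # Node names, devices, pool and volume names are made up; choose the
    # pool layout (stripe, mirror, raidz1..3) that fits your hardware.
    import subprocess

    def run(cmd):
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    # On every node: one pool over its local disks, one dataset for the brick.
    run(["zpool", "create", "tank", "raidz2",
         "/dev/sdb", "/dev/sdc", "/dev/sdd", "/dev/sde", "/dev/sdf", "/dev/sdg"])
    run(["zfs", "create", "tank/gvol"])
    run(["mkdir", "-p", "/tank/gvol/brick"])

    # Once, from any single node: a replica-3 volume, one brick per node.
    run(["gluster", "volume", "create", "gvol", "replica", "3",
         "node1:/tank/gvol/brick", "node2:/tank/gvol/brick",
         "node3:/tank/gvol/brick"])
    run(["gluster", "volume", "start", "gvol"])
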
>> >
>> > Frank
>> > On Tuesday, 08.11.2016, at 19:19 +0000, Thomas Wakefield wrote:
>> >> We haven’t decided how the JBODs would be configured.  They would
>> >> likely be SAS-attached without a RAID controller for better
>> >> performance.  I run large ZFS arrays this way, but only in
>> >> single-server NFS setups right now.
>> >> Mounting each hard drive as its own brick would probably give the
>> >> most usable space, but it would need scripting to manage building all
>> >> the bricks.  But does Gluster handle thousands of small bricks?
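
The scripting is mostly mechanical; a rough sketch of the per-drive brick
preparation is below (the /dev/sd[b-z] glob and mount layout are illustrative
only, mkfs.xfs is destructive, and a real script would mount by UUID via
fstab). As for the brick count, Serkan's 60 x 26-brick setup quoted above
suggests Gluster does cope with bricks in the thousands, with the caveat that
each brick runs its own glusterfsd process, so per-server memory and port
usage grow with it.

    # Illustrative sketch only: prepare every data disk as an XFS-backed
    # Gluster brick under /bricks/<device>. The /dev/sd[b-z] glob is naive
    # and mkfs.xfs is destructive; review and adapt before running anything.
    import glob
    import os
    import subprocess

    brick_paths = []
    for dev in sorted(glob.glob("/dev/sd[b-z]")):
        name = os.path.basename(dev)                   # e.g. "sdb"
        mountpoint = f"/bricks/{name}"
        os.makedirs(mountpoint, exist_ok=True)
        subprocess.run(["mkfs.xfs", "-f", "-i", "size=512", dev], check=True)
        subprocess.run(["mount", "-t", "xfs", dev, mountpoint], check=True)
        brick_dir = os.path.join(mountpoint, "brick")  # brick in a subdirectory
        os.makedirs(brick_dir, exist_ok=True)
        brick_paths.append(f"{os.uname().nodename}:{brick_dir}")

    print("\n".join(brick_paths))   # feed these into "gluster volume create"
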
>> >>
>> >>
>> >>
>> >> > On Nov 8, 2016, at 9:18 AM, Frank Rothenstein
>> >> > <f.rothenstein@bodden-kliniken.de> wrote:
>> >> >
>> >> > Hi Thomas,
>> >> >
>> >> > that's a huge amount of storage.
>> >> > What I can say from my use case: don't use Gluster directly if the
>> >> > files are small. I don't know whether the file count matters, but if
>> >> > the files are small (a few KiB), Gluster takes ages to remove them,
>> >> > for example. Doing the same inside a VM with, e.g., an ext4 disk on
>> >> > the very same Gluster volume gives a big speedup.
>> >> > There are many options for a new Gluster volume, like Lindsay
>> >> > mentioned. And there are other options, like Ceph or OrangeFS.
>> >> > How do you want to use the JBODs? I don't think you would use every
>> >> > single drive as a brick... How are they connected to the servers?
>> >> >
>> >> > I'm only dealing with Gluster volumes of about 10TiB, so nowhere near
>> >> > your planned level, but I really would like to see some results if you
>> >> > go for Gluster!
>> >> >
>> >> > Frank
>> >> >
>> >> >
>> >> > On Tuesday, 08.11.2016, 13:49 +0000, Thomas Wakefield wrote:
>> >> > > I think we are leaning towards erasure coding with 3 or 4
>> >> > > copies, but we are open to suggestions.
>> >> > >
>> >> > >
>> >> > > > On Nov 8, 2016, at 8:43 AM, Lindsay Mathieson
>> >> > > > <lindsay.mathieson@gmail.com> wrote:
>> >> > > >
>> >> > > > On 8/11/2016 11:38 PM, Thomas Wakefield wrote:
>> >> > > > > High Performance Computing; we have a small cluster on campus
>> >> > > > > of about 50 Linux compute servers.
>> >> > > > >
>> >> > > >
>> >> > > > D'oh! I should have thought of that.
>> >> > > >
>> >> > > >
>> >> > > > Are you looking at replication (2 or 3)/disperse or pure
>> >> > > > disperse?
>> >> > > >
>> >> > > > --
>> >> > > > Lindsay Mathieson
>> >> > > >
>> >> > >
>> >>
>> >>
>> >
>> >
>> >
>> >
>> >
>> >
>
>
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-users



