Re: Public network faster than cluster network

No more advice for a new cluster?

Sorry for the multiple posts, but I had some trouble with the mailing list;
I keep getting "Access Denied".
On Fri, 11 May 2018 at 10:21, Gandalf Corvotempesta
<gandalf.corvotempesta@xxxxxxxxx> wrote:

> No more advice for a new cluster?
> On Thu, 10 May 2018 at 10:38, Gandalf Corvotempesta
> <gandalf.corvotempesta@xxxxxxxxx> wrote:

> > On Thu, 10 May 2018 at 09:48, Christian Balzer <chibi@xxxxxxx> wrote:
> > > Without knowing what your use case is (lots of large reads or writes, or
> > > the more typical smallish I/Os) it's hard to give specific advice.

> > 99% VM hosting.
> > Everything else would be negligible, and I don't care if it's not optimized.

> > > Which would give you 24 servers with up to 20Gb/s per server when both
> > > switches are working, something that's likely to be very close to 100%
> > > of the time.

> > 24 servers between hypervisors and storage nodes, right?
> > So are you saying to split them like this:

> > switch0, port 0 to port 12 as hypervisors, network1
> > switch0, port 13 to 24 as storage, network1

> > switch1, port 0 to port 12 as hypervisors, network2
> > switch1, port 13 to 24 as storage, network2

> > In this case, with 2 switches I can have a fully redundant network,
> > but I also need an ISL to aggregate bandwidth.
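
For clarity, if network1 ended up as the Ceph public network and network2 as
the cluster network, the ceph.conf side would look roughly like this (the
subnets below are placeholders, not real ones):

    [global]
    # client/VM traffic (hypervisors <-> MONs/OSDs)
    public network  = 192.168.1.0/24
    # OSD replication and recovery traffic
    cluster network = 192.168.2.0/24

Storage nodes would then sit in both subnets (one NIC per switch), while the
hypervisors only strictly need the public one.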

> > > That's a very optimistic number, assuming journal/WAL/DB on SSDs _and_ no
> > > concurrent write activity.
> > > Since you said hypervisors up there one assumes VMs on RBDs and a mixed
> > > I/O pattern, saturating your disks with IOPS long before bandwidth becomes
> > > an issue.

> > Based on a real use case, how much bandwidth should I expect from 12 SATA
> > spinning disks (7200rpm) in a mixed workload? Obviously, a sequential read
> > would need about 12 x 100 MB/s x 8 = ~9.6 Gbit/s (see the sketch below).
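
As a sanity check, here are my back-of-the-envelope numbers (the per-disk
figures are assumptions for 7200rpm SATA, not measurements):

    # Rough per-node ceilings for 12 x 7200rpm SATA disks (sanity check only).
    disks = 12
    seq_mb_s_per_disk = 100      # assumed sequential throughput per disk
    rand_iops_per_disk = 80      # assumed random IOPS per disk

    seq_total_mb_s = disks * seq_mb_s_per_disk        # ~1200 MB/s
    seq_total_gbit_s = seq_total_mb_s * 8 / 1000      # ~9.6 Gbit/s
    rand_total_iops = disks * rand_iops_per_disk      # ~960 IOPS

    print(f"sequential: ~{seq_total_mb_s} MB/s (~{seq_total_gbit_s:.1f} Gbit/s)")
    print(f"random:     ~{rand_total_iops} IOPS")

So purely sequential reads could nearly fill a single 10GbE link, while a
mixed VM workload would hit the ~1000 IOPS ceiling long before bandwidth
matters, which I take to be Christian's point above.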

> > > The biggest argument against the 1Gb/s links is the latency as mentioned.

> > 10GbE should have about 1/10 the latency, right?

> > Now, as I'm evaluating many SDS solutions and Ceph is, on paper, the most
> > expensive in terms of required hardware, what would you suggest for a small
> > (but scalable) storage setup, starting with just 3 storage servers (12 disk
> > bays each, not fully populated), 1x 16-port 10GBase-T switch, (many) 24-port
> > Gigabit switches, and about 5 hypervisor servers?
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


