Re: low power single disk nodes

Hi Mark,

We added the 2x PCI drive-slot converter, so we managed to squeeze 12 OSDs +
2 journals into each tray.
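
For illustration, here is a rough journal-sizing sketch for that layout (6
OSD journals per SSD), using the usual "journal >= 2 x disk throughput x
filestore max sync interval" rule of thumb. The throughput and sync-interval
numbers below are assumptions of mine, not measurements from our cluster:

DISK_MBPS = 150        # assumed sustained throughput of one spinning OSD disk
SYNC_INTERVAL_S = 5    # assumed filestore max sync interval, in seconds
OSDS = 12
JOURNAL_SSDS = 2

# Rule of thumb: journal size >= 2 * disk throughput * sync interval
journal_mb = 2 * DISK_MBPS * SYNC_INTERVAL_S
per_ssd = OSDS // JOURNAL_SSDS

print("%d journal partitions per SSD, ~%d MB each (~%.1f GB of each SSD)"
      % (per_ssd, journal_mb, per_ssd * journal_mb / 1024.0))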

We did look at the E3-based nodes, but as this was our first adventure into
Ceph we were unsure whether the single CPU would have enough grunt. Going
forward, now that we have some performance data, we might rethink this.

Nick

> -----Original Message-----
> From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of
> Mark Nelson
> Sent: 13 April 2015 17:53
> To: ceph-users@xxxxxxxxxxxxxx
> Subject: Re:  low power single disk nodes
> 
> We have the single-socket version of this chassis with 4 nodes in our test
> lab. E3-1240v2 CPU with 10 spinners for OSDs, 2 DC S3700s, a 250GB spinner
> for OS, 10GbE, and a SAS2308 HBA + on-board SATA.  They work well but were
> oddly a little slow for sequential reads from what I remember.  Overall not
> bad though, and I think a very reasonable solution, especially if you want
> smaller clusters while maintaining similar (actually slightly better) drive
> density vs the 36-drive chassis.  They weren't quite able to saturate a
> 10GbE link for writes (about 700MB/s including OSD->OSD replica writes if I
> recall).  Close enough that you won't feel like you are wasting the 10GbE.
> Gives them a bit of room to grow too as Ceph performance improves.
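
[Side note, purely illustrative: a back-of-the-envelope reading of that
~700MB/s figure. The replication factor and the way the number is counted
are my assumptions, not Mark's.]

LINK_MB_S = 10000 / 8.0 * 0.9   # ~1125 MB/s usable payload on 10GbE (assumed ~90% efficiency)
TOTAL_WRITE_MB_S = 700          # figure quoted above, replica writes included
REPLICATION = 3                 # assumed pool size

print("Link utilisation: %.0f%%" % (100.0 * TOTAL_WRITE_MB_S / LINK_MB_S))
# If the 700MB/s counts every copy landing on the node, the client-visible
# share is roughly one copy out of REPLICATION:
print("Client-visible share (rough): %.0f MB/s" % (TOTAL_WRITE_MB_S / REPLICATION))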
> 
> Mark
> 
> On 04/13/2015 11:34 AM, Nick Fisk wrote:
> > I went for something similar to the Quanta boxes, but with 4 nodes
> > stacked in one 4U box:
> >
> > http://www.supermicro.nl/products/system/4U/F617/SYS-F617H6-FTPT_.cfm
> >
> > When you do the maths, even something like a Banana Pi + disk starts
> > costing a similar amount, and you get so much more for your money in
> > terms of processing power, NIC bandwidth, etc.
> >
> >
> >> -----Original Message-----
> >> From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf
> >> Of Robert LeBlanc
> >> Sent: 13 April 2015 17:27
> >> To: Jerker Nyberg
> >> Cc: ceph-users@xxxxxxxxxxxxxx
> >> Subject: Re:  low power single disk nodes
> >>
> >> We are getting ready to put the Quantas into production. We looked at
> >> the Supermicro Atoms (we have 6 of them), but the rails were crap (they
> >> exploded the first time you pulled the server out, and they stick out
> >> of the back of the cabinet about 8 inches; these boxes are already very
> >> deep), we ran out of CPU on these boxes, and they had limited PCI I/O.
> >> They may work fine for really cold data. They may also work fine with
> >> XIO and Infiniband. The Atoms still had pretty decent performance given
> >> these limitations.
> >>
> >> The Quantas removed some of the issues with NUMA, have much better PCI
> >> I/O bandwidth, and come with a 10Gb NIC on board. The biggest drawback
> >> is that 8 drives are on a SAS controller and 4 drives are on a SATA
> >> controller, plus a SATADOM and a free port, so you have to manage two
> >> different controller types and speeds (6Gb SAS and 3Gb SATA).
> >>
> >> I'd say neither is perfect, but we decided on Quanta in the end.
> >>
> >> On Mon, Apr 13, 2015 at 5:17 AM, Jerker Nyberg <jerker@xxxxxxxxxxxx>
> >> wrote:
> >>>
> >>> Hello,
> >>>
> >>> Thanks for all the replies! The Banana Pi could work. The built-in
> >>> SATA power on the Banana Pi can power a 2.5" SATA disk. Cool. (Not
> >>> 3.5" SATA, since those seem to require 12 V too.)
> >>>
> >>> I found this post from Vess Bakalov about the same subject:
> >>> http://millibit.blogspot.se/2015/01/ceph-pi-adding-osd-and-more-performance.html
> >>>
> >>> For PoE I have only found the Intel Galileo Gen 2 or the RouterBOARD
> >>> RB450G, which are too slow and/or lack I/O expansion. (But good for
> >>> signage/Xibo maybe!)
> >>>
> >>> I found two boxes from Quanta and Supermicro with a single-socket Xeon
> >>> or with an Intel Atom (Avoton) that might be quite OK. I was only
> >>> aware of the dual-Xeons before.
> >>>
> >>> http://www.quantaqct.com/Product/Servers/Rackmount-Servers/STRATOS-S100-L11SL-p151c77c70c83
> >>> http://www.supermicro.nl/products/system/1U/5018/SSG-5018A-AR12L.cfm
> >>>
> >>> Kind regards,
> >>> Jerker Nyberg
> >>>
> >>>
> >>>
> >>>
> >>> On Thu, 9 Apr 2015, Quentin Hartman wrote:
> >>>
> >>>> I'm skeptical about how well this would work, but a Banana Pi might
> >>>> be a place to start. Like a Raspberry Pi, but it has a SATA
> >>>> connector: http://www.bananapi.org/
> >>>>
> >>>> On Thu, Apr 9, 2015 at 3:18 AM, Jerker Nyberg <jerker@xxxxxxxxxxxx>
> >>>> wrote:
> >>>>
> >>>>>
> >>>>> Hello ceph users,
> >>>>>
> >>>>> Is anyone running any low-powered single-disk nodes with Ceph now?
> >>>>> Calxeda seems to be no more, according to Wikipedia. I do not think
> >>>>> HP Moonshot is what I am looking for - I want stand-alone nodes, not
> >>>>> server cartridges integrated into a server chassis. And I do not
> >>>>> want to be locked to a single vendor.
> >>>>>
> >>>>> I was playing with Raspberry Pi 2 for signage when I thought of my
> >>>>> old experiments with Ceph.
> >>>>>
> >>>>> I am thinking of, for example, the Odroid-C1 or Odroid-XU3 Lite, or
> >>>>> maybe something with a low-power Intel x86/x64 processor. Together
> >>>>> with one SSD or one low-power HDD, the node could get all its power
> >>>>> via PoE (via a splitter, or integrated into the board if such boards
> >>>>> exist). PoE provides remote power-on/power-off even for
> >>>>> consumer-grade nodes.
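
[Side note, purely illustrative: a rough PoE power-budget check for such a
node. The 802.3af/at budgets are from the standards; the per-component
wattages are ballpark assumptions of mine.]

POE_AF_W = 12.95   # 802.3af budget available at the powered device
POE_AT_W = 25.50   # 802.3at (PoE+) budget at the powered device

BOARD_W = 4.0                                   # assumed: small ARM/Atom board under load
DISKS_W = {"2.5in SSD": 3.0, "2.5in HDD": 2.5}  # assumed active power draw

for disk, disk_w in sorted(DISKS_W.items()):
    total = BOARD_W + disk_w
    if total <= POE_AF_W:
        verdict = "fits within 802.3af"
    elif total <= POE_AT_W:
        verdict = "needs 802.3at (PoE+)"
    else:
        verdict = "too much for PoE"
    print("%-9s ~%.1f W total -> %s" % (disk, total, verdict))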
> >>>>>
> >>>>> The cost of a single low-power node should be able to compete with
> >>>>> a traditional PC server's price per disk. Ceph takes care of
> >>>>> redundancy.
> >>>>>
> >>>>> I think simple custom casing should be good enough - maybe just
> >>>>> strap or velcro everything on trays in the rack, at least for the
> >>>>> nodes with SSD.
> >>>>>
> >>>>> Kind regards,
> >>>>> --
> >>>>> Jerker Nyberg, Uppsala, Sweden.
> >>>>>
> >>>>




_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



