Re: RFP for arm64 test nodes

Hi Dan,

It's great to hear that Ceph upstream is planning to move a step further toward Arm64.
At Linaro, we have done a lot of work porting and testing open source projects on Arm64, so if there is any help needed, please let me know.

The suggested features look reasonable; almost all servers currently on the market should meet them. The main considerations would be CPU performance and node density.
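On the OSD drive point in the proposal below, splitting one NVMe device into 4-5 test partitions is easy to script. A minimal dry-run sketch (the device name /dev/nvme0n1, the count of 4, and the use of parted with percentage offsets are all my assumptions; it only prints the commands rather than touching a disk):

```shell
#!/bin/sh
# Print the parted commands that would split one NVMe drive into equal
# partitions for OSD tests. Dry run only: nothing is executed against
# real hardware, and the device path is an assumption.
plan_osd_partitions() {
    dev=$1    # e.g. /dev/nvme0n1
    parts=$2  # number of equal partitions
    echo "parted -s $dev mklabel gpt"
    i=0
    while [ "$i" -lt "$parts" ]; do
        # parted accepts start/end as percentages of the whole disk,
        # so each partition spans an equal slice.
        start=$((i * 100 / parts))
        end=$(((i + 1) * 100 / parts))
        echo "parted -s $dev mkpart osd$((i + 1)) ${start}% ${end}%"
        i=$((i + 1))
    done
}

plan_osd_partitions /dev/nvme0n1 4
```

Printing the commands first keeps the sketch safe to review before anything is run against real hardware.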

On Tue, 12 Oct 2021 at 08:56, Dan Mick <dmick@xxxxxxxxxx> wrote:
We have some experience testing Ceph on x86 VMs; we used to do that a
lot, but have moved to mostly physical hosts. I could be wrong, but I
think our experience is that the cross-loading from one swamped VM to
another on the same physical host can skew the load/failure-recovery
testing enough that separate physical hosts are attractive for our
normal test strategy/load.

On 10/11/2021 12:00 AM, Martin Verges wrote:
> Hello Dan,
>
> why not use somewhat bigger machines and run the tests in VMs? We have
> quite good experience with that and it works like a charm. If you plan
> them as hypervisors, you can run a lot of tests simultaneously. Use the
> 80-core ARM, put 512GB or more in them, and use some good NVMe like the
> P55XX or so. In addition, put 2x 25GbE/40GbE in the servers and you
> need only a few of them to simulate a lot. This would save costs, make
> maintenance easier, and leave you much more flexible: for example,
> running tests on different OSes, injecting latency, simulating errors,
> and more.
>
> --
> Martin Verges
> Managing director
>
> Mobile: +49 174 9335695  | Chat: https://t.me/MartinVerges
>
> croit GmbH, Freseniusstr. 31h, 81247 Munich
> CEO: Martin Verges - VAT-ID: DE310638492
> Com. register: Amtsgericht Munich HRB 231263
> Web: https://croit.io | YouTube: https://goo.gl/PGE1Bx
>
>
> On Sat, 9 Oct 2021 at 01:25, Dan Mick <dmick@xxxxxxxxxx> wrote:
>
>     Ceph has been completely ported to build and run on ARM hardware
>     (architecture arm64/aarch64), but we're unable to test it due to
>     lack of
>     hardware.  We propose to purchase a significant number of ARM servers
>     (50+?) to install in our upstream Sepia test lab to use for upstream
>     testing of Ceph, alongside the x86 hardware we already own.
>
>     This message is to start a discussion of what the nature of that
>     hardware should be, and an investigation as to what's available and how
>     much it might cost.  The general idea is to build something arm64-based
>     that is similar to the smithi/gibba nodes:
>
>     https://wiki.sepia.ceph.com/doku.php?id=hardware:gibba
>
>     Some suggested features:
>
>     * base hardware/peripheral support for current releases of RHEL,
>     CentOS, Ubuntu
>     * 1 fast and largish (400GB+) NVMe drive for OSDs (it will be
>     partitioned into 4-5 subdrives for tests)
>     * 1 large (1TB+) SSD/HDD for boot/system and logs (faster is better but
>     not as crucial as for cluster storage)
>     * Remote/headless management (IPMI?)
>     * At least 1 10G network interface per host
>     * Order of 64GB main memory per host
>
>     Density is valuable to the lab; we have space but not an unlimited
>     amount.
>
>     Any suggestions on vendors or specific server configurations?
>
>     Thanks!
>
>     _______________________________________________
>     Dev mailing list -- dev@xxxxxxx
>     To unsubscribe send an email to dev-leave@xxxxxxx
>



--
Best Regards

Kevin Zhao
Tech Lead, LDCG Cloud Infrastructure
Linaro Vertical Technologies
IRC (freenode): kevinz
Slack (kubernetes.slack.com): kevinz
kevin.zhao@xxxxxxxxxx | Mobile/Direct/WeChat: +86 18818270915


