Re: 5 host setup with NVMe's and HDDs

Hi Tino,

Proxmox has a good wiki for this:
https://pve.proxmox.com/wiki/Deploy_Hyper-Converged_Ceph_Cluster
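
For context, that page boils down to a handful of commands (option names
from memory, so double-check them against the wiki; the CIDRs below are
placeholders for your public and cluster networks):

# on every node
pveceph install

# once, on the first node, to set the networks
pveceph init --network 192.168.10.0/24 --cluster-network 192.168.20.0/24

# on each node that should run a monitor/manager (3 is typical, all 5 is fine)
pveceph mon create
pveceph mgr create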

You will run their built-in deployment process, which is easy and
painless.  I recommend starting with 3x replication and setting up your
NVMe root with either the UI or scripts.  Here is an example command we
used to create OSDs on all of the 20TB drives on a given Proxmox VE host:

# 20TB drives show up as 18.2T in lsblk, hence the grep pattern
lsblk | grep 18.2 | grep disk | awk '{print $1}' | xargs -I {} \
    pveceph osd create /dev/{} --encrypted 1 --crush-device-class hdd \
    --db_dev /dev/nvme0n1 --db_dev_size 300
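
Once the OSDs are up with their device classes set (nvme vs hdd), you can
split your fast and slow pools with device-class CRUSH rules. A rough
sketch (pool names and PG counts are placeholders; the Proxmox UI can do
the same thing):

ceph osd crush rule create-replicated replicated-nvme default host nvme
ceph osd crush rule create-replicated replicated-hdd default host hdd

ceph osd pool create vm-fast 128 128 replicated replicated-nvme
ceph osd pool create vm-slow 128 128 replicated replicated-hdd
ceph osd pool set vm-fast size 3
ceph osd pool set vm-slow size 3
ceph osd pool application enable vm-fast rbd
ceph osd pool application enable vm-slow rbd

With one 1TB NVMe OSD per host and 3x replication, the fast pool only has
on the order of 1.5TB usable across 5 hosts, so watch capacity there.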

--
Alex Gorbachev
ISS/Storcium
www.iss-integration.com



On Wed, Mar 29, 2023 at 9:47 AM Tino Todino <tinot@xxxxxxxxxxxxxxxxx> wrote:

> Hi folks.
>
> Just looking for some up-to-date advice, please, from the collective on how
> best to set up Ceph on 5 Proxmox hosts, each configured with the following:
>
> AMD Ryzen 7 5800X CPU
> 64GB RAM
> 2x SSD (as ZFS boot disk for Proxmox)
> 1x 500GB NVMe for DB/WAL
> 1x 1TB NVMe as an OSD
> 1x 16TB SATA HDD as an OSD
> 2x 10Gb NIC (one for the public and one for the cluster network)
> 1x 1Gb NIC for the management interface
>
> The Ceph solution will be used primarily for storage of another Proxmox
> cluster's virtual machines and their data. We'd like a fast pool using the
> NVMes for critical VMs, and a slower HDD-based pool for VMs that don't
> require such fast disk access and perhaps need more storage capacity.
>
> To expand in the future we will probably add more hosts in the same sort
> of configuration and/or replace NVMe/HDD OSDs with more capacious ones.
>
> Ideas for configuration welcome please.
>
> Many thanks
>
> Tino
> Coastsense Ltd
>
>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



