Re: some ceph general questions about osd and pg

Hi Harald,

OSD count means the number of disks you are going to allocate to Ceph; you
can change the whole column at once by clicking "OSD #" at the top of the
table.
There is also a set of predefined recommendations for various use cases
under "Ceph Use Case Selector:" on the same page.
I'm not sure whether it is really a best practice, but you may want to look
at the "All-in-one" use case, since you are going to use OpenStack, RADOS,
and LVM.
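
In case it helps with the math: the pgcalc is built around the usual rule
of thumb of roughly 100 PGs per OSD divided by the replica count, rounded
to a power of two. Below is a minimal sketch of that calculation, assuming
replica size 3, a single pool, and your 3 x 48 = 144 disks; the real pgcalc
also weights each pool by its expected %data and rounds a bit differently,
so treat the number as a ballpark, not an authoritative answer:

    # Rough sketch of the PG rule of thumb:
    # total_pgs ~= (num_osds * target_pgs_per_osd) / replica_count,
    # rounded up to the next power of two.
    import math

    def suggested_pg_count(num_osds, replica_count=3, target_pgs_per_osd=100):
        raw = num_osds * target_pgs_per_osd / replica_count
        return 2 ** math.ceil(math.log2(raw))

    # 3 nodes * 48 disks (OSDs) each = 144 OSDs in total
    print(suggested_pg_count(144))  # 4800 raw -> 8192 when rounded up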

Regards,
Khodayar

On Sat, Apr 18, 2020 at 9:19 PM <harald.freidhof@xxxxxxxxx> wrote:

> Hello together,
>
> we want to implement a 3-node Ceph cluster with Nautilus. I have already
> tested some Ceph installations in our test environment and I have some
> general questions.
> At the end of this month we will have three physical servers, each with
> 256 GB RAM, two CPUs, and nearly 48 x 6 TB disks.
> I am a little bit confused about how to calculate the PGs with the pgcalc
> on the Ceph site.
> In the "OSDs" field, what exactly does that mean? The 3 physical OSD nodes
> or the disks that will be used for the OSDs?
> What can you recommend to us? We want to later connect Ceph with RADOS,
> OpenStack, and LVM.
>
> Thanks in advance
> hfreidhof
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx