Re: CephFS design


 



Hi Peter,

Yeah, went through it all, and also set mds_cache_memory_limit.
The MDS daemons are co-located with the mgr/mon, so I created 3 of them.
There is enough CPU and it is on SSD. So yeah, I even went through all the CephFS menu points to gather the information.
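
For reference, a minimal sketch of setting that limit cluster-wide would look something like the following (the 8 GiB figure is only an illustrative placeholder, not a recommendation):

    ceph config set mds mds_cache_memory_limit 8589934592   # 8 GiB cache limit, placeholder value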

Istvan Szabo
Senior Infrastructure Engineer
---------------------------------------------------
Agoda Services Co., Ltd.
e: istvan.szabo@xxxxxxxxx
---------------------------------------------------

On Jun 11, 2021, at 22:36, Peter Sarossy <peter.sarossy@xxxxxxxxx> wrote:


hey Istvan,

The Hardware Recommendations<https://docs.ceph.com/en/latest/start/hardware-recommendations/> page actually has a ton of info on the questions you are asking; have you gone through that one yet?

Without massive overkill, I don't think there's a "bulletproof" design, as the actual I/O use cases vary wildly depending on the application.
E.g. you mentioned that the "main use case would be k8s users": are there 2,000 users that each need 500 IOPS at 100 MB/s, or 5 users that touch the storage every few minutes and store 10 MB of data? These are polar opposites, with requirements that differ by orders of magnitude.
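For a rough sense of scale, just multiplying the numbers above: 2,000 users at 500 IOPS and 100 MB/s each would be on the order of 1,000,000 aggregate IOPS and ~200 GB/s of throughput, while 5 users storing 10 MB every few minutes is effectively idle; hardware sized for the first case would be wildly oversized for the second.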

On Fri, Jun 11, 2021 at 10:56 AM Szabo, Istvan (Agoda) <Istvan.Szabo@xxxxxxxxx> wrote:
A couple of teams want to use CephFS with k8s, so the main use case would be k8s users.

Istvan Szabo
Senior Infrastructure Engineer
---------------------------------------------------
Agoda Services Co., Ltd.
e: istvan.szabo@xxxxxxxxx
---------------------------------------------------

On Jun 11, 2021, at 17:48, Ansgar Jazdzewski <a.jazdzewski@xxxxxxxxxxxxxx> wrote:

Hi,

First of all, check the workload you expect to put on the filesystem; if you plan to migrate an existing one, do some proper performance testing of the old storage first.

The IO500 benchmark can give some ideas (https://www.vi4io.org/io500/start), but it depends on the use case of the filesystem.

cheers,
Ansgar

On Fri, 11 Jun 2021 at 10:54, Szabo, Istvan (Agoda) <Istvan.Szabo@xxxxxxxxx> wrote:

Hi,

Can you suggest a good CephFS design? I've never used it; we only have RGW and RBD, but I want to give it a try. However, on this mailing list I've seen a huge number of issues with CephFS, so I would like to follow some, let's say, bulletproof best practices.

Like, should the MDS be separated from the MON and MGR?
Does it need a lot of memory?
Should it be on SSD or NVMe?
How many CPUs/disks ...

I'd very much appreciate it.

Istvan Szabo
Senior Infrastructure Engineer
---------------------------------------------------
Agoda Services Co., Ltd.
e: istvan.szabo@xxxxxxxxx
---------------------------------------------------




--
Cheers,
Peter Sarossy
Technical Program Manager
Data Center Data Security - Google LLC.
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



