Re: State of Gluster project

Hey Erik,

I actually meant that there is no point in using RAID controllers with fast storage like SAS SSDs or NVMe drives.
Those controllers usually have only 1-2 GB of RAM to buffer writes while their RISC processor analyzes and reorders the requests. That is why JBOD (in 'replica 3') makes much more sense for any kind of software-defined storage (whether Gluster, Ceph, or Lustre).

Of course, I could be wrong and I would be glad to read benchmark results on this topic.
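For anyone who wants to gather such numbers themselves, a minimal fio job file along these lines (a sketch; device paths and sizes are placeholders you must adapt to your own hardware) could be run once against a JBOD-attached drive and once against the same class of drive behind a RAID controller, then the reported IOPS and latency percentiles compared:

```ini
; compare-jbod-vs-raid.fio
; Run per device, e.g.:
;   fio --filename=/dev/nvme0n1 compare-jbod-vs-raid.fio   (JBOD path)
;   fio --filename=/dev/sdb     compare-jbod-vs-raid.fio   (behind controller)
[global]
direct=1            ; bypass the page cache so the device/controller is measured
ioengine=libaio
runtime=60
time_based=1
group_reporting=1

[randwrite-4k]
rw=randwrite        ; random 4k writes stress the controller's write buffer
bs=4k
iodepth=32
numjobs=4
```

Note this is destructive to data on the target device, so only point it at drives you can wipe.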

Best Regards,
Strahil Nikolov




On 22 June 2020 at 18:48:43 GMT+03:00, Erik Jacobson <erik.jacobson@xxxxxxx> wrote:
>> For NVMe/SSD  - raid controller is pointless ,  so JBOD makes  most
>sense.
>
>I am game for an education lesson here. We're still using spinning
>drives with big RAID caches, but we keep discussing SSDs in the context
>of RAID. I have read that for many real-world workloads, RAID0 makes no
>sense with modern SSDs. I get that part. But if your concern is
>reliability and reducing the need to mess with Gluster to recover from
>a drive failure, a RAID1 or RAID10 (or some other level with
>redundancy) would seem to at least make sense from that perspective.
>
>Was your answer a performance answer? Or am I missing something about
>RAIDs for redundancy and SSDs being a bad choice?
>
>Thanks again as always,
>
>Erik
________



Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://bluejeans.com/441850968

Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
https://lists.gluster.org/mailman/listinfo/gluster-users



