Re: 2TB useable - small business - help appreciated

Hello,

On Mon, 1 Aug 2016 15:03:14 +1000 Richard Thornton wrote:

> Thanks Wido, David and Christian, much appreciated!
> 
> Regarding using SSD for OSD, I don’t want to spend any more so I will
> use the 2TB spinning disks, performance is not a huge issue.  Ceph is
> overkill but I have the hardware lying around.
> 
Your call really.

> It’s a small business, just a few users, no current file server, just
> google drive and apple time machine, I wouldn’t say I have any
> business critical storage requirements.
> 
Well, that sounds different from vSphere storage, but again, your call.

> It’s going to be slow, what does that mean, it will crawl, saving a
> 10MB file will take an hour, perhaps the combination of Atom CPU and
> spinning disks are a good marriage, 100MB/sec read/write for a large
> file would be amazing, half that would be fine, I would only expect 1
> user to be reading/writing something big at any one time.
> 
Sequential writes (and, with proper read-ahead, reads as well) won't be much
of an issue; with SSD journals I'd expect you'll be able to get to 100MB/s.

It's anything that requires IOPS and competing/concurrent access that will
be painful.
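
Back-of-the-envelope, with numbers that are purely my assumptions (4 HDD
OSDs, ~120 random IOPS per 7.2K drive, size=2 pool), a quick Python sketch
of why small/concurrent I/O is where it hurts:

  hdd_osds     = 4      # assumption: one 2TB HDD OSD each on nodes 2-5
  iops_per_hdd = 120    # assumption: typical 7.2K RPM drive, small random I/O
  replication  = 2      # assumption: size=2 pool

  # Reads can be served by any replica, writes have to land on every replica.
  read_iops  = hdd_osds * iops_per_hdd
  write_iops = hdd_osds * iops_per_hdd // replication

  print("aggregate random read IOPS : ~%d" % read_iops)    # ~480
  print("aggregate random write IOPS: ~%d" % write_iops)   # ~240

A few hundred IOPS for the whole cluster (before Ceph overhead, journal
coalescing aside) versus tens of thousands for a single local SSD.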

> Can’t I use a single DC S3700 for journal and cache tier?
> 
If you run the numbers and are happy with the results, yes.

Firstly, there's endurance: with three write streams landing on it
(cache-tier data, cache-tier journal, HDD OSD journal), your S3700 is
effectively no longer a 10 DWPD device but roughly a 3 DWPD one.

The same goes for speed, more or less.
In the worst case you're down from 360MB/s to 120MB/s writes.
That's still plenty fast considering you're only talking to one HDD, which
is unlikely to keep up anyway, but it's something to keep in mind.
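
Spelling that arithmetic out (a sketch; the 3x factor is just the three
write streams named above, the 360MB/s is the figure quoted above):

  ssd_seq_write_mb_s = 360.0   # sequential write figure used above
  ssd_rated_dwpd     = 10.0

  # Every client byte hits this single SSD three times:
  # cache-tier data, cache-tier journal, and the HDD OSD's journal.
  write_streams = 3

  print("effective endurance: ~%.1f DWPD" % (ssd_rated_dwpd / write_streams))     # ~3.3
  print("worst-case writes  : ~%d MB/s"   % (ssd_seq_write_mb_s / write_streams)) # ~120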


> I’m about to decommission a Supermicro c2750 and so I could add that
> to the mix, could somebody please help me out with assigning the
> hardware and functions, I have a bunch of RAM sticks and could buy a
> few more:
> 
> Node 1 - c2750(8c),4NIC,8GB,24GB,200GB     - MDS, NFS
Fastest (?) machine, thus make it the primary MON (lowest IP).
Primary MDS as well, though I still don't see the use case (shared data
between your various platforms needs to be on NFS, not CephFS).
Note that the MDS has no storage needs of its own, so small SSDs for the OS
should be fine.

Also, this node could hold your last 200GB DC S3700 for cache-tiering.
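
If you do go the cache-tier route, sizing the pool's target_max_bytes is
straightforward arithmetic; a sketch where the journal carve-out, number of
cache OSDs, replication size and headroom are all my assumptions:

  GB = 10**9

  dedicated_ssd_gb = 200         # node 1's S3700, cache tier only
  shared_ssd_gb    = 200 - 10    # assumption: nodes 2-5 each lose a 10GB journal partition
  shared_nodes     = 4
  cache_size       = 2           # assumption: replication size=2 for the cache pool
  headroom         = 0.8         # assumption: never run the tier completely full

  raw_gb    = dedicated_ssd_gb + shared_nodes * shared_ssd_gb   # 960
  usable_gb = raw_gb / cache_size * headroom                    # 384

  print("target_max_bytes ~= %d  (~%d GB of usable cache)" % (usable_gb * GB, usable_gb))

That's the value you'd feed to "ceph osd pool set <cachepool> target_max_bytes";
err on the low side, since flushing dirty objects to a single HDD per node is
slow.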

> Node 2 - c2550(4c),4NIC,8GB,24GB,200GB,2TB - OSD, Standby MDS, NFS
> Node 3 - c2550(4c),4NIC,4GB,24GB,200GB,2TB - OSD, MON
> Node 4 - c2550(4c),4NIC,4GB,24GB,200GB,2TB - OSD, MON
> Node 5 - c2550(4c),4NIC,4GB,24GB,200GB,2TB - OSD, MON
> 
> I want to keep my power usage low and happy to do that at the expense
> of performance.
> 
Lowest power usage would obviously be 2 nodes and DRBD.
With the DRBD bitmap (AL) on SSD you'd likely be faster than your 4-node
Ceph cluster as well, with one exception: long sequential and concurrent
(multiple consumers) reads, where more than one OSD/HDD/network card would
be serving the data.

> With all the info provided is DRBD Pacemaker HA Cluster or even
> GlusterFS a better option?
> 
There's no native GlusterFS support for VMware either, last time I checked;
it again only interfaces via an additional NFS head, so no advantage there.

Christian
> Thanks again.
> 
> Richard
> _______________________________________________
> ceph-users mailing list
> ceph-users@xxxxxxxxxxxxxx
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


-- 
Christian Balzer        Network/Systems Engineer                
chibi@xxxxxxx   	Global OnLine Japan/Rakuten Communications
http://www.gol.com/
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



