USB 3.0 or eSATA for externally mounted OSDs?

Hi all,

I'm looking at expanding the storage of my cluster with some external
HDDs and was looking for advice on the connection interface.

I have 3 storage nodes, each running combined Ceph
monitor+manager+metadata+OSD roles with a 1TB hard drive (HGST
HTS541010A9E680).  The nodes themselves are built on Supermicro
A1SAi-2750F motherboards with 16GB* RAM each and a soldered-on 8-core
Intel Atom C2750 CPU, with the 4 gigabit Ethernet interfaces bonded
into pairs: one pair for the private cluster VLAN and one for the
public network.

(* actually one node has 8GB because a RAM stick died.  I have
replacement RAM, but I'll wait until I have another reason to power
the node down before putting the extra 8GB back in, as it's going
fine without it right now.)

I'm not making a lot of use of CephFS yet, but I intend to use it with
Docker and for some OpenNebula tasks.  99% of the workload right now is
RBDs for virtual machines.

These have served well, but with the addition of a few VMs, space is
getting just a fraction tight.  The cases can fit two 2.5" HDDs, and
have one 120GB SSD and the aforementioned 1TB HDD fitted.  No more space
inside.  The cases are mounted on a DIN rail.

2TB 2.5" HDDs are an option, but that seems to be where 2.5" HDDs max
out, unless I want to trust my data to Seagate.  (Many times bitten,
quite shy.)  I'm not made of money, so that rules out SSDs at this size.

The other option is to move to 3.5" HDDs, which will have to go in a
separate external case.  4TB HDDs are pretty cheap these days and so
that would see my needs out for a while.

I can buy off-the-shelf USB 3.0 HDD cases that will plug directly into
the Ceph nodes to provide additional OSDs.  There are two USB 3.0 ports
that I can use for this -- I'd then just need to add a DIN rail bracket
to the case and rig up a 12V regulator to power it -- not rocket science.
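Software-wise it should just be a regular OSD.  A minimal sketch of
what I'd run, assuming BlueStore via ceph-volume and that the external
drive enumerates as /dev/sdc (the device name is just a guess -- I'd
check dmesg first):

    # wipe any old partition table, then stand up a BlueStore OSD
    # on the external drive (assumed to be /dev/sdc)
    sgdisk --zap-all /dev/sdc
    ceph-volume lvm create --bluestore --data /dev/sdc

after which the new OSD should come up and backfill like any other.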

Alternatively, I can go eSATA.  It seems eSATA cases have gone the way
of the dinosaur in favour of USB 3.0 and Thunderbolt (which I do not have).

There's no eSATA ports on the motherboard, but I can buy for AU$20 an
adaptor bracket that has two SATA to eSATA adaptors mounted, so adding
an eSATA port to the server is no problem.  (The cable length will be
less than 1m, so no need for a dedicated eSATA HBA.)

I *think* I might be able to use the same adaptor to go from eSATA back
to the HDD itself, so DIY may be an option here.
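If I go that route, I'd at least want to confirm the link negotiated
at full speed through the two adaptors.  A quick sanity check, assuming
the drive lands on ata3 (the link number will vary):

    dmesg | grep 'SATA link up'
    # e.g. "ata3: SATA link up 6.0 Gbps (SStatus 133 SControl 300)"
    cat /sys/class/ata_link/link3/sata_spd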

My understanding is that eSATA has a lower CPU overhead than USB 3.0:
eSATA is just the native SATA/AHCI path with DMA, whereas USB storage
tunnels SCSI commands through the USB stack, which costs extra CPU per
I/O.  ceph-osd can be quite CPU intensive at times (dual-core CPUs
recommended), and the nodes also run the metadata server daemons, which
are said to be hungry for CPU (quad-core recommended).
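Part of that USB overhead also depends on whether the enclosure speaks
UAS or falls back to the older Bulk-Only Transport -- UAS supports
command queueing and is supposedly lighter per I/O.  Easy enough to
check which driver the kernel bound to the enclosure:

    lsusb -t
    # Driver=uas         -> USB Attached SCSI
    # Driver=usb-storage -> Bulk-Only Transport fallback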

I did migrate my cluster from FileStore/btrfs to BlueStore, and that
involved plugging in a temporary OSD over USB 3.0 (a WDC
WD1002FAEX-00Z3A0 in a 3.5" drive dock).  While that worked fine, it
wasn't really a real-world test.

The question is: is the CPU overhead of USB 3.0 likely to matter at
speed?  Has anyone tried it long term, and do you have any comments
about reliability/performance?
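
Failing that, I suppose I can measure the overhead myself: load up the
USB-attached OSD and watch the CPU.  A rough sketch, assuming the new
OSD gets id 3 (just a placeholder):

    # write ~1GB of test data through the OSD...
    ceph tell osd.3 bench
    # ...while watching ceph-osd CPU usage from another terminal
    pidstat -u -C ceph-osd 2

but long-term reliability is the part I can't benchmark, hence the
question.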

Regards,
-- 
Stuart Longland (aka Redhatter, VK4MSL)

I haven't lost my mind...
  ...it's backed up on a tape somewhere.
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


