Re: CephFS in the wild

I'm hoping to implement CephFS in production at some point this year, so I'd be interested to hear how your progress goes.

Have you considered SSDs for your metadata pool? You wouldn't need loads of capacity, although even with reliable SSDs I'd probably still do 3x replication for metadata. I've been looking at the Intel S3610s for this.
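For what it's worth, here's a rough sketch of what I mean, written as a small Python wrapper around the ceph CLI. The pool names, PG counts, and the SSD ruleset id are just placeholders, and on Jewel the pool option is crush_ruleset rather than crush_rule:

#!/usr/bin/env python
# Rough sketch only: a small 3x-replicated metadata pool plus a data pool,
# tied together as a CephFS filesystem. Names, PG counts, and the ruleset
# id are placeholders, not sizing recommendations.
import subprocess

def ceph(*args):
    # Shell out to the ceph CLI and return its output.
    return subprocess.check_output(("ceph",) + args)

# Metadata pool: small capacity, but replicated 3x as suggested above.
ceph("osd", "pool", "create", "cephfs_metadata", "256")
ceph("osd", "pool", "set", "cephfs_metadata", "size", "3")

# Bulk data pool on the spinning disks.
ceph("osd", "pool", "create", "cephfs_data", "4096")

# If the SSDs sit under their own CRUSH rule (id 1 here is made up),
# point the metadata pool at it.
ceph("osd", "pool", "set", "cephfs_metadata", "crush_ruleset", "1")

# Create the filesystem from the two pools.
ceph("fs", "new", "cephfs", "cephfs_metadata", "cephfs_data")

You'd obviously need to carve the SSDs out into their own CRUSH root/rule first; I've left that part out.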



On Wed, Jun 1, 2016 at 9:50 PM, Brady Deetz <bdeetz@xxxxxxxxx> wrote:
Question:
I'm curious whether anybody else out there is running CephFS at the scale I'm planning for. I'd like to hear about the issues you didn't expect, so I know what to look out for. I'd also like to hear about cases where CephFS hasn't worked out, and why. Basically, give me your war stories.


Problem Details:
Now that I'm out of my design phase and finished testing on VMs, I'm ready to drop $100k on a pilot. I'd like to get some sense of confidence from the community that this is going to work before I pull the trigger.

I'm planning to replace my 110-disk, 300TB (usable) Oracle ZFS 7320 with CephFS by this time next year (hopefully by December). My workload is a mix of small and very large files (100GB+ in size). We do fMRI analysis on DICOM image sets as well as other physio data collected from subjects. We also have plenty of spreadsheets, scripts, etc. Currently, 90% of our analysis is I/O bound and generally sequential.

In deploying Ceph, I'm hoping to see more throughput than the 7320 can currently provide. I'm also looking to get away from traditional filesystems that require forklift upgrades. That's where Ceph really shines for us.

I don't have a total file count, but I do know that we have about 500k directories.


Planned Architecture:

Storage Interconnect:
Brocade VDX 6940 (40 gig)

Access Switches for clients (servers):
Brocade VDX 6740 (10 gig)

Access Switches for clients (workstations):
Brocade ICX 7450

3x MON:
128GB RAM
2x 200GB SSD for OS
2x 400GB P3700 for LevelDB
2x E5-2660v4
1x Dual Port 40Gb Ethernet

2x MDS:
128GB RAM
2x 200GB SSD for OS
2x 400GB P3700 for LevelDB (is this necessary?)
2x E5-2660v4
1x Dual Port 40Gb Ethernet

8x OSD:
128GB RAM
2x 200GB SSD for OS
2x 400GB P3700 for Journals
24x 6TB Enterprise SATA
2x E5-2660v4
1x Dual Port 40Gb Ethernet

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


