raid0 and ceph?

Hi, 

We use firefly 0.80.9. 

We have some Ceph nodes in our cluster whose disks are configured as single-disk RAID0 volumes on the RAID controller. The node configuration looks like this:

2xHDD - RAID1 - /dev/sda  -  OS
1xSSD - RAID0 - /dev/sdb  -  Ceph journal disk, usually one per four data disks
1xHDD - RAID0 - /dev/sdc  -  Ceph data disk
1xHDD - RAID0 - /dev/sdd  -  Ceph data disk
1xHDD - RAID0 - /dev/sde  -  Ceph data disk
1xHDD - RAID0 - /dev/sdf  -  Ceph data disk
...
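
For reference, OSDs on a node like this are typically prepared with ceph-disk, pointing each data disk at the shared journal SSD. The following is only an illustrative sketch (device paths as above; not our exact commands or options):

    # ceph-disk carves a new journal partition out of the SSD (/dev/sdb)
    # for every data disk that is prepared against it.
    ceph-disk prepare --fs-type xfs /dev/sdc /dev/sdb
    ceph-disk prepare --fs-type xfs /dev/sdd /dev/sdb
    ceph-disk prepare --fs-type xfs /dev/sde /dev/sdb
    ceph-disk prepare --fs-type xfs /dev/sdf /dev/sdb

    # udev normally activates the OSDs after prepare; if not:
    ceph-disk activate /dev/sdc1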

We have the write cache enabled on the RAID0 volumes. Everything is fine while it works, but we had one strange incident with the cluster. It looks like the SSD journal disk failed, but Linux did not remove it from the system. All OSDs using this SSD for journaling started to flap (repeatedly going up and down), and cluster performance dropped terribly. Once we managed to replace the SSD, everything went back to normal.
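
For anyone who hits the same thing: the flapping is visible as OSDs repeatedly getting marked down and back up in "ceph -w" / "ceph osd tree". The standard journal replacement procedure looks roughly like the sketch below (OSD ids are illustrative, not a verbatim record of what we ran, and it assumes the dying journal is still readable enough to flush):

    # avoid rebalancing while the journal SSD is swapped
    ceph osd set noout

    # stop the affected OSDs and flush their journals to the data disks
    for id in 12 13 14 15; do
        service ceph stop osd.$id
        ceph-osd -i $id --flush-journal
    done

    # ... physically replace /dev/sdb and recreate the journal partitions ...

    # write fresh journals on the new SSD and bring the OSDs back
    for id in 12 13 14 15; do
        ceph-osd -i $id --mkjournal
        service ceph start osd.$id
    done

    ceph osd unset noout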

Could this be related to the RAID0 setup, or did we hit some other bug? We haven't found anything similar on Google. Any thoughts would be much appreciated. Thanks in advance.





--
Marius Vaitiekūnas
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
