You need to replace # with ; in your ceph.conf comments.
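For example, the inline comments and the commented-out [osd.1] section from your config would become (a sketch of just those lines; the rest of the file stays as it is):

    osd mkfs options xfs = -f              ; default for xfs is "-f"
    osd mount options xfs = rw,noatime     ; default mount option is "rw,noatime"

    ;[osd.1]
    ;host = umm-osd1
    ;devs = /dev/sdb1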
On 2 Jan 2014 at 19:58, "xyx" <dba_xyx@xxxxxxxxxxx> wrote:
Hello, my Ceph teachers:

I have just finished my Ceph configuration, as follows:

    [global]
    auth cluster required = none
    auth service required = none
    auth client required = none

    [osd]
    osd journal size = 1000
    osd data = /home/osd$id
    osd journal = /home/osd$id/journal
    filestore xattr use omap = true
    osd mkfs type = xfs
    osd mkfs options xfs = -f # default for xfs is "-f"
    osd mount options xfs = rw,noatime # default mount option is "rw,noatime"

    [mon.a]
    host = umm-manager
    mon addr = 192.168.1.130:6789
    mon data = /data/mon$id

    [osd.0]
    host = umm-osd0
    devs = /dev/sdb1

    #[osd.1]
    # host = umm-osd1
    # devs = /dev/sdb1

    [mds.a]
    host = umm-manage

When I started Ceph and checked the cluster status, it reported the following errors:

    2014-01-03 05:45:07.378163 7f6658278700  0 -- :/1001908 >> 192.168.1.130:6789/0 pipe(0x7f66480008c0 sd=3 :0 s=1 pgs=0 cs=0 l=1 c=0x7f664800a090).fault
    2014-01-03 05:45:10.378105 7f6658177700  0 -- :/1001908 >> 192.168.1.130:6789/0 pipe(0x7f6648002720 sd=3 :0 s=1 pgs=0 cs=0 l=1 c=0x7f66480040e0).fault
    2014-01-03 05:45:13.379405 7f6658278700  0 -- :/1001908 >> 192.168.1.130:6789/0 pipe(0x7f66480008c0 sd=3 :0 s=1 pgs=0 cs=0 l=1 c=0x7f664800f750).fault
    [... the same fault message repeats every three seconds ...]
    2014-01-03 05:46:13.398699 7f6658278700  0 -- :/1001908 >> 192.168.1.130:6789/0 pipe(0x7f6648005490 sd=3 :0 s=1 pgs=0 cs=0 l=1 c=0x7f66480025a0).fault
    ^C
    Error connecting to cluster: Error

I searched Google for a long time but did not find a solution, so I can only come to ask you, hoping to get your advice. Thank you!
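FWIW, the repeating "pipe(...).fault" lines just mean the client never managed to connect to the monitor at 192.168.1.130:6789, so the first thing to confirm is that ceph-mon is actually running and listening there. A quick check (assuming you have shell access on the monitor host umm-manager):

    # is the monitor daemon running?
    ps aux | grep '[c]eph-mon'

    # is anything listening on the monitor port?
    ss -tln | grep 6789    # or: netstat -tln | grep 6789

If neither shows anything, the monitor never started; fixing the config comments and restarting the cluster should clear the faults.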
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com