Problem when building & running cuttlefish from source on Ubuntu 14.04 Server

Hello everyone:

Since there's no cuttlefish package for 14.04 server in the ceph
repository (only ceph-deploy is there), I tried to build cuttlefish
from source on 14.04.

Here's what I did:
Get source by following http://ceph.com/docs/master/install/clone-source/
Enter the source code directory
git checkout cuttlefish
git submodule update
rm -rf src/civetweb/ src/erasure-code/ src/rocksdb/
to get a clean checkout of the latest cuttlefish branch.
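
To double-check that the checkout really landed on cuttlefish (a
quick sanity check, nothing specific to my setup):

git branch      # should show cuttlefish checked out
git describe    # should report something like a v0.61.x tag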

Build the source by following http://ceph.com/docs/master/install/build-ceph/.
That url mentions these packages for Ubuntu:

sudo apt-get install autotools-dev autoconf automake cdbs gcc g++ git
libboost-dev libedit-dev libssl-dev libtool libfcgi libfcgi-dev
libfuse-dev linux-kernel-headers libcrypto++-dev libcrypto++
libexpat1-dev pkg-config
sudo apt-get install uuid-dev libkeyutils-dev libgoogle-perftools-dev
libatomic-ops-dev libaio-dev libgdata-common libgdata13 libsnappy-dev
libleveldb-dev

Besides those, I also found the build needs:

sudo apt-get install libboost-filesystem-dev libboost-thread-dev
libboost-program-options-dev

(And xfsprogs if you need xfs.)

After all the packages are installed, I start to compile according to the doc:

./autogen.sh
./configure
make -j8

And install by following
http://ceph.com/docs/master/install/install-storage-cluster/#installing-a-build

sudo make install

Everything seems fine, but I found ceph_common.sh had been put into
/usr/local/lib/ceph, and some tools were put into
/usr/local/usr/local/sbin/ (note the doubled prefix: ceph-disk* and
ceph-create-keys). I'm used to using ceph-disk to prepare disks on
other deployments (machines running Emperor), but I can't here (maybe
the path is the reason), so I chose to do everything manually.
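
If the tools under the doubled prefix are intact and only the install
path is wrong, symlinking them back onto the normal PATH might have
been enough. This is just a guess I didn't pursue:

sudo ln -s /usr/local/usr/local/sbin/ceph-disk* /usr/local/sbin/
sudo ln -s /usr/local/usr/local/sbin/ceph-create-keys /usr/local/sbin/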

I follow the doc
http://ceph.com/docs/master/install/manual-deployment/ to deploy the
cluster, as I've done many times before, but it turns out different this time.
/etc/ceph isn't there, so I sudo mkdir /etc/ceph
Put a ceph.conf into /etc/ceph
Generate all required keys in /etc/ceph instead of /tmp/ to keep them:

ceph-authtool --create-keyring /etc/ceph/ceph.mon.keyring --gen-key -n mon. --cap mon 'allow *'
ceph-authtool --create-keyring /etc/ceph/ceph.client.admin.keyring --gen-key -n client.admin --set-uid=0 --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow'
ceph-authtool /etc/ceph/ceph.mon.keyring --import-keyring /etc/ceph/ceph.client.admin.keyring
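
To verify both keys ended up in the mon keyring, listing it should
show mon. and client.admin with their caps:

ceph-authtool -l /etc/ceph/ceph.mon.keyring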

Generate monmap with monmaptool

monmaptool --create --add storage01 192.168.11.1 --fsid 9f8fffe3-040d-4641-b35a-ffa90241f723 /etc/ceph/monmap
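
A quick check that the map came out right:

monmaptool --print /etc/ceph/monmap    # shows the fsid and the storage01 mon entry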

/var/lib/ceph is not there either

sudo mkdir -p /var/lib/ceph/mon/ceph-storage01
sudo ceph-mon --mkfs -i storage01 --monmap /etc/ceph/monmap --keyring /etc/ceph/ceph.mon.keyring

The log directory isn't there either, so I create it manually:

sudo mkdir /var/log/ceph

Since the service script doesn't work, I start the mon daemon manually:

sudo /usr/local/bin/ceph-mon -i storage01
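
To confirm the mon actually came up, its admin socket can be queried
(the socket path below is the default; it may differ if overridden in
ceph.conf):

ceph --admin-daemon /var/run/ceph/ceph-mon.storage01.asok mon_status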

and ceph -s looks like this:
storage@storage01:~/ceph$ ceph -s
   health HEALTH_ERR 192 pgs stuck inactive; 192 pgs stuck unclean; no osds
   monmap e1: 1 mons at {storage01=192.168.11.1:6789/0}, election
epoch 2, quorum 0 storage01
   osdmap e1: 0 osds: 0 up, 0 in
    pgmap v2: 192 pgs: 192 creating; 0 bytes data, 0 KB used, 0 KB / 0 KB avail
   mdsmap e1: 0/0/1 up

And I add each disk as an osd with the following manual commands:
sudo mkfs -t xfs -f /dev/sdb
sudo mkdir /var/lib/ceph/osd/ceph-1
sudo mount /dev/sdb /var/lib/ceph/osd/ceph-1/
sudo ceph-osd -i 1 --mkfs --mkkey
ceph osd create
ceph osd crush add osd.1 1.0 host=storage01
sudo ceph-osd -i 1
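
In loop form, what I did amounts to roughly this (just a sketch: it
assumes the ten disks are /dev/sdb through /dev/sdk, and it runs
ceph osd create first so the allocated id can name the mount directory):

for dev in sdb sdc sdd sde sdf sdg sdh sdi sdj sdk; do
    id=$(ceph osd create)            # allocates the next free osd id
    sudo mkfs -t xfs -f /dev/$dev
    sudo mkdir -p /var/lib/ceph/osd/ceph-$id
    sudo mount /dev/$dev /var/lib/ceph/osd/ceph-$id
    sudo ceph-osd -i $id --mkfs --mkkey
    ceph osd crush add osd.$id 1.0 host=storage01
    sudo ceph-osd -i $id
done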

After running through all ten disks (osd.0 through osd.9), I got:
storage@storage01:~/ceph$ ceph osd tree

# id    weight  type name       up/down reweight
-2      10      host storage01
0       1               osd.0   up      1
1       1               osd.1   up      1
2       1               osd.2   up      1
3       1               osd.3   up      1
4       1               osd.4   up      1
5       1               osd.5   up      1
6       1               osd.6   up      1
7       1               osd.7   up      1
8       1               osd.8   up      1
9       1               osd.9   up      1
-1      0       root default

and

storage@storage01:~/ceph$ ceph -s
   health HEALTH_WARN 192 pgs stuck inactive; 192 pgs stuck unclean
   monmap e1: 1 mons at {storage01=192.168.11.1:6789/0}, election
epoch 2, quorum 0 storage01
   osdmap e32: 10 osds: 10 up, 10 in
    pgmap v56: 192 pgs: 192 creating; 0 bytes data, 10565 MB used,
37231 GB / 37242 GB avail
   mdsmap e1: 0/0/1 up

I use the same method to install on storage02, copy
/etc/ceph/ceph.conf from storage01, and bring up osd.10 through
osd.19 the same way (with host=storage02).

And I got
storage@storage02:~/ceph$ ceph osd tree

# id    weight  type name       up/down reweight
-3      10      host storage02
10      1               osd.10  up      1
11      1               osd.11  up      1
12      1               osd.12  up      1
13      1               osd.13  up      1
14      1               osd.14  up      1
15      1               osd.15  up      1
16      1               osd.16  up      1
17      1               osd.17  up      1
18      1               osd.18  up      1
19      1               osd.19  up      1
-2      10      host storage01
0       1               osd.0   up      1
1       1               osd.1   up      1
2       1               osd.2   up      1
3       1               osd.3   up      1
4       1               osd.4   up      1
5       1               osd.5   up      1
6       1               osd.6   up      1
7       1               osd.7   up      1
8       1               osd.8   up      1
9       1               osd.9   up      1
-1      0       root default

storage@storage02:~/ceph$ ceph -s
   health HEALTH_WARN 192 pgs stuck inactive; 192 pgs stuck unclean
   monmap e1: 1 mons at {storage01=192.168.11.1:6789/0}, election
epoch 2, quorum 0 storage01
   osdmap e63: 20 osds: 20 up, 20 in
    pgmap v138: 192 pgs: 192 creating; 0 bytes data, 21140 MB used,
74463 GB / 74484 GB avail
   mdsmap e1: 0/0/1 up

Nothing is making progress; the pgs just stay in the creating state. How can I debug this?
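
One thing I notice in the ceph osd tree output above: root default
has weight 0 and both host buckets sit outside it, so the default
CRUSH rule (which starts from root default) may simply find no osds
to place pgs on. If that's the cause, moving the hosts under the
default root might unstick the pgs. This is an untested guess on my part:

ceph osd crush move storage01 root=default
ceph osd crush move storage02 root=default

# then watch whether the pgs leave the creating state
ceph health detail
ceph pg dump | grep creating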

