Re: New Cluster (0.87), Missing Default Pools?

No!
That would have been a really bad idea. I upgraded without losing my
default pools, fortunately ;)
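
(For what it's worth, a quick 'ceph osd lspools' before and after the
upgrade is enough to confirm nothing went missing.)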

-- 
Thomas Lemarchand
Cloud Solutions SAS - Information Systems Manager



On Thu, 2014-12-18 at 10:10 -0800, Jiten Shah wrote:
> So what happens if we upgrade from Firefly to Giant? Do we lose the pools?
> 
> —Jiten
> On Dec 18, 2014, at 5:12 AM, Thomas Lemarchand <thomas.lemarchand@xxxxxxxxxxxxxxxxxx> wrote:
> 
> > I remember reading somewhere (maybe in the changelogs) that the default
> > pools are no longer created automatically.
> > 
> > You can create the pools you need yourself.
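> > 
> > Something like this should do it (untested here, and the pg counts are only
> > an example -- adjust them to your cluster):
> > 
> >   ceph osd pool create data 100 100
> >   ceph osd pool create metadata 100 100
> > 
> > If you want CephFS, I believe on Giant you then tie the two together with
> > 'ceph fs new' (e.g. 'ceph fs new cephfs metadata data') before starting
> > an MDS.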
> > 
> > -- 
> > Thomas Lemarchand
> > Cloud Solutions SAS - Information Systems Manager
> > 
> > 
> > 
> > On Thu, 2014-12-18 at 06:52 -0600, Dyweni - Ceph-Users wrote:
> >> Hi All,
> >> 
> >> 
> >> Just set up the monitor for a new cluster based on Giant (0.87), and I
> >> find that only the 'rbd' pool was created automatically. I don't see
> >> the 'data' or 'metadata' pools in 'ceph osd lspools' or the log files.
> >> I haven't set up any OSDs or MDSs yet. I'm following the manual
> >> deployment guide.
> >> 
> >> Would you mind looking over the setup details/logs below and letting me
> >> know what I've done wrong, please?
> >> 
> >> 
> >> 
> >> Here's my /etc/ceph/ceph.conf file:
> >> -----------
> >> [global]
> >>         fsid = xxxxxx
> >> 
> >>         public network = xx.xx.xx.xx/xx
> >>         cluster network = xx.xx.xx.xx/xx
> >> 
> >>         auth cluster required = cephx
> >>         auth service required = cephx
> >>         auth client required = cephx
> >> 
> >>         osd pool default size = 2
> >>         osd pool default min size = 1
> >> 
> >>         osd pool default pg num = 100
> >>         osd pool default pgp num = 100
> >> 
> >> [mon]
> >>         mon initial members = a
> >> 
> >> [mon.a]
> >>         host = xx
> >>         mon addr = xx.xx.xx.xx
> >> -----------
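> >> 
> >> (If it helps, the pool defaults can be double-checked at runtime through
> >> the monitor admin socket, e.g.:
> >>   ceph --admin-daemon /var/run/ceph/ceph-mon.a.asok config show | grep pool_default
> >> assuming the default socket path.)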
> >> 
> >> 
> >> Here are the commands used to set up the monitor:
> >> -----------
> >> ceph-authtool --create-keyring /tmp/ceph.mon.keyring --gen-key -n mon. --cap mon 'allow *'
> >> ceph-authtool --create-keyring /etc/ceph/ceph.client.admin.keyring --gen-key -n client.admin --set-uid=0 --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow'
> >> ceph-authtool /tmp/ceph.mon.keyring --import-keyring /etc/ceph/ceph.client.admin.keyring
> >> monmaptool --create --add xx xx.xx.xx.xx --fsid xxxxxx /tmp/monmap
> >> mkdir /var/lib/ceph/mon/ceph-a
> >> ceph-mon --mkfs -i a --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring
> >> /etc/init.d/ceph-mon.a start
> >> -----------
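> >> 
> >> (At this point the monitor can be queried directly, e.g. with 'ceph -s'
> >> or 'ceph osd lspools'; in my case lspools only returns 'rbd'.)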
> >> 
> >> 
> >> Here's the ceph-mon.a logfile:
> >> -----------
> >> 2014-12-18 12:35:45.768752 7fb00df94780  0 ceph version 0.87 
> >> (c51c8f9d80fa4e0168aa52685b8de40e42758578), process ceph-mon, pid 3225
> >> 2014-12-18 12:35:45.856851 7fb00df94780  0 mon.a does not exist in 
> >> monmap, will attempt to join an existing cluster
> >> 2014-12-18 12:35:45.857069 7fb00df94780  0 using public_addr 
> >> xx.xx.xx.xx:0/0 -> xx.xx.xx.xx:6789/0
> >> 2014-12-18 12:35:45.857126 7fb00df94780  0 starting mon.a rank -1 at 
> >> xx.xx.xx.xx:6789/0 mon_data /var/lib/ceph/mon/ceph-a fsid xxxxxx
> >> 2014-12-18 12:35:45.857330 7fb00df94780  1 mon.a@-1(probing) e0 preinit 
> >> fsid xxxxxx
> >> 2014-12-18 12:35:45.857402 7fb00df94780  1 mon.a@-1(probing) e0  
> >> initial_members a, filtering seed monmap
> >> 2014-12-18 12:35:45.858322 7fb00df94780  0 mon.a@-1(probing) e0  my rank 
> >> is now 0 (was -1)
> >> 2014-12-18 12:35:45.858360 7fb00df94780  1 mon.a@0(probing) e0 
> >> win_standalone_election
> >> 2014-12-18 12:35:45.859803 7fb00df94780  0 log_channel(cluster) log 
> >> [INF] : mon.a@0 won leader election with quorum 0
> >> 2014-12-18 12:35:45.863846 7fb008d4b700  1 
> >> mon.a@0(leader).paxosservice(pgmap 0..0) refresh upgraded, format 1 -> 0
> >> 2014-12-18 12:35:45.863867 7fb008d4b700  1 mon.a@0(leader).pg v0 
> >> on_upgrade discarding in-core PGMap
> >> 2014-12-18 12:35:45.865662 7fb008d4b700  1 
> >> mon.a@0(leader).paxosservice(auth 0..0) refresh upgraded, format 1 -> 0
> >> 2014-12-18 12:35:45.865719 7fb008d4b700  1 mon.a@0(probing) e1 
> >> win_standalone_election
> >> 2014-12-18 12:35:45.867394 7fb008d4b700  0 log_channel(cluster) log 
> >> [INF] : mon.a@0 won leader election with quorum 0
> >> 2014-12-18 12:35:46.003223 7fb008d4b700  0 log_channel(cluster) log 
> >> [INF] : monmap e1: 1 mons at {a=xx.xx.xx.xx:6789/0}
> >> 2014-12-18 12:35:46.040555 7fb008d4b700  1 
> >> mon.a@0(leader).paxosservice(auth 0..0) refresh upgraded, format 1 -> 0
> >> 2014-12-18 12:35:46.087081 7fb008d4b700  0 log_channel(cluster) log 
> >> [INF] : pgmap v1: 0 pgs: ; 0 bytes data, 0 kB used, 0 kB / 0 kB avail
> >> 2014-12-18 12:35:46.141415 7fb008d4b700  0 mon.a@0(leader).mds e1 
> >> print_map
> >> epoch   1
> >> flags   0
> >> created 0.000000
> >> modified        2014-12-18 12:35:46.038418
> >> tableserver     0
> >> root    0
> >> session_timeout 0
> >> session_autoclose       0
> >> max_file_size   0
> >> last_failure    0
> >> last_failure_osd_epoch  0
> >> compat  compat={},rocompat={},incompat={}
> >> max_mds 0
> >> in
> >> up      {}
> >> failed
> >> stopped
> >> data_pools
> >> metadata_pool   0
> >> inline_data     disabled
> >> 
> >> 2014-12-18 12:35:46.151117 7fb008d4b700  0 log_channel(cluster) log 
> >> [INF] : mdsmap e1: 0/0/0 up
> >> 2014-12-18 12:35:46.152873 7fb008d4b700  1 mon.a@0(leader).osd e1 e1: 0 
> >> osds: 0 up, 0 in
> >> 2014-12-18 12:35:46.154551 7fb008d4b700  0 mon.a@0(leader).osd e1 crush 
> >> map has features 1107558400, adjusting msgr requires
> >> 2014-12-18 12:35:46.154580 7fb008d4b700  0 mon.a@0(leader).osd e1 crush 
> >> map has features 1107558400, adjusting msgr requires
> >> 2014-12-18 12:35:46.154588 7fb008d4b700  0 mon.a@0(leader).osd e1 crush 
> >> map has features 1107558400, adjusting msgr requires
> >> 2014-12-18 12:35:46.154592 7fb008d4b700  0 mon.a@0(leader).osd e1 crush 
> >> map has features 1107558400, adjusting msgr requires
> >> 2014-12-18 12:35:46.157078 7fb008d4b700  0 log_channel(cluster) log 
> >> [INF] : osdmap e1: 0 osds: 0 up, 0 in
> >> 2014-12-18 12:35:46.220701 7fb008d4b700  1 
> >> mon.a@0(leader).paxosservice(auth 1..1) refresh upgraded, format 0 -> 1
> >> 2014-12-18 12:35:46.334457 7fb008d4b700  0 log_channel(cluster) log 
> >> [INF] : pgmap v2: 64 pgs: 64 creating; 0 bytes data, 0 kB used, 0 kB / 0 
> >> kB avail
> >> 2014-12-18 12:35:59.648655 7fb002ffd700  0 mon.a@0(leader) e1 
> >> handle_command mon_command({"prefix": "osd lspools"} v 0) v1
> >> 2014-12-18 12:35:59.648713 7fb002ffd700  0 log_channel(audit) log [DBG] 
> >> : from='client.? xx.xx.xx.xx:0/1003269' entity='client.admin' 
> >> cmd=[{"prefix": "osd lspools"}]: dispatch
> >> 2014-12-18 12:36:45.860251 7fb0037fe700  0 
> >> mon.a@0(leader).data_health(1) update_stats avail 86% total 36863 MB, 
> >> used 3764 MB, avail 31997 MB
> >> 2014-12-18 12:37:03.096074 7fb002ffd700  0 mon.a@0(leader) e1 
> >> handle_command mon_command({"prefix": "osd lspools"} v 0) v1
> >> 2014-12-18 12:37:03.096139 7fb002ffd700  0 log_channel(audit) log [DBG] 
> >> : from='client.? xx.xx.xx.xx:0/1003353' entity='client.admin' 
> >> cmd=[{"prefix": "osd lspools"}]: dispatch
> >> 2014-12-18 12:37:45.870381 7fb0037fe700  0 
> >> mon.a@0(leader).data_health(1) update_stats avail 86% total 36863 MB, 
> >> used 3764 MB, avail 31997 MB
> >> 2014-12-18 12:38:45.880492 7fb0037fe700  0 
> >> mon.a@0(leader).data_health(1) update_stats avail 86% total 36863 MB, 
> >> used 3764 MB, avail 31997 MB
> >> 2014-12-18 12:38:47.578345 7fb002ffd700  0 mon.a@0(leader) e1 
> >> handle_command mon_command({"prefix": "osd lspools"} v 0) v1
> >> 2014-12-18 12:38:47.578408 7fb002ffd700  0 log_channel(audit) log [DBG] 
> >> : from='client.? xx.xx.xx.xx:0/1003388' entity='client.admin' 
> >> cmd=[{"prefix": "osd lspools"}]: dispatch
> >> 2014-12-18 12:39:45.880720 7fb0037fe700  0 
> >> mon.a@0(leader).data_health(1) update_stats avail 86% total 36863 MB, 
> >> used 3764 MB, avail 31997 MB
> >> 2014-12-18 12:40:45.890291 7fb0037fe700  0 
> >> mon.a@0(leader).data_health(1) update_stats avail 86% total 36863 MB, 
> >> used 3768 MB, avail 31995 MB
> >> 2014-12-18 12:41:07.510564 7fb002ffd700  0 mon.a@0(leader) e1 
> >> handle_command mon_command({"prefix": "osd lspools"} v 0) v1
> >> 2014-12-18 12:41:07.510616 7fb002ffd700  0 log_channel(audit) log [DBG] 
> >> : from='client.? xx.xx.xx.xx:0/1008022' entity='client.admin' 
> >> cmd=[{"prefix": "osd lspools"}]: dispatch
> >> 2014-12-18 12:41:09.581628 7fb002ffd700  0 mon.a@0(leader) e1 
> >> handle_command mon_command({"prefix": "osd lspools"} v 0) v1
> >> 2014-12-18 12:41:09.581692 7fb002ffd700  0 log_channel(audit) log [DBG] 
> >> : from='client.? xx.xx.xx.xx:0/1008056' entity='client.admin' 
> >> cmd=[{"prefix": "osd lspools"}]: dispatch
> >> 2014-12-18 12:41:45.900237 7fb0037fe700  0 
> >> mon.a@0(leader).data_health(1) update_stats avail 86% total 36863 MB, 
> >> used 3810 MB, avail 31958 MB
> >> 2014-12-18 12:42:45.900437 7fb0037fe700  0 
> >> mon.a@0(leader).data_health(1) update_stats avail 86% total 36863 MB, 
> >> used 3810 MB, avail 31958 MB
> >> 2014-12-18 12:43:45.900680 7fb0037fe700  0 
> >> mon.a@0(leader).data_health(1) update_stats avail 86% total 36863 MB, 
> >> used 3810 MB, avail 31958 MB
> >> -----------
> >> 
> >> 
> >> 
> >> 
> >> -- 
> >> Thanks,
> >> Dyweni
> >> 
> >> 
> >> 
> >> 
> >> 
> > 
> > 
> 
> 



_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




