Re: Unable to start libvirt VM when using cache tiering.

Hi,

Here is my OSD dump:

#######################
osc-mgmt-1:~$ sudo ceph osd dump | grep pool
pool 0 'rbd' replicated size 3 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 128 pgp_num 128 last_change 43 lfor 43 flags hashpspool tiers 1 read_tier 1 write_tier 1 stripe_width 0
pool 1 'ssd' replicated size 3 min_size 2 crush_ruleset 1 object_hash rjenkins pg_num 128 pgp_num 128 last_change 44 flags hashpspool,incomplete_clones tier_of 0 cache_mode writeback hit_set bloom{false_positive_probability: 0.05, target_size: 0, seed: 0} 0s x0 stripe_width 0
 #######################
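
For reference, the dump shows the cache pool with "hit_set ... 0s x0" and no target_max_bytes/target_max_objects set, which matches the point about the relevant options in the quoted message below. A rough sketch of the kind of values usually set on a writeback cache tier (the sizes and intervals here are made-up examples for the 'ssd' pool, not recommendations):

#######################
# illustrative values only - tune to the actual capacity of the SSD pool
ceph osd pool set ssd hit_set_type bloom
ceph osd pool set ssd hit_set_count 1
ceph osd pool set ssd hit_set_period 3600            # seconds covered by each hit set
ceph osd pool set ssd target_max_bytes 100000000000  # ~100 GB cache limit (example)
ceph osd pool set ssd target_max_objects 1000000
ceph osd pool set ssd cache_target_dirty_ratio 0.4
ceph osd pool set ssd cache_target_full_ratio 0.8
#######################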

I have also attached my crush map (plain-text version, appended at the end of this message) in case it provides any further detail.

Thanks

Pieter

On Aug 05, 2015, at 02:02 PM, Burkhard Linke <Burkhard.Linke@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx> wrote:

Hi,

On 08/05/2015 02:54 PM, Pieter Koorts wrote:
Hi Burkhard,

I seem to have missed that part, but even after allowing access (rwx) to the cache pool I still have a similar (though not identical) problem. The VM process starts, but it behaves like a dead or stuck process that tries forever to start, with high CPU usage on the qemu-system-x86 process. It never times out, and when I kill the process I get the following error:

internal error: early end of file from monitor: possible problem:
libust[6583/6583]: Warning: HOME environment variable not set. Disabling LTTng-UST per-user tracing. (in setup_local_apps() at lttng-ust-comm.c:305)
libust[6583/6584]: Error: Error opening shm /lttng-ust-wait-5 (in get_wait_shm() at lttng-ust-comm.c:886)
libust[6583/6584]: Error: Error opening shm /lttng-ust-wait-5 (in get_wait_shm() at lttng-ust-comm.c:886)
libust[6583/6584]: Error: Error opening shm /lttng-ust-wait-5 (in get_wait_shm() at lttng-ust-comm.c:886)
libust[6583/6584]: Error: Error opening shm /lttng-ust-wait-5 (in get_wait_shm() at lttng-ust-comm.c:886)
libust[6583/6584]: Error: Error opening shm /lttng-ust-wait-5 (in get_wait_shm() at lttng-ust-comm.c:886)
libust[6583/6584]: Error: Error opening shm /lttng-ust-wait-5 (in get_wait_shm() at lttng-ust-comm.c:886)
libust[6583/6584]: Error: Error opening shm /lttng-ust-wait-5 (in get_wait_shm() at lttng-ust-comm.c:886)

I understand there is a similar report on Launchpad, and some replies suggest that Hammer disables the feature that causes the "lttng-ust-wait-5" error, but I still seem to get it.
At least the libvirt user is able to access both pools now.
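
(For reference, that kind of access is typically granted with ceph auth caps. Assuming the client libvirt uses is called client.libvirt, which is a guess since the actual name is not shown in this thread, it would look roughly like this:)

#######################
# hypothetical client name - substitute the key actually referenced by libvirt's secret
ceph auth caps client.libvirt mon 'allow r' osd 'allow rwx pool=rbd, allow rwx pool=ssd'
ceph auth get client.libvirt    # read back the caps to verify
#######################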

Can you post the complete configuration for both pools (e.g. ceph osd dump | grep pool)? I remember having some trouble when configuring cache pools for the first time. You need to set all the relevant options (target size/objects etc.).

Best regards,
Burkhard
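
(As a side note, the individual cache options mentioned above can also be read back one at a time to see what is currently set; a short sketch using the 'ssd' pool name from the dump:)

#######################
ceph osd pool get ssd hit_set_count
ceph osd pool get ssd hit_set_period
ceph osd pool get ssd target_max_bytes
ceph osd pool get ssd target_max_objects
ceph osd pool get ssd cache_target_dirty_ratio
ceph osd pool get ssd cache_target_full_ratio
#######################
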
# begin crush map
tunable choose_local_tries 0
tunable choose_local_fallback_tries 0
tunable choose_total_tries 50
tunable chooseleaf_descend_once 1
tunable straw_calc_version 1

# devices
device 0 osd.0
device 1 osd.1
device 2 osd.2
device 3 osd.3
device 4 osd.4
device 5 osd.5

# types
type 0 osd
type 1 host
type 2 chassis
type 3 rack
type 4 row
type 5 pdu
type 6 pod
type 7 room
type 8 datacenter
type 9 region
type 10 root

# buckets
host osc-mgmt-1-hdd {
	id -2		# do not change unnecessarily
	# weight 0.550
	alg straw
	hash 0	# rjenkins1
	item osd.1 weight 0.450
}
host osc-mgmt-2-hdd {
	id -3		# do not change unnecessarily
	# weight 0.550
	alg straw
	hash 0	# rjenkins1
	item osd.3 weight 0.450
}
host osc-mgmt-3-hdd {
	id -4		# do not change unnecessarily
	# weight 0.550
	alg straw
	hash 0	# rjenkins1
	item osd.5 weight 0.450
}
host osc-mgmt-1-ssd {
        id -5           # do not change unnecessarily
        # weight 0.550
        alg straw
        hash 0  # rjenkins1
        item osd.0 weight 0.100
}
host osc-mgmt-2-ssd {
        id -6           # do not change unnecessarily
        # weight 0.550
        alg straw
        hash 0  # rjenkins1
        item osd.2 weight 0.100
}
host osc-mgmt-3-ssd {
        id -7           # do not change unnecessarily
        # weight 0.550
        alg straw
        hash 0  # rjenkins1
        item osd.4 weight 0.100
}
root osc-mgmt-hdd {
        id -1           # do not change unnecessarily
        # weight 1.650
        alg straw
        hash 0  # rjenkins1
        item osc-mgmt-1-hdd weight 0.550
        item osc-mgmt-2-hdd weight 0.550
        item osc-mgmt-3-hdd weight 0.550
}
root osc-mgmt-ssd {
        id -8           # do not change unnecessarily
        # weight 1.650
        alg straw
        hash 0  # rjenkins1
        item osc-mgmt-1-ssd weight 0.550
        item osc-mgmt-2-ssd weight 0.550
        item osc-mgmt-3-ssd weight 0.550
}
# rules
rule osc-mgmt-hdd {
	ruleset 0
	type replicated
	min_size 1
	max_size 10
	step take osc-mgmt-hdd
	step chooseleaf firstn 0 type host
	step emit
}
rule osc-mgmt-ssd {
        ruleset 1
        type replicated
        min_size 1
        max_size 10
        step take osc-mgmt-ssd
        step chooseleaf firstn 0 type host
        step emit
} 
# end crush map
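
(For completeness: the two rulesets above are the ones the pools in the osd dump reference, ruleset 0 for 'rbd' and ruleset 1 for 'ssd'. On a Hammer-era release that mapping would typically be applied with something like the following, shown only to make the rule-to-pool relationship explicit.)

#######################
ceph osd pool set rbd crush_ruleset 0   # HDD rule for the base pool
ceph osd pool set ssd crush_ruleset 1   # SSD rule for the cache pool
#######################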
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
