Re: can cache-mode be set to readproxy for tier cache with ceph 0.94.9 ?

shinjo, thanks for your help,

#1 How small is the actual data?
23K, 24K, 165K; I didn't record all of them.
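(Those sizes are what the filesystem reported; to double-check the underlying RADOS objects directly, something like the sketch below should work. The backing pool name and the object name are placeholders; CephFS names a file's first object <inode-in-hex>.00000000:)

$ rados -p cephfs_data stat 10000000000.00000000           # backing pool: does the object have its full size?
$ rados -p data_cache ls | grep 10000000000                # is a copy sitting in the cache tier?
$ rados -p cephfs_data get 10000000000.00000000 /tmp/obj   # fetch the object and inspect the actual bytes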


#2 Is the symptom reproducible with different data of the same size?
No. We have some processes that create files, and the 0-byte files became normal after they were overwritten by those processes.
It's hard to reproduce.
The newer kernels carry many cephfs patches, so I have even mounted cephfs with another client (kernel 4.9), roughly as sketched below, to wait for the issue, hoping the newer kernel client could read the file correctly. Is this possible?
PS: when we first met this issue, restarting the MDS could cure it (but that was ceph 0.94.1).
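(A minimal sketch of that extra mount on the 4.9 kernel box; the monitor address and mount point are placeholders, and since auth is disabled in the config below, the name option can likely be dropped:)

$ sudo mkdir -p /mnt/cephfs-k49
$ sudo mount -t ceph 192.168.0.1:6789:/ /mnt/cephfs-k49 -o name=admin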


#3 Can you share your ceph.conf (ceph --show-config)?
Some cache pool settings first:
$ ceph osd pool get data_cache hit_set_count
hit_set_count: 1
$ ceph osd pool get data_cache min_read_recency_for_promote
min_read_recency_for_promote: 0
$ ceph osd pool get data_cache target_max_bytes
target_max_bytes: 500000000000
$ ceph osd pool get data_cache target_max_objects
target_max_objects: 1000000
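(One aside: min_read_recency_for_promote is 0 on this pool, while the daemon default osd_tier_default_cache_min_read_recency_for_promote in the dump below is 1. Not a known fix, just a hedged experiment one could try while also confirming the effective cache mode; pool name as above:)

$ ceph osd pool set data_cache min_read_recency_for_promote 1
$ ceph osd dump | grep data_cache    # the pool line should list cache_mode readproxy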

The whole ceph config is below.
Please ignore the net configs :)

# ceph --show-config
name = client.admin
cluster = ceph
debug_none = 0/5
debug_lockdep = 0/1
debug_context = 0/1
debug_crush = 1/1
debug_mds = 1/5
debug_mds_balancer = 1/5
debug_mds_locker = 1/5
debug_mds_log = 1/5
debug_mds_log_expire = 1/5
debug_mds_migrator = 1/5
debug_buffer = 0/1
debug_timer = 0/1
debug_filer = 0/1
debug_striper = 0/1
debug_objecter = 0/1
debug_rados = 0/5
debug_rbd = 0/5
debug_rbd_replay = 0/5
debug_journaler = 0/5
debug_objectcacher = 0/5
debug_client = 0/5
debug_osd = 0/5
debug_optracker = 0/5
debug_objclass = 0/5
debug_filestore = 1/3
debug_keyvaluestore = 1/3
debug_journal = 1/3
debug_ms = 0/5
debug_mon = 1/5
debug_monc = 0/10
debug_paxos = 1/5
debug_tp = 0/5
debug_auth = 1/5
debug_crypto = 1/5
debug_finisher = 1/1
debug_heartbeatmap = 1/5
debug_perfcounter = 1/5
debug_rgw = 1/5
debug_civetweb = 1/10
debug_javaclient = 1/5
debug_asok = 1/5
debug_throttle = 1/1
debug_refs = 0/0
debug_xio = 1/5
host = localhost
fsid = 477c0de9-fa96-4000-a87b-2f4ba4a15472
public_addr = :/0
cluster_addr = :/0
cluster_network = XXXXXXXXX
num_client = 1
monmap = 
mon_host =  XXXXXXX
lockdep = false
lockdep_force_backtrace = false
run_dir = /var/run/ceph
admin_socket = 
daemonize = false
pid_file = 
chdir = /
max_open_files = 0
restapi_log_level = 
restapi_base_url = 
fatal_signal_handlers = true
log_file = 
log_max_new = 1000
log_max_recent = 500
log_to_stderr = true
err_to_stderr = true
log_to_syslog = false
err_to_syslog = false
log_flush_on_exit = true
log_stop_at_utilization = 0.97
clog_to_monitors = default=true
clog_to_syslog = false
clog_to_syslog_level = info
clog_to_syslog_facility = default=daemon audit=local0
mon_cluster_log_to_syslog = default=false
mon_cluster_log_to_syslog_level = info
mon_cluster_log_to_syslog_facility = daemon
mon_cluster_log_file = default=/var/log/ceph/ceph.$channel.log cluster=/var/log/ceph/ceph.log
mon_cluster_log_file_level = info
enable_experimental_unrecoverable_data_corrupting_features = 
xio_trace_mempool = false
xio_trace_msgcnt = false
xio_trace_xcon = false
xio_queue_depth = 512
xio_mp_min = 128
xio_mp_max_64 = 65536
xio_mp_max_256 = 8192
xio_mp_max_1k = 8192
xio_mp_max_page = 4096
xio_mp_max_hint = 4096
xio_portal_threads = 2
key = 
keyfile = 
keyring = /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin
heartbeat_interval = 5
heartbeat_file = 
heartbeat_inject_failure = 0
perf = true
ms_type = simple
ms_tcp_nodelay = true
ms_tcp_rcvbuf = 0
ms_tcp_prefetch_max_size = 4096
ms_initial_backoff = 0.2
ms_max_backoff = 15
ms_crc_data = true
ms_crc_header = true
ms_die_on_bad_msg = false
ms_die_on_unhandled_msg = false
ms_die_on_old_message = false
ms_die_on_skipped_message = false
ms_dispatch_throttle_bytes = 104857600
ms_bind_ipv6 = false
ms_bind_port_min = 6800
ms_bind_port_max = 7300
ms_bind_retry_count = 3
ms_bind_retry_delay = 5
ms_rwthread_stack_bytes = 1048576
ms_tcp_read_timeout = 900
ms_pq_max_tokens_per_priority = 16777216
ms_pq_min_cost = 65536
ms_inject_socket_failures = 0
ms_inject_delay_type = 
ms_inject_delay_msg_type = 
ms_inject_delay_max = 1
ms_inject_delay_probability = 0
ms_inject_internal_delays = 0
ms_dump_on_send = false
ms_dump_corrupt_message_level = 1
ms_async_op_threads = 2
ms_async_set_affinity = true
ms_async_affinity_cores = 
inject_early_sigterm = false
mon_data = /var/lib/ceph/mon/ceph-admin
mon_initial_members = cephn1
mon_sync_fs_threshold = 5
mon_compact_on_start = false
mon_compact_on_bootstrap = false
mon_compact_on_trim = true
mon_osd_cache_size = 10
mon_tick_interval = 5
mon_subscribe_interval = 300
mon_delta_reset_interval = 10
mon_osd_laggy_halflife = 3600
mon_osd_laggy_weight = 0.3
mon_osd_adjust_heartbeat_grace = true
mon_osd_adjust_down_out_interval = true
mon_osd_auto_mark_in = false
mon_osd_auto_mark_auto_out_in = true
mon_osd_auto_mark_new_in = true
mon_osd_down_out_interval = 300
mon_osd_down_out_subtree_limit = rack
mon_osd_min_up_ratio = 0.3
mon_osd_min_in_ratio = 0.3
mon_osd_max_op_age = 32
mon_osd_max_split_count = 32
mon_osd_allow_primary_temp = false
mon_osd_allow_primary_affinity = false
mon_stat_smooth_intervals = 2
mon_lease = 5
mon_lease_renew_interval = 3
mon_lease_ack_timeout = 10
mon_clock_drift_allowed = 0.05
mon_clock_drift_warn_backoff = 5
mon_timecheck_interval = 300
mon_accept_timeout = 10
mon_timecheck_skew_interval = 30
mon_pg_create_interval = 30
mon_pg_stuck_threshold = 300
mon_pg_warn_min_per_osd = 30
mon_pg_warn_max_per_osd = 300
mon_pg_warn_max_object_skew = 10
mon_pg_warn_min_objects = 10000
mon_pg_warn_min_pool_objects = 1000
mon_cache_target_full_warn_ratio = 0.66
mon_osd_full_ratio = 0.95
mon_osd_nearfull_ratio = 0.85
mon_allow_pool_delete = true
mon_globalid_prealloc = 10000
mon_osd_report_timeout = 900
mon_force_standby_active = true
mon_warn_on_old_mons = true
mon_warn_on_legacy_crush_tunables = true
mon_warn_on_osd_down_out_interval_zero = true
mon_warn_on_cache_pools_without_hit_sets = true
mon_min_osdmap_epochs = 500
mon_max_pgmap_epochs = 500
mon_max_log_epochs = 500
mon_max_mdsmap_epochs = 500
mon_max_osd = 10000
mon_probe_timeout = 2
mon_slurp_timeout = 10
mon_slurp_bytes = 262144
mon_client_bytes = 104857600
mon_daemon_bytes = 419430400
mon_max_log_entries_per_event = 4096
mon_reweight_min_pgs_per_osd = 10
mon_reweight_min_bytes_per_osd = 104857600
mon_reweight_max_osds = 4
mon_reweight_max_change = 0.05
mon_health_data_update_interval = 60
mon_health_to_clog = true
mon_health_to_clog_interval = 3600
mon_health_to_clog_tick_interval = 60
mon_data_avail_crit = 5
mon_data_avail_warn = 30
mon_data_size_warn = 16106127360
mon_config_key_max_entry_size = 4096
mon_sync_timeout = 60
mon_sync_max_payload_size = 1048576
mon_sync_debug = false
mon_sync_debug_leader = -1
mon_sync_debug_provider = -1
mon_sync_debug_provider_fallback = -1
mon_inject_sync_get_chunk_delay = 0
mon_osd_min_down_reporters = 1
mon_osd_min_down_reports = 3
mon_osd_force_trim_to = 0
mon_mds_force_trim_to = 0
crushtool = crushtool
mon_debug_dump_transactions = false
mon_debug_dump_location = /var/log/ceph/ceph-client.admin.tdump
mon_inject_transaction_delay_max = 10
mon_inject_transaction_delay_probability = 0
mon_sync_provider_kill_at = 0
mon_sync_requester_kill_at = 0
mon_force_quorum_join = false
mon_keyvaluedb = leveldb
mon_debug_unsafe_allow_tier_with_nonempty_snaps = false
paxos_stash_full_interval = 25
paxos_max_join_drift = 10
paxos_propose_interval = 1
paxos_min_wait = 0.05
paxos_min = 500
paxos_trim_min = 250
paxos_trim_max = 500
paxos_service_trim_min = 250
paxos_service_trim_max = 500
paxos_kill_at = 0
clock_offset = 0
auth_cluster_required = none
auth_service_required = none
auth_client_required = none
auth_supported = 
cephx_require_signatures = false
cephx_cluster_require_signatures = false
cephx_service_require_signatures = false
cephx_sign_messages = true
auth_mon_ticket_ttl = 43200
auth_service_ticket_ttl = 3600
auth_debug = false
mon_client_hunt_interval = 3
mon_client_ping_interval = 10
mon_client_ping_timeout = 30
mon_client_hunt_interval_backoff = 2
mon_client_hunt_interval_max_multiple = 10
mon_client_max_log_entries_per_message = 1000
mon_max_pool_pg_num = 65536
mon_pool_quota_warn_threshold = 0
mon_pool_quota_crit_threshold = 0
client_cache_size = 16384
client_cache_mid = 0.75
client_use_random_mds = false
client_mount_timeout = 300
client_tick_interval = 1
client_trace = 
client_readahead_min = 131072
client_readahead_max_bytes = 0
client_readahead_max_periods = 4
client_snapdir = .snap
client_mountpoint = /
client_notify_timeout = 10
osd_client_watch_timeout = 30
client_caps_release_delay = 5
client_quota = false
client_oc = true
client_oc_size = 209715200
client_oc_max_dirty = 104857600
client_oc_target_dirty = 8388608
client_oc_max_dirty_age = 5
client_oc_max_objects = 1000
client_debug_force_sync_read = false
client_debug_inject_tick_delay = 0
client_max_inline_size = 4096
client_inject_release_failure = false
fuse_use_invalidate_cb = false
fuse_allow_other = true
fuse_default_permissions = true
fuse_big_writes = true
fuse_atomic_o_trunc = true
fuse_debug = false
fuse_multithreaded = true
client_try_dentry_invalidate = true
client_die_on_failed_remount = true
client_check_pool_perm = true
crush_location = 
objecter_tick_interval = 5
objecter_timeout = 10
objecter_inflight_op_bytes = 104857600
objecter_inflight_ops = 1024
objecter_completion_locks_per_session = 32
objecter_inject_no_watch_ping = false
journaler_allow_split_entries = true
journaler_write_head_interval = 15
journaler_prefetch_periods = 10
journaler_prezero_periods = 5
journaler_batch_interval = 0.001
journaler_batch_max = 0
mds_data = /var/lib/ceph/mds/ceph-admin
mds_max_file_size = 1099511627776
mds_cache_size = 100000
mds_cache_mid = 0.7
mds_max_file_recover = 32
mds_mem_max = 1048576
mds_dir_max_commit_size = 10
mds_decay_halflife = 5
mds_beacon_interval = 4
mds_beacon_grace = 15
mds_enforce_unique_name = true
mds_blacklist_interval = 1440
mds_session_timeout = 60
mds_revoke_cap_timeout = 60
mds_recall_state_timeout = 60
mds_freeze_tree_timeout = 30
mds_session_autoclose = 300
mds_health_summarize_threshold = 10
mds_reconnect_timeout = 45
mds_tick_interval = 5
mds_dirstat_min_interval = 1
mds_scatter_nudge_interval = 5
mds_client_prealloc_inos = 1000
mds_early_reply = true
mds_default_dir_hash = 2
mds_log = true
mds_log_skip_corrupt_events = false
mds_log_max_events = -1
mds_log_events_per_segment = 1024
mds_log_segment_size = 0
mds_log_max_segments = 30
mds_log_max_expiring = 20
mds_bal_sample_interval = 3
mds_bal_replicate_threshold = 8000
mds_bal_unreplicate_threshold = 0
mds_bal_frag = false
mds_bal_split_size = 10000
mds_bal_split_rd = 25000
mds_bal_split_wr = 10000
mds_bal_split_bits = 3
mds_bal_merge_size = 50
mds_bal_merge_rd = 1000
mds_bal_merge_wr = 1000
mds_bal_interval = 10
mds_bal_fragment_interval = 5
mds_bal_idle_threshold = 0
mds_bal_max = -1
mds_bal_max_until = -1
mds_bal_mode = 0
mds_bal_min_rebalance = 0.1
mds_bal_min_start = 0.2
mds_bal_need_min = 0.8
mds_bal_need_max = 1.2
mds_bal_midchunk = 0.3
mds_bal_minchunk = 0.001
mds_bal_target_removal_min = 5
mds_bal_target_removal_max = 10
mds_replay_interval = 1
mds_shutdown_check = 0
mds_thrash_exports = 0
mds_thrash_fragments = 0
mds_dump_cache_on_map = false
mds_dump_cache_after_rejoin = false
mds_verify_scatter = false
mds_debug_scatterstat = false
mds_debug_frag = false
mds_debug_auth_pins = false
mds_debug_subtrees = false
mds_kill_mdstable_at = 0
mds_kill_export_at = 0
mds_kill_import_at = 0
mds_kill_link_at = 0
mds_kill_rename_at = 0
mds_kill_openc_at = 0
mds_kill_journal_at = 0
mds_kill_journal_expire_at = 0
mds_kill_journal_replay_at = 0
mds_journal_format = 1
mds_kill_create_at = 0
mds_inject_traceless_reply_probability = 0
mds_wipe_sessions = false
mds_wipe_ino_prealloc = false
mds_skip_ino = 0
max_mds = 1
mds_standby_for_name = 
mds_standby_for_rank = -1
mds_standby_replay = false
mds_enable_op_tracker = true
mds_op_history_size = 20
mds_op_history_duration = 600
mds_op_complaint_time = 30
mds_op_log_threshold = 5
mds_snap_min_uid = 0
mds_snap_max_uid = 65536
mds_verify_backtrace = 1
mds_action_on_write_error = 1
osd_compact_leveldb_on_mount = false
osd_max_backfills = 10
osd_min_recovery_priority = 0
osd_backfill_full_ratio = 0.85
osd_backfill_retry_interval = 10
osd_agent_max_ops = 4
osd_agent_min_evict_effort = 0.1
osd_agent_quantize_effort = 0.1
osd_agent_delay_time = 5
osd_find_best_info_ignore_history_les = false
osd_agent_hist_halflife = 1000
osd_agent_slop = 0.02
osd_uuid = 00000000-0000-0000-0000-000000000000
osd_data = /var/lib/ceph/osd/ceph-admin
osd_journal = /var/lib/ceph/osd/ceph-admin/journal
osd_journal_size = 10000
osd_max_write_size = 90
osd_max_pgls = 1024
osd_client_message_size_cap = 524288000
osd_client_message_cap = 100
osd_pg_bits = 6
osd_pgp_bits = 6
osd_crush_chooseleaf_type = 1
osd_pool_use_gmt_hitset = true
osd_pool_default_crush_rule = -1
osd_pool_default_crush_replicated_ruleset = 0
osd_pool_erasure_code_stripe_width = 4096
osd_pool_default_size = 2
osd_pool_default_min_size = 1
osd_pool_default_pg_num = 512
osd_pool_default_pgp_num = 512
osd_pool_default_erasure_code_directory = /usr/lib/ceph/erasure-code
osd_pool_default_erasure_code_profile = plugin=jerasure technique=reed_sol_van k=2 m=1 
osd_erasure_code_plugins = jerasure lrc isa
osd_allow_recovery_below_min_size = true
osd_pool_default_flags = 0
osd_pool_default_flag_hashpspool = true
osd_pool_default_flag_nodelete = false
osd_pool_default_flag_nopgchange = false
osd_pool_default_flag_nosizechange = false
osd_pool_default_hit_set_bloom_fpp = 0.05
osd_pool_default_cache_target_dirty_ratio = 0.4
osd_pool_default_cache_target_full_ratio = 0.8
osd_pool_default_cache_min_flush_age = 0
osd_pool_default_cache_min_evict_age = 0
osd_hit_set_min_size = 1000
osd_hit_set_max_size = 100000
osd_hit_set_namespace = .ceph-internal
osd_tier_default_cache_mode = writeback
osd_tier_default_cache_hit_set_count = 4
osd_tier_default_cache_hit_set_period = 1200
osd_tier_default_cache_hit_set_type = bloom
osd_tier_default_cache_min_read_recency_for_promote = 1
osd_map_dedup = true
osd_map_max_advance = 200
osd_map_cache_size = 500
osd_map_message_max = 100
osd_map_share_max_epochs = 100
osd_inject_bad_map_crc_probability = 0
osd_inject_failure_on_pg_removal = false
osd_op_threads = 2
osd_peering_wq_batch_size = 20
osd_op_pq_max_tokens_per_priority = 4194304
osd_op_pq_min_cost = 65536
osd_disk_threads = 1
osd_disk_thread_ioprio_class = 
osd_disk_thread_ioprio_priority = -1
osd_recovery_threads = 1
osd_recover_clone_overlap = true
osd_op_num_threads_per_shard = 2
osd_op_num_shards = 5
osd_read_eio_on_bad_digest = true
osd_recover_clone_overlap_limit = 10
osd_backfill_scan_min = 64
osd_backfill_scan_max = 512
osd_op_thread_timeout = 15
osd_op_thread_suicide_timeout = 150
osd_recovery_thread_timeout = 30
osd_recovery_thread_suicide_timeout = 300
osd_snap_trim_thread_timeout = 3600
osd_snap_trim_thread_suicide_timeout = 36000
osd_snap_trim_sleep = 0
osd_scrub_thread_timeout = 60
osd_scrub_thread_suicide_timeout = 300
osd_scrub_finalize_thread_timeout = 600
osd_scrub_invalid_stats = true
osd_remove_thread_timeout = 3600
osd_remove_thread_suicide_timeout = 36000
osd_command_thread_timeout = 600
osd_age = 0.8
osd_age_time = 0
osd_command_thread_suicide_timeout = 900
osd_heartbeat_addr = :/0
osd_heartbeat_interval = 6
osd_heartbeat_grace = 20
osd_heartbeat_min_peers = 10
osd_heartbeat_use_min_delay_socket = false
osd_pg_max_concurrent_snap_trims = 2
osd_heartbeat_min_healthy_ratio = 0.33
osd_mon_heartbeat_interval = 30
osd_mon_report_interval_max = 120
osd_mon_report_interval_min = 5
osd_pg_stat_report_interval_max = 500
osd_mon_ack_timeout = 30
osd_default_data_pool_replay_window = 45
osd_preserve_trimmed_log = false
osd_auto_mark_unfound_lost = false
osd_recovery_delay_start = 0
osd_recovery_max_active = 15
osd_recovery_max_single_start = 5
osd_recovery_max_chunk = 8388608
osd_copyfrom_max_chunk = 8388608
osd_push_per_object_cost = 1000
osd_max_push_cost = 8388608
osd_max_push_objects = 10
osd_recovery_forget_lost_objects = false
osd_max_scrubs = 1
osd_scrub_begin_hour = 0
osd_scrub_end_hour = 24
osd_scrub_load_threshold = 0.5
osd_scrub_min_interval = 86400
osd_scrub_max_interval = 604800
osd_scrub_interval_randomize_ratio = 0.5
osd_scrub_chunk_min = 5
osd_scrub_chunk_max = 25
osd_scrub_sleep = 0
osd_deep_scrub_interval = 604800
osd_deep_scrub_stride = 524288
osd_deep_scrub_update_digest_min_age = 7200
osd_scan_list_ping_tp_interval = 100
osd_auto_weight = false
osd_class_dir = /usr/lib/rados-classes
osd_open_classes_on_start = true
osd_check_for_log_corruption = false
osd_use_stale_snap = false
osd_rollback_to_cluster_snap = 
osd_default_notify_timeout = 30
osd_kill_backfill_at = 0
osd_pg_epoch_persisted_max_stale = 200
osd_min_pg_log_entries = 3000
osd_max_pg_log_entries = 10000
osd_pg_log_trim_min = 100
osd_op_complaint_time = 30
osd_command_max_records = 256
osd_max_pg_blocked_by = 16
osd_op_log_threshold = 5
osd_verify_sparse_read_holes = false
osd_debug_drop_ping_probability = 0
osd_debug_drop_ping_duration = 0
osd_debug_drop_pg_create_probability = 0
osd_debug_drop_pg_create_duration = 1
osd_debug_drop_op_probability = 0
osd_debug_op_order = false
osd_debug_scrub_chance_rewrite_digest = 0
osd_debug_verify_snaps_on_info = false
osd_debug_verify_stray_on_activate = false
osd_debug_skip_full_check_in_backfill_reservation = false
osd_debug_reject_backfill_probability = 0
osd_debug_inject_copyfrom_error = false
osd_enable_op_tracker = true
osd_num_op_tracker_shard = 32
osd_op_history_size = 20
osd_op_history_duration = 600
osd_target_transaction_size = 30
osd_failsafe_full_ratio = 0.97
osd_failsafe_nearfull_ratio = 0.9
osd_pg_object_context_cache_count = 64
osd_tracing = false
osd_debug_pg_log_writeout = false
threadpool_default_timeout = 60
threadpool_empty_queue_max_wait = 2
leveldb_write_buffer_size = 8388608
leveldb_cache_size = 134217728
leveldb_block_size = 0
leveldb_bloom_size = 0
leveldb_max_open_files = 0
leveldb_compression = true
leveldb_paranoid = false
leveldb_log = /dev/null
leveldb_compact_on_mount = false
kinetic_host = 
kinetic_port = 8123
kinetic_user_id = 1
kinetic_hmac_key = asdfasdf
kinetic_use_ssl = false
rocksdb_compact_on_mount = false
rocksdb_write_buffer_size = 0
rocksdb_target_file_size_base = 0
rocksdb_cache_size = 0
rocksdb_block_size = 0
rocksdb_bloom_size = 0
rocksdb_write_buffer_num = 0
rocksdb_background_compactions = 0
rocksdb_background_flushes = 0
rocksdb_max_open_files = 0
rocksdb_compression = 
rocksdb_paranoid = false
rocksdb_log = /dev/null
rocksdb_level0_file_num_compaction_trigger = 0
rocksdb_level0_slowdown_writes_trigger = 0
rocksdb_level0_stop_writes_trigger = 0
rocksdb_disableDataSync = true
rocksdb_disableWAL = false
rocksdb_num_levels = 0
rocksdb_wal_dir = 
rocksdb_info_log_level = info
osd_client_op_priority = 63
osd_recovery_op_priority = 10
osd_recovery_op_warn_multiple = 16
osd_mon_shutdown_timeout = 5
osd_max_object_size = 107374182400
osd_max_object_name_len = 2048
osd_max_attr_name_len = 100
osd_max_attr_size = 0
osd_objectstore = filestore
osd_objectstore_tracing = false
osd_debug_override_acting_compat = false
osd_bench_small_size_max_iops = 100
osd_bench_large_size_max_throughput = 104857600
osd_bench_max_block_size = 67108864
osd_bench_duration = 30
memstore_device_bytes = 1073741824
filestore_omap_backend = leveldb
filestore_debug_disable_sharded_check = false
filestore_wbthrottle_enable = true
filestore_wbthrottle_btrfs_bytes_start_flusher = 41943040
filestore_wbthrottle_btrfs_bytes_hard_limit = 419430400
filestore_wbthrottle_btrfs_ios_start_flusher = 500
filestore_wbthrottle_btrfs_ios_hard_limit = 5000
filestore_wbthrottle_btrfs_inodes_start_flusher = 500
filestore_wbthrottle_xfs_bytes_start_flusher = 41943040
filestore_wbthrottle_xfs_bytes_hard_limit = 419430400
filestore_wbthrottle_xfs_ios_start_flusher = 500
filestore_wbthrottle_xfs_ios_hard_limit = 5000
filestore_wbthrottle_xfs_inodes_start_flusher = 500
filestore_wbthrottle_btrfs_inodes_hard_limit = 5000
filestore_wbthrottle_xfs_inodes_hard_limit = 5000
filestore_index_retry_probability = 0
filestore_debug_inject_read_err = false
filestore_debug_omap_check = false
filestore_omap_header_cache_size = 1024
filestore_max_inline_xattr_size = 0
filestore_max_inline_xattr_size_xfs = 65536
filestore_max_inline_xattr_size_btrfs = 2048
filestore_max_inline_xattr_size_other = 512
filestore_max_inline_xattrs = 0
filestore_max_inline_xattrs_xfs = 10
filestore_max_inline_xattrs_btrfs = 10
filestore_max_inline_xattrs_other = 2
filestore_sloppy_crc = false
filestore_sloppy_crc_block_size = 65536
filestore_max_alloc_hint_size = 1048576
filestore_max_sync_interval = 5
filestore_min_sync_interval = 0.01
filestore_btrfs_snap = true
filestore_btrfs_clone_range = true
filestore_zfs_snap = false
filestore_fsync_flushes_journal_data = false
filestore_fiemap = false
filestore_fadvise = true
filestore_xfs_extsize = false
filestore_journal_parallel = false
filestore_journal_writeahead = false
filestore_journal_trailing = false
filestore_queue_max_ops = 50
filestore_queue_max_bytes = 104857600
filestore_queue_committing_max_ops = 500
filestore_queue_committing_max_bytes = 104857600
filestore_op_threads = 2
filestore_op_thread_timeout = 60
filestore_op_thread_suicide_timeout = 180
filestore_commit_timeout = 600
filestore_fiemap_threshold = 4096
filestore_merge_threshold = 10
filestore_split_multiple = 2
filestore_update_to = 1000
filestore_blackhole = false
filestore_fd_cache_size = 128
filestore_fd_cache_shards = 16
filestore_dump_file = 
filestore_kill_at = 0
filestore_inject_stall = 0
filestore_fail_eio = true
filestore_debug_verify_split = false
journal_dio = true
journal_aio = true
journal_force_aio = false
keyvaluestore_queue_max_ops = 50
keyvaluestore_queue_max_bytes = 104857600
keyvaluestore_debug_check_backend = false
keyvaluestore_op_threads = 2
keyvaluestore_op_thread_timeout = 60
keyvaluestore_op_thread_suicide_timeout = 180
keyvaluestore_default_strip_size = 4096
keyvaluestore_max_expected_write_size = 16777216
keyvaluestore_header_cache_size = 4096
keyvaluestore_backend = leveldb
journal_max_corrupt_search = 10485760
journal_block_align = true
journal_write_header_frequency = 0
journal_max_write_bytes = 10485760
journal_max_write_entries = 100
journal_queue_max_ops = 300
journal_queue_max_bytes = 33554432
journal_align_min_size = 65536
journal_replay_from = 0
journal_zero_on_create = false
journal_ignore_corruption = false
journal_discard = false
rados_mon_op_timeout = 0
rados_osd_op_timeout = 0
rados_tracing = false
rbd_op_threads = 1
rbd_op_thread_timeout = 60
rbd_non_blocking_aio = true
rbd_cache = true
rbd_cache_writethrough_until_flush = true
rbd_cache_size = 33554432
rbd_cache_max_dirty = 25165824
rbd_cache_target_dirty = 16777216
rbd_cache_max_dirty_age = 1
rbd_cache_max_dirty_object = 0
rbd_cache_block_writes_upfront = false
rbd_concurrent_management_ops = 10
rbd_balance_snap_reads = false
rbd_localize_snap_reads = false
rbd_balance_parent_reads = false
rbd_localize_parent_reads = true
rbd_readahead_trigger_requests = 10
rbd_readahead_max_bytes = 524288
rbd_readahead_disable_after_bytes = 52428800
rbd_clone_copy_on_read = false
rbd_blacklist_on_break_lock = true
rbd_blacklist_expire_seconds = 0
rbd_request_timed_out_seconds = 30
rbd_tracing = false
rbd_validate_pool = true
rbd_default_format = 1
rbd_default_order = 22
rbd_default_stripe_count = 0
rbd_default_stripe_unit = 0
rbd_default_features = 3
nss_db_path = 
rgw_max_chunk_size = 524288
rgw_max_put_size = 5368709120
rgw_override_bucket_index_max_shards = 0
rgw_bucket_index_max_aio = 8
rgw_enable_quota_threads = true
rgw_enable_gc_threads = true
rgw_data = /var/lib/ceph/radosgw/ceph-admin
rgw_enable_apis = s3, swift, swift_auth, admin
rgw_cache_enabled = true
rgw_cache_lru_size = 10000
rgw_socket_path = 
rgw_host = 
rgw_port = 
rgw_dns_name = 
rgw_content_length_compat = false
rgw_script_uri = 
rgw_request_uri = 
rgw_swift_url = 
rgw_swift_url_prefix = swift
rgw_swift_auth_url = 
rgw_swift_auth_entry = auth
rgw_swift_tenant_name = 
rgw_swift_enforce_content_length = false
rgw_keystone_url = 
rgw_keystone_admin_token = 
rgw_keystone_admin_user = 
rgw_keystone_admin_password = 
rgw_keystone_admin_tenant = 
rgw_keystone_accepted_roles = Member, admin
rgw_keystone_token_cache_size = 10000
rgw_keystone_revocation_interval = 900
rgw_s3_auth_use_rados = true
rgw_s3_auth_use_keystone = false
rgw_admin_entry = admin
rgw_enforce_swift_acls = true
rgw_swift_token_expiration = 86400
rgw_print_continue = true
rgw_remote_addr_param = REMOTE_ADDR
rgw_op_thread_timeout = 600
rgw_op_thread_suicide_timeout = 0
rgw_thread_pool_size = 100
rgw_num_control_oids = 8
rgw_num_rados_handles = 1
rgw_zone = 
rgw_zone_root_pool = .rgw.root
rgw_region = 
rgw_region_root_pool = .rgw.root
rgw_default_region_info_oid = default.region
rgw_log_nonexistent_bucket = false
rgw_log_object_name = %Y-%m-%d-%H-%i-%n
rgw_log_object_name_utc = false
rgw_usage_max_shards = 32
rgw_usage_max_user_shards = 1
rgw_enable_ops_log = false
rgw_enable_usage_log = false
rgw_ops_log_rados = true
rgw_ops_log_socket_path = 
rgw_ops_log_data_backlog = 5242880
rgw_usage_log_flush_threshold = 1024
rgw_usage_log_tick_interval = 30
rgw_intent_log_object_name = %Y-%m-%d-%i-%n
rgw_intent_log_object_name_utc = false
rgw_init_timeout = 300
rgw_mime_types_file = /etc/mime.types
rgw_gc_max_objs = 32
rgw_gc_obj_min_wait = 7200
rgw_gc_processor_max_time = 3600
rgw_gc_processor_period = 3600
rgw_s3_success_create_obj_status = 0
rgw_resolve_cname = false
rgw_obj_stripe_size = 4194304
rgw_extended_http_attrs = 
rgw_exit_timeout_secs = 120
rgw_get_obj_window_size = 16777216
rgw_get_obj_max_req_size = 4194304
rgw_relaxed_s3_bucket_names = false
rgw_defer_to_bucket_acls = 
rgw_list_buckets_max_chunk = 1000
rgw_md_log_max_shards = 64
rgw_num_zone_opstate_shards = 128
rgw_opstate_ratelimit_sec = 30
rgw_curl_wait_timeout_ms = 1000
rgw_copy_obj_progress = true
rgw_copy_obj_progress_every_bytes = 1048576
rgw_data_log_window = 30
rgw_data_log_changes_size = 1000
rgw_data_log_num_shards = 128
rgw_data_log_obj_prefix = data_log
rgw_replica_log_obj_prefix = replica_log
rgw_bucket_quota_ttl = 600
rgw_bucket_quota_soft_threshold = 0.95
rgw_bucket_quota_cache_size = 10000
rgw_bucket_default_quota_max_objects = -1
rgw_bucket_default_quota_max_size = -1
rgw_expose_bucket = false
rgw_frontends = fastcgi, civetweb port=7480
rgw_user_quota_bucket_sync_interval = 180
rgw_user_quota_sync_interval = 86400
rgw_user_quota_sync_idle_users = false
rgw_user_quota_sync_wait_time = 86400
rgw_user_default_quota_max_objects = -1
rgw_user_default_quota_max_size = -1
rgw_multipart_min_part_size = 5242880
rgw_olh_pending_timeout_sec = 3600
rgw_user_max_buckets = 1000
mutex_perf_counter = false
throttler_perf_counter = true
internal_safe_to_start_threads = false 



 
------------------ Original ------------------
Date: Tue, Dec 13, 2016 06:21 PM
To: "JiaJia Zhong"<zhongjiajia@xxxxxxxxxxxx>;
Cc: "CEPH list"<ceph-users@xxxxxxxxxxxxxx>; "ukernel"<ukernel@xxxxxxxxx>;
Subject: Re: [ceph-users] can cache-mode be set to readproxy for tier cache with ceph 0.94.9 ?
 


On Tue, Dec 13, 2016 at 4:38 PM, JiaJia Zhong <zhongjiajia@xxxxxxxxxxxx> wrote:
hi cephers:
    we are using ceph hammer 0.94.9 (yes, it's not the latest, jewel),
    with some SSD OSDs for tiering; cache-mode is set to readproxy and everything seems to work as expected,
    but when reading some small files from cephfs, we got 0 bytes.

Would you be able to share:

 #1 How small is the actual data?
 #2 Is the symptom reproducible with different data of the same size?
 #3 Can you share your ceph.conf (ceph --show-config)?
 
    
    I did some searching and found the link below;
    it's almost the same as what we are suffering from, except that the cache-mode in the link is writeback while ours is readproxy.

    That bug should have been FIXED in 0.94.9 (http://tracker.ceph.com/issues/12551),
    but we can still encounter it occasionally :(

   Environment:
     - ceph: 0.94.9
     - kernel client: 4.2.0-36-generic (ubuntu 14.04)
     - anything else needed?

   Questions:
   1. Does readproxy mode work on ceph 0.94.9? Only writeback and readonly appear in the documentation for hammer.
   2. Has anyone (on Jewel or Hammer) met the same issue?


    Looping in Yan, Zheng.
   Quoted from the link for convenience:
 """

I am experiencing an issue with CephFS with cache tiering where the kernel
clients are reading files filled entirely with 0s.

The setup:
ceph 0.94.3
create cephfs_metadata replicated pool
create cephfs_data replicated pool
cephfs was created on the above two pools, populated with files, then:
create cephfs_ssd_cache replicated pool,
then adding the tiers:
ceph osd tier add cephfs_data cephfs_ssd_cache
ceph osd tier cache-mode cephfs_ssd_cache writeback
ceph osd tier set-overlay cephfs_data cephfs_ssd_cache

While the cephfs_ssd_cache pool is empty, multiple kernel clients on
different hosts open the same file (the size of the file is small, <10k) at
approximately the same time. A number of the clients see, at the OS level,
the entire file as empty. I can do a rados -p {cache pool} ls for the
list of cached objects, then a rados -p {cache pool} get {object} /tmp/file
and see the complete contents of the file.
I can repeat this by setting cache-mode to forward, running rados -p {cache pool}
cache-flush-evict-all, checking that no objects remain in the cache with rados -p
{cache pool} ls, resetting cache-mode to writeback with the empty pool, and
doing the multiple simultaneous opens of the same file again.

Has anyone seen this issue? It looks like a race condition
where the object is not yet completely loaded into the cache pool, so the
cache pool serves out an incomplete object.
If anyone can shed some light or offer suggestions to help debug this issue,
that would be very helpful.

Thanks,
Arthur"""



_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


