Problem formatting erasure coded image

Hi,

 

I’m seeing errors in Windows VM guests’ event logs, for example:

The IO operation at logical block address 0x607bf7 for Disk 1 (PDO name \Device\0000001e) was retried

Log Name: System
Source: Disk
Event ID: 153
Level: Warning

 

Initialising the disk to use GPT succeeds, but attempting to create a standard NTFS volume eventually times out and fails.
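
For reference, a minimal sketch of the checks that could correlate these retries with slow requests on the Ceph side while the format is running (osd.<id> is a placeholder for any OSD backing the cache or data pool, and the daemon commands have to be run on the host carrying that OSD):

ceph health detail
ceph daemon osd.<id> dump_ops_in_flight
ceph daemon osd.<id> dump_historic_ops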

 

 

Pretty sure this is in production in numerous environments, so I must be doing something wrong… Could anyone please confirm that a cached, erasure-coded RBD image can be used as a Windows VM data disc?

 

 

Running Ceph Nautilus 14.2.4 with kernel 5.0.21
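
For completeness, a minimal sketch of how the image is presented to the guest, assuming a krbd mapping on the hypervisor (the actual attach path isn't shown above and could equally be librbd via QEMU/libvirt):

rbd map rbd_ssd/surveylance-recordings
# then pass the resulting /dev/rbdX device through to the VM as its data disk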

 

Created a new erasure-coded pool backed by spinners and a new replicated SSD pool for metadata:

ceph osd erasure-code-profile set ec32_hdd \
  plugin=jerasure k=3 m=2 technique=reed_sol_van \
  crush-root=default crush-failure-domain=host crush-device-class=hdd \
  directory=/usr/lib/ceph/erasure-code;
ceph osd pool create ec_hdd 64 erasure ec32_hdd;
ceph osd pool set ec_hdd allow_ec_overwrites true;
ceph osd pool application enable ec_hdd rbd;
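
The profile and the overwrite flag can be double-checked with:

ceph osd erasure-code-profile get ec32_hdd;
ceph osd pool get ec_hdd allow_ec_overwrites;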

 

ceph osd crush rule create-replicated replicated_ssd default host ssd;
ceph osd pool create rbd_ssd 64 64 replicated replicated_ssd;
ceph osd pool application enable rbd_ssd rbd;
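
The rule assignment can be confirmed with:

ceph osd crush rule dump replicated_ssd;
ceph osd pool get rbd_ssd crush_rule;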

 

rbd create rbd_ssd/surveylance-recordings --size 1T --data-pool ec_hdd;

 

Added a caching tier:

ceph osd pool create ec_hdd_cache 64 64 replicated replicated_ssd;
ceph osd tier add ec_hdd ec_hdd_cache;
ceph osd tier cache-mode ec_hdd_cache writeback;
ceph osd tier set-overlay ec_hdd ec_hdd_cache;
ceph osd pool set ec_hdd_cache hit_set_type bloom;

ceph osd pool set ec_hdd_cache hit_set_count 12
ceph osd pool set ec_hdd_cache hit_set_period 14400
ceph osd pool set ec_hdd_cache target_max_bytes $[128*1024*1024*1024]
ceph osd pool set ec_hdd_cache min_read_recency_for_promote 2
ceph osd pool set ec_hdd_cache min_write_recency_for_promote 2
ceph osd pool set ec_hdd_cache cache_target_dirty_ratio 0.4
ceph osd pool set ec_hdd_cache cache_target_dirty_high_ratio 0.6
ceph osd pool set ec_hdd_cache cache_target_full_ratio 0.8
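
The tier/overlay wiring can be verified with something like the following (the grep pattern is only illustrative):

ceph osd pool ls detail | grep -E 'ec_hdd|ec_hdd_cache'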

 

 

Image appears to have been created correctly:

rbd ls rbd_ssd -l
NAME                   SIZE  PARENT FMT PROT LOCK
surveylance-recordings 1 TiB          2

 

rbd info rbd_ssd/surveylance-recordings
rbd image 'surveylance-recordings':
        size 1 TiB in 262144 objects
        order 22 (4 MiB objects)
        snapshot_count: 0
        id: 7341cc54df71f
        data_pool: ec_hdd
        block_name_prefix: rbd_data.2.7341cc54df71f
        format: 2
        features: layering, data-pool
        op_features:
        flags:
        create_timestamp: Sun Sep 22 17:47:30 2019
        access_timestamp: Sun Sep 22 17:47:30 2019
        modify_timestamp: Sun Sep 22 17:47:30 2019
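
For reference, whether data objects are actually landing in the EC pool (and being promoted into the cache) can be spot-checked by listing objects matching the block_name_prefix above:

rados -p ec_hdd ls | grep 7341cc54df71f | head
rados -p ec_hdd_cache ls | grep 7341cc54df71f | head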

 

Ceph appears healthy:

ceph -s
  cluster:
    id:     31f6ea46-12cb-47e8-a6f3-60fb6bbd1782
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum kvm1a,kvm1b,kvm1c (age 5d)
    mgr: kvm1c(active, since 5d), standbys: kvm1b, kvm1a
    mds: cephfs:1 {0=kvm1c=up:active} 2 up:standby
    osd: 24 osds: 24 up (since 4d), 24 in (since 4d)

  data:
    pools:   9 pools, 417 pgs
    objects: 325.04k objects, 1.1 TiB
    usage:   3.3 TiB used, 61 TiB / 64 TiB avail
    pgs:     417 active+clean

  io:
    client:   25 KiB/s rd, 2.7 MiB/s wr, 17 op/s rd, 306 op/s wr
    cache:    0 op/s promote

 

ceph df
  RAW STORAGE:
    CLASS     SIZE        AVAIL       USED        RAW USED     %RAW USED
    hdd        62 TiB      59 TiB     2.9 TiB      2.9 TiB          4.78
    ssd       2.4 TiB     2.1 TiB     303 GiB      309 GiB         12.36
    TOTAL      64 TiB      61 TiB     3.2 TiB      3.3 TiB          5.07

  POOLS:
    POOL                      ID     STORED      OBJECTS     USED        %USED     MAX AVAIL
    rbd_hdd                    1     995 GiB     289.54k     2.9 TiB      5.23        18 TiB
    rbd_ssd                    2        17 B           4      48 KiB         0       666 GiB
    rbd_hdd_cache              3      99 GiB      34.91k     302 GiB     13.13       666 GiB
    cephfs_data                4     2.1 GiB         526     6.4 GiB      0.01        18 TiB
    cephfs_metadata            5     767 KiB          22     3.7 MiB         0        18 TiB
    device_health_metrics      6     5.9 MiB          24     5.9 MiB         0        18 TiB
    ec_hdd                    10     4.0 MiB           3     7.5 MiB         0        32 TiB
    ec_hdd_cache              11      67 MiB          30     200 MiB         0       666 GiB

 

 

 

Regards

David Herselman

 

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
