Re: How to see the content of an EC pool after recreating the SSD cache tier?

Hi Greg,

On 26.03.2015 18:46, Gregory Farnum wrote:
> I don't know why you're mucking about manually with the rbd directory;
> the rbd tool and rados handle cache pools correctly as far as I know.
that's because I deleted the cache tier pool, so objects like
rbd_header.2cfc7ce74b0dc51 and rbd_directory are gone.
All of the VM-disk data is still in the EC pool (rbd_data.2cfc7ce74b0dc51.*).

I can't see or recreate the VM disk, because "rados setomapval" doesn't
accept binary data and the rbd tool can't (re)create an RBD image with a
given id (like 2cfc7ce74b0dc51).
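
Maybe the python-rados bindings could work around that: the value I'm trying
to set looks like a 4-byte little-endian length (0x0000000f = 15) followed by
the image id, and Python has no problem with the NUL bytes that the shell can't
pass as an argument. A rough, untested sketch (assuming a python-rados version
that exposes the WriteOpCtx/set_omap API; pool, image name and id are the ones
from the examples in this thread):

#!/usr/bin/env python
# Sketch: write the binary rbd_directory entry that "rados setomapval"
# can't take on the command line (shell arguments can't contain NUL bytes).
import struct
import rados

image_name = "vm-409-disk-2"       # image name used in the examples below
image_id   = "2cfc7ce74b0dc51"     # id of the old image

# value layout: 4-byte little-endian length prefix + id string
value = struct.pack("<I", len(image_id)) + image_id.encode()

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
ioctx = cluster.open_ioctx("ssd-archiv")     # the recreated cache pool
try:
    with rados.WriteOpCtx() as op:
        ioctx.set_omap(op, ("name_" + image_name,), (value,))
        ioctx.operate_write_op(op, "rbd_directory")
finally:
    ioctx.close()
    cluster.shutdown()

That would only bring back the directory entry, though; the rbd_header object
is still gone.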

The only way I see at the moment is to create new RBD images and copy
all blocks with rados get -> file -> rados put.
The problem is the time it takes (days to weeks for 3 * 16 TB)...
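
If I end up doing that, going through librados directly (instead of one rados
get/put process plus a temp file per object) might at least save some time. A
minimal, untested sketch - the new image would have to be created with rbd
first, NEWID is a placeholder for its block_name_prefix (shown by "rbd info"),
and 4 MB is only the default object size:

#!/usr/bin/env python
# Sketch: copy every rbd_data object of the old image to the object names
# of a freshly created image, object by object, without temp files.
import rados

OLD_PREFIX = "rbd_data.2cfc7ce74b0dc51."   # old image, as seen in "rados ls"
NEW_PREFIX = "rbd_data.NEWID."             # placeholder: prefix of the new image
OBJ_SIZE   = 4 * 1024 * 1024               # default 4 MB rbd object size

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
ioctx = cluster.open_ioctx("ecarchiv")      # EC pool that still holds the data
try:
    for obj in ioctx.list_objects():
        name = obj.key
        if not name.startswith(OLD_PREFIX):  # also skips the objects we create
            continue
        index = name[len(OLD_PREFIX):]       # e.g. 0000000000390074
        data = ioctx.read(name, OBJ_SIZE, 0)
        ioctx.write_full(NEW_PREFIX + index, data)
finally:
    ioctx.close()
    cluster.shutdown()

Still slow, but it avoids starting two rados processes and writing a temp file
for every 4 MB object.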

Udo

> -Greg
>
> On Thu, Mar 26, 2015 at 8:56 AM, Udo Lembke <ulembke@xxxxxxxxxxxx> wrote:
>> Hi Greg,
>> ok!
>>
>> It looks like my problem is more setomapval-related...
>>
>> I must do something like
>> rados -p ssd-archiv setomapval rbd_directory name_vm-409-disk-2 "\0x0f\0x00\0x00\0x00"2cfc7ce74b0dc51
>>
>> but "rados setomapval" don't use the hexvalues - instead of this I got
>> rados -p ssd-archiv listomapvals rbd_directory
>> name_vm-409-disk-2
>> value: (35 bytes) :
>> 0000 : 5c 30 78 30 66 5c 30 78 30 30 5c 30 78 30 30 5c : \0x0f\0x00\0x00\
>> 0010 : 30 78 30 30 32 63 66 63 37 63 65 37 34 62 30 64 : 0x002cfc7ce74b0d
>> 0020 : 63 35 31                                        : c51
>>
>>
>> hmm, strange. With "rados -p ssd-archiv getomapval rbd_directory name_vm-409-disk-2 name_vm-409-disk-2"
>> I get the binary value written to the file name_vm-409-disk-2, but the reverse,
>> "rados -p ssd-archiv setomapval rbd_directory name_vm-409-disk-2 name_vm-409-disk-2",
>> sets the key to the literal string name_vm-409-disk-2 and not to the content of the file...
>>
>> Are there other tools for the rbd_directory?
>>
>> regards
>>
>> Udo
>>
>> On 26.03.2015 15:03, Gregory Farnum wrote:
>>> You shouldn't rely on "rados ls" when working with cache pools. It
>>> doesn't behave properly and is a silly operation to run against a pool
>>> of any size even when it does. :)
>>>
>>> More specifically, "rados ls" is invoking the "pgls" operation. Normal
>>> read/write ops will go query the backing store for objects if they're
>>> not in the cache tier. pgls is different — it just tells you what
>>> objects are present in the PG on that OSD right now. So any objects
>>> which aren't in cache won't show up when listing on the cache pool.
>>> -Greg
>>>
>>> On Thu, Mar 26, 2015 at 3:43 AM, Udo Lembke <ulembke@xxxxxxxxxxxx> wrote:
>>>> Hi all,
>>>> due to a very silly approach, I removed the cache tier of a filled EC pool.
>>>>
>>>> After recreating the pool and connecting it to the EC pool I don't see any content.
>>>> How can I see the rbd_data and other objects through the new SSD cache tier?
>>>>
>>>> I think I must recreate the rbd_directory (and fill it with setomapval), but I don't see anything yet!
>>>>
>>>> $ rados ls -p ecarchiv | more
>>>> rbd_data.2e47de674b0dc51.0000000000390074
>>>> rbd_data.2e47de674b0dc51.000000000020b64f
>>>> rbd_data.2fbb1952ae8944a.000000000016184c
>>>> rbd_data.2cfc7ce74b0dc51.0000000000363527
>>>> rbd_data.2cfc7ce74b0dc51.000000000004c35f
>>>> rbd_data.2fbb1952ae8944a.000000000008db43
>>>> rbd_data.2cfc7ce74b0dc51.000000000015895a
>>>> rbd_data.31229f0238e1f29.00000000000135eb
>>>> ...
>>>>
>>>> $ rados ls -p ssd-archiv
>>>> #### nothing ####
>>>>
>>>> creation of the cache tier:
>>>> $ rados mkpool ssd-archiv
>>>> $ ceph osd pool set ssd-archiv crush_ruleset 5
>>>> $ ceph osd tier add ecarchiv ssd-archiv
>>>> $ ceph osd tier cache-mode ssd-archiv writeback
>>>> $ ceph osd pool set ssd-archiv hit_set_type bloom
>>>> $ ceph osd pool set ssd-archiv hit_set_count 1
>>>> $ ceph osd pool set ssd-archiv hit_set_period 3600
>>>> $ ceph osd pool set ssd-archiv target_max_bytes 50000000000
>>>>
>>>>
>>>> rule ssd {
>>>>         ruleset 5
>>>>         type replicated
>>>>         min_size 1
>>>>         max_size 10
>>>>         step take ssd
>>>>         step choose firstn 0 type osd
>>>>         step emit
>>>> }
>>>>
>>>>
>>>> Is there any "magic" (or which command did I miss?) to see the existing data through the cache tier?
>>>>
>>>>
>>>> regards - and hoping for answers
>>>>
>>>> Udo

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




