CephFS: sporadic damage to uploaded files

Hi,
I don't think that is the cause. I verified it as follows:

Source file:

toor@lw01p01-node01:~$ dd if=XXX.iso bs=1M count=1000 | md5sum
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 2.05409 s, 510 MB/s
da3e06b7a6d361aab4a7f63a8831ccd8  -

toor@lw01p01-node01:~$ dd if=XXX.iso bs=1M | md5sum
3099+1 records in
3099+1 records out
3249803264 bytes (3.2 GB) copied, 6.3012 s, 516 MB/s
5488d85797cd53d1d1562e73122522c1  -


and the destination file:

root@lw01p01-mgmt01:/export/secondary# dd if=XXX.iso bs=1M count=1000 | md5sum
1000+0 records in
1000+0 records out
da3e06b7a6d361aab4a7f63a8831ccd8  -
1048576000 bytes (1.0 GB) copied, 6.56459 s, 160 MB/s

root@lw01p01-mgmt01:/export/secondary# dd if=XXX.iso bs=1M | md5sum
3099+1 records in
3099+1 records out
3249803264 bytes (3.2 GB) copied, 21.1506 s, 154 MB/s
45b940c6cb76ed0e76c9fac4cba01c3c  -
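
Since the first 1000 MiB hash the same but the full files differ, the mismatch
must be somewhere in the tail. As a follow-up (just a sketch, assuming the
earlier copies are still on the CephFS mount), comparing two of the uploaded
copies directly would print the offset of the first differing byte; if that
offset lands on a multiple of the 4 MiB layout.object_size shown in the quoted
output below, that might hint at an object-level problem:

root@lw01p01-mgmt01:/export/secondary# cmp XXX.iso XXX.iso.1
root@lw01p01-mgmt01:/export/secondary# cmp XXX.iso.1 XXX.iso.2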

2014-08-27 12:24 GMT+03:00 Yan, Zheng <ukernel@gmail.com>:
> I suspect the client does not have permission to write to pool 3.
> Could you check whether the contents of XXX.iso.2 are all zeros?
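
(For reference, a minimal way to check both points, assuming the cephx user is
client.cloudstack and an admin keyring is available somewhere; a sketch, not
output from this cluster:

root@lw01p01-mgmt01:/export/secondary# cmp XXX.iso.2 /dev/zero
cmp stops at the first non-zero byte; "cmp: EOF on XXX.iso.2" would mean the
whole file really is zeros.

root@lw01p01-mgmt01:/export/secondary# ceph auth get client.cloudstack
The "caps osd" line shows which pools the key is allowed to write to; pool 3
is the data pool according to the layout shown below.)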
>
> Yan, Zheng
>
> On Wed, Aug 27, 2014 at 5:05 PM, Michael Kolomiets
> <michael.kolomiets@gmail.com> wrote:
>> Hi!
>> I use a Ceph pool mounted via CephFS as CloudStack secondary storage
>> and have a problem with the consistency of files stored on it.
>> I uploaded the same file three times and checked it, but each time I
>> got a different checksum (on the second try it happened to be the valid one).
>> Each upload gave a stable result (I checked the checksum twice each time),
>> but every new upload produced a different one.
>> Please help me find the point of failure.
>>
>> root@lw01p01-mgmt01:/export/secondary# uname -a
>> Linux lw01p01-mgmt01 3.14.1-031401-generic #201404141220 SMP Mon Apr
>> 14 16:21:48 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
>>
>>
>> root@lw01p01-mgmt01:/export/secondary# ceph status
>>     cluster e405d974-3fb6-42c8-b34a-a0ac5a1fef3a
>>      health HEALTH_OK
>>      monmap e1: 3 mons at
>> {lw01p01-node01=10.0.15.1:6789/0,lw01p01-node02=10.0.15.2:6789/0,lw01p01-node03=10.0.15.3:6789/0},
>> election epoch 52, quorum 0,1,2
>> lw01p01-node01,lw01p01-node02,lw01p01-node03
>>      mdsmap e17: 1/1/1 up {0=lw01p01-node01=up:active}
>>      osdmap e338: 20 osds: 20 up, 20 in
>>       pgmap v160161: 656 pgs, 6 pools, 30505 MB data, 8159 objects
>>             61377 MB used, 25512 GB / 25572 GB avail
>>                  656 active+clean
>>   client io 0 B/s rd, 1418 B/s wr, 1 op/s
>>
>>
>> root@lw01p01-mgmt01:/export/secondary# mount | grep ceph
>> 10.0.15.1:/ on /export/secondary type ceph
>> (name=cloudstack,key=client.cloudstack)
>>
>>
>> root@lw01p01-mgmt01:/export/secondary# cephfs /export/secondary show_layout
>> layout.data_pool:     3
>> layout.object_size:   4194304
>> layout.stripe_unit:   4194304
>> layout.stripe_count:  1
>>
>>
>> root@lw01p01-mgmt01:/export/secondary# wget
>> http://lw01p01-templates01.example.com/ISO/XXX.iso
>> --2014-08-27 10:12:39--  http://lw01p01-templates01.example.com/ISO/XXX.iso
>> Resolving lw01p01-templates01.example.com
>> (lw01p01-templates01.example.com)... 10.0.15.1
>> Connecting to lw01p01-templates01.example.com
>> (lw01p01-templates01.example.com)|10.0.15.1|:80... connected.
>> HTTP request sent, awaiting response... 200 OK
>> Length: 3249803264 (3.0G) [application/x-iso9660-image]
>> Saving to: 'XXX.iso'
>>
>> 100%[=====================================================>]
>> 3,249,803,264  179MB/s   in 18s
>>
>> 2014-08-27 10:12:57 (173 MB/s) - 'XXX.iso' saved [3249803264/3249803264]
>>
>> root@lw01p01-mgmt01:/export/secondary# md5sum XXX.iso
>> 45b940c6cb76ed0e76c9fac4cba01c3c  XXX.iso
>> root@lw01p01-mgmt01:/export/secondary# md5sum XXX.iso
>> 45b940c6cb76ed0e76c9fac4cba01c3c  XXX.iso
>>
>> root@lw01p01-mgmt01:/export/secondary# wget
>> http://lw01p01-templates01.example.com/ISO/XXX.iso
>> --2014-08-27 10:14:11--  http://lw01p01-templates01.example.com/ISO/XXX.iso
>> Resolving lw01p01-templates01.example.com
>> (lw01p01-templates01.example.com)... 10.0.15.1
>> Connecting to lw01p01-templates01.example.com
>> (lw01p01-templates01.example.com)|10.0.15.1|:80... connected.
>> HTTP request sent, awaiting response... 200 OK
>> Length: 3249803264 (3.0G) [application/x-iso9660-image]
>> Saving to: 'XXX.iso.1'
>>
>> 100%[=====================================================>]
>> 3,249,803,264  154MB/s   in 19s
>>
>> 2014-08-27 10:14:30 (161 MB/s) - 'XXX.iso.1' saved [3249803264/3249803264]
>>
>> root@lw01p01-mgmt01:/export/secondary# md5sum XXX.iso.1
>> 5488d85797cd53d1d1562e73122522c1  XXX.iso.1
>> root@lw01p01-mgmt01:/export/secondary# md5sum XXX.iso.1
>> 5488d85797cd53d1d1562e73122522c1  XXX.iso.1
>>
>> root@lw01p01-mgmt01:/export/secondary# wget
>> http://lw01p01-templates01.example.com/ISO/XXX.iso
>> --2014-08-27 10:15:23--  http://lw01p01-templates01.example.com/ISO/XXX.iso
>> Resolving lw01p01-templates01.example.com
>> (lw01p01-templates01.example.com)... 10.0.15.1
>> Connecting to lw01p01-templates01.example.com
>> (lw01p01-templates01.example.com)|10.0.15.1|:80... connected.
>> HTTP request sent, awaiting response... 200 OK
>> Length: 3249803264 (3.0G) [application/x-iso9660-image]
>> Saving to: 'XXX.iso.2'
>>
>> 100%[=====================================================>]
>> 3,249,803,264  160MB/s   in 20s
>>
>> 2014-08-27 10:15:44 (152 MB/s) - 'XXX.iso.2' saved [3249803264/3249803264]
>>
>> root@lw01p01-mgmt01:/export/secondary# md5sum XXX.iso.2
>> 5e28d425f828440b025d769609c5bb41  XXX.iso.2
>> root@lw01p01-mgmt01:/export/secondary# md5sum XXX.iso.2
>> 5e28d425f828440b025d769609c5bb41  XXX.iso.2
>>
>> --
>> Michael Kolomiets
>> _______________________________________________
>> ceph-users mailing list
>> ceph-users@lists.ceph.com
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



-- 
Michael Kolomiets

