Is it possible to use a Ramdisk for the Ceph journal?

Thanks for your reply.
I have found and tested a way myself, and now I am sharing it with others.


>>>>>Begin>>>  On Debian >>>
root at ceph01-vm:~# modprobe brd rd_nr=1 rd_size=4194304 max_part=0
root at ceph01-vm:~# mkdir /mnt/ramdisk
root at ceph01-vm:~# mkfs.btrfs /dev/ram0

WARNING! - Btrfs Btrfs v0.19 IS EXPERIMENTAL
WARNING! - see http://btrfs.wiki.kernel.org before using

fs created label (null) on /dev/ram0
        nodesize 4096 leafsize 4096 sectorsize 4096 size 4.00GB
Btrfs Btrfs v0.19
root at ceph01-vm:~# mount /dev/ram0 /mnt/ramdisk/
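
To have the brd module come back with the same parameters after a reboot, its options can be made persistent. This is only a sketch for a standard Debian setup (adjust rd_size as needed):

echo "options brd rd_nr=1 rd_size=4194304 max_part=0" > /etc/modprobe.d/brd.conf
echo brd >> /etc/modules     # /etc/modules loads the module itself at boot
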
root at ceph01-vm:~# update-rc.d ramdisk defaults 10 99
update-rc.d: using dependency based boot sequencing
root at ceph01-vm:~# cd /etc/rc0.d/
root at ceph01-vm:/etc/rc0.d#  mv K01ramdisk K99ramdisk
root at ceph01-vm:/etc/rc0.d#  cd ../rc1.d/
root at ceph01-vm:/etc/rc1.d#  mv K01ramdisk K99ramdisk
root at ceph01-vm:/etc/rc1.d# cd ../rc6.d/
root at ceph01-vm:/etc/rc6.d# mv K01ramdisk K99ramdisk
root at ceph01-vm:/etc/rc6.d#  cd ../rc2.d/
root at ceph01-vm:/etc/rc2.d# mv S17ramdisk S08ramdisk
root at ceph01-vm:/etc/rc2.d# cd ../rc3.d/
root at ceph01-vm:/etc/rc3.d# mv S17ramdisk S08ramdisk
root at ceph01-vm:/etc/rc3.d#  cd ../rc4.d/
root at ceph01-vm:/etc/rc4.d#  mv S17ramdisk S08ramdisk
root at ceph01-vm:/etc/rc4.d#  cd ../rc5.d/
root at ceph01-vm:/etc/rc5.d#  mv S17ramdisk S08ramdisk
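
Renaming the rc?.d symlinks by hand works, but since update-rc.d reports "using dependency based boot sequencing", insserv may reorder them again later. An alternative sketch, assuming insserv honours the optional X-Start-Before/X-Stop-After headers and that the Ceph init script provides "ceph": add these two lines to the ### BEGIN INIT INFO block of ramdisk.sh (quoted further below) and re-run update-rc.d.

# X-Start-Before:       ceph
# X-Stop-After:         ceph
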
root at ceph01-vm:/etc/rc5.d# service ceph status
=== mon.ceph01-vm ===
mon.ceph01-vm: running {"version":"0.80.5"}
=== osd.2 ===
osd.2: running {"version":"0.80.5"}
=== mds.ceph01-vm ===
mds.ceph01-vm: running {"version":"0.80.5"}
root at ceph01-vm:/etc/rc5.d# service ceph stop osd.2
=== osd.2 ===
Stopping Ceph osd.2 on ceph01-vm...kill 10457...done
root at ceph01-vm:/etc/rc5.d# ceph-osd -i 2 --flush-journal
sh: 1: /sbin/hdparm: not found
2014-08-04 00:40:44.544251 7f5438b7a780 -1 journal _check_disk_write_cache:
pclose failed: (61) No data available
sh: 1: /sbin/hdparm: not found
2014-08-04 00:40:44.568660 7f5438b7a780 -1 journal _check_disk_write_cache:
pclose failed: (61) No data available
2014-08-04 00:40:44.570047 7f5438b7a780 -1 flushed journal
/var/lib/ceph/osd/ceph-2/journal for object store /var/lib/ceph/osd/ceph-2
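
The "sh: 1: /sbin/hdparm: not found" lines above are just the journal write-cache check failing to shell out to hdparm; they are harmless here, and installing the package should silence them (optional):

apt-get install hdparm
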
root at ceph01-vm:/etc/rc5.d# vi /etc/ceph/ceph.conf

Put this config into /etc/ceph/ceph.conf:

[osd]
journal dio = false
osd journal size = 3072
[osd.2]
host = ceph01-vm
osd journal = /mnt/ramdisk/journal
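
Note that "osd journal size" is given in MB and has to fit on the ramdisk: the first --mkjournal attempt below still tries to preallocate 5368709120 bytes (5 GiB), which cannot fit on the 4 GB ramdisk and fails with ENOSPC. A quick sanity check before recreating the journal (adjust the path if your mount point differs):

df -h /mnt/ramdisk     # free space must comfortably exceed the 3072 MB configured above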


root at ceph01-vm:/etc/rc5.d# ceph-osd -c /etc/ceph/ceph.conf -i 2 --mkjournal
2014-08-04 00:41:37.706925 7fa84b9dd780 -1 journal FileJournal::_open: aio
not supported without directio; disabling aio
2014-08-04 00:41:37.707975 7fa84b9dd780 -1 journal FileJournal::_open_file
: unable to preallocation journal to 5368709120 bytes: (28) No space left
on device
2014-08-04 00:41:37.708020 7fa84b9dd780 -1
filestore(/var/lib/ceph/osd/ceph-2) mkjournal error creating journal on
/mnt/ramdisk/journal: (28) No space left on device
2014-08-04 00:41:37.708050 7fa84b9dd780 -1  ** ERROR: error creating fresh
journal /mnt/ramdisk/journal for object store /var/lib/ceph/osd/ceph-2:
(28) No space left on device
root at ceph01-vm:/etc/rc5.d# ceph-osd -c /etc/ceph/ceph.conf -i 2 --mkjournal
2014-08-04 00:41:39.033908 7fd7e7627780 -1 journal FileJournal::_open: aio
not supported without directio; disabling aio
2014-08-04 00:41:39.034067 7fd7e7627780 -1 journal check: ondisk fsid
00000000-0000-0000-0000-000000000000 doesn't match expected
6b619888-6ce4-4028-b7b3-a3af2cf0c6c9, invalid (someone else's?) journal
2014-08-04 00:41:39.034252 7fd7e7627780 -1 created new journal
/mnt/ramdisk/journal for object store /var/lib/ceph/osd/ceph-2
root at ceph01-vm:/etc/rc5.d# service ceph start osd.2
=== osd.2 ===
create-or-move updated item name 'osd.2' weight 0.09 at location
{host=ceph01-vm,root=default} to crush map
Starting Ceph osd.2 on ceph01-vm...
starting osd.2 at :/0 osd_data /var/lib/ceph/osd/ceph-2 /mnt/ramdisk/journal
root at ceph01-vm:/etc/rc5.d# service ceph status
=== mon.ceph01-vm ===
mon.ceph01-vm: running {"version":"0.80.5"}
=== osd.2 ===
osd.2: running {"version":"0.80.5"}
=== mds.ceph01-vm ===
mds.ceph01-vm: running {"version":"0.80.5"}
=== osd.2 ===
osd.2: running {"version":"0.80.5"}

<<<<<<End<<<<
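
To double-check that the restarted OSD really journals to the ramdisk, something like the following should work (the admin socket path assumes the default /var/run/ceph location and osd id 2):

ls -lh /mnt/ramdisk/journal
ceph --admin-daemon /var/run/ceph/ceph-osd.2.asok config show | grep journal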


2014-08-06 7:14 GMT+07:00 Craig Lewis <clewis at centraldesktop.com>:

> Try this (adjust the size param as needed):
> mount -t tmpfs -o size=256m tmpfs /mnt/ramdisk
> ceph-deploy osd  prepare ceph04-vm:/dev/sdb:/mnt/ramdisk/journal.osd0
>
>
>
> On Sun, Aug 3, 2014 at 7:13 PM, debian Only <onlydebian at gmail.com> wrote:
>
>> anyone can help?
>>
>>
>> 2014-07-31 23:55 GMT+07:00 debian Only <onlydebian at gmail.com>:
>>
>>> Dear all,
>>>
>>> I have a test environment running Ceph Firefly 0.80.4 on Debian 7.5.
>>> I do not have enough SSDs to give one to each OSD.
>>> I want to test Ceph performance with the journal in a ramdisk or tmpfs,
>>> but adding a new OSD with a separate disk for OSD data and a ramdisk for
>>> the journal fails.
>>>
>>> First, I mounted a RAM disk as a filesystem and made it persistent; I have
>>> tested that it recovers its data from the last archive when the system
>>> boots.
>>> >>>>>  ramdisk.sh >>>>
>>> #! /bin/sh
>>> ### BEGIN INIT INFO
>>> # Provides:             Ramdisk
>>> # Required-Start:       $remote_fs $syslog
>>> # Required-Stop:        $remote_fs $syslog
>>> # Default-Start:        2 3 4 5
>>> # Default-Stop:         0 1 6
>>> # Short-Description:    Ramdisk
>>> ### END INIT INFO
>>> # /etc/init.d/ramdisk.sh
>>> #
>>>
>>> case "$1" in
>>>  start)
>>>    echo "Copying files to ramdisk"
>>>    cd /mnt
>>>    mkfs.btrfs /dev/ram0 >> /var/log/ramdisk_sync.log
>>>    mount /dev/ram0 /mnt/ramdisk/
>>>    tar --lzop -xvf ramdisk-backup.tar.lzop >> /var/log/ramdisk_sync.log
>>>    echo [`date +"%Y-%m-%d %H:%M"`] Ramdisk Synched from HD >> /var/log/ramdisk_sync.log
>>>    ;;
>>>  sync)
>>>    echo "Synching files from ramdisk to Harddisk"
>>>    echo [`date +"%Y-%m-%d %H:%M"`] Ramdisk Synched to HD >> /var/log/ramdisk_sync.log
>>>    cd /mnt
>>>    mv -f ramdisk-backup.tar.lzop ramdisk-backup-old.tar.lzop
>>>    tar --lzop -cvf ramdisk-backup.tar.lzop ramdisk >> /var/log/ramdisk_sync.log
>>>    ;;
>>>  stop)
>>>    echo "Synching logfiles from ramdisk to Harddisk"
>>>    echo [`date +"%Y-%m-%d %H:%M"`] Ramdisk Synched to HD >> /var/log/ramdisk_sync.log
>>>    tar --lzop -cvf ramdisk-backup.tar.lzop ramdisk >> /var/log/ramdisk_sync.log
>>>    ;;
>>>  *)
>>>    echo "Usage: /etc/init.d/ramdisk {start|stop|sync}"
>>>    exit 1
>>>    ;;
>>> esac
>>>
>>> exit 0
>>>
>>> #####
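
A side note on the script above: the "sync" action only runs when called by hand, so the on-disk archive is otherwise refreshed only at shutdown. If a fresher archive is wanted, a cron entry along these lines could help (the file name is just an example):

# /etc/cron.d/ramdisk-sync -- archive the ramdisk every 30 minutes
*/30 * * * * root /etc/init.d/ramdisk sync
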
>>>
>>> Then I wanted to add a new OSD that uses the ramdisk for its journal.
>>>
>>> I have tried 3 ways; all failed.
>>> 1. ceph-deploy osd --zap-disk --fs-type btrfs create ceph04-vm:/dev/sdb:/dev/ram0  (device way)
>>> 2. ceph-deploy osd prepare ceph04-vm:/mnt/osd:/mnt/ramdisk  (directory way)
>>> 3. ceph-deploy osd prepare ceph04-vm:/dev/sdb:/mnt/ramdisk
>>>
>>> Could some expert give me some guidance on this?
>>>
>>> #### some log#####
>>> root at ceph-admin:~/my-cluster# ceph-deploy osd --zap-disk --fs-type
>>> btrfs create ceph04-vm:/dev/sdb:/dev/ram0
>>> [ceph_deploy.conf][DEBUG ] found configuration file at:
>>> /root/.cephdeploy.conf
>>> [ceph_deploy.cli][INFO  ] Invoked (1.5.9): /usr/bin/ceph-deploy osd
>>> --zap-disk --fs-type btrfs create ceph04-vm:/dev/sdb:/dev/ram0
>>> [ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks
>>> ceph04-vm:/dev/sdb:/dev/ram0
>>> [ceph04-vm][DEBUG ] connected to host: ceph04-vm
>>> [ceph04-vm][DEBUG ] detect platform information from remote host
>>> [ceph04-vm][DEBUG ] detect machine type
>>> [ceph_deploy.osd][INFO  ] Distro info: debian 7.6 wheezy
>>> [ceph_deploy.osd][DEBUG ] Deploying osd to ceph04-vm
>>> [ceph04-vm][DEBUG ] write cluster configuration to
>>> /etc/ceph/{cluster}.conf
>>> [ceph04-vm][INFO  ] Running command: udevadm trigger
>>> --subsystem-match=block --action=add
>>> [ceph_deploy.osd][DEBUG ] Preparing host ceph04-vm disk /dev/sdb journal
>>> /dev/ram0 activate True
>>> [ceph04-vm][INFO  ] Running command: ceph-disk-prepare --zap-disk
>>> --fs-type btrfs --cluster ceph -- /dev/sdb /dev/ram0
>>> [ceph04-vm][DEBUG ]
>>> ****************************************************************************
>>> [ceph04-vm][DEBUG ] Caution: Found protective or hybrid MBR and corrupt
>>> GPT. Using GPT, but disk
>>> [ceph04-vm][DEBUG ] verification and recovery are STRONGLY recommended.
>>> [ceph04-vm][DEBUG ]
>>> ****************************************************************************
>>> [ceph04-vm][DEBUG ] GPT data structures destroyed! You may now partition
>>> the disk using fdisk or
>>> [ceph04-vm][DEBUG ] other utilities.
>>> [ceph04-vm][DEBUG ] The operation has completed successfully.
>>> [ceph04-vm][DEBUG ] Creating new GPT entries.
>>> [ceph04-vm][DEBUG ] Information: Moved requested sector from 34 to 2048
>>> in
>>> [ceph04-vm][DEBUG ] order to align on 2048-sector boundaries.
>>> [ceph04-vm][WARNIN] Caution: invalid backup GPT header, but valid main
>>> header; regenerating
>>> [ceph04-vm][WARNIN] backup header from main header.
>>> [ceph04-vm][WARNIN]
>>> [ceph04-vm][WARNIN] WARNING:ceph-disk:OSD will not be hot-swappable if
>>> journal is not the same device as the osd data
>>> [ceph04-vm][WARNIN] Could not create partition 2 from 34 to 10485793
>>> [ceph04-vm][WARNIN] Unable to set partition 2's name to 'ceph journal'!
>>> [ceph04-vm][WARNIN] Could not change partition 2's type code to
>>> 45b0969e-9b03-4f30-b4c6-b4b80ceff106!
>>> [ceph04-vm][WARNIN] Error encountered; not saving changes.
>>> [ceph04-vm][WARNIN] ceph-disk: Error: Command '['/sbin/sgdisk',
>>> '--new=2:0:+5120M', '--change-name=2:ceph journal',
>>> '--partition-guid=2:ea326680-d389-460d-bef1-3c6bd0ab83c5',
>>> '--typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106', '--mbrtogpt', '--',
>>> '/dev/ram0']' returned non-zero exit status 4
>>> [ceph04-vm][ERROR ] RuntimeError: command returned non-zero exit status:
>>> 1
>>> [ceph_deploy.osd][ERROR ] Failed to execute command: ceph-disk-prepare
>>> --zap-disk --fs-type btrfs --cluster ceph -- /dev/sdb /dev/ram0
>>> [ceph_deploy][ERROR ] GenericError: Failed to create 1 OSDs
>>>
>>>
>>