Is it possible to use a ramdisk for the Ceph journal?

Dear list,

I have a test environment running Ceph Firefly 0.80.4 on Debian 7.5.
I do not have enough SSDs to give every OSD its own journal device, so I
want to test Ceph performance by putting the journal on a ramdisk or tmpfs.
However, when I try to add a new OSD with separate devices for the OSD data
and the journal, it fails.

First, I set up a ramdisk, formatted it as a filesystem, and made it
persistent: the init script below archives it to disk and restores it from
the last archive at boot. I have tested this, and the restore works.
>>>>>  ramdisk.sh >>>>
#! /bin/sh
### BEGIN INIT INFO
# Provides:             Ramdisk
# Required-Start:       $remote_fs $syslog
# Required-Stop:        $remote_fs $syslog
# Default-Start:        2 3 4 5
# Default-Stop:         0 1 6
# Short-Description:    Ramdisk
### END INIT INFO
# /etc/init.d/ramdisk.sh
#

case "$1" in
 start)
   echo "Copying files to ramdisk"
   cd /mnt
   mkfs.btrfs /dev/ram0 >> /var/log/ramdisk_sync.log
   mount /dev/ram0 /mnt/ramdisk/
   tar --lzop -xvf ramdisk-backup.tar.lzop >> /var/log/ramdisk_sync.log
   echo [`date +"%Y-%m-%d %H:%M"`] Ramdisk Synched from HD >> /var/log/ramdisk_sync.log
   ;;
 sync)
   echo "Synching files from ramdisk to Harddisk"
   echo [`date +"%Y-%m-%d %H:%M"`] Ramdisk Synched to HD >> /var/log/ramdisk_sync.log
   cd /mnt
   mv -f ramdisk-backup.tar.lzop ramdisk-backup-old.tar.lzop
   tar --lzop -cvf ramdisk-backup.tar.lzop ramdisk >> /var/log/ramdisk_sync.log
   ;;
 stop)
   echo "Synching logfiles from ramdisk to Harddisk"
   echo [`date +"%Y-%m-%d %H:%M"`] Ramdisk Synched to HD >> /var/log/ramdisk_sync.log
   cd /mnt
   tar --lzop -cvf ramdisk-backup.tar.lzop ramdisk >> /var/log/ramdisk_sync.log
   ;;
 *)
   echo "Usage: /etc/init.d/ramdisk {start|stop|sync}"
   exit 1
   ;;
esac

exit 0

#####
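For reference, this is roughly how I size /dev/ram0 and hook the script into
the boot sequence on Debian 7 (sysvinit). The 5 GiB size and the 10-minute
sync interval are just the values I picked for this test, not requirements:

# /etc/modprobe.d/brd.conf -- size the ram disks when the brd module loads
# (rd_size is in KiB; 5242880 KiB = 5 GiB, enough for a 5120 MB journal).
# If brd is built into the kernel, ramdisk_size=5242880 on the kernel
# command line does the same thing.
options brd rd_nr=1 rd_size=5242880

# register the init script so start/stop run at boot/shutdown
update-rc.d ramdisk.sh defaults

# crontab entry: archive the ramdisk back to disk every 10 minutes
*/10 * * * * /etc/init.d/ramdisk.sh sync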

Then I tried to add a new OSD that uses the ramdisk for its journal.

I have tried three ways, and all of them failed:
1. ceph-deploy osd --zap-disk --fs-type btrfs create ceph04-vm:/dev/sdb:/dev/ram0  (device for data and journal)
2. ceph-deploy osd prepare ceph04-vm:/mnt/osd:/mnt/ramdisk  (directory for data and journal)
3. ceph-deploy osd prepare ceph04-vm:/dev/sdb:/mnt/ramdisk  (device for data, directory for journal)

Could some expert give me some guidance on this?
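In case it matters, the fallback I am considering (untested) is to bypass
ceph-disk's journal partitioning entirely and point the journal at a plain
file on the ramdisk/tmpfs mount via ceph.conf. A rough sketch follows; osd.4,
the journal filename, and the mount path are just placeholders from my setup:

# /mnt/ramdisk could equally be a tmpfs mount instead of btrfs on /dev/ram0:
#   mount -t tmpfs -o size=6g tmpfs /mnt/ramdisk

# ceph.conf on ceph04-vm -- journal as a file rather than a block device
[osd.4]
    osd journal = /mnt/ramdisk/osd.4.journal
    osd journal size = 5120        ; in MB

# after the ramdisk is restored at boot, recreate the journal and start the OSD
ceph-osd -i 4 --mkjournal
service ceph start osd.4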

#### some log ####
root@ceph-admin:~/my-cluster# ceph-deploy osd --zap-disk --fs-type btrfs create ceph04-vm:/dev/sdb:/dev/ram0
[ceph_deploy.conf][DEBUG ] found configuration file at:
/root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.9): /usr/bin/ceph-deploy osd
--zap-disk --fs-type btrfs create ceph04-vm:/dev/sdb:/dev/ram0
[ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks
ceph04-vm:/dev/sdb:/dev/ram0
[ceph04-vm][DEBUG ] connected to host: ceph04-vm
[ceph04-vm][DEBUG ] detect platform information from remote host
[ceph04-vm][DEBUG ] detect machine type
[ceph_deploy.osd][INFO  ] Distro info: debian 7.6 wheezy
[ceph_deploy.osd][DEBUG ] Deploying osd to ceph04-vm
[ceph04-vm][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph04-vm][INFO  ] Running command: udevadm trigger
--subsystem-match=block --action=add
[ceph_deploy.osd][DEBUG ] Preparing host ceph04-vm disk /dev/sdb journal
/dev/ram0 activate True
[ceph04-vm][INFO  ] Running command: ceph-disk-prepare --zap-disk --fs-type
btrfs --cluster ceph -- /dev/sdb /dev/ram0
[ceph04-vm][DEBUG ]
****************************************************************************
[ceph04-vm][DEBUG ] Caution: Found protective or hybrid MBR and corrupt
GPT. Using GPT, but disk
[ceph04-vm][DEBUG ] verification and recovery are STRONGLY recommended.
[ceph04-vm][DEBUG ]
****************************************************************************
[ceph04-vm][DEBUG ] GPT data structures destroyed! You may now partition
the disk using fdisk or
[ceph04-vm][DEBUG ] other utilities.
[ceph04-vm][DEBUG ] The operation has completed successfully.
[ceph04-vm][DEBUG ] Creating new GPT entries.
[ceph04-vm][DEBUG ] Information: Moved requested sector from 34 to 2048 in
[ceph04-vm][DEBUG ] order to align on 2048-sector boundaries.
[ceph04-vm][WARNIN] Caution: invalid backup GPT header, but valid main
header; regenerating
[ceph04-vm][WARNIN] backup header from main header.
[ceph04-vm][WARNIN]
[ceph04-vm][WARNIN] WARNING:ceph-disk:OSD will not be hot-swappable if
journal is not the same device as the osd data
[ceph04-vm][WARNIN] Could not create partition 2 from 34 to 10485793
[ceph04-vm][WARNIN] Unable to set partition 2's name to 'ceph journal'!
[ceph04-vm][WARNIN] Could not change partition 2's type code to
45b0969e-9b03-4f30-b4c6-b4b80ceff106!
[ceph04-vm][WARNIN] Error encountered; not saving changes.
[ceph04-vm][WARNIN] ceph-disk: Error: Command '['/sbin/sgdisk',
'--new=2:0:+5120M', '--change-name=2:ceph journal',
'--partition-guid=2:ea326680-d389-460d-bef1-3c6bd0ab83c5',
'--typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106', '--mbrtogpt', '--',
'/dev/ram0']' returned non-zero exit status 4
[ceph04-vm][ERROR ] RuntimeError: command returned non-zero exit status: 1
[ceph_deploy.osd][ERROR ] Failed to execute command: ceph-disk-prepare
--zap-disk --fs-type btrfs --cluster ceph -- /dev/sdb /dev/ram0
[ceph_deploy][ERROR ] GenericError: Failed to create 1 OSDs

