Ceph: make fails (22nd-Sept unstable), and slow write issues

Hi all. (Sorry Sage, my previous message got rejected due to the HTML content.)

I have several questions on Ceph configuration and setup.

Using Ubuntu 10.04 LTS i386, mainly following the guide on the wiki.
The configuration is 4 similar PCs (dual-core, 2GB+ RAM), 1Gbps network, MTU=9000:
1xMDS
1xMonitor
2xOSD (2 HDDs total; osd0 and osd1 each have a 1TB 7200rpm Seagate SATA HDD - you know this model :p)

The mount point was also isolated on a 5th PC (so that machine ran
only the client).

I'm a beginner, so I'd really appreciate any help.

Question 1.
After the latest git update, i.e., 22nd-Sept (unstable branch),
'make' fails. Here are the last lines:

> /bin/bash ../libtool --tag=CXX   --mode=link g++ -Wall -D__CEPH__ -D_FILE_OFFSET_BITS=64 -D_REENTRANT -D_THREAD_SAFE -rdynamic -g -O2 -latomic_ops  -o cmon cmon.o SimpleMessenger.o libmon.a libcrush.a libcommon.a -lpthread -lm -lcrypto
> libtool: link: g++ -Wall -D__CEPH__ -D_FILE_OFFSET_BITS=64 -D_REENTRANT -D_THREAD_SAFE -rdynamic -g -O2 -o cmon cmon.o SimpleMessenger.o  -latomic_ops libmon.a libcrush.a libcommon.a -lpthread -lm -lcrypto
> g++ -DHAVE_CONFIG_H -I.    -Wall -D__CEPH__ -D_FILE_OFFSET_BITS=64 -D_REENTRANT -D_THREAD_SAFE -rdynamic "-fno-builtin-malloc -fno-builtin-calloc -fno-builtin-realloc -fno-builtin-free" -g -O2 -MT cosd-cosd.o -MD -MP -MF .deps/cosd-cosd.Tpo -c -o cosd-cosd.o `test -f 'cosd.cc' || echo './'`cosd.cc
> cosd.cc: In function ‘int main(int, const char**)’:
> cosd.cc:65: error: ‘IsHeapProfilerRunning’ was not declared in this scope
> cosd.cc:310: warning: ignoring return value of ‘int chdir(const char*)’, declared with attribute warn_unused_result
> make[2]: *** [cosd-cosd.o] Error 1
> make[2]: Leaving directory `/home/aaa/ceph/ceph/src'
> make[1]: *** [all] Error 2
> make[1]: Leaving directory `/home/aaa/ceph/ceph/src'
> make: *** [all-recursive] Error 1
> aaa@s3:~/ceph/ceph$
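
From the error, my guess is that this comes from the google-perftools
heap-profiler support in cosd.cc (maybe my libgoogle-perftools headers
are too old to declare IsHeapProfilerRunning?). Would rebuilding with
tcmalloc disabled be a sane workaround in the meantime? Something like
the following, assuming the configure script exposes a
--without-tcmalloc switch (the option name is my assumption):

    # hypothetical workaround: rebuild without tcmalloc/heap-profiler support
    cd ~/ceph/ceph
    make distclean
    ./autogen.sh
    ./configure --without-tcmalloc   # option name is an assumption
    make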


Question 2.
Performance issue: I'm mainly interested in speed, so all
authentication has been disabled.

A very simple /etc/ceph/ceph.conf:

>> [global]
>> pid file = /var/run/ceph/$name.pid
>> debug ms = 3
>> sudo = true
>> [osd]
>> [osd0]
>> host = s1
>> btrfs devs = /dev/sdb
>> osd data = /data/osd0
>> osd journal = /data/osd0/journal
>> osd journal size = 1000
>> [osd1]
>> host = s2
>> btrfs devs = /dev/sdb
>> osd data = /data/osd1
>> osd journal = /data/osd1/journal
>> osd journal size = 1000
>> [mds]
>> [mds0]
>> host = s3
>> [mon]
>> debug ms = 3
>> [mon0]
>> host = s4
>> mon data = /data/mon0
>> mon addr = 192.168.50.5:6789
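
One thing I wasn't sure about: both journals sit on the same disk as
the OSD data, which I guess makes every write seek between the journal
and the data. If that matters, would pointing the journal at a separate
device look roughly like this? (Just a sketch of what I'd try;
/dev/sdc is a hypothetical spare disk, not part of my setup above.)

    [osd0]
    host = s1
    btrfs devs = /dev/sdb
    osd data = /data/osd0
    ; journal on a separate spare disk; /dev/sdc is hypothetical
    osd journal = /dev/sdc
    osd journal size = 1000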

Ceph seems to work okay (mounted at /ceph on the 5th PC).
I ran fio benchmarks (4 tests: random read/write and sequential read/write, 4k block size).

I find that 'read' seems just okay: random read = 55 iops, sequential
read = 3000 iops (though I don't think that is very fast?).
But 'write' is very slow: both random and sequential writes get
about 7 to 10 iops. I really suspect something is wrong. :(
Are there any tweaks I must make to increase the iops? I'm not
really interested in large block sizes.
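
For reference, the 4k random-write run looked roughly like this (a
sketch; my actual job file differs slightly, so treat the parameters as
approximate):

    # 4k random writes against the Ceph mount; parameters are approximate
    fio --name=randwrite --directory=/ceph --rw=randwrite --bs=4k \
        --size=1g --ioengine=libaio --iodepth=16 \
        --runtime=60 --time_based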

Also, when I was copying a large file to it, the speed quickly
deteriorated: it first reported about 40MB/s, but later dropped to less
than 8MB/s or worse, and sometimes froze.
Is there further setup I need to do, such as
CRUSH/replication levels? (I haven't touched any of these.)
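
(The copy itself was essentially a streaming write; something like the
following reproduces it, with fdatasync so the final figure isn't just
the page cache. Sizes here are arbitrary.)

    # sustained streaming write to the Ceph mount; size is arbitrary
    dd if=/dev/zero of=/ceph/bigfile bs=1M count=4096 conv=fdatasync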

Question 3. (miscellaneous)
When I initialize Ceph, I have a script (similar to the fetch script) to
make sure all configuration is the same, including ceph.conf,
basically re-initializing everything across all the PCs.
Do you also do this when deploying (updating to the latest git):
re-make, re-install, and finally re-initialize and re-mount in order to
run the tests? Is there a faster way, e.g., for quickly testing
performance?
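
For what it's worth, my sync script is essentially a loop like this
(the host names and install paths are from my setup; adjust as needed):

    #!/bin/bash
    # push ceph.conf and the freshly built daemons to every node
    for h in s1 s2 s3 s4; do
        rsync -a /etc/ceph/ceph.conf ${h}:/etc/ceph/
        rsync -a /usr/local/bin/cmon /usr/local/bin/cmds /usr/local/bin/cosd \
              ${h}:/usr/local/bin/
    done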

Ultimately, I'm trying to benchmark and compare different disk
configurations, and possibly different file systems.


Thanks a lot in advance

