Re: [RFC] makedumpfile, crash: LZO compression support


 



> > Hello Hatayama-san,
> > 
> > Thank you for your work.
> > 
> >> Performance Comparison:
> >> 
> >>   Sample Data
> >> 
> >>     Ideally, I should have measured the performance on enough
> >>     vmcores generated from machines under real workloads, but I
> >>     don't have enough sample vmcores at the moment, so I couldn't
> >>     do so. This comparison therefore doesn't answer the question
> >>     of I/O time improvement. This is a TODO for now.
> > 
> > I'll measure the performance for actual vmcores by makedumpfile.
> > Please wait for a while.

I measured the performance of makedumpfile for some vmcores.
Please see below.


Sample Data
  
  To simulate a working server, I captured vmcores while almost all pages
  were allocated and filled with random data. (See the attached file
  "fill_random.c".)

  I captured vmcores of 5GB, 7.5GB, and 10GB under the same conditions.

How to measure

  I measured the total execution time and the size of the output file:

  $ time makedumpfile --message-level 16 [-c|-l| ] vmcore dumpfile
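
  Here -c selects zlib compression, -l selects the new lzo support, and
  omitting both writes the dump uncompressed; --message-level 16 restricts
  output to report messages only, so the timing is not skewed by progress
  output. That is, one run per case:

  $ time makedumpfile --message-level 16 -c vmcore dumpfile    # zlib
  $ time makedumpfile --message-level 16 -l vmcore dumpfile    # lzo
  $ time makedumpfile --message-level 16 vmcore dumpfile       # no compression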

Result

    See the attached file "result.txt" (reproduced below).


This time, lzo compression was the quickest, and its compression ratio was
almost the same as zlib's (only slightly worse).
That looks good, so I will merge the patch set into makedumpfile.

What is your opinion, Dave?


Thanks.
KUMAGAI, Atsushi

> 
> That's very helpful. Thanks in advance.
> 
> But of course I'm also still looking for an alternative way.
> 
> Thanks.
> HATAYAMA, Daisuke
> 
                              lzo          zlib         no compression
     5GB
total time (sec)              238          464          252
total time ratio (%)          94.4         184.1        100
output size (KB)              4812695      4771490      5156005
output size ratio (%)         93.2         92.4         100


     7.5GB
total time (sec)              358          685          376
total time ratio (%)          95.2         182.1        100
output size (KB)              7193681      7130501      7782665
output size ratio (%)         92.4         91.6         100


     10GB
total time (sec)              493          929          527
total time ratio (%)          93.5         176.3        100
output size (KB)              9980424      9932498      10460825
output size ratio (%)         95.4         94.9         100

(All ratios are relative to the no-compression run = 100.)



[about sample data]

Because the sample data is random, its compression ratio varies from page
to page. The table below shows how many pages were actually stored
compressed versus uncompressed.
(Counted with the attached patch "compressed_page_report.patch".)

                              lzo          zlib
     5GB
compressed pages              120052       122238
uncompressed pages            1505548      1503362
compressed page ratio (%)     7.39         7.52


     7.5GB
compressed pages              206481       209758
uncompressed pages            2071919      2068642
compressed page ratio (%)     9.06         9.21


     10GB
compressed pages              160327       161229
uncompressed pages            2783673      2782771
compressed page ratio (%)     5.45         5.48

(ratio = compressed pages / total pages)
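
The low ratios are expected: a page of random bytes rarely shrinks, so most
pages are stored raw. For reference, here is a minimal sketch of the
per-page "compress, keep only if smaller" decision these counters reflect.
This is not makedumpfile's actual code; only the lzo library calls are real
API, the 4KB page size and counter names are assumptions for illustration.

#include <stdio.h>
#include <string.h>
#include <lzo/lzo1x.h>		/* lzo1x_1_compress(), lzo_init() */

#define PAGE_SIZE 4096		/* assumed page size for this sketch */

static unsigned long compressed_pages, uncompressed_pages;

/* Compress one page; fall back to the raw page when the result is not
 * smaller. Returns the number of bytes the caller should write out. */
static size_t pack_page(const unsigned char *page, unsigned char *out)
{
	/* lzo1x_1 can expand incompressible input, so the temporary
	 * buffer must use the documented worst-case size. */
	unsigned char buf[PAGE_SIZE + PAGE_SIZE / 16 + 64 + 3];
	unsigned char wrkmem[LZO1X_1_MEM_COMPRESS];
	lzo_uint len = sizeof(buf);

	if (lzo1x_1_compress(page, PAGE_SIZE, buf, &len, wrkmem) == LZO_E_OK
	    && len < PAGE_SIZE) {
		memcpy(out, buf, len);
		compressed_pages++;	/* counted as "compressed page" */
		return len;
	}
	memcpy(out, page, PAGE_SIZE);	/* counted as "uncompressed page" */
	uncompressed_pages++;
	return PAGE_SIZE;
}

int main(void)
{
	unsigned char page[PAGE_SIZE], out[PAGE_SIZE];

	if (lzo_init() != LZO_E_OK)	/* required once before any lzo call */
		return 1;

	memset(page, 'A', sizeof(page));	/* compressible test page */
	printf("wrote %zu bytes (compressed=%lu, raw=%lu)\n",
	       pack_page(page, out), compressed_pages, uncompressed_pages);
	return 0;
}

(Build with -llzo2.) The zlib path is analogous, using compress2() with the
same keep-only-if-smaller rule; random pages mostly fail the len < PAGE_SIZE
test, which is why the ratios above are so low.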

Attachment: compressed_page_report.patch

Attachment: fill_random.c

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <errno.h>
#include <time.h>	/* time(), for seeding rand() */

#define MEGABYTES	(1024*1024)

/* Touch every byte in one 1MB block so its pages stay resident. */
static void pagefault_megabyte(char *ptr)
{
	int j;

	for (j = 0; j < MEGABYTES; j++)
		ptr[j] = rand() % 256;	/* fill with random data */
}

int main(int argc, char *argv[])
{
	int i, mem_mega_bytes;
	/* static: 1M pointers would overflow a default 8MB stack */
	static char *ptr[1024*1024];

	if (argc != 2) {
		printf("usage: %s <memory size in MB>\n", argv[0]);
		return 1;
	}
	mem_mega_bytes = atoi(argv[1]);
	if (mem_mega_bytes < 1 || mem_mega_bytes > 1024*1024) {
		printf("Invalid param\n");
		return 1;
	}

	srand((unsigned) time(NULL));	/* seed once, not per block */

	for (i = 0; i < mem_mega_bytes; i++) {
		ptr[i] = malloc(MEGABYTES);
		if (ptr[i] == NULL) {
			printf("malloc error. (%s)\n", strerror(errno));
			exit(1);
		}
	}

	/* Loop forever, refilling the pages with new random data;
	 * the crash dump is triggered while this is running. */
	for (;;) {
		for (i = 0; i < mem_mega_bytes; i++)
			pagefault_megabyte(ptr[i]);
	}
}
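
For reference, a run for the 5GB case would look like the following (5120 MB
is an assumed example; the exact sizes used are not recorded in this mail):

  $ gcc -O2 -o fill_random fill_random.c
  $ ./fill_random 5120 &

The program never exits on its own, so the dump is captured while it runs.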
