I made a ramfs partition and am trying to write to a pcap file, to avoid
the processing otherwise required for JSON, etc. I am getting a
performance benefit, but I can only reach 15K-16K pkts/sec, and at times
the following error appears:

<7> ulogd_inppkt_ULOG.c:253 ipulog_read = -1! ipulog_errno = 6 (Error
during netlink receive), errno = 105 (No buffer space available)

This leads to the loss of some packets in the log file. So, how can I
enhance performance? I want to test whether we can somehow handle
100,000 pkts/sec, but I have run out of options. I have 64 GB of
DDR3-1600 ECC RAM and a 16-core CPU @ 2.3 GHz. Please help. (Sketches
illustrating the suggestions quoted below follow after the thread.)

On Fri, Nov 21, 2014 at 10:21 PM, Neal Murphy <neal.p.murphy@xxxxxxxxxxxx> wrote:
> 10 000 PPS is, at worst, around 15 000 000 bytes/s. Even today's lamest
> SATA drives should be able to handle that data rate; USB thumb drives
> can be a different story.
>
> Turn off O_SYNC. Let Linux do its job; it handles disk I/O very well.
> Write to the file cache and let Linux handle flushing the data to disk.
> Be sure you have plenty of RAM to cache plenty of data. And if you
> expect to read the data simultaneously, you'll want even *more* RAM to
> give the system half a chance of keeping some of the data cached. If
> you're worried about power interruptions, use a UPS. If you're worried
> about FS corruption, use a journalled FS.
>
> N
>
>
> On Friday, November 21, 2014 07:43:09 AM Joel Gerber wrote:
>> What type of storage are you writing to? The slow-down might be disk
>> access.
>>
>> If you really want to log traffic in the 10,000 pkt/sec range, I would
>> start by writing to memory storage, and then have a process in the
>> background sync to disk. Hopefully the sync will be able to keep up
>> relatively well.
>>
>> You'll also want a writing system with very little overhead. Writing
>> directly to a file with no abstraction layer, using binary output,
>> might be the fastest thing possible, short of writing to a raw
>> partition.
>>
>> The other thing to consider is, even with all of the data written, how
>> will you read it? Having a file constantly open for writing precludes
>> being able to reliably read from it.
>>
>> Joel Gerber
>> Network Specialist
>> Network Operations
>> Eastlink
>> E: Joel.Gerber@xxxxxxxxxxxxxxxx T: 519.786.1241
>>
>>
>> -----Original Message-----
>> From: netfilter-owner@xxxxxxxxxxxxxxx
>> [mailto:netfilter-owner@xxxxxxxxxxxxxxx] On Behalf Of Akshat Kakkar
>> Sent: November-21-14 7:36 AM
>> To: netfilter@xxxxxxxxxxxxxxx
>> Subject: iptables logging using ulog : which can handle high traffic,
>> writing in db or json or xml?
>>
>> I want to log the traffic hitting my iptables rules. Traffic can go up
>> to 10,000 pkts/sec.
>>
>> While using ulog and writing to a MySQL DB, I could not get good
>> results; it could handle at most around 300 pkts/sec, and further
>> traffic was delayed in the DB.
>>
>> Then I used JSON; it was able to handle around 1,700 pkts/sec, but
>> beyond that packets are lost from the JSON file.
>>
>> Is there any other output mode that is faster than JSON without losing
>> any traffic?
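For what it's worth, errno 105 (ENOBUFS) on a netlink socket means the
kernel had to drop messages because the socket's receive buffer filled
up before ulogd could drain it; it does not mean the machine is out of
RAM. Two knobs usually help: a larger netlink receive buffer (raise the
net.core.rmem_max sysctl, and, if your ulogd build exposes them, the
ULOG input plugin's rmem=/bufsize= settings in ulogd.conf), and batching
more packets per netlink message with iptables' --ulog-qthreshold option
(up to 50). Below is a minimal C sketch of what enlarging the receive
buffer looks like at the socket level; it is roughly what libipulog does
internally, but the function name and the 8 MB size here are
illustrative, not ulogd's actual code.

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <linux/netlink.h>

#ifndef NETLINK_NFLOG
#define NETLINK_NFLOG 5             /* protocol used by the ULOG target */
#endif

/* Open a NETLINK_NFLOG socket bound to the given ULOG group mask and
 * try to enlarge its receive buffer to rcvbuf bytes. */
static int open_ulog_socket(unsigned int groups, int rcvbuf)
{
    int fd = socket(AF_NETLINK, SOCK_RAW, NETLINK_NFLOG);
    if (fd < 0) {
        perror("socket");
        return -1;
    }

    /* SO_RCVBUFFORCE ignores the net.core.rmem_max cap but needs
     * CAP_NET_ADMIN; fall back to SO_RCVBUF, which is clamped to it. */
    if (setsockopt(fd, SOL_SOCKET, SO_RCVBUFFORCE,
                   &rcvbuf, sizeof(rcvbuf)) < 0 &&
        setsockopt(fd, SOL_SOCKET, SO_RCVBUF,
                   &rcvbuf, sizeof(rcvbuf)) < 0)
        perror("setsockopt(SO_RCVBUF*)");

    struct sockaddr_nl local;
    memset(&local, 0, sizeof(local));
    local.nl_family = AF_NETLINK;
    local.nl_groups = groups;       /* bitmask: nlgroup 1 => 0x1 */

    if (bind(fd, (struct sockaddr *)&local, sizeof(local)) < 0) {
        perror("bind");
        close(fd);
        return -1;
    }
    return fd;
}

int main(void)
{
    int fd = open_ulog_socket(1, 8 * 1024 * 1024);  /* 8 MB, tune it */
    if (fd >= 0)
        puts("ULOG netlink socket ready with enlarged rcvbuf");
    return fd < 0;
}

Even a big buffer only rides out bursts, though: if the output side is
slower than the sustained packet rate, something still has to drop or
buffer the excess elsewhere.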
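On Neal's O_SYNC point: with O_SYNC every write() stalls until the data
is physically on disk, which caps throughput at the disk's write
latency; without it, write() just copies into the page cache and
returns. A minimal sketch, assuming a plain append-style binary log (the
file name, record size, and sync interval are made up for illustration):

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    /* No O_SYNC: write() returns once the data is in the page cache,
     * so a slow disk does not stall the capture path. */
    int fd = open("traffic.pcap", O_WRONLY | O_CREAT | O_APPEND, 0644);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    char record[256] = {0};         /* stand-in for one pcap record */

    for (long i = 1; i <= 1000000; i++) {
        if (write(fd, record, sizeof(record)) < 0) {
            perror("write");
            break;
        }
        /* Optionally bound the amount of dirty data, e.g. every ~64 MB.
         * Note fdatasync() blocks until the data is on disk, so a real
         * logger would call it from a side thread, not the hot path. */
        if (i % 262144 == 0)
            fdatasync(fd);
    }

    close(fd);
    return 0;
}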
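And a sketch of Joel's "write to memory, sync in the background" idea as
a single process: the capture path drops records into an in-memory ring
and returns immediately, while a separate thread drains the ring to disk
at whatever rate the disk sustains. This is hypothetical glue code, not
ulogd's design; the sizes, names, and the drop-when-full policy are all
illustrative (compile with -pthread):

#include <fcntl.h>
#include <pthread.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

#define SLOTS   4096                /* ring capacity: sized for bursts */
#define REC_MAX 2048                /* max bytes per packet record */

static struct { char data[REC_MAX]; size_t len; } ring[SLOTS];
static size_t head, tail;           /* next write / next read slot */
static pthread_mutex_t mu = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t nonempty = PTHREAD_COND_INITIALIZER;

/* Capture path: enqueue one record; drop it if the ring is full
 * rather than blocking the netlink reader. Returns 1 if queued. */
static int log_record(const void *rec, size_t len)
{
    int queued = 0;
    pthread_mutex_lock(&mu);
    if ((head + 1) % SLOTS != tail && len <= REC_MAX) {
        memcpy(ring[head].data, rec, len);
        ring[head].len = len;
        head = (head + 1) % SLOTS;
        queued = 1;
        pthread_cond_signal(&nonempty);
    }
    pthread_mutex_unlock(&mu);
    return queued;
}

/* Background thread: all disk I/O happens here, off the hot path. */
static void *flusher(void *arg)
{
    (void)arg;
    int fd = open("traffic.pcap", O_WRONLY | O_CREAT | O_APPEND, 0644);
    if (fd < 0)
        return NULL;
    char local[REC_MAX];
    for (;;) {
        pthread_mutex_lock(&mu);
        while (tail == head)
            pthread_cond_wait(&nonempty, &mu);
        size_t len = ring[tail].len;    /* copy out under the lock */
        memcpy(local, ring[tail].data, len);
        tail = (tail + 1) % SLOTS;
        pthread_mutex_unlock(&mu);
        write(fd, local, len);
    }
    return NULL;
}

int main(void)
{
    pthread_t t;
    pthread_create(&t, NULL, flusher, NULL);
    char rec[128] = {0};            /* dummy records for the demo */
    for (int i = 0; i < 100000; i++)
        log_record(rec, sizeof(rec));
    sleep(1);                       /* let the flusher drain */
    return 0;
}

Whether the flusher keeps up is then purely a disk-bandwidth question:
at 100,000 pkts/sec you are at ten times Neal's 10,000 PPS figure, i.e.
up to ~150 MB/s sustained for full-size packets, which starts to matter
for a single SATA disk.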
--
To unsubscribe from this list: send the line "unsubscribe netfilter" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html