On 02/27/2014 05:06 PM, Russell Beall wrote:
Thanks again for your comments on this.
I tried an alternate approach in which I deleted a number of
indexes that should theoretically be needed across the ACIs.
I was shocked to find that the processing time was totally
unaffected. Then I took it a step further and deleted all of
our custom indexes, and further still, deleted the uniquemember
and uid indexes. Again, performance was unaffected, and the
results of ACI filtering still appear to be accurate.
It seems this means that the ACI processing is not correctly
using the indexes, or I didn't create them properly.
It could mean:
1) the ACI performance is not related to indexing
2) we don't know which indexes it is using
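One quick way to see which indexes the server actually has
configured is to read the index entries under the backend
config. A sketch, assuming the default backend name "userRoot":

    ldapsearch -x -D "cn=Directory Manager" -W \
        -b "cn=index,cn=userRoot,cn=ldbm database,cn=plugins,cn=config" \
        "(objectclass=nsIndex)" cn nsIndexType nsSystemIndex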
I have run db2index.pl and it successfully reprocessed the
entire set of indexes. I also ran dbverify, which completed
successfully. I can tell that the indexes are being used in
simple searches, based on the fact that I get the
"Administrative Limit Exceeded" error when there is no index.
Does this information point to anything that I should look
into further?
If the ACI processing is doing internal searches that don't show up
in logconv.pl, then turning on access logging for internal operations
should record those unindexed internal searches, and they will then
show up in the logconv.pl output.
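Concretely, the server flags unindexed operations with notes=U in
the access log, so once internal-operation logging is enabled they
can be spotted either through logconv.pl's summary or directly (a
sketch; the log path depends on the instance name):

    # summary report, including unindexed search counts
    logconv.pl /var/log/dirsrv/slapd-<instance>/access
    # or grep for unindexed operations directly
    grep 'notes=U' /var/log/dirsrv/slapd-<instance>/access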
Thanks,
Russ.
On 02/27/2014 12:49 PM, Russell Beall wrote:
Hi Rich,
Thanks for the data. I've been continuing to
experiment and work on this, especially making sure
that everything that might be used in the ACIs is
indexed. All the indexes appear to be in order, but I
am confused by one thing… It looks like there is no
entryDN index, only an entryrdn index.
Correct.
This new format will be fully workable for
complicated dn lookups in the ACIs, correct? (We have
a lot of "groupdn=" and "userdn=" restrictions).
Correct. groupdn= and userdn= do not use the entrydn
index.
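For illustration, a typical groupdn-based ACI of the kind being
discussed (the group DN and suffix are hypothetical):

    aci: (targetattr = "*")(version 3.0; acl "svc reader access";
     allow (read, search, compare)
     groupdn = "ldap:///cn=svc-readers,ou=groups,dc=example,dc=edu";)

Evaluating such an ACI means checking the bind DN's membership in
that group, which is exactly the kind of internal lookup where
indexing (e.g. on uniquemember) would be expected to matter.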
There is no single ACI which degrades
performance, but I did notice that when adding
certain ACIs back in, performance degrades more
quickly than would be expected from the cost of
processing just one additional ACI. I believe there
is very likely a problem with the indexes, as you
suggested, but it is hiding well...
You could enable access logging of internal operations.
https://access.redhat.com/site/documentation/en-US/Red_Hat_Directory_Server/9.0/html/Configuration_Command_and_File_Reference/Core_Server_Configuration_Reference.html#cnconfig-nsslapd_accesslog_level
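A minimal sketch of enabling it: the value is a bitmask, so adding
4 (internal operations) to the default 256 (entry access) gives 260:

    ldapmodify -x -D "cn=Directory Manager" -W <<EOF
    dn: cn=config
    changetype: modify
    replace: nsslapd-accesslog-level
    nsslapd-accesslog-level: 260
    EOF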
An additional symptom which may point to a server
configuration problem is a strange inability to import
or reindex quickly. My small dev VM can import
several hundred entries per second, but this server
will only import or reindex at a rate of 18-30 records
per second. I've ensured that there is plenty of
import cache memory as well as cachememsize, which should
enable a very speedy import, but even though all 32
cores are burning bright, the import speed is
incredibly slow. (This is, of course, after all indexes
are created and it is trying to index while importing.
Import speed with no indexes is fairly fast.)
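For reference, the two cache settings in question live in the ldbm
plugin config. A sketch, with "userRoot" and the byte values as
placeholders:

    ldapmodify -x -D "cn=Directory Manager" -W <<EOF
    dn: cn=config,cn=ldbm database,cn=plugins,cn=config
    changetype: modify
    replace: nsslapd-import-cachesize
    nsslapd-import-cachesize: 2000000000

    dn: cn=userRoot,cn=ldbm database,cn=plugins,cn=config
    changetype: modify
    replace: nsslapd-cachememsize
    nsslapd-cachememsize: 4000000000
    EOF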
Any obvious clues I'm missing?
No, not sure what's going on.
Thanks,
Russ.
On 02/19/2014 04:56 PM, Russell Beall wrote:
Hi all,
We've just set up monster-sized server
nodes to run 389 as a replacement for Sun DS.
I've been running my tests and I am pleased
to report that the memory issue seems to be
in check, with growth of only up to double the
initial memory usage after large quantities
of ldapmodify calls. We have plenty of room
in these boxes to accommodate caching the
entire database.
The key blocker on this is still the ACL
processing times, for which I have been
unable to find a decent resolution. We have
135 ACIs at the root of the suffix. When I
comment out most of them but leave one
service account active, processing times are
very nicely fast. When I leave them all on,
that same service account takes 2.5 seconds
to respond when only one request is pending.
A new kink in the puzzle, which is
probably going to be a deal breaker, is that
if I run the same request on multiple
threads, each thread takes proportionately
longer to respond depending on the number of
threads. If I have 12 threads each doing a
simple lookup, each thread responds in 45-55
seconds. If I have 24 threads going, each
thread takes 1m45s - 1m55s to respond. The
box has 32 cores available. While
processing, each thread burns 100% of
an available CPU thread for the entire time.
Theoretically, with up to 32 requests
processing simultaneously, each thread
should return in 2.5 seconds just as if it
were one thread.
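A sketch of the kind of multithreaded test described (bind DN,
base, and filter are hypothetical):

    # fire off 12 concurrent simple lookups and time each one
    for i in $(seq 1 12); do
      ( time ldapsearch -x -D "uid=svc,ou=accounts,dc=example,dc=edu" -w "$PW" \
            -b "dc=example,dc=edu" "(uid=testuser)" dn >/dev/null ) &
    done
    wait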
Note that the directory server performance does
not scale linearly with the number of cores. At
some point you will run into thread contention.
Also things like replication compete for thread
resources.
Since all threads are burning 100% the
entire time, it doesn't seem like that would
be caused by simple thread locking where
some threads are waiting for others.
No, see below.
I'm thinking the system is not properly
configured in some way and there is a system
bottleneck blocking the processing. While
burning the CPU, very little of the time is
allocated to user CPU; most of the usage is
listed under system CPU. Is this normal, or is
this indicative of some system layer that is
bottlenecking the processing?
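One way to see the user/system split per thread of the server
process is pidstat from the sysstat package (a sketch):

    # -u: CPU usage, -t: per-thread stats, 1-second samples
    pidstat -u -t -p $(pidof ns-slapd) 1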
Sounds like the ACI may be doing some sort of
unindexed internal search.
Have you narrowed it down to a particular ACI
that is causing the problem?
Another question I posed earlier is
whether or not it is possible to replicate
three subtrees independently and keep
the aci entry at the root suffix independent,
so it can be set separately for multiple
downstream replicas. That way we could
possibly subdivide the service accounts
across different nodes. Is that possible?
No.
Thanks,
Russ.
==============================
Russell Beall
Systems Programmer IV
Enterprise Identity Management
University of Southern California
==============================
--
389 users mailing list
389-users@xxxxxxxxxxxxxxxxxxxxxxx
https://admin.fedoraproject.org/mailman/listinfo/389-users