On 02/11/12 16:21, Alexander Motin wrote:
> I've heavily rewritten the patch already. So at least some of the ideas
> are already addressed. :) At this moment I am mostly satisfied with
> results and after final tests today I'll probably publish new version.
The patch is more complicated than the previous one, both logically and
computationally, but with growing CPU power and system complexity I
think we can afford to spend a bit more time deciding how to spend
time. :)
The patch formalizes several ideas from the previous code about how to
select a CPU for running a thread, and adds some new ones. Its main idea
is that I've moved from comparing raw integer queue lengths to
higher-resolution, more flexible values. The additional 8 bits of
precision make it possible to take many factors affecting performance
into account at the same time. Besides just choosing the best among
equally loaded CPUs, with the new code it may even happen that, because
of SMT, cache affinity, etc., a CPU with more threads on its queue is
reported as less loaded, and vice versa.
The new code takes the following factors into account:
- SMT sharing penalty.
- Cache sharing penalty.
- Cache affinity (with separate coefficients for last-level and
other-level caches) to:
  - other running threads of the thread's process,
  - the previous CPU where the thread was running,
  - the current CPU (usually the one it was called from).
All of these factors are configurable via sysctls, but I think the
reasonable defaults should fit most cases.
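
To illustrate the idea, here is a rough sketch of such a per-CPU score.
All names and default values below are made up for illustration; this is
not the actual patch code:

/*
 * Illustrative sketch only: instead of comparing raw run-queue lengths,
 * compute a fixed-point load score with 8 fractional bits, adding
 * penalties for shared resources and subtracting bonuses for cache
 * affinity.  Names and values are assumptions, not the patch's.
 */
#define	LOAD_SHIFT	8		/* 8 bits of fractional precision */

/* Hypothetical sysctl-backed tunables, in 1/256ths of one thread. */
static int smt_penalty = 128;		/* busy SMT sibling: +0.5 */
static int cache_penalty = 64;		/* busy cache sharer: +0.25 */
static int lcache_bonus = 128;		/* last-level cache affinity: -0.5 */
static int ocache_bonus = 64;		/* other-level cache affinity: -0.25 */

static int
cpu_load_score(int nthreads, int smt_busy, int cache_busy,
    int lcache_affine, int ocache_affine)
{
	int score;

	score = nthreads << LOAD_SHIFT;	/* raw queue length, fixed point */
	if (smt_busy)
		score += smt_penalty;
	if (cache_busy)
		score += cache_penalty;
	if (lcache_affine)
		score -= lcache_bonus;
	if (ocache_affine)
		score -= ocache_bonus;
	return (score);
}

With these illustrative values, a CPU running two fully cache-affine
threads scores 2*256 - 128 - 64 = 320, while a CPU running one thread
next to a busy SMT sibling scores 256 + 128 = 384, so the CPU with more
queued threads comes out less loaded, which is exactly the effect
described above.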
Also, compared to the previous patch, I've resurrected the optimized
shortcut in CPU selection for the SMT case. Unlike the original code,
which had problems with this, I've added a check of the other logical
cores' load that should make it safe and still very fast when there are
fewer running threads than physical cores.
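
Roughly, the check looks like this (again an illustrative sketch; the
queue_len() callback and the sibling list are placeholders, not the
patch's actual functions):

/*
 * Illustrative sketch of the SMT shortcut: reuse the previous CPU
 * without the full weighted search, but only if its whole physical
 * core is idle.
 */
#define	NO_SHORTCUT	(-1)

static int
smt_shortcut(int prevcpu, const int *siblings, int nsiblings,
    int (*queue_len)(int))
{
	int i;

	if (queue_len(prevcpu) != 0)
		return (NO_SHORTCUT);	/* previous CPU itself is busy */
	for (i = 0; i < nsiblings; i++) {
		/*
		 * A loaded logical sibling shares execution units with
		 * prevcpu, so the seemingly free CPU would not really
		 * be free: fall back to the full weighted search.
		 */
		if (queue_len(siblings[i]) != 0)
			return (NO_SHORTCUT);
	}
	return (prevcpu);		/* safe and cheap: reuse prevcpu */
}

The extra loop over the siblings is what makes the shortcut safe: it
only fires when the whole physical core is idle.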
I've tested it on Core i7 and Atom systems, but it would be more
interesting to test it on a multi-socket system with properly detected
topology, to check the benefits from affinity.
At this moment the main issue I see is that this patch affects only the
moment when a thread starts. If a thread runs continuously, it will stay
where it was, even if, due to a change in the situation, that placement
is no longer very effective (it causes SMT sharing, etc.). I haven't
looked much at the periodic load balancer yet, but it could probably be
improved in a similar way.
What is your opinion: is it over-engineered, or is it the right way to go?