On Wed 25 Apr 2012 22:39, ludo@gnu.... (Ludovic Courtès) writes:
>> So, those are the problems: benchmarks running for inappropriate,
>> inconsistent durations;
> I don’t really see such a problem. It doesn’t matter to me if
> ‘arithmetic.bm’ takes 2mn while ‘vlists.bm’ takes 40s, since I’m not
> comparing them.
Running a benchmark for 2 minutes is not harmful to the results, but it
is a bit needless. One second is enough.
However, running a benchmark for just a few milliseconds is not very
useful: at that scale, timer resolution and system noise dominate the
measurement.
>> My proposal is to rebase the iteration count in 0-reference.bm to run
>> for 0.5s on some modern machine, and adjust all benchmarks to match,
>> removing those benchmarks that do not measure anything useful.
> Sounds good. However, adjusting iteration counts of the benchmarks
> themselves should be done rarely, as it breaks performance tracking like [...]
I think we've established that this isn't the case -- modulo the effect
that such a change would have on GC (process image size, etc.).
>> Finally we should perhaps enable automatic scaling of the iteration
>> count. What do folks think about that?
>> On the positive side, all of our benchmarks are very clear that they are
>> a time per number of iterations, and so this change should not affect
>> users that measure time per iteration.
> If the reported time is divided by the global iteration count, then
> automatic scaling of the global iteration count would be good, yes.
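One common way to do such automatic scaling, sketched here in Python as
an assumption about how it might work rather than a description of the
Guile framework: double the iteration count until a run takes long
enough to measure reliably, then report the time per iteration.

```python
import time

# Minimum wall-clock duration for a measurement to be trusted;
# the 0.5 s value follows the proposal earlier in the thread.
MIN_SECONDS = 0.5

def time_per_iteration(thunk, count=1):
    """Run thunk repeatedly, auto-scaling count, and return seconds
    per iteration once the total run is at least MIN_SECONDS long."""
    while True:
        start = time.perf_counter()
        for _ in range(count):
            thunk()
        elapsed = time.perf_counter() - start
        if elapsed >= MIN_SECONDS:
            return elapsed / count
        # Too short to measure reliably; try again with more iterations.
        count *= 2
```

Because the reported figure is already divided by the final iteration
count, users who track time per iteration would see consistent numbers
regardless of how far the count was scaled.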