[1/2] test-types-stability: parallelize test case alternatives
Commit Message
Commit a9f5fb408946 ("Add --no-write-default-sizes option.") introduced
a new test variant for test-types-stability that is actually independent
of the original test case in terms of execution. Hence it can be
expressed as a separate test case. So, do that by parametrizing the
test_task struct with a new no_default_sizes flag and schedule a
separate test case in the main loop.
That test now runs ~twice as fast, dropping from roughly 20s on my
machine to 10s. That effectively removes it from the critical path of
`make check`, which is now back to about 15s on my machine with my
configuration.
* tests/test-types-stability.cc (test_task): Add no_default_sizes field.
(test_task::perform): Switch on the new flag to test a different
behaviour.
(main): Schedule an additional test case to test with the new flag.
Cc: Mark Wielaard <mark@klomp.org>
Signed-off-by: Matthias Maennich <maennich@google.com>
---
tests/test-types-stability.cc | 29 +++++++++++++++--------------
1 file changed, 15 insertions(+), 14 deletions(-)
Comments
On Wed, Apr 29, 2020 at 05:28:12PM +0200, Matthias Maennich wrote:
> Commit a9f5fb408946 ("Add --no-write-default-sizes option.") introduced
> a new test variant for test-types-stability that is actually independent
> of the original test case in terms of execution. Hence it can be
> expressed as a separate test case. So, do that by parametrizing the
> test_task struct with a new no_default_sizes flag and schedule a
> separate test case in the main loop.
The patch looks correct to me.
> That test now runs ~twice as fast, dropping from roughly 20s on my
> machine to 10s. That effectively removes it from the critical path of
> `make check`, which is now back to about 15s on my machine with my
> configuration.
On my machine, `time make check TESTS=runtesttypesstability` goes from:
real 0m32.983s
user 0m37.328s
sys 0m0.477s
to (with this patch):
real 0m26.590s
user 0m57.014s
sys 0m0.581s
So it is definitely faster in real time, but it also seems to use more
cpu/user time/resources. Is that expected? Or maybe it is just that
measuring this with time is not very accurate?
Cheers,
Mark
Hi Mark!
On Wed, Apr 29, 2020 at 10:44:50PM +0200, Mark Wielaard wrote:
>On Wed, Apr 29, 2020 at 05:28:12PM +0200, Matthias Maennich wrote:
>> Commit a9f5fb408946 ("Add --no-write-default-sizes option.") introduced
>> a new test variant for test-types-stability that is actually independent
>> of the original test case in terms of execution. Hence it can be
>> expressed as a separate test case. So, do that by parametrizing the
>> test_task struct with a new no_default_sizes flag and schedule a
>> separate test case in the main loop.
>
>The patch looks correct to me.
Thanks for having a look :-)
>
>> That test now runs ~twice as fast, dropping from roughly 20s on my
>> machine to 10s. That effectively removes it from the critical path of
>> `make check`, which is now back to about 15s on my machine with my
>> configuration.
>
>On my machine, `time make check TESTS=runtesttypesstability` goes from:
>
>real 0m32.983s
>user 0m37.328s
>sys 0m0.477s
>
>to (with this patch):
>
>real 0m26.590s
>user 0m57.014s
>sys 0m0.581s
>
>So it is definitely faster in real time, but it also seems to use more
>cpu/user time/resources. Is that expected? Or maybe it is just that
>measuring this with time is not very accurate?
Your numbers certainly look a bit odd.
These are my measurements:
Without the patch:
tests/runtesttypesstability 22.66s user 0.57s system 112% cpu 20.643 total
tests/runtesttypesstability 22.59s user 0.46s system 111% cpu 20.582 total
tests/runtesttypesstability 23.44s user 0.47s system 111% cpu 21.371 total
With the patch:
tests/runtesttypesstability 23.31s user 0.58s system 218% cpu 10.948 total
tests/runtesttypesstability 23.22s user 0.51s system 222% cpu 10.674 total
tests/runtesttypesstability 23.15s user 0.67s system 218% cpu 10.916 total
I would say there is a minimal increase in user time (<1s), which is
expected as we are now running up to double the number of threads. Other
than that, the measurements look pretty consistent with the patch.
I also measured when limiting to a maximum of 8 workers:
without patch:
tests/runtesttypesstability 23.10s user 0.50s system 111% cpu 21.112 total
with patch:
tests/runtesttypesstability 22.74s user 0.53s system 224% cpu 10.351 total
I ran the test in isolation for these measurements. Maybe running it
under `make check` had some side effect on your results.
Cheers,
Matthias
>
>Cheers,
>
>Mark
Hello Matthias,
Matthias Maennich <maennich@google.com> wrote:
> Commit a9f5fb408946 ("Add --no-write-default-sizes option.") introduced
> a new test variant for test-types-stability that is actually independent
> of the original test case in terms of execution. Hence it can be
> expressed as a separate test case. So, do that by parametrizing the
> test_task struct with a new no_default_sizes flag and schedule a
> separate test case in the main loop.
>
> That test now runs ~twice as fast, dropping from roughly 20s on my
> machine to 10s. That effectively removes it from the critical path of
> `make check`, which is now back to about 15s on my machine with my
> configuration.
>
> * tests/test-types-stability.cc (test_task): Add no_default_sizes field.
> (test_task::perform): Switch on the new flag to test a different
> behaviour.
> (main): Schedule an additional test case to test with the new
> flag.
Applied to master, thanks!
Cheers,
@@ -70,7 +70,8 @@ const char* elf_paths[] =
/// passed to the constructor of the task.
struct test_task : public abigail::workers::task
{
- string path;
+ const string path;
+ const bool no_default_sizes;
string error_message;
bool is_ok;
@@ -78,8 +79,9 @@ struct test_task : public abigail::workers::task
///
/// @param elf_path the path to the elf binary on which we are
/// supposed to launch abidw --abidiff.
- test_task(const string& elf_path)
+ test_task(const string& elf_path, bool no_default_sizes)
: path(elf_path),
+ no_default_sizes(no_default_sizes),
is_ok(true)
{}
@@ -96,18 +98,14 @@ struct test_task : public abigail::workers::task
string abidw = string(get_build_dir()) + "/tools/abidw";
string elf_path = string(get_src_dir()) + "/tests/" + path;
- string cmd = abidw + " --abidiff " + elf_path;
+ string cmd = abidw + " --abidiff "
+ + (no_default_sizes ? "--no-write-default-sizes " : "")
+ + elf_path;
if (system(cmd.c_str()))
{
- error_message = "IR stability issue detected for binary " + elf_path;
- is_ok = false;
- }
-
- cmd = abidw + " --abidiff --no-write-default-sizes " + elf_path;
- if (system(cmd.c_str()))
- {
- error_message = "IR stability issue detected for binary " + elf_path
- + " with --no-write-default-sizes";
+ error_message =
+ "IR stability issue detected for binary " + elf_path
+ + (no_default_sizes ? " with --no-write-default-sizes" : "");
is_ok = false;
}
}
@@ -129,7 +127,7 @@ main()
/// Create a task queue. The max number of worker threads of the
/// queue is the number of the concurrent threads supported by the
/// processor of the machine this code runs on.
- const size_t num_tests = sizeof(elf_paths) / sizeof (char*) - 1;
+ const size_t num_tests = (sizeof(elf_paths) / sizeof(char*) - 1) * 2;
size_t num_workers = std::min(get_number_of_threads(), num_tests);
queue task_queue(num_workers);
@@ -138,7 +136,10 @@ main()
/// a worker thread that starts working on the task.
for (const char** p = elf_paths; p && *p; ++p)
{
- test_task_sptr t(new test_task(*p));
+ test_task_sptr t(new test_task(*p, false));
+ ABG_ASSERT(task_queue.schedule_task(t));
+
+ t.reset(new test_task(*p, true));
ABG_ASSERT(task_queue.schedule_task(t));
}