[DejaGNU/GCC,0/1] Support per-test execution timeout factor

Message ID: alpine.DEB.2.20.2312111744390.5892@tpp.orcam.me.uk
Series: Support per-test execution timeout factor

Message

Maciej W. Rozycki Dec. 12, 2023, 2:04 p.m. UTC
  Hi,

 This patch quasi-series makes it possible for individual test cases 
identified as being slow to request more time via the GCC test harness, by 
providing a test execution timeout factor that is applied to the test 
execution timeout set globally for all the test cases.  This is to avoid 
excessive testsuite run times where other test cases do hang, as would be 
the case if the globally set timeout were simply increased.

 The test execution timeout is different from the tool execution timeout, 
where it is GCC execution that is being guarded against taking an 
excessive amount of time on the test host, rather than the resulting test 
case executable run on the target afterwards, which is the concern here.  
GCC already has a `dg-timeout-factor' setting for the tool execution 
timeout, but it has no means to increase the test execution timeout.  The 
GCC side of these changes adds a corresponding `dg-test-timeout-factor' 
setting.
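 For illustration, with both sides of the changes in place a slow test 
case would request more time for its execution phase with a directive 
such as (the factor here is arbitrary):

/* { dg-test-timeout-factor 4 } */

in the same way `dg-timeout-factor' is already used for the tool 
execution timeout.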

 As the two changes are independent of each other, they can be applied in 
any order, with the feature becoming active once both are present in a 
given system.  I chose to submit them together so as to give an 
opportunity to both DejaGNU and GCC developers to chime in.

 The DejaGNU side of this patch quasi-series relies on this patch series: 
<https://lists.gnu.org/archive/html/dejagnu/2023-12/msg00003.html> being 
applied first; however, I chose to post the two parts separately so as not 
to clutter the GCC mailing list with changes solely for DejaGNU.

 This has been verified with the GCC testsuite in a couple of environments 
using the Unix protocol, both locally and remotely, the GDB stub protocol, 
and the sim protocol, making sure that timeout settings are respected.  I 
found no obvious way to verify the remaining parts, but the changes follow 
the same pattern, so they're expected to behave consistently.

 Let me know if you have any questions, comments or concerns.  Otherwise 
please apply/approve respectively the DejaGNU/GCC side.

  Maciej
  

Comments

Hans-Peter Nilsson Jan. 3, 2024, 5:15 a.m. UTC | #1
On Tue, 12 Dec 2023, Maciej W. Rozycki wrote:

> Hi,
> 
>  This patch quasi-series makes it possible for individual test cases 
> identified as being slow to request more time via the GCC test harness by 
> providing a test execution timeout factor, applied to the tool execution 
> timeout set globally for all the test cases.  This is to avoid excessive 
> testsuite run times where other test cases do hang as it would be the 
> case if the timeout set globally was to be increased.
> 
>  The test execution timeout is different from the tool execution timeout 
> where it is GCC execution that is being guarded against taking excessive 
> amount of time on the test host rather than the resulting test case 
> executable run on the target afterwards, as concerned here.  GCC already 
> has a `dg-timeout-factor' setting for the tool execution timeout, but has 
> no means to increase the test execution timeout.  The GCC side of these 
> changes adds a corresponding `dg-test-timeout-factor' setting.

Hmm.  I think it would be more correct to emphasize that the 
existing dg-timeout-factor affects both the tool execution *and* 
the test execution, whereas your new dg-test-timeout-factor only 
affects the test execution.  (And still measured on the host.)

Usually the compilation time is close to 0, so is this based on 
an actual need more than an itchy "wart"?

Or did I miss something?

brgds, H-P
  
Maciej W. Rozycki Jan. 3, 2024, 4:38 p.m. UTC | #2
On Wed, 3 Jan 2024, Hans-Peter Nilsson wrote:

> >  The test execution timeout is different from the tool execution timeout 
> > where it is GCC execution that is being guarded against taking excessive 
> > amount of time on the test host rather than the resulting test case 
> > executable run on the target afterwards, as concerned here.  GCC already 
> > has a `dg-timeout-factor' setting for the tool execution timeout, but has 
> > no means to increase the test execution timeout.  The GCC side of these 
> > changes adds a corresponding `dg-test-timeout-factor' setting.
> 
> Hmm.  I think it would be more correct to emphasize that the 
> existing dg-timeout-factor affects both the tool execution *and* 
> the test execution, whereas your new dg-test-timeout-factor only 
> affects the test execution.  (And still measured on the host.)

 Not really, `dg-timeout-factor' is only applied to tool execution and it 
doesn't affect test execution.  Timeout value reporting used to be limited 
in DejaGNU, but you can now enable it easily by applying the DejaGNU patch 
series referred to in the cover letter and then see for yourself that 
`dg-timeout-factor' is ignored for test execution.

> Usually the compilation time is close to 0, so is this based on 
> an actual need more than an itchy "wart"?
> 
> Or did I miss something?

 Compilation is usually quite fast, but this is not always the case.  If 
you look at the tests that do use `dg-timeout-factor' in GCC, and some 
commits that added the setting, then you ought to find actual use cases.  
I saw at least one such test that takes an awful lot of time here on a 
reasonably fast host machine and still passes when GCC has been built 
with optimisation enabled, but does time out in the compilation phase if 
the compiler has been built at -O0 for debugging purposes.  I'd have to 
chase it, though, if you couldn't find it, as I haven't written the name 
down.

 So yes, `dg-timeout-factor' does have its use, but it is different from 
that of `dg-test-timeout-factor', hence the need for a separate setting.

  Maciej
  
Richard Sandiford Jan. 3, 2024, 11 p.m. UTC | #3
"Maciej W. Rozycki" <macro@embecosm.com> writes:
> On Wed, 3 Jan 2024, Hans-Peter Nilsson wrote:
>
>> >  The test execution timeout is different from the tool execution timeout 
>> > where it is GCC execution that is being guarded against taking excessive 
>> > amount of time on the test host rather than the resulting test case 
>> > executable run on the target afterwards, as concerned here.  GCC already 
>> > has a `dg-timeout-factor' setting for the tool execution timeout, but has 
>> > no means to increase the test execution timeout.  The GCC side of these 
>> > changes adds a corresponding `dg-test-timeout-factor' setting.
>> 
>> Hmm.  I think it would be more correct to emphasize that the 
>> existing dg-timeout-factor affects both the tool execution *and* 
>> the test execution, whereas your new dg-test-timeout-factor only 
>> affects the test execution.  (And still measured on the host.)
>
>  Not really, `dg-timeout-factor' is only applied to tool execution and it 
> doesn't affect test execution.  Timeout value reporting used to be limited 
> in DejaGNU, but you can enable it easily now by adding the DejaGNU patch 
> series referred in the cover letter and see that `dg-timeout-factor' is 
> ignored for test execution.
>
>> Usually the compilation time is close to 0, so is this based on 
>> an actual need more than an itchy "wart"?
>> 
>> Or did I miss something?
>
>  Compilation is usually quite fast, but this is not always the case.  If 
> you look at the tests that do use `dg-timeout-factor' in GCC, and some 
> commits that added the setting, then you ought to find actual use cases.  
> I saw at least one such a test that takes an awful lot of time here on a 
> reasonably fast host machine and still passes where GCC has been built 
> with optimisation enabled, but does time out in the compilation phase if 
> the compiler has been built at -O0 for debugging purposes.  I'd have to 
> chase it though if you couldn't find it as I haven't written the name 
> down.

Sounds like it could be the infamous gcc.c-torture/compile/20001226-1.c :)

Richard

>  So yes, `dg-timeout-factor' does have its use, but it is different from 
> that of `dg-test-timeout-factor', hence the need for a separate setting.
>
>   Maciej
  
Jacob Bachmeyer Jan. 4, 2024, 3:18 a.m. UTC | #4
Maciej W. Rozycki wrote:
> On Wed, 3 Jan 2024, Hans-Peter Nilsson wrote:
>
>   
>>>  The test execution timeout is different from the tool execution timeout 
>>> where it is GCC execution that is being guarded against taking excessive 
>>> amount of time on the test host rather than the resulting test case 
>>> executable run on the target afterwards, as concerned here.  GCC already 
>>> has a `dg-timeout-factor' setting for the tool execution timeout, but has 
>>> no means to increase the test execution timeout.  The GCC side of these 
>>> changes adds a corresponding `dg-test-timeout-factor' setting.
>>>       
>> Hmm.  I think it would be more correct to emphasize that the 
>> existing dg-timeout-factor affects both the tool execution *and* 
>> the test execution, whereas your new dg-test-timeout-factor only 
>> affects the test execution.  (And still measured on the host.)
>>     
>
>  Not really, `dg-timeout-factor' is only applied to tool execution and it 
> doesn't affect test execution.  Timeout value reporting used to be limited 
> in DejaGNU, but you can enable it easily now by adding the DejaGNU patch 
> series referred in the cover letter and see that `dg-timeout-factor' is 
> ignored for test execution.
>   

Then we need a better name for this new feature that more clearly 
indicates that it applies to running executables compiled as part of a 
test.  Also, 'test_timeout' is documented as a knob for site 
configuration to twiddle, not for testsuites to adjust.  I support 
adding scale factors for testsuites to indicate "this test takes longer 
than usual" but these will need to be thought through.  This quick hack 
will cause future maintenance problems.
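For comparison, such a knob is meant to be set in the site configuration, 
e.g. (a hedged illustration only; the value is arbitrary):

set test_timeout 600

in site.exp or a board description file, not adjusted from within 
individual test scripts.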

>> Usually the compilation time is close to 0, so is this based on 
>> an actual need more than an itchy "wart"?
>>
>> Or did I miss something?
>>     
>
>  Compilation is usually quite fast, but this is not always the case.  If 
> you look at the tests that do use `dg-timeout-factor' in GCC, and some 
> commits that added the setting, then you ought to find actual use cases.  
> I saw at least one such a test that takes an awful lot of time here on a 
> reasonably fast host machine and still passes where GCC has been built 
> with optimisation enabled, but does time out in the compilation phase if 
> the compiler has been built at -O0 for debugging purposes.  I'd have to 
> chase it though if you couldn't find it as I haven't written the name 
> down.
>
>  So yes, `dg-timeout-factor' does have its use, but it is different from 
> that of `dg-test-timeout-factor', hence the need for a separate setting.

This name has already caused confusion and the patch has not even been 
accepted yet.  The feature is desirable but this implementation is not 
acceptable.

At the moment, there are two blocking issues with this patch:

1.  The global variable name 'test_timeout_factor' is not acceptable 
because it has already caused confusion, apparently among GCC developers 
who should be familiar with the GCC testsuite.  If it already confuses 
GCC testsuite domain experts, its meaning is too unclear for general 
use.  While looking for alternative names, I found the fundamental 
problem with this proposed implementation:  test phases (such as running 
a test program versus running the tool itself) are defined by the 
testsuite, not by the framework.  DejaGnu therefore cannot explicitly 
support this as offered because the proposal violates encapsulation both 
ways.

2.  New code in DejaGnu using expr(n) is to have the expression braced 
as recommended in the expr(n) manpage, unless it actually uses the 
semantics provided by unbraced expr expressions, in which case it 
*needs* a comment explaining and justifying that.

The second issue is trivially fixable, but the first appears fatal.
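To illustrate the second point (with made-up variable names), the braced 
form avoids an extra round of substitution and is what new code should 
use by default:

    set timeout [expr { $base_timeout * $factor }]   ;# braced: preferred

whereas the unbraced form, e.g.

    set timeout [expr $assembled_expression]   ;# needs a justifying comment

is only warranted where a dynamically constructed expression actually 
requires the extra evaluation.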


There is a new "testcase" multiplex command in Git master, to be included 
in the next release, that is intended for testsuites to express dynamic 
state.  The original planned use was to support hierarchical test groups, 
for which a "testcase group" command is currently defined.  In the 
future, dg.exp will be extended to use "testcase group" to delimit each 
testcase that it processes, and the framework will itself explicitly 
track each test script as a group.  (DejaGnu's current semantics 
implicitly group tests by test script, but only at the (*.exp) script 
level.)  Could this multiplex be a suitable place to put this API feature?

Using a command also has the advantage that it will cause a hard failure 
if the framework does not implement it, unlike a variable that a test 
script can set for the framework to silently ignore, leading to 
hard-to-reproduce test (timeout) failures if an older framework is used 
with a testsuite expecting this feature.  The semantics of "testcase 
patience" or similar would be defined to extend to the end of the group 
(or test script in versions of DejaGnu that do not fully implement 
groups) in which it is executed.  This limited scope is needed because 
allowing timeout scale factors to "bleed over" to the next test script 
would play havoc with the planned native parallel testing support, where 
the "next" script could have already started in another process.

I suggest a few possible commands off the top of my head:
    testcase ask patience WHAT FACTOR
    testcase declare patience WHAT FACTOR
    testcase patience WHAT FACTOR

The FACTOR is a scale factor, similar to the proposed 
'test_timeout_factor' or possibly the keyword "reset" (or special value 
0?) to clear a previous factor before leaving a group.  Multiple 
invocations stack:  the effective scale factor is the product of all 
applicable scale factors.  (This will have straightforward interactions 
with groups:  leaving a group will restore the scale factor in effect 
when the group was entered.  The initial scale factor at top-level is 1, 
for any WHAT.)
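As a sketch of the stacking, using the "testcase declare patience" 
spelling from above (none of this exists yet; the numbers are arbitrary):

    testcase declare patience dg-run 2    ;# effective dg-run factor: 2
    testcase declare patience dg-run 3    ;# effective dg-run factor: 2 * 3 = 6
    # leaving the enclosing group restores the factor that was in effect
    # when the group was entered (ultimately 1 at top level)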

The WHAT is a keyword from a to-be-determined set.  There is a 
possibility that parts of the framework might eventually respond to 
certain WHAT values, but for now, would "dg-run" be suitable to express 
a timeout for running a test program and "dg-compile" for the timeout on 
running GCC itself?  This could lead to reserving dg-* WHAT values for 
dg.exp based testsuites to define, with a convention that dg-WHAT scales 
the timeout for "dg-do WHAT".

Leaving the definition of WHAT to the testsuite is not an insurmountable 
barrier, as providing an inquiry command for the testsuite to use would 
not be difficult.  This seems to lead towards a "testcase declare 
patience WHAT FACTOR" and "testcase inquire patience WHAT" pair.  The 
former multiplies the current WHAT scale factor by FACTOR, while the 
latter returns the appropriate running product.

All this provides a nice way to add upstream support for dg-patience ("{ 
dg-patience dg-run 3 }" or "{ dg-patience dg-compile 2 }") or a similar 
tag to dg.exp, but still leaves the issue of communicating /which/ scale 
factor to use to the various command execution procedures.  Here we come 
back to the same problem, since the current API shape (not changing 
anytime soon) does not provide a way to pass a timeout value or scale 
factor, other than using a "magic" variable.  So we are back to 
'timeout_scale_factor', but documented in the procedure documentation 
for the remote_* procedures.  In this case, the framework could use 
uplevel to read the variable as a local variable in the caller's frame, 
so the gcc-dg-test procedure would only need to do {set 
timeout_scale_factor [testcase inquire patience dg-run]} before using 
remote_load to run the test program.  (Expect does similar things, 
according to its manpage.)
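As a rough sketch of what that could look like on the testsuite side (a 
simplified, hypothetical gcc-dg-test using the proposed "testcase" 
subcommands; none of this is existing API):

proc gcc-dg-test { prog do_what extra_tool_flags } {
    # compilation step elided; assume it produced an executable
    set output_file "[file rootname [file tail $prog]].exe"
    if { $do_what eq "run" } {
        # pick up the accumulated dg-run scale factor for this test
        set timeout_scale_factor [testcase inquire patience dg-run]
        # remote_load would find timeout_scale_factor in this frame via
        # uplevel/upvar and scale its timeout accordingly
        return [remote_load target $output_file]
    }
}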

The *_load procedures in the config/*.exp are not documented and 
config/README specifically says that they are to be called using the 
remote_* procedures.  While using a "magic" variable would require some 
neat tricks with uplevel/upvar, it should work as long as testsuites use 
the documented entrypoints.  (The *_load procedures from config/*.exp 
are likely to disappear into Tcl namespaces and/or parent interpreters 
in the future anyway.)

Comments before I start on an implementation?


-- Jacob
  
Hans-Peter Nilsson Jan. 4, 2024, 4:52 a.m. UTC | #5
On Wed, 3 Jan 2024, Maciej W. Rozycki wrote:

> On Wed, 3 Jan 2024, Hans-Peter Nilsson wrote:
> 
> > >  The test execution timeout is different from the tool execution timeout 
> > > where it is GCC execution that is being guarded against taking excessive 
> > > amount of time on the test host rather than the resulting test case 
> > > executable run on the target afterwards, as concerned here.  GCC already 
> > > has a `dg-timeout-factor' setting for the tool execution timeout, but has 
> > > no means to increase the test execution timeout.  The GCC side of these 
> > > changes adds a corresponding `dg-test-timeout-factor' setting.
> > 
> > Hmm.  I think it would be more correct to emphasize that the 
> > existing dg-timeout-factor affects both the tool execution *and* 
> > the test execution, whereas your new dg-test-timeout-factor only 
> > affects the test execution.  (And still measured on the host.)
> 
>  Not really, `dg-timeout-factor' is only applied to tool execution and it 
> doesn't affect test execution.

Let's stop here: that statement is just incorrect.

There might be a use for separating timeouts, but the premise of 
dg-timeout-factor not affecting the execution of an (executable) 
test is plain wrong.  Something is off here.  Are we using the 
same terminology?

Please revisit the setup where the patch made a difference and 
report it for others to repeat, something like the following:
(Beware of typos, I didn't copy-paste like I usually do.)

A recent observation is from me testing MMIX, where gcc+newlib is 
built, with --prefix and $PATH pointing at pre-installed 
binutils and simulator.  I test it all with "make -k check 
--target_board=mmixware-sim".  (This should all be familiar; 
pick another target and baseboard, or use a native+unix.exp if 
you prefer.  Also JFTR I usually do it by means of 
contrib/regression/btest-gcc.sh - except of course when 
inspecting and testing manually.)

Anyway a repeatable case where dg-timeout-factor then makes a 
difference for the timeout is for libstdc++-v3 test-case 
20_util/hash/quality.cc.  I recently committed a patch adding 
dg-timeout-factor 3 for that test (26fe2808d8).  Let's consider 
the situation *before* that commit.

For the mmix simulator and the particular host where I ran that 
test, the test normally executes in very close to 6 minutes, and 
as the default timeout is 360 seconds, it sometimes times out 
when the machine is busy.  To make *sure* it times out for the 
sake of proof here, I edit the simulator-specific -DNTESTS=1 
setting to -DNTESTS=2.  I execute just this test with, for 
example, "make check-target-libstdc++-v3 
RUNTESTFLAGS=--target_board=mmixware-sim\ 
conformance.exp=quality.cc" (beware of quoting issues - which 
should be familiar to you).

That NTESTS=2 makes the execution time go up to 13 minutes of 
elapsed time, and the test gets a "WARNING: program timed out" 
and a failure.  I also see a (timeout = 360) in the 
libstdc++.log - admittedly for the compilation line, but the 
timeout is consistent with being applied to the execution as 
well.

Then, apply the commit, which adds a line with dg-timeout-factor 3
(bah, I had to do it manually because of that edited -DNTESTS=2 line).
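(The line in question is an ordinary directive, presumably something 
along the lines of "// { dg-timeout-factor 3 }" in quality.cc.)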

Afterwards, when I run the same command line, the test *does not 
time out, it passes* and I see a (timeout = 1080) next to the 
compilation line in the .log - but it's apparently applied to 
the test run as well.

That, as well as numerous previous commits, is consistent with 
dg-timeout-factor affecting the execution time, not just the 
compilation time.

Of course, there may be some sub-testsuite that has a bug.  I'm 
*guessing* you misinterpreted observations that led up to this 
patch set, perhaps because of a bug in some sub-testsuite .exp file.

>  Timeout value reporting used to be limited 
> in DejaGNU, but you can enable it easily now by adding the DejaGNU patch 
> series referred in the cover letter and see that `dg-timeout-factor' is 
> ignored for test execution.

Please state a case where I can observe it being ignored.

> > Usually the compilation time is close to 0, so is this based on 
> > an actual need more than an itchy "wart"?
> > 
> > Or did I miss something?
> 
>  Compilation is usually quite fast, but this is not always the case.  If 
> you look at the tests that do use `dg-timeout-factor' in GCC, and some 
> commits that added the setting, then you ought to find actual use cases.  

I've not only looked at such commits, I've done quite a few 
myself.  I'd say most such commits are for test execution, some 
are for compilation.  Did you miss the ones where the commit log 
mentions "slow simulator" or "slow board"?

> I saw at least one such a test that takes an awful lot of time here on a 
> reasonably fast host machine and still passes where GCC has been built 
> with optimisation enabled, but does time out in the compilation phase if 
> the compiler has been built at -O0 for debugging purposes.  I'd have to 
> chase it though if you couldn't find it as I haven't written the name 
> down.

Maybe the one Richard Sandiford mentioned in a reply.  But, 
that's compile only.

>  So yes, `dg-timeout-factor' does have its use, but it is different from 
> that of `dg-test-timeout-factor', hence the need for a separate setting.

No.  They overlap: dg-timeout-factor is for both.

(If you want to *remove* that overlap, please make sure you 
migrate the right subset to use dg-test-timeout-factor.)

brgds, H-P
  
Hans-Peter Nilsson Jan. 4, 2024, 4:59 a.m. UTC | #6
On Wed, 3 Jan 2024, Jacob Bachmeyer wrote:
> Comments before I start on an implementation?

I'd suggest awaiting the conclusion of the debate: I *think* 
I've proved that dg-timeout-factor is already active as intended 
(for all parts of a test), specifically when the compilation result 
is executed (for the applicable tests).  Notably, modulo bugs in 
the test-suites.

Of course, it may be useful to separate different timeouts of 
separable parts of a test - compilation and execution being the 
topic at hand.  But IMHO, YAGNI.  Having said that, don't let 
that stand in the way of a fun hack!

brgds, H-P
  
Jacob Bachmeyer Jan. 5, 2024, 2:27 a.m. UTC | #7
Hans-Peter Nilsson wrote:
> On Wed, 3 Jan 2024, Jacob Bachmeyer wrote:
>   
>> Comments before I start on an implementation?
>>     
>
> I'd suggest to await the conclusion of the debate: I *think* 
> I've proved that dg-timeout-factor is already active as intended 
> (all parts of a test), specifically when the compilation result 
> is executed (for the applicable tests).  Notably, modulo bugs in 
> the test-suites.
>   

The dg-timeout-factor tag is a GCC testsuite feature; the dg-patience 
tag will be an upstream DejaGnu framework feature using shared 
infrastructure also available to tests not using dg.exp.  Improved 
timeout handling will also eventually include per-target timeout 
defaults and scale factors, to allow testing sites to adjust timeouts 
for slow (or fast) targets.

> Of course, it may be useful to separate different timeouts of 
> separable parts of a test - compilation and execution being the 
> topic at hand.  But IMHO, YAGNI.  Having said that, don't let 
> that stand in the way of a fun hack!

It will go on the TODO list either way; the only difference is the 
priority it will have.


-- Jacob
  
Maciej W. Rozycki Feb. 1, 2024, 8:18 p.m. UTC | #8
On Wed, 3 Jan 2024, Hans-Peter Nilsson wrote:

> > > Hmm.  I think it would be more correct to emphasize that the 
> > > existing dg-timeout-factor affects both the tool execution *and* 
> > > the test execution, whereas your new dg-test-timeout-factor only 
> > > affects the test execution.  (And still measured on the host.)
> > 
> >  Not really, `dg-timeout-factor' is only applied to tool execution and it 
> > doesn't affect test execution.
> 
> Let's stop here: that statement is just incorrect.
> 
> There might be a use for separating timeouts, but the premise of 
> dg-timeout-factor not affecting the execution of an (executable) 
> test is plain wrong.  Something is off here.  Are we using the 
> same terminology?

 So I finally found some time and did a little bit of investigation, and 
I think I can see what's been going on here.

 The thing is that the timeouts are treated differently depending on 
whether tests are run locally or on a remote target board.  I've been 
working with remote boards most of the time and obviously wasn't vigilant 
enough on this occasion to notice the different semantics.

 This boils down to how the handling of `dg-timeout-factor' has been done 
in gcc/testsuite/lib/timeout.exp, by overriding `standard_wait', which, 
however, is only used by DejaGNU for commands that are run locally.
 So taking gcc/testsuite/gcc.c-torture/execute/20000112-1.c as an example 
and a boring remote unix target board with the following settings (among 
others; leaving out compilation options as irrelevant):

load_generic_config "unix"
set_board_info rsh_prog ssh
set_board_info rcp_prog scp
set_board_info username macro
set_board_info hostname www.xxx.yyy.zzz
set_board_info timeout 500
set_board_info gcc,timeout 700

(say this is foo.exp) I run:

$ make -C obj/gcc RUNTESTFLAGS="--target_board foo execute.exp=20000112-1.c" -k -i check-gcc-c

and get these timeouts (among others) reported in gcc.log:

Executing on host: .../obj/gcc/gcc/xgcc -B.../obj/gcc/gcc/ .../src/gcc/gcc/testsuite/gcc.c-torture/execute/20000112-1.c    -fdiagnostics-plain-output -O0  -w -lm  -o ./20000112-1.exe    (timeout = 700)
Executing on foo: /tmp/runtest.35135/20000112-1.exe    (timeout = 300)

-- as you can see the general board timeout (at 500) is ignored, the GCC 
execution timeout is correctly set (to 700), and the test execution 
timeout is taken from the default (at 300) buried and hardcoded in 
`remote_exec' in the absence of an override supplied by `unix_load' 
calling it.

 Now with 20000112-1.c modified to include:

/* { dg-timeout-factor 5 } */

and the same testsuite invocation I get these timeouts reported instead:

Executing on host: .../obj/gcc/gcc/xgcc -B.../obj/gcc/gcc/ .../src/gcc/gcc/testsuite/gcc.c-torture/execute/20000112-1.c    -fdiagnostics-plain-output -O0  -w -lm  -o ./20000112-1.exe    (timeout = 3500)
Executing on foo: /tmp/runtest.35836/20000112-1.exe    (timeout = 300)

-- so the GCC execution timeout is correctly multiplied by 5 (to 3500), 
however the test execution timeout is unchanged (at 300).

 Then with a simulator board, such as one using these settings:

set_board_info slow_simulator 0
load_generic_config "sim"
set_board_info sim "qemu-riscv64 -cpu rv64"
set_board_info timeout 500
set_board_info gcc,timeout 700
set_board_info sim_time_limit 900

I can see by patching `standard_wait' that the test execution timeout does 
get multiplied by `dg-timeout-factor':

spawn qemu-riscv64 -cpu rv64 ./20000112-1.exe
Running standard_wait on bar (timeout = 3500)

(I've double-checked with the same patch that `standard_wait' indeed does 
not get called in the remote case).  So it seems like we're both right in 
the relevant areas, and the mess is perhaps even worse.

 I think Jacob has a valid concern about encapsulation and we can start 
from there.  Indeed the same remote execution timeout applies to 
maintenance commands used to prepare a remote executable beforehand and 
then delete it afterwards (which I don't normally make use of in my test 
environments, so I had to realise that first and then make explicit 
changes to my setup to obtain these entries), more specifically:

PASS: gcc.c-torture/execute/20000112-1.c   -O0  (test for excess errors)
Executing on foo: mkdir -p /tmp/runtest.36823   (timeout = 300)
spawn [open ...]
XYZ0ZYX
Executing on foo: chmod +x /tmp/runtest.36823/20000112-1.exe    (timeout = 300)
spawn [open ...]
XYZ0ZYX
Executing on foo: /tmp/runtest.36823/20000112-1.exe    (timeout = 300)
spawn [open ...]
XYZ0ZYX
Executing on foo: rm -f  /tmp/runtest.36823/20000112-1.exe.o /tmp/runtest.36823/20000112-1.exe    (timeout = 300)
spawn [open ...]
XYZ0ZYX
PASS: gcc.c-torture/execute/20000112-1.c   -O0  execution test

so indeed more changes are required to get this mess sorted properly, as 
all these commands except for the one to run the test case itself can be 
assumed to take a nominal amount of time to complete.  I'll see if I can 
read through the proposals posted and come up with some conclusions.

 In any case, thank you for drawing my attention to the flaw in my 
observations.

  Maciej