Message ID | alpine.DEB.2.20.2312111744390.5892@tpp.orcam.me.uk
---|---
Headers |
Date: Tue, 12 Dec 2023 14:04:13 +0000 (GMT)
From: "Maciej W. Rozycki" <macro@embecosm.com>
To: dejagnu@gnu.org, gcc-patches@gcc.gnu.org
Subject: [PATCH DejaGNU/GCC 0/1] Support per-test execution timeout factor
Message-ID: <alpine.DEB.2.20.2312111744390.5892@tpp.orcam.me.uk>
Series |
Support per-test execution timeout factor
Message
Maciej W. Rozycki
Dec. 12, 2023, 2:04 p.m. UTC
Hi,

This patch quasi-series makes it possible for individual test cases identified as being slow to request more time via the GCC test harness, by providing a test execution timeout factor applied to the tool execution timeout set globally for all the test cases. This avoids the excessive testsuite run times that would result if the global timeout were raised instead, in cases where other test cases hang.

The test execution timeout is distinct from the tool execution timeout: the latter guards against GCC itself taking an excessive amount of time on the test host, whereas the former, as concerned here, guards the resulting test case executable run on the target afterwards. GCC already has a `dg-timeout-factor' setting for the tool execution timeout, but has no means to increase the test execution timeout. The GCC side of these changes adds a corresponding `dg-test-timeout-factor' setting.

As the two changes are independent of each other, they can be applied in any order, with the feature becoming active once both have been placed in a given system. I chose to submit them together so as to give both DejaGNU and GCC developers an opportunity to chime in. The DejaGNU side of this patch quasi-series relies on this patch series:

<https://lists.gnu.org/archive/html/dejagnu/2023-12/msg00003.html>

being applied first; however, I chose to post the two parts separately so as not to clutter the GCC mailing list with changes solely for DejaGNU.

This has been verified with the GCC testsuite in a couple of environments using the Unix protocol, both locally and remotely, the GDB stub protocol, and the sim protocol, making sure that timeout settings are respected. I found no obvious way to verify the remaining parts, but the changes follow the same pattern, so they are expected to behave consistently.

Let me know if you have any questions, comments or concerns. Otherwise please apply/approve respectively the DejaGNU/GCC side.

  Maciej
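For illustration, the intended usage would look something like the following in a dg-based test file. Note that `dg-test-timeout-factor' is the new directive proposed by this series and does not yet exist in GCC, and the factor values here are made up:

```
/* { dg-do run } */
/* { dg-timeout-factor 2 } */       /* Existing: scales the tool (GCC) execution timeout.  */
/* { dg-test-timeout-factor 4 } */  /* Proposed: scales the test execution timeout.  */
```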
Comments
On Tue, 12 Dec 2023, Maciej W. Rozycki wrote:

> Hi,
>
> This patch quasi-series makes it possible for individual test cases identified as being slow to request more time via the GCC test harness by providing a test execution timeout factor, applied to the tool execution timeout set globally for all the test cases. This is to avoid excessive testsuite run times where other test cases do hang as it would be the case if the timeout set globally was to be increased.
>
> The test execution timeout is different from the tool execution timeout where it is GCC execution that is being guarded against taking excessive amount of time on the test host rather than the resulting test case executable run on the target afterwards, as concerned here. GCC already has a `dg-timeout-factor' setting for the tool execution timeout, but has no means to increase the test execution timeout. The GCC side of these changes adds a corresponding `dg-test-timeout-factor' setting.

Hmm. I think it would be more correct to emphasize that the existing dg-timeout-factor affects both the tool execution *and* the test execution, whereas your new dg-test-timeout-factor only affects the test execution. (And still measured on the host.)

Usually the compilation time is close to 0, so is this based on an actual need more than an itchy "wart"?

Or did I miss something?

brgds, H-P
On Wed, 3 Jan 2024, Hans-Peter Nilsson wrote:

> > The test execution timeout is different from the tool execution timeout where it is GCC execution that is being guarded against taking excessive amount of time on the test host rather than the resulting test case executable run on the target afterwards, as concerned here. GCC already has a `dg-timeout-factor' setting for the tool execution timeout, but has no means to increase the test execution timeout. The GCC side of these changes adds a corresponding `dg-test-timeout-factor' setting.
>
> Hmm. I think it would be more correct to emphasize that the existing dg-timeout-factor affects both the tool execution *and* the test execution, whereas your new dg-test-timeout-factor only affects the test execution. (And still measured on the host.)

Not really, `dg-timeout-factor' is only applied to tool execution and it doesn't affect test execution. Timeout value reporting used to be limited in DejaGNU, but you can enable it easily now by adding the DejaGNU patch series referred to in the cover letter and see that `dg-timeout-factor' is ignored for test execution.

> Usually the compilation time is close to 0, so is this based on an actual need more than an itchy "wart"?
>
> Or did I miss something?

Compilation is usually quite fast, but this is not always the case. If you look at the tests that do use `dg-timeout-factor' in GCC, and some commits that added the setting, then you ought to find actual use cases. I saw at least one such test that takes an awful lot of time here on a reasonably fast host machine and still passes where GCC has been built with optimisation enabled, but does time out in the compilation phase if the compiler has been built at -O0 for debugging purposes. I'd have to chase it down if you couldn't find it, as I haven't written the name down.

So yes, `dg-timeout-factor' does have its use, but it is different from that of `dg-test-timeout-factor', hence the need for a separate setting.

  Maciej
"Maciej W. Rozycki" <macro@embecosm.com> writes: > On Wed, 3 Jan 2024, Hans-Peter Nilsson wrote: > >> > The test execution timeout is different from the tool execution timeout >> > where it is GCC execution that is being guarded against taking excessive >> > amount of time on the test host rather than the resulting test case >> > executable run on the target afterwards, as concerned here. GCC already >> > has a `dg-timeout-factor' setting for the tool execution timeout, but has >> > no means to increase the test execution timeout. The GCC side of these >> > changes adds a corresponding `dg-test-timeout-factor' setting. >> >> Hmm. I think it would be more correct to emphasize that the >> existing dg-timeout-factor affects both the tool execution *and* >> the test execution, whereas your new dg-test-timeout-factor only >> affects the test execution. (And still measured on the host.) > > Not really, `dg-timeout-factor' is only applied to tool execution and it > doesn't affect test execution. Timeout value reporting used to be limited > in DejaGNU, but you can enable it easily now by adding the DejaGNU patch > series referred in the cover letter and see that `dg-timeout-factor' is > ignored for test execution. > >> Usually the compilation time is close to 0, so is this based on >> an actual need more than an itchy "wart"? >> >> Or did I miss something? > > Compilation is usually quite fast, but this is not always the case. If > you look at the tests that do use `dg-timeout-factor' in GCC, and some > commits that added the setting, then you ought to find actual use cases. > I saw at least one such a test that takes an awful lot of time here on a > reasonably fast host machine and still passes where GCC has been built > with optimisation enabled, but does time out in the compilation phase if > the compiler has been built at -O0 for debugging purposes. I'd have to > chase it though if you couldn't find it as I haven't written the name > down. 
Sounds like it could be the infamous gcc.c-torture/compile/20001226-1.c :) Richard > So yes, `dg-timeout-factor' does have its use, but it is different from > that of `dg-test-timeout-factor', hence the need for a separate setting. > > Maciej
Maciej W. Rozycki wrote:

> On Wed, 3 Jan 2024, Hans-Peter Nilsson wrote:
>
>>> The test execution timeout is different from the tool execution timeout where it is GCC execution that is being guarded against taking excessive amount of time on the test host rather than the resulting test case executable run on the target afterwards, as concerned here. GCC already has a `dg-timeout-factor' setting for the tool execution timeout, but has no means to increase the test execution timeout. The GCC side of these changes adds a corresponding `dg-test-timeout-factor' setting.
>>
>> Hmm. I think it would be more correct to emphasize that the existing dg-timeout-factor affects both the tool execution *and* the test execution, whereas your new dg-test-timeout-factor only affects the test execution.
>
> Not really, `dg-timeout-factor' is only applied to tool execution and it doesn't affect test execution. Timeout value reporting used to be limited in DejaGNU, but you can enable it easily now by adding the DejaGNU patch series referred in the cover letter and see that `dg-timeout-factor' is ignored for test execution.

Then we need a better name for this new feature that more clearly indicates that it applies to running executables compiled as part of a test. Also, 'test_timeout' is documented as a knob for site configuration to twiddle, not for testsuites to adjust. I support adding scale factors for testsuites to indicate "this test takes longer than usual" but these will need to be thought through. This quick hack will cause future maintenance problems.

>> Usually the compilation time is close to 0, so is this based on an actual need more than an itchy "wart"?
>>
>> Or did I miss something?
>
> Compilation is usually quite fast, but this is not always the case. If you look at the tests that do use `dg-timeout-factor' in GCC, and some commits that added the setting, then you ought to find actual use cases. I saw at least one such a test that takes an awful lot of time here on a reasonably fast host machine and still passes where GCC has been built with optimisation enabled, but does time out in the compilation phase if the compiler has been built at -O0 for debugging purposes. I'd have to chase it though if you couldn't find it as I haven't written the name down.
>
> So yes, `dg-timeout-factor' does have its use, but it is different from that of `dg-test-timeout-factor', hence the need for a separate setting.

This name has already caused confusion and the patch has not even been accepted yet. The feature is desirable but this implementation is not acceptable. At the moment, there are two blocking issues with this patch:

1. The global variable name 'test_timeout_factor' is not acceptable because it has already caused confusion, apparently among GCC developers who should be familiar with the GCC testsuite. If it already confuses GCC testsuite domain experts, its meaning is too unclear for general use. While looking for alternative names, I found the fundamental problem with this proposed implementation: test phases (such as running a test program versus running the tool itself) are defined by the testsuite, not by the framework. DejaGnu therefore cannot explicitly support this as offered because the proposal violates encapsulation both ways.

2. New code in DejaGnu using expr(n) is to have the expression braced as recommended in the expr(n) manpage, unless it actually uses the semantics provided by unbraced expr expressions, in which case it *needs* a comment explaining and justifying that.

The second issue is trivially fixable, but the first appears fatal.
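As a side note on issue 2, the braced form the expr(n) manpage recommends looks like the following (generic Tcl for illustration, not code from the patch; the variable names are made up):

```tcl
# Braced: the expression is byte-compiled and each variable is
# substituted exactly once, which is both faster and safe.
set timeout [expr { $timeout * $factor }]

# Unbraced: Tcl substitutes the arguments before expr parses them, so
# values are evaluated a second time; per the expr(n) manpage this form
# should only be used deliberately, with a justifying comment.
set timeout [expr $timeout * $factor]
```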
There is a new "testcase" multiplex command in Git master, which will be included in the next release, that is intended for testsuites to express dynamic state. The original planned use was to support hierarchical test groups, for which a "testcase group" command is currently defined. In the future, dg.exp will be extended to use "testcase group" to delimit each testcase that it processes, and the framework will itself explicitly track each test script as a group. (DejaGnu's current semantics implicitly group tests by test script, but only by (*.exp) scripts.) Could this multiplex be a suitable place to put this API feature?

Using a command also has the advantage that it will cause a hard failure if the framework does not implement it, unlike a variable that a test script can set for the framework to silently ignore, leading to hard-to-reproduce test (timeout) failures if an older framework is used with a testsuite expecting this feature.

The semantics of "testcase patience" or similar would be defined to extend to the end of the group (or test script, in versions of DejaGnu that do not fully implement groups) in which it is executed. This limited scope is needed because allowing timeout scale factors to "bleed over" to the next test script would play havoc with the planned native parallel testing support, where the "next" script could have already started in another process.

I suggest a few possible commands off the top of my head:

    testcase ask patience WHAT FACTOR
    testcase declare patience WHAT FACTOR
    testcase patience WHAT FACTOR

The FACTOR is a scale factor, similar to the proposed 'test_timeout_factor', or possibly the keyword "reset" (or special value 0?) to clear a previous factor before leaving a group. Multiple invocations stack: the effective scale factor is the product of all applicable scale factors. (This will have straightforward interactions with groups: leaving a group will restore the scale factor in effect when the group was entered. The initial scale factor at top level is 1, for any WHAT.)

The WHAT is a keyword from a to-be-determined set. There is a possibility that parts of the framework might eventually respond to certain WHAT values, but for now, would "dg-run" be suitable to express a timeout for running a test program and "dg-compile" for the timeout on running GCC itself? This could lead to reserving dg-* WHAT values for dg.exp-based testsuites to define, with a convention that dg-WHAT scales the timeout for "dg-do WHAT". Leaving the definition of WHAT to the testsuite is not an insurmountable barrier, as providing an inquiry command for the testsuite to use would not be difficult. This seems to lead towards a "testcase declare patience WHAT FACTOR" and "testcase inquire patience WHAT" pair. The former multiplies the current WHAT scale factor by FACTOR, while the latter returns the appropriate running product.

All this provides a nice way to add upstream support for dg-patience ("{ dg-patience dg-run 3 }" or "{ dg-patience dg-compile 2 }") or a similar tag to dg.exp, but still leaves the issue of communicating /which/ scale factor to use to the various command execution procedures. Here we come back to the same problem, since the current API shape (not changing anytime soon) does not provide a way to pass a timeout value or scale factor, other than using a "magic" variable. So we are back to 'timeout_scale_factor', but documented in the procedure documentation for the remote_* procedures. In this case, the framework could use uplevel to read the variable as a local variable in the caller's frame, so the gcc-dg-test procedure would only need to do {set timeout_scale_factor [testcase inquire patience dg-run]} before using remote_load to run the test program. (Expect does similar things, according to its manpage.) The *_load procedures in config/*.exp are not documented, and config/README specifically says that they are to be called using the remote_* procedures. While using a "magic" variable would require some neat tricks with uplevel/upvar, it should work as long as testsuites use the documented entrypoints. (The *_load procedures from config/*.exp are likely to disappear into Tcl namespaces and/or parent interpreters in the future anyway.)

Comments before I start on an implementation?

-- Jacob
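The stacking semantics proposed above could be sketched in Tcl roughly as follows. To be clear, none of this exists in DejaGnu: the "testcase patience" subcommands are only proposed in this thread, and the procedure and variable names below are purely illustrative:

```tcl
# Hypothetical sketch of per-WHAT running products of scale factors,
# as described in the proposal.  Not actual DejaGnu code.
array set patience_factor {}

proc declare_patience { what factor } {
    global patience_factor
    if { ![info exists patience_factor($what)] } {
        set patience_factor($what) 1.0
    }
    if { $factor eq "reset" } {
        # Clear any previously declared factors for this WHAT.
        set patience_factor($what) 1.0
    } else {
        # Multiple invocations stack: multiply into the running product.
        set patience_factor($what) \
            [expr { $patience_factor($what) * $factor }]
    }
}

proc inquire_patience { what } {
    global patience_factor
    if { [info exists patience_factor($what)] } {
        return $patience_factor($what)
    }
    return 1.0 ;# initial scale factor at top level is 1, for any WHAT
}
```

Under this sketch, a gcc-dg-test-like procedure would do something along the lines of {set timeout_scale_factor [inquire_patience dg-run]} before calling remote_load, matching the "magic variable" approach discussed above.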
On Wed, 3 Jan 2024, Maciej W. Rozycki wrote:

> On Wed, 3 Jan 2024, Hans-Peter Nilsson wrote:
>
>>> The test execution timeout is different from the tool execution timeout where it is GCC execution that is being guarded against taking excessive amount of time on the test host rather than the resulting test case executable run on the target afterwards, as concerned here. GCC already has a `dg-timeout-factor' setting for the tool execution timeout, but has no means to increase the test execution timeout. The GCC side of these changes adds a corresponding `dg-test-timeout-factor' setting.
>>
>> Hmm. I think it would be more correct to emphasize that the existing dg-timeout-factor affects both the tool execution *and* the test execution, whereas your new dg-test-timeout-factor only affects the test execution. (And still measured on the host.)
>
> Not really, `dg-timeout-factor' is only applied to tool execution and it doesn't affect test execution.

Let's stop here: that statement is just incorrect.

There might be a use for separating timeouts, but the premise of dg-timeout-factor not affecting the execution of an (executable) test is plain wrong. Something is off here. Are we using the same terminology?

Please revisit the setup where the patch made a difference and report it for others to repeat, something like the following. (Beware of typos, I didn't copy-paste like I usually do.)

A recent observation is me testing MMIX, where gcc+newlib is built with --prefix and $PATH pointing at pre-installed binutils and simulator. I test it all with "make -k check --target_board=mmixware-sim". (This should all be familiar; pick another target and baseboard or use a native unix.exp if you prefer. Also JFTR, I usually do it by means of contrib/regression/btest-gcc.sh - except of course when inspecting and testing manually.)

Anyway, a repeatable case where dg-timeout-factor makes a difference for the timeout is the libstdc++-v3 test-case 20_util/hash/quality.cc. I recently committed a patch adding dg-timeout-factor 3 for that test (26fe2808d8). Let's consider the situation *before* that commit.

For the mmix simulator and the particular host where I ran that test, the test normally executes in very close to 6 minutes, and as the default timeout is 360 seconds, it sometimes times out when the machine is busy. To make *sure* it times out for the case of proof here, I edit the -DNTESTS=1 setting to -DNTESTS=2. I execute just this test by, for example, "make check-target-libstdc++-v3 RUNTESTFLAGS=--target_board=mmixware-sim\ conformance.exp=quality.cc" (beware of quoting issues - which should be familiar to you).

That NTESTS=2 makes the execution time go up to 13 minutes elapsed time, and the test gets a "WARNING: program timed out" and a failure for that test. I also see a (timeout = 360) in libstdc++.log - admittedly for the compilation line, but the timeout is consistent with being applied to the execution as well.

Then apply the commit, which adds a line with dg-timeout-factor 3 (bah, I had to do it manually because of that edited -DNTESTS=2 line). Afterwards, when I run the same command-line, the test *does not time out, it passes*, and I see a (timeout = 1080) next to the compilation line in the .log - but it's apparently applied to the test run as well.

That, as well as numerous previous commits, is consistent with dg-timeout-factor affecting the execution time, not just the compilation time. Of course, there may be some sub-test-suite that has a bug. I'm *guessing* you misinterpreted observations that led up to this patch-set, perhaps a bug in some sub-test.exp.

> Timeout value reporting used to be limited in DejaGNU, but you can enable it easily now by adding the DejaGNU patch series referred in the cover letter and see that `dg-timeout-factor' is ignored for test execution.

Please state a case where I can observe it being ignored.

>> Usually the compilation time is close to 0, so is this based on an actual need more than an itchy "wart"?
>>
>> Or did I miss something?
>
> Compilation is usually quite fast, but this is not always the case. If you look at the tests that do use `dg-timeout-factor' in GCC, and some commits that added the setting, then you ought to find actual use cases.

I've not only looked at such commits, I've done quite a few myself. I'd say most such commits are for test execution, some are for compilation. Did you miss the ones where the commit log mentions "slow simulator" or "slow board"?

> I saw at least one such a test that takes an awful lot of time here on a reasonably fast host machine and still passes where GCC has been built with optimisation enabled, but does time out in the compilation phase if the compiler has been built at -O0 for debugging purposes. I'd have to chase it though if you couldn't find it as I haven't written the name down.

Maybe the one Richard Sandiford mentioned in a reply. But that's compile-only.

> So yes, `dg-timeout-factor' does have its use, but it is different from that of `dg-test-timeout-factor', hence the need for a separate setting.

No. They overlap: dg-timeout-factor is for both. (If you want to *remove* that overlap, please make sure you migrate the right subset to use dg-test-timeout-factor.)

brgds, H-P
On Wed, 3 Jan 2024, Jacob Bachmeyer wrote:
> Comments before I start on an implementation?
I'd suggest to await the conclusion of the debate: I *think*
I've proved that dg-timeout-factor is already active as intended
(all parts of a test), specifically when the compilation result
is executed (for the applicable tests). Notably, modulo bugs in
the test-suites.
Of course, it may be useful to separate different timeouts of
separable parts of a test - compilation and execution being the
topic at hand. But IMHO, YAGNI. Having said that, don't let
that stand in the way of a fun hack!
brgds, H-P
Hans-Peter Nilsson wrote:

> On Wed, 3 Jan 2024, Jacob Bachmeyer wrote:
>
>> Comments before I start on an implementation?
>
> I'd suggest to await the conclusion of the debate: I *think* I've proved that dg-timeout-factor is already active as intended (all parts of a test), specifically when the compilation result is executed (for the applicable tests). Notably, modulo bugs in the test-suites.

The dg-timeout-factor tag is a GCC testsuite feature; the dg-patience tag will be an upstream DejaGnu framework feature using shared infrastructure also available to tests not using dg.exp. Improved timeout handling will also eventually include per-target timeout defaults and scale factors, to allow testing sites to adjust timeouts for slow (or fast) targets.

> Of course, it may be useful to separate different timeouts of separable parts of a test - compilation and execution being the topic at hand. But IMHO, YAGNI. Having said that, don't let that stand in the way of a fun hack!

It will go on the TODO list either way; the only difference is the priority it will have.

-- Jacob
On Wed, 3 Jan 2024, Hans-Peter Nilsson wrote:

>>> Hmm. I think it would be more correct to emphasize that the existing dg-timeout-factor affects both the tool execution *and* the test execution, whereas your new dg-test-timeout-factor only affects the test execution. (And still measured on the host.)
>>
>> Not really, `dg-timeout-factor' is only applied to tool execution and it doesn't affect test execution.
>
> Let's stop here: that statement is just incorrect.
>
> There might be a use for separating timeouts, but the premise of dg-timeout-factor not affecting the execution of an (executable) test is plain wrong. Something is off here. Are we using the same terminology?

So I finally found some time and did a little bit of investigation, and I think I can see what's been going on here. The thing is that the timeouts are treated differently depending on whether tests are run locally or on a remote target board. I've been working with remote boards most of the time and obviously wasn't vigilant enough on this occasion to notice the different semantics.

This boils down to how the handling of `dg-timeout-factor' has been done in gcc/testsuite/lib/timeout.exp, by overriding `standard_wait', which however is only used by DejaGNU for commands that are run locally.

So taking gcc/testsuite/gcc.c-torture/execute/20000112-1.c as an example, and a boring remote unix target board with the following settings (among others; leaving out compilation options as irrelevant):

    load_generic_config "unix"
    set_board_info rsh_prog ssh
    set_board_info rcp_prog scp
    set_board_info username macro
    set_board_info hostname www.xxx.yyy.zzz
    set_board_info timeout 500
    set_board_info gcc,timeout 700

(say this is foo.exp) I run:

    $ make -C obj/gcc RUNTESTFLAGS="--target_board foo execute.exp=20000112-1.c" -k -i check-gcc-c

and get these timeouts (among others) reported in gcc.log:

    Executing on host: .../obj/gcc/gcc/xgcc -B.../obj/gcc/gcc/ .../src/gcc/gcc/testsuite/gcc.c-torture/execute/20000112-1.c -fdiagnostics-plain-output -O0 -w -lm -o ./20000112-1.exe (timeout = 700)
    Executing on foo: /tmp/runtest.35135/20000112-1.exe (timeout = 300)

-- as you can see, the general board timeout (at 500) is ignored, the GCC execution timeout is correctly set (to 700), and the test execution timeout is taken from the default (at 300) buried and hardcoded in `remote_exec', in the absence of an override supplied by `unix_load' calling it.

Now with 20000112-1.c modified to include:

    /* { dg-timeout-factor 5 } */

and the same testsuite invocation, I get these timeouts reported instead:

    Executing on host: .../obj/gcc/gcc/xgcc -B.../obj/gcc/gcc/ .../src/gcc/gcc/testsuite/gcc.c-torture/execute/20000112-1.c -fdiagnostics-plain-output -O0 -w -lm -o ./20000112-1.exe (timeout = 3500)
    Executing on foo: /tmp/runtest.35836/20000112-1.exe (timeout = 300)

-- so the GCC execution timeout is correctly multiplied by 5 (to 3500), however the test execution timeout is unchanged (at 300).

Then with a simulator board, such as one using these settings:

    set_board_info slow_simulator 0
    load_generic_config "sim"
    set_board_info sim "qemu-riscv64 -cpu rv64"
    set_board_info timeout 500
    set_board_info gcc,timeout 700
    set_board_info sim_time_limit 900

I can see by patching `standard_wait' that the test execution timeout does get multiplied by `dg-timeout-factor':

    spawn qemu-riscv64 -cpu rv64 ./20000112-1.exe
    Running standard_wait on bar (timeout = 3500)

(I've double-checked with the same patch that `standard_wait' indeed does not get called in the remote case.)

So it seems like we're both right in the relevant areas, and the mess is perhaps even worse. I think Jacob has had a valid concern about encapsulation and we can start from there. Indeed the same remote execution timeout applies to the maintenance commands used to prepare a remote executable beforehand and then delete it afterwards (which I don't normally make use of in my test environments, so I had to realise that first and then make explicit changes to my setup to obtain these entries), more specifically:

    PASS: gcc.c-torture/execute/20000112-1.c -O0 (test for excess errors)
    Executing on foo: mkdir -p /tmp/runtest.36823 (timeout = 300)
    spawn [open ...] XYZ0ZYX
    Executing on foo: chmod +x /tmp/runtest.36823/20000112-1.exe (timeout = 300)
    spawn [open ...] XYZ0ZYX
    Executing on foo: /tmp/runtest.36823/20000112-1.exe (timeout = 300)
    spawn [open ...] XYZ0ZYX
    Executing on foo: rm -f /tmp/runtest.36823/20000112-1.exe.o /tmp/runtest.36823/20000112-1.exe (timeout = 300)
    spawn [open ...] XYZ0ZYX
    PASS: gcc.c-torture/execute/20000112-1.c -O0 execution test

so indeed more changes are required to get this mess sorted properly, as all these commands except for the one to run the test case itself can be assumed to take a nominal amount of time to complete. I'll see if I can read through the proposals posted and come up with conclusions.
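To make the mechanism under discussion concrete, the kind of override being described could look roughly like the following. This is a simplified sketch, not the actual code from gcc/testsuite/lib/timeout.exp, and the global variable name is illustrative; it only shows why a factor applied through `standard_wait' reaches locally-run (e.g. simulator) test executions but never `remote_exec' on a remote board:

```tcl
# Sketch of wrapping DejaGnu's standard_wait so a per-test factor can
# scale the timeout.  DejaGnu only calls standard_wait for commands run
# locally; remote boards go through remote_exec, which computes its own
# timeout, so a factor applied here is never seen on that path.
if { [info procs standard_wait_saved] eq "" } {
    rename standard_wait standard_wait_saved
}

proc standard_wait { dest timeout } {
    global test_timeout_factor
    if { [info exists test_timeout_factor] } {
        # Scale the framework-supplied timeout by the per-test factor.
        set timeout [expr { $timeout * $test_timeout_factor }]
    }
    return [standard_wait_saved $dest $timeout]
}
```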
In any case, thank you for drawing my attention to the flaw in my observations.

  Maciej