gdb/testsuite: Don't attempt tests if they fail to compile

Message ID 20180111190055.4875-1-andrew.burgess@embecosm.com
State New, archived

Commit Message

Andrew Burgess Jan. 11, 2018, 7 p.m. UTC
  In the gdb.base/whatis-ptype-typedefs.exp test, if the test program
fails to compile, don't run the tests.

gdb/testsuite/ChangeLog:

	* gdb.base/whatis-ptype-typedefs.exp: Don't run tests if we failed
	to prepare.
	(prepare): Return 0 on error, 1 on success.
---
 gdb/testsuite/ChangeLog                          | 6 ++++++
 gdb/testsuite/gdb.base/whatis-ptype-typedefs.exp | 9 ++++++---
 2 files changed, 12 insertions(+), 3 deletions(-)
  

Comments

Simon Marchi Jan. 11, 2018, 10:03 p.m. UTC | #1
On 2018-01-11 02:00 PM, Andrew Burgess wrote:
> In the gdb.base/whatis-ptype-typedefs.exp test, if the test program
> fails to compile, don't run the tests.

Hi Andrew,

That would make the test similar to other tests, in that if we fail
to build the test program it's not a failure (it shows as UNTESTED
and doesn't make the test run fail).  I find that a strange behavior,
though.  If a test program stops building for some reason, I'd
certainly like to know (e.g. it could be UNRESOLVED), instead of it
failing silently.
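
As a rough, untested sketch (assuming the usual standard_testfile
setup; the escalation to UNRESOLVED is only an idea, not something we
do today), a .exp file could report the build failure explicitly:

    if { [prepare_for_testing "failed to prepare" \
              $testfile [list $srcfile] {debug}] } {
        # prepare_for_testing has already reported the build failure
        # as UNTESTED; escalating to UNRESOLVED would make it count
        # against the test run instead of being skipped silently.
        unresolved "testcase failed to compile"
        return -1
    }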

Any other opinion?

Simon
  
Andrew Burgess Jan. 12, 2018, 10:23 a.m. UTC | #2
* Simon Marchi <simon.marchi@ericsson.com> [2018-01-11 17:03:51 -0500]:

> On 2018-01-11 02:00 PM, Andrew Burgess wrote:
> > In the gdb.base/whatis-ptype-typedefs.exp test, if the test program
> > fails to compile, don't run the tests.
> 
> Hi Andrew,
> 
> That would make the test similar to other tests, in that if we fail
> to build the test program it's not a failure (it shows as UNTESTED
> and doesn't make the test run fail).  I find that a strange behavior,
> though.  If a test program stops building for some reason, I'd
> certainly like to know (e.g. it could be UNRESOLVED), instead of it
> failing silently.
> 
> Any other opinion?

If the test fails to compile we don't get a silent failure; as you
mention, we get the UNTESTED.  Changing this to something stronger,
like UNRESOLVED, would, I fear, make cases where we legitimately
can't compile a test program seem worse than they really are.

The concern about missing the case where a test program goes from
compiling to not compiling is fair; however, I don't think it's
something we need to worry about.  My understanding of the "normal"
testing flow for GDB is to compare against a baseline set of results,
where a few hundred tests disappearing should raise a red flag; once
the developer has realised that this particular test script has
something weird going on, the extra UNTESTED should guide them to the
cause of the problem.

Having a failure to prepare lead to skipping the tests seems to be
the "standard" pattern within the GDB testsuite, so, if you agree, I
think having this test fall in line with that is probably a good
thing.  That doesn't mean we can't change the standard pattern in the
future if we come up with a better model (though I don't have any
good suggestions).
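
Concretely, the shape of the pattern is a prepare step that reports
its own status, and a caller that only runs the tests on success
(this is just a sketch; the actual change is in the patch at the
bottom of this page):

    foreach_with_prefix lang {"c" "c++"} {
        # Only run the tests if the prepare step (compile plus
        # running to main) succeeded; prepare returns 1 on success
        # and 0 on error.
        if { [prepare $lang] } {
            run_tests $lang
        }
    }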

Thanks,
Andrew
  
Joel Brobecker Jan. 12, 2018, 11:45 a.m. UTC | #3
> > That would make the test similar to other tests, in that if we fail
> > to build the test program it's not a failure (it shows as UNTESTED
> > and doesn't make the test run fail).  I find that a strange behavior,
> > though.  If a test program stops building for some reason, I'd
> > certainly like to know (e.g. it could be UNRESOLVED), instead of it
> > failing silently.
> > 
> > Any other opinion?

FWIW, no strong opinion on my side.
  
Simon Marchi Jan. 12, 2018, 12:36 p.m. UTC | #4
On 2018-01-12 05:23, Andrew Burgess wrote:
> If the test fails to compile we don't get a silent failure; as you
> mention, we get the UNTESTED.  Changing this to something stronger,
> like UNRESOLVED, would, I fear, make cases where we legitimately
> can't compile a test program seem worse than they really are.

An UNTESTED result does not make DejaGnu's runtest return a non-zero
exit code.  So if a test file that we expect to compile does not
compile, the test run still exits with success.  That's why I consider
it a silent failure.  If we know that a test program does not make
sense under the current conditions, I think it should be up to the
.exp file to determine that and manually mark the test as UNTESTED.
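
As a sketch, that manual marking could look like the following (the
istarget condition is just a made-up example):

    # The .exp file decides up front that the test does not apply,
    # instead of letting a build failure imply it.
    if { ![istarget "*-*-linux*"] } {
        untested "test only applies to GNU/Linux targets"
        return -1
    }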

> The concern about missing the case where a test program goes from
> compiling to not compiling is fair; however, I don't think it's
> something we need to worry about.  My understanding of the "normal"
> testing flow for GDB is to compare against a baseline set of results,
> where a few hundred tests disappearing should raise a red flag; once
> the developer has realised that this particular test script has
> something weird going on, the extra UNTESTED should guide them to the
> cause of the problem.

Diffing gdb.sum against the baseline is indeed our current workflow, but 
it's not the ideal one.  Ideally, you'd just run make check, and if it 
returns 0 then you can be confident that GDB is working ok.  However, 
given that we do have failures in the testsuite, diffing gdb.sum is the 
next best solution.

At Ericsson, for example (and I'm sure others do something similar),
we have a CI for our GDB port where we run a stable subset of the
tests, which we expect will pass.  The build fails when "make check"
fails, that is, when there's a FAIL, UNRESOLVED or KPASS.  There are
many (expected) UNTESTED results, because some tests (sometimes just a
part of a .exp) don't apply or make sense on our platform.  In this
case, a test case that stops building (it could even be due to an
external factor, like a compiler upgrade) will probably go unnoticed.

> 
> Having a failure to prepare lead to skipping the tests seems to be
> the "standard" pattern within the GDB testsuite, so, if you agree, I
> think having this test fall in line with that is probably a good
> thing.  That doesn't mean we can't change the standard pattern in the
> future if we come up with a better model (though I don't have any
> good suggestions).

Indeed, I haven't been clear on that.  I think your patch is good, 
because it makes the test look like the rest of the testsuite.  I was 
just reflecting on that general pattern (hoping we can continue the 
discussion :)).

So, please push, and thanks for the patch!

Simon
  

Patch

diff --git a/gdb/testsuite/gdb.base/whatis-ptype-typedefs.exp b/gdb/testsuite/gdb.base/whatis-ptype-typedefs.exp
index 763d2a43952..3d910df5d02 100644
--- a/gdb/testsuite/gdb.base/whatis-ptype-typedefs.exp
+++ b/gdb/testsuite/gdb.base/whatis-ptype-typedefs.exp
@@ -45,13 +45,15 @@  proc prepare {lang} {
 
     if { [prepare_for_testing "failed to prepare" \
 	      ${out} [list $srcfile] $options] } {
-	return -1
+	return 0
     }
 
     if ![runto_main] then {
 	fail "can't run to main"
 	return 0
     }
+
+    return 1
 }
 
 # The following list is layed out as a table.  It is composed by
@@ -300,6 +302,7 @@  proc run_tests {lang} {
 }
 
 foreach_with_prefix lang {"c" "c++"} {
-    prepare $lang
-    run_tests $lang
+    if { [prepare $lang] } then {
+	run_tests $lang
+    }
 }