[COMMITTED] tests: Cleanup error handling and don't share cache between servers/client

Message ID 20210909165810.26719-1-mark@klomp.org
State Committed
Series [COMMITTED] tests: Cleanup error handling and don't share cache between servers/client

Commit Message

Mark Wielaard Sept. 9, 2021, 4:58 p.m. UTC
  There were still three tests that shared a cache between the servers
and the client that queried those servers. Give them all separate caches.
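
Both the servers (which themselves act as debuginfod clients when
federating) and debuginfod-find pick up their cache from
DEBUGINFOD_CACHE_PATH, so each one now gets its own directory. A rough
sketch of the pattern, with illustrative directory names rather than the
exact test code:

  export DEBUGINFOD_CACHE_PATH=${PWD}/.client_cache1  # cache for server 1
  debuginfod -F -p $PORT1 F &
  ...
  export DEBUGINFOD_CACHE_PATH=${PWD}/.client_cache3  # fresh cache for the
  mkdir -p $DEBUGINFOD_CACHE_PATH                     # debuginfod-find client
  debuginfod-find debuginfo $BUILDID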

Also, the error handler for the debuginfod tests wasn't called when a
command inside a function failed. Since testrun is a function, there
would be no metrics or error log files listed if the testrun command
failed, making it hard to see what went wrong. Fix this by using
set -o errtrace
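
A minimal sketch (not part of the patch) of the behaviour being fixed:
without errtrace, an ERR trap set by the sourcing script is not inherited
by shell functions, so a failure inside a function exits via set -e
without ever running the handler:

  set -e
  trap 'echo "dump metrics and error log files here"' ERR
  testrun() { false; }   # stand-in for the real testrun helper
  testrun                # script exits here; the ERR trap never fires
  # with 'set -o errtrace' the same failure triggers the trap first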

Signed-off-by: Mark Wielaard <mark@klomp.org>
---
 tests/ChangeLog                            | 11 +++++++++++
 tests/debuginfod-subr.sh                   | 14 ++++++++-----
 tests/run-debuginfod-federation-link.sh    |  7 ++++---
 tests/run-debuginfod-federation-metrics.sh | 18 ++++++++---------
 tests/run-debuginfod-federation-sqlite.sh  | 23 ++++++++++------------
 5 files changed, 42 insertions(+), 31 deletions(-)
  

Comments

Mark Wielaard Sept. 9, 2021, 9:12 p.m. UTC | #1
Hi,

On Thu, Sep 09, 2021 at 06:58:10PM +0200, Mark Wielaard wrote:
> Also, the error handler for the debuginfod tests wasn't called when a
> command inside a function failed. Since testrun is a function, there
> would be no metrics or error log files listed if the testrun command
> failed, making it hard to see what went wrong. Fix this by using
> set -o errtrace

So that showed run-debuginfod-fd-prefetch-caches.sh "failed". I
"fixed" that with the attached patch, which I just committed. It also
makes the cleanup and error handling slightly better (by only doing it
once).

But it doesn't really "fix" run-debuginfod-fd-prefetch-caches.sh. It
now just does nothing. The debuginfod server gets to scan a
non-existing directory and then nothing even tries to query anything
from the server. So nothing ever gets prefetched. And testing for zero
happily succeeds.
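
Roughly (with an illustrative directory and metric name, not the actual
test code) the test now amounts to:

  debuginfod -F -p $PORT1 /no/such/dir &   # server scans a non-existing dir
  # ... no debuginfod-find or other client request is ever made ...
  curl -s http://127.0.0.1:$PORT1/metrics | grep prefetch   # all counts 0
  # so asserting that the prefetch counters are zero passes vacuously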

Noah, could you take a peek at this testcase and see if you can make
it do something "real"?

Thanks,

Mark
-------------- next part --------------
A non-text attachment was scrubbed...
Name: 0001-tests-Don-t-fail-run-debuginfod-fd-prefetch-caches.s.patch
Type: text/x-diff
Size: 3803 bytes
Desc: not available
URL: <https://sourceware.org/pipermail/elfutils-devel/attachments/20210909/464079c3/attachment.bin>
  

Patch

diff --git a/tests/ChangeLog b/tests/ChangeLog
index 85dca442..05b31fd8 100644
--- a/tests/ChangeLog
+++ b/tests/ChangeLog
@@ -1,3 +1,14 @@ 
+2021-09-09  Mark Wielaard  <mark@klomp.org>
+
+	* debuginfod-subr.sh: set -o errtrace.
+	(cleanup): Don't fail kill or wait. Only trap on normal exit.
+	(err): Don't fail curl metrics. Call cleanup.
+	* run-debuginfod-federation-link.sh: Use separate client caches
+	for both servers and debuginfod client. Remove duplicate valgrind
+	disabling.
+	* run-debuginfod-federation-metrics.sh: Likewise.
+	* run-debuginfod-federation-sqlite.sh: Likewise.
+
 2021-09-06  Dmitry V. Levin  <ldv@altlinux.org>
 
 	* elfcopy.c (copy_elf): Remove cast of malloc return value.
diff --git a/tests/debuginfod-subr.sh b/tests/debuginfod-subr.sh
index 7d238436..c21b7b8a 100755
--- a/tests/debuginfod-subr.sh
+++ b/tests/debuginfod-subr.sh
@@ -16,6 +16,9 @@ 
 
 # sourced from run-debuginfod-*.sh tests (must be bash scripts)
 
+# We trap ERR and want commands that fail inside functions to also trigger it
+set -o errtrace
+
 . $srcdir/test-subr.sh  # includes set -e
 
 type curl 2>/dev/null || (echo "need curl"; exit 77)
@@ -27,14 +30,14 @@  echo "zstd=$zstd bsdtar=`bsdtar --version`"
 
 cleanup()
 {
-  if [ $PID1 -ne 0 ]; then kill $PID1; wait $PID1; fi
-  if [ $PID2 -ne 0 ]; then kill $PID2; wait $PID2; fi
+  if [ $PID1 -ne 0 ]; then kill $PID1 || : ; wait $PID1 || :; fi
+  if [ $PID2 -ne 0 ]; then kill $PID2 || : ; wait $PID2 || :; fi
   rm -rf F R D L Z ${PWD}/foobar ${PWD}/mocktree ${PWD}/.client_cache* ${PWD}/tmp*
   exit_cleanup
 }
 
-# clean up trash if we were aborted early
-trap cleanup 0 1 2 3 5 9 15
+# clean up trash if we exit
+trap cleanup 0
 
 errfiles_list=
 err() {
@@ -42,7 +45,7 @@  err() {
     for port in $PORT1 $PORT2
     do
         echo ERROR REPORT $port metrics
-        curl -s http://127.0.0.1:$port/metrics
+        curl -s http://127.0.0.1:$port/metrics || :
         echo
     done
     for x in $errfiles_list
@@ -51,6 +54,7 @@  err() {
         cat $x
         echo
     done
+    cleanup
     false # trigger set -e
 }
 trap err ERR
diff --git a/tests/run-debuginfod-federation-link.sh b/tests/run-debuginfod-federation-link.sh
index 050bcbcf..1347e7b8 100755
--- a/tests/run-debuginfod-federation-link.sh
+++ b/tests/run-debuginfod-federation-link.sh
@@ -98,6 +98,9 @@  wait_ready $PORT2 'thread_busy{role="http-metrics"}' 1
 
 # have clients contact the new server
 export DEBUGINFOD_URLS=http://127.0.0.1:$PORT2
+# Use fresh cache for debuginfod-find client requests
+export DEBUGINFOD_CACHE_PATH=${PWD}/.client_cache3
+mkdir -p $DEBUGINFOD_CACHE_PATH
 
 if type bsdtar 2>/dev/null; then
     # copy in the deb files
@@ -117,7 +120,6 @@  if type bsdtar 2>/dev/null; then
     archive_test f17a29b5a25bd4960531d82aa6b07c8abe84fa66 "" ""
 fi
 
-rm -rf $DEBUGINFOD_CACHE_PATH
 testrun ${abs_top_builddir}/debuginfod/debuginfod-find debuginfo $BUILDID
 
 # send a request to stress XFF and User-Agent federation relay;
@@ -148,8 +150,7 @@  export DEBUGINFOD_URLS=127.0.0.1:$PORT2
 testrun ${abs_top_builddir}/debuginfod/debuginfod-find debuginfo $BUILDID
 
 # test parallel queries in client
-export DEBUGINFOD_CACHE_PATH=${PWD}/.client_cache3
-mkdir -p $DEBUGINFOD_CACHE_PATH
+rm -rf $DEBUGINFOD_CACHE_PATH
 export DEBUGINFOD_URLS="BAD http://127.0.0.1:$PORT1 127.0.0.1:$PORT1 http://127.0.0.1:$PORT2 DNE"
 
 testrun ${abs_builddir}/debuginfod_build_id_find -e F/prog 1
diff --git a/tests/run-debuginfod-federation-metrics.sh b/tests/run-debuginfod-federation-metrics.sh
index 0cc4c2f7..2d0fd6d4 100755
--- a/tests/run-debuginfod-federation-metrics.sh
+++ b/tests/run-debuginfod-federation-metrics.sh
@@ -92,6 +92,10 @@  wait_ready $PORT2 'thread_busy{role="http-metrics"}' 1
 
 # have clients contact the new server
 export DEBUGINFOD_URLS=http://127.0.0.1:$PORT2
+# Use fresh cache for debuginfod-find client requests
+export DEBUGINFOD_CACHE_PATH=${PWD}/.client_cache3
+mkdir -p $DEBUGINFOD_CACHE_PATH
+
 if type bsdtar 2>/dev/null; then
     # copy in the deb files
     cp -rvp ${abs_srcdir}/debuginfod-debs/*deb D
@@ -110,7 +114,6 @@  if type bsdtar 2>/dev/null; then
     archive_test f17a29b5a25bd4960531d82aa6b07c8abe84fa66 "" ""
 fi
 
-rm -rf $DEBUGINFOD_CACHE_PATH
 testrun ${abs_top_builddir}/debuginfod/debuginfod-find debuginfo $BUILDID
 
 # send a request to stress XFF and User-Agent federation relay;
@@ -171,20 +174,15 @@  curl -s http://127.0.0.1:$PORT2/buildid/deadbeef/debuginfo > /dev/null || true
 curl -s http://127.0.0.1:$PORT2/buildid/deadbeef/badtype > /dev/null || true
 (curl -s http://127.0.0.1:$PORT2/metrics | grep 'badtype') && false
 
-# DISABLE VALGRIND checking because valgrind might use debuginfod client
-# requests itself, causing confusion about who put what in the cache.
-# It stays disabled till the end of this test.
-unset VALGRIND_CMD
-
 # Confirm that reused curl connections survive 404 errors.
-# The rm's force an uncached fetch
-rm -f $DEBUGINFOD_CACHE_PATH/$BUILDID/debuginfo .client_cache*/$BUILDID/debuginfo
+# The rm's force an uncached fetch (in both servers and client cache)
+rm -f .client_cache*/$BUILDID/debuginfo
 testrun ${abs_top_builddir}/debuginfod/debuginfod-find debuginfo $BUILDID
-rm -f $DEBUGINFOD_CACHE_PATH/$BUILDID/debuginfo .client_cache*/$BUILDID/debuginfo
+rm -f .client_cache*/$BUILDID/debuginfo
 testrun ${abs_top_builddir}/debuginfod/debuginfod-find debuginfo $BUILDID
 testrun ${abs_top_builddir}/debuginfod/debuginfod-find debuginfo $BUILDID
 testrun ${abs_top_builddir}/debuginfod/debuginfod-find debuginfo $BUILDID
-rm -f $DEBUGINFOD_CACHE_PATH/$BUILDID/debuginfo .client_cache*/$BUILDID/debuginfo
+rm -f .client_cache*/$BUILDID/debuginfo
 testrun ${abs_top_builddir}/debuginfod/debuginfod-find debuginfo $BUILDID
 
 # Confirm that some debuginfod client pools are being used
diff --git a/tests/run-debuginfod-federation-sqlite.sh b/tests/run-debuginfod-federation-sqlite.sh
index 5a18b4bb..45761ed7 100755
--- a/tests/run-debuginfod-federation-sqlite.sh
+++ b/tests/run-debuginfod-federation-sqlite.sh
@@ -78,7 +78,11 @@  wait_ready $PORT2 'thread_work_total{role="traverse"}' 1
 # And initial groom cycle
 wait_ready $PORT1 'thread_work_total{role="groom"}' 1
 
-export DEBUGINFOD_URLS='http://127.0.0.1:'$PORT2 
+export DEBUGINFOD_URLS='http://127.0.0.1:'$PORT2
+# Use fresh cache for debuginfod-find client requests
+export DEBUGINFOD_CACHE_PATH=${PWD}/.client_cache3
+mkdir -p $DEBUGINFOD_CACHE_PATH
+
 if type bsdtar 2>/dev/null; then
     # copy in the deb files
     cp -rvp ${abs_srcdir}/debuginfod-debs/*deb D
@@ -97,7 +101,6 @@  if type bsdtar 2>/dev/null; then
     archive_test f17a29b5a25bd4960531d82aa6b07c8abe84fa66 "" ""
 fi
 
-rm -rf $DEBUGINFOD_CACHE_PATH
 testrun ${abs_top_builddir}/debuginfod/debuginfod-find debuginfo $BUILDID
 
 # send a request to stress XFF and User-Agent federation relay;
@@ -127,8 +130,7 @@  rm -f $DEBUGINFOD_CACHE_PATH/$BUILDID/debuginfo # drop 000-perm negative-hit fil
 export DEBUGINFOD_URLS=127.0.0.1:$PORT2
 testrun ${abs_top_builddir}/debuginfod/debuginfod-find debuginfo $BUILDID
 # test parallel queries in client
-export DEBUGINFOD_CACHE_PATH=${PWD}/.client_cache3
-mkdir -p $DEBUGINFOD_CACHE_PATH
+rm -rf $DEBUGINFOD_CACHE_PATH
 export DEBUGINFOD_URLS="BAD http://127.0.0.1:$PORT1 127.0.0.1:$PORT1 http://127.0.0.1:$PORT2 DNE"
 
 testrun ${abs_builddir}/debuginfod_build_id_find -e F/prog 1
@@ -142,20 +144,15 @@  curl -s http://127.0.0.1:$PORT2/buildid/deadbeef/debuginfo > /dev/null || true
 curl -s http://127.0.0.1:$PORT2/buildid/deadbeef/badtype > /dev/null || true
 (curl -s http://127.0.0.1:$PORT2/metrics | grep 'badtype') && false
 
-# DISABLE VALGRIND checking because valgrind might use debuginfod client
-# requests itself, causing confusion about who put what in the cache.
-# It stays disabled till the end of this test.
-unset VALGRIND_CMD
-
 # Confirm that reused curl connections survive 404 errors.
-# The rm's force an uncached fetch
-rm -f $DEBUGINFOD_CACHE_PATH/$BUILDID/debuginfo .client_cache*/$BUILDID/debuginfo
+# The rm's force an uncached fetch (in both servers and client cache)
+rm -f .client_cache*/$BUILDID/debuginfo
 testrun ${abs_top_builddir}/debuginfod/debuginfod-find debuginfo $BUILDID
-rm -f $DEBUGINFOD_CACHE_PATH/$BUILDID/debuginfo .client_cache*/$BUILDID/debuginfo
+rm -f .client_cache*/$BUILDID/debuginfo
 testrun ${abs_top_builddir}/debuginfod/debuginfod-find debuginfo $BUILDID
 testrun ${abs_top_builddir}/debuginfod/debuginfod-find debuginfo $BUILDID
 testrun ${abs_top_builddir}/debuginfod/debuginfod-find debuginfo $BUILDID
-rm -f $DEBUGINFOD_CACHE_PATH/$BUILDID/debuginfo .client_cache*/$BUILDID/debuginfo
+rm -f .client_cache*/$BUILDID/debuginfo
 testrun ${abs_top_builddir}/debuginfod/debuginfod-find debuginfo $BUILDID
 # Trigger a flood of requests against the same archive content file.
 # Use a file that hasn't been previously extracted in to make it