[RFC] Rerun benchmarks only if *.out file does not exist.

Message ID 1506123735.2286.174.camel@cavium.com
State New, archived

Commit Message

Steve Ellcey Sept. 22, 2017, 11:42 p.m. UTC
  While doing some benchmarking on glibc I noticed a difference between
'make check' and 'make bench'.  If you rerun 'make check' after having
run it once already, it will only rerun tests where the *.out file does
not exist.  This is useful (as explained in the glibc Wiki) because you
can rerun one test by removing just that .out file and running 'make check'.
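The skip-if-.out-exists pattern described above can be sketched with a small self-contained shell script; the `run_test` function, scratch directory, and test name here are illustrative stand-ins, not glibc's actual harness:

```shell
# Sketch of the skip-if-.out-exists pattern: a test only runs when its
# .out file is missing, so removing one .out file reruns just that test.
cd "$(mktemp -d)"
run_test() {
  if test ! -f "$1.out"; then
    echo "Running $1"
    echo "fake results" > "$1.out"
  else
    echo "Skipping $1"
  fi
}
run_test bench-memset    # first run: executes and writes bench-memset.out
run_test bench-memset    # second run: skipped because the .out file exists
rm bench-memset.out      # remove the .out file to force a rerun
run_test bench-memset    # runs again
```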

With 'make bench' it reruns all the benchmarks each time you do 'make bench'
even if the .out files exist.  I was thinking that it would be nice to
have this follow the 'make check' behaviour; then, if you only want to retest
one test (because you are making performance changes to one function), you
can just remove that .out file and run 'make bench'.  Right now there does
not seem to be a way to run one performance benchmark.

Two questions: One, does this sound useful to other people?  And two, what
is wrong with my patch?  This patch mostly works but the bench-memcpy,
bench-memcpy-large, and bench-memcpy-random tests get rerun every time even
if the .out files for those tests already exist.  I don't know why.  I am
guessing it has something to do with their common name prefix and the string
manipulations done to test names in the Makefile, but memset does not seem
to have that problem.

Steve Ellcey
sellcey@cavium.com


2017-09-22  Steve Ellcey  <sellcey@cavium.com>

	* benchtests/Makefile (bench-set): Only run if *.out file
	does not exist.
	(bench-malloc): Likewise.
	(bench-func): Likewise.
  

Comments

Steve Ellcey Sept. 25, 2017, 5:17 p.m. UTC | #1
On Fri, 2017-09-22 at 16:42 -0700, Steve Ellcey wrote:

> 
> Two questions: One, does this sound useful to other people?  And two, what
> is wrong with my patch?  This patch mostly works but the bench-memcpy,
> bench-memcpy-large, and bench-memcpy-random tests get rerun every time even
> if the .out files for those tests already exist.  I don't know why.  I am
> guessing it has something to do with their common name prefix and the string
> manipulations done to test names in the Makefile, but memset does not seem
> to have that problem.
> 
> Steve Ellcey
> sellcey@cavium.com

Well, you can ignore the question about what is wrong with the patch, I
was removing the memcpy*.out files and that was why they got rerun.  I
didn't see the rm command in my script and got confused.

While testing my changes I noticed that 'make bench-clean' does not
remove the *.out files.  It removes the executables and object files
but not the *.out result files.  So with my change 'make bench; make
bench-clean; make bench' resulted in the benchmarks getting rebuilt but
not rerun.  I don't know if the decision to not remove *.out files was
intentional or an oversight.

Steve Ellcey
sellcey@cavium.com
  
Siddhesh Poyarekar Sept. 26, 2017, 6:44 p.m. UTC | #2
On Monday 25 September 2017 10:47 PM, Steve Ellcey wrote:
>> Two questions: One, does this sound useful to other people?  And two, what
>> is wrong with my patch?  This patch mostly works but the bench-memcpy,
>> bench-memcpy-large, and bench-memcpy-random tests get rerun every time even
>> if the .out files for those tests already exist.  I don't know why.  I am
>> guessing it has something to do with their common name prefix and the string
>> manipulations done to test names in the Makefile, but memset does not seem
>> to have that problem.
<snip>
> 
> Well, you can ignore the question about what is wrong with the patch, I
> was removing the memcpy*.out files and that was why they got rerun.  I
> didn't see the rm command in my script and got confused.
> 
> While testing my changes I noticed that 'make bench-clean' does not
> remove the *.out files.  It removes the executables and object files
> but not the *.out result files.  So with my change 'make bench; make
> bench-clean; make bench' resulted in the benchmarks getting rebuilt but
> not rerun.  I don't know if the decision to not remove *.out files was
> intentional or an oversight.

That is intentional, as is the decision to re-run benchmarks despite the
presence of the *.out files.  In fact there is a different kind of bug
there: the current .out file should be backed up the way the main
bench.out is backed up to bench.out.old, and I may have missed doing
that for the string benchmarks.  That provides a convenient way to do
before-after comparison for patches.
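The backup rotation described here, which the posted patch's bench-func hunk removes, can be sketched on its own; the file contents and scratch directory below are illustrative, not a real benchmark run:

```shell
# Sketch of the bench.out -> bench.out.old rotation: preserve the
# previous results before installing the new ones, so before-after
# comparison of a patch is always possible.
cd "$(mktemp -d)"
echo "old results" > bench.out        # pretend a previous run exists
echo "new results" > bench.out-tmp    # freshly generated results
if [ -f bench.out ]; then
  mv -f bench.out bench.out.old       # keep the previous run
fi
mv -f bench.out-tmp bench.out         # install the new run
```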

If you want to run only specific benchmarks or benchsets, I suggest
making that a command line parameter, something like:

  make BENCHSETS=string-benchset

or

  make BENCH="sin cos asin acos"

Siddhesh
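The selection logic Siddhesh suggests could look something like the following sketch; the variable names and benchmark list are hypothetical, not glibc's actual Makefile machinery:

```shell
# Hypothetical sketch of a BENCH=... filter: run only the benchmarks
# named in $BENCH, or all of them when BENCH is unset or empty.
all_benches="sin cos asin acos tan"
BENCH="sin cos"
selected=""
for b in $all_benches; do
  case " ${BENCH:-$all_benches} " in
    *" $b "*) selected="$selected $b" ;;
  esac
done
echo "would run:$selected"
```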
  

Patch

diff --git a/benchtests/Makefile b/benchtests/Makefile
index 3acc39c..66b080e 100644
--- a/benchtests/Makefile
+++ b/benchtests/Makefile
@@ -168,15 +168,19 @@  endif
 
 bench-set: $(binaries-benchset)
 	for run in $^; do \
-	  echo "Running $${run}"; \
-	  $(run-bench) > $${run}.out; \
+	  if test ! -f $${run}.out; then \
+	    echo "Running $${run}"; \
+	    $(run-bench) > $${run}.out; \
+	  fi \
 	done
 
 bench-malloc: $(binaries-bench-malloc)
 	run=$(objpfx)bench-malloc-thread; \
 	for thr in 1 8 16 32; do \
-	  echo "Running $${run} $${thr}"; \
-	  $(run-bench) $${thr} > $${run}-$${thr}.out; \
+	  if test ! -f $${run}-$${thr}.out; then \
+	    echo "Running $${run} $${thr}"; \
+	    $(run-bench) $${thr} > $${run}-$${thr}.out; \
+	  fi \
 	done
 
 # Build and execute the benchmark functions.  This target generates JSON
@@ -184,25 +188,23 @@  bench-malloc: $(binaries-bench-malloc)
 # so one could even execute them individually and process it using any JSON
 # capable language or tool.
 bench-func: $(binaries-bench)
-	{ timing_type=$$($(timing-type)); \
-	echo "{\"timing_type\": \"$${timing_type}\","; \
-	echo " \"functions\": {"; \
-	for run in $^; do \
-	  if ! [ "x$${run}" = "x$<" ]; then \
-	    echo ","; \
-	  fi; \
-	  echo "Running $${run}" >&2; \
-	  $(run-bench) $(DETAILED_OPT); \
-	done; \
-	echo; \
-	echo " }"; \
-	echo "}"; } > $(objpfx)bench.out-tmp; \
-	if [ -f $(objpfx)bench.out ]; then \
-	  mv -f $(objpfx)bench.out $(objpfx)bench.out.old; \
-	fi; \
-	mv -f $(objpfx)bench.out-tmp $(objpfx)bench.out
-	$(PYTHON) scripts/validate_benchout.py $(objpfx)bench.out \
-		scripts/benchout.schema.json
+	if test ! -f $(objpfx)bench.out; then \
+	  { timing_type=$$($(timing-type)); \
+	  echo "{\"timing_type\": \"$${timing_type}\","; \
+	  echo " \"functions\": {"; \
+	  for run in $^; do \
+	    if ! [ "x$${run}" = "x$<" ]; then \
+	      echo ","; \
+	    fi; \
+	    echo "Running $${run}" >&2; \
+	    $(run-bench) $(DETAILED_OPT); \
+	  done; \
+	  echo; \
+	  echo " }"; \
+	  echo "}"; } > $(objpfx)bench.out; \
+	  $(PYTHON) scripts/validate_benchout.py $(objpfx)bench.out \
+		scripts/benchout.schema.json; \
+	fi
 
 $(timing-type) $(binaries-bench) $(binaries-benchset) \
 	$(binaries-bench-malloc): %: %.o $(objpfx)json-lib.o \