From patchwork Tue Dec 11 22:46:59 2018
X-Patchwork-Submitter: leonardo.sandoval.gonzalez@linux.intel.com
X-Patchwork-Id: 30638
From: leonardo.sandoval.gonzalez@linux.intel.com
To: libc-alpha@sourceware.org
Cc: Leonardo Sandoval
Subject: [PATCH v2 3/3] benchtests: send non-consumable data to stderr
Date: Tue, 11 Dec 2018 16:46:59 -0600
Message-Id: <20181211224659.29876-4-leonardo.sandoval.gonzalez@linux.intel.com>
In-Reply-To: <20181211224659.29876-1-leonardo.sandoval.gonzalez@linux.intel.com>
References: <20181211224659.29876-1-leonardo.sandoval.gonzalez@linux.intel.com>

From: Leonardo Sandoval

Non-consumable data, that is, data not related to the benchmark results
themselves, should be sent to standard error so that pipelines consuming
the output work as expected.

	* benchtests/scripts/compare_bench.py (do_compare): Write to
	stderr if a stat is not present.
	* benchtests/scripts/compare_bench.py (plot_graphs): Write to
	stderr if the timings field is not present.  Also write the
	message showing the output filename to stderr.
---
 benchtests/scripts/compare_bench.py | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/benchtests/scripts/compare_bench.py b/benchtests/scripts/compare_bench.py
index f0c9bf7a7d..eaddc57e4e 100755
--- a/benchtests/scripts/compare_bench.py
+++ b/benchtests/scripts/compare_bench.py
@@ -47,6 +47,7 @@ def do_compare(func, var, tl1, tl2, par, threshold):
         v2 = tl2[str(par)]
         d = abs(v2 - v1) * 100 / v1
     except KeyError:
+        sys.stderr.write('%s(%s)[%s]: stat does not exist\n' % (func, var, par))
         return
     except ZeroDivisionError:
         return
@@ -85,7 +86,7 @@ def compare_runs(pts1, pts2, threshold, stats):
             # timing info for the function variant.
             if 'timings' not in pts1['functions'][func][var].keys() or \
                'timings' not in pts2['functions'][func][var].keys():
-                continue
+                continue
 
             # If two lists do not have the same length then it is likely that
             # the performance characteristics of the function have changed.
@@ -133,7 +134,7 @@ def plot_graphs(bench1, bench2):
             # No point trying to print a graph if there are no detailed
             # timings.
             if u'timings' not in bench1['functions'][func][var].keys():
-                print('Skipping graph for %s(%s)' % (func, var))
+                sys.stderr.write('Skipping graph for %s(%s)\n' % (func, var))
                 continue
 
             pylab.clf()
@@ -157,7 +158,7 @@ def plot_graphs(bench1, bench2):
                 filename = "%s-%s.png" % (func, var)
             else:
                 filename = "%s.png" % func
-            print('Writing out %s' % filename)
+            sys.stderr.write('Writing out %s\n' % filename)
             pylab.savefig(filename)
 
 def main(bench1, bench2, schema, threshold, stats):
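
For illustration only, here is a minimal, hypothetical sketch of the pattern
the patch applies (the function names below are made up and are not code from
compare_bench.py): consumable results are reserved for stdout, while every
diagnostic goes through sys.stderr.

import sys

def emit_result(name, delta):
    # Consumable comparison data stays on stdout, so it can be piped,
    # sorted, or redirected to a report file without any extra noise.
    print('%s: %.2f%%' % (name, delta))

def emit_diagnostic(msg):
    # Non-consumable messages (skips, progress, missing stats) go to
    # stderr and never mix with the data a pipeline reads from stdout.
    sys.stderr.write(msg + '\n')

emit_diagnostic('Skipping graph for memcpy(default)')
emit_result('memcpy(default)', -1.23)

With that split, stderr can be silenced or logged separately (for example
redirected with 2>/dev/null or 2>progress.log) while stdout still carries
only the comparison results.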