From patchwork Wed Sep 24 09:02:58 2014
X-Patchwork-Submitter: Siddhesh Poyarekar
X-Patchwork-Id: 2958
From: Siddhesh Poyarekar
To: libc-alpha@sourceware.org
Cc: will.newton@linaro.org, Siddhesh Poyarekar
Subject: [PATCH 1/2] New module to import and process benchmark output
Date: Wed, 24 Sep 2014 14:32:58 +0530
Message-Id: <1411549379-2687-1-git-send-email-siddhesh@redhat.com>
In-Reply-To: <20140612113706.GL10378@spoyarek.pnq.redhat.com>
References: <20140612113706.GL10378@spoyarek.pnq.redhat.com>

This is the beginning of a module to import and process benchmark
outputs.  It is currently tied to the bench.out format, but it should
be generalized in the future.  The module currently supports importing
a bench.out file and validating it against a schema file.  I have also
added a function that compresses detailed timings by grouping them into
their means based on how close they are to each other.

The idea here is to have a set of routines that benchmark consumers may
find useful for building their own analysis tools.  I have altered
validate_benchout.py to use this module too.

	* benchtests/scripts/import_bench.py: New file.
	* benchtests/scripts/validate_benchout.py: Import import_bench
	instead of jsonschema.
	(validate_bench): Remove function.
	(main): Use import_bench.
---
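As an illustration (not part of the patch), a consumer of the new module
might use it along the lines of the sketch below.  The script layout,
report format and command line handling are made up for this example;
only the parse_bench() entry point and the
bench['functions'][function][variant]['timings'] layout come from the
module itself.

    import sys

    import import_bench as bench


    def main(bench_out, schema_file):
        # parse_bench validates bench_out against schema_file and returns
        # the parsed dictionary; it raises on I/O or validation errors.
        results = bench.parse_bench(bench_out, schema_file)
        for function, variants in results['functions'].items():
            for variant, attrs in variants.items():
                # Detailed timings may not be present for every variant,
                # so guard the lookup.
                if 'timings' in attrs:
                    timings = attrs['timings']
                    print('%s(%s): %d samples, min %g, max %g' %
                          (function, variant, len(timings),
                           min(timings), max(timings)))


    if __name__ == '__main__':
        main(sys.argv[1], sys.argv[2])

Such a script could be run as, for example,
"python summarize.py bench.out benchtests/scripts/benchout.schema.json",
assuming the schema file shipped with benchtests and that
benchtests/scripts is on the Python path.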
 benchtests/scripts/import_bench.py      | 45 +++++++++++++++++++++++++++++
 benchtests/scripts/validate_benchout.py | 51 +++++++++++++++++----------------
 2 files changed, 71 insertions(+), 25 deletions(-)
 create mode 100644 benchtests/scripts/import_bench.py

diff --git a/benchtests/scripts/import_bench.py b/benchtests/scripts/import_bench.py
new file mode 100644
index 0000000..c147df7
--- /dev/null
+++ b/benchtests/scripts/import_bench.py
@@ -0,0 +1,45 @@
+#!/usr/bin/python
+# Copyright (C) 2014 Free Software Foundation, Inc.
+# This file is part of the GNU C Library.
+#
+# The GNU C Library is free software; you can redistribute it and/or
+# modify it under the terms of the GNU Lesser General Public
+# License as published by the Free Software Foundation; either
+# version 2.1 of the License, or (at your option) any later version.
+#
+# The GNU C Library is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+# Lesser General Public License for more details.
+#
+# You should have received a copy of the GNU Lesser General Public
+# License along with the GNU C Library; if not, see
+# <http://www.gnu.org/licenses/>.
+"""Functions to import benchmark data and process it"""
+
+import json
+try:
+    import jsonschema as validator
+except ImportError:
+    print('Could not find jsonschema module.')
+    raise
+
+
+def parse_bench(filename, schema_filename):
+    """Parse the input file
+
+    Parse and validate the json file containing the benchmark outputs.  Return
+    the resulting object.
+
+    Args:
+        filename: Name of the benchmark output file.
+
+    Return:
+        The bench dictionary.
+    """
+    with open(schema_filename, 'r') as schemafile:
+        schema = json.load(schemafile)
+        with open(filename, 'r') as benchfile:
+            bench = json.load(benchfile)
+            validator.validate(bench, schema)
+            do_for_all_timings(bench, lambda b, f, v:
+                    b['functions'][f][v]['timings'].sort())
+            return bench
diff --git a/benchtests/scripts/validate_benchout.py b/benchtests/scripts/validate_benchout.py
index 61a8cbd..9d3a5cb 100755
--- a/benchtests/scripts/validate_benchout.py
+++ b/benchtests/scripts/validate_benchout.py
@@ -27,37 +27,26 @@ import sys
 import os
 
 try:
-    import jsonschema
+    import import_bench as bench
 except ImportError:
-    print('Could not find jsonschema module. Output not validated.')
+    print('Import Error: Output will not be validated.')
     # Return success because we don't want the bench target to fail just
     # because the jsonschema module was not found.
     sys.exit(os.EX_OK)
 
 
-def validate_bench(benchfile, schemafile):
-    """Validate benchmark file
-
-    Validate a benchmark output file against a JSON schema.
+def print_and_exit(message, exitcode):
+    """Prints message to stderr and returns the exit code.
 
     Args:
-        benchfile: The file name of the bench.out file.
-        schemafile: The file name of the JSON schema file to validate
-        bench.out against.
+        message: The message to print
+        exitcode: The exit code to return
 
-    Exceptions:
-        jsonschema.ValidationError: When bench.out is not valid
-        jsonschema.SchemaError: When the JSON schema is not valid
-        IOError: If any of the files are not found.
+    Returns:
+        The passed exit code
     """
-    with open(benchfile, 'r') as bfile:
-        with open(schemafile, 'r') as sfile:
-            bench = json.load(bfile)
-            schema = json.load(sfile)
-            jsonschema.validate(bench, schema)
-
-    # If we reach here, we're all good.
-    print("Benchmark output in %s is valid." % benchfile)
+    print(message, file=sys.stderr)
+    return exitcode
 
 
 def main(args):
@@ -73,11 +62,23 @@ def main(args):
         Exceptions thrown by validate_bench
     """
     if len(args) != 2:
-        print("Usage: %s <bench.out file> <schema file>" % sys.argv[0],
-              file=sys.stderr)
-        return os.EX_USAGE
+        return print_and_exit("Usage: %s <bench.out file> <schema file>"
+                              % sys.argv[0], os.EX_USAGE)
+
+    try:
+        bench.parse_bench(args[0], args[1])
+    except IOError as e:
+        return print_and_exit("IOError(%d): %s" % (e.errno, e.strerror),
+                              os.EX_OSFILE)
+
+    except bench.validator.ValidationError as e:
+        return print_and_exit("Invalid benchmark output: %s" % e.message,
+                              os.EX_DATAERR)
+
+    except bench.validator.SchemaError as e:
+        return print_and_exit("Invalid schema: %s" % e.message, os.EX_DATAERR)
 
-    validate_bench(args[0], args[1])
+    print("Benchmark output in %s is valid." % args[0])
     return os.EX_OK
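
For reference (again, not part of this patch), the timing compression
described above, which groups detailed timings into their means based on
how close they are to each other, could look roughly like the sketch
below.  The function name compress_timings and the 10% closeness
threshold are invented for the example and are not taken from the actual
implementation.

    def compress_timings(timings, threshold=0.1):
        """Replace groups of close timings by their mean.

        Walk the sorted timings and keep adding a value to the current
        group as long as it is within THRESHOLD (here 10%) of the last
        value in that group; otherwise start a new group.  Each group is
        then reduced to its mean.
        """
        groups = []
        for t in sorted(timings):
            if groups and t <= groups[-1][-1] * (1 + threshold):
                groups[-1].append(t)
            else:
                groups.append([t])
        return [sum(g) / float(len(g)) for g in groups]

For instance, compress_timings([100, 102, 103, 250, 255]) would collapse
the five samples into two representative means, about 101.7 and 252.5.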