[2/2] benchtests: Add a new argument -t to read throughput results
Commit Message
String benchmarks that store results as throughput rather than
latency report improvements as negative differences, because
compare_strings.py assumes that lower numbers are better. Add a flag to
fix the output of compare_strings.py in such cases.
* benchtests/scripts/compare_strings.py: New option -t.
---
benchtests/scripts/compare_strings.py | 8 ++++++--
1 file changed, 6 insertions(+), 2 deletions(-)
Comments
On 09/18/2017 11:40 AM, Siddhesh Poyarekar wrote:
> String benchmarks that store results as throughput rather than
> latency report improvements as negative differences, because
> compare_strings.py assumes that lower numbers are better. Add a flag to
> fix the output of compare_strings.py in such cases.
>
> * benchtests/scripts/compare_strings.py: New option -t.
... and you wouldn't need this patch if you'd not changed to throughput.
Can't you just post-process the data to get throughput for your fancy
graphs... or better yet add fancy graph support directly to benchtests ;-)
On Friday 22 September 2017 12:01 AM, Carlos O'Donell wrote:
> On 09/18/2017 11:40 AM, Siddhesh Poyarekar wrote:
>> String benchmarks that store results as throughput rather than
>> latency report improvements as negative differences, because
>> compare_strings.py assumes that lower numbers are better. Add a flag to
>> fix the output of compare_strings.py in such cases.
>>
>> * benchtests/scripts/compare_strings.py: New option -t.
> ... and you wouldn't need this patch if you'd not changed to throughput.
>
> Can't you just post-process the data to get throughput for your fancy
> graphs... or better yet add fancy graph support directly to benchtests ;-)
I suppose I could add a property to the benchmark output itself like:
"result-type": "rate" | "time"
which would act as a hint to post-processing scripts like
compare_strings.py.
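As a minimal sketch of the consumer side (the "result-type" key and its
values are hypothetical, not part of the current bench output format),
a post-processing script could do something like:

import json

def load_results(path):
    # 'result-type' is a hypothetical property: 'rate' would mean
    # higher is better (throughput); 'time', the assumed default,
    # would mean lower is better (latency).
    with open(path) as f:
        results = json.load(f)
    higher_is_better = results.get('result-type', 'time') == 'rate'
    return results, higher_is_better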
BTW, there's a -g switch in compare_strings.py that generates graphs
for the string benchmarks. One needs to exclude the simple_* string
functions to get meaningful results, since they tend to be
significantly slower and unnecessarily stretch the range of the Y-axis.
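For example, a hypothetical pre-filter along these lines could drop the
simple_* variants before graphing; it assumes the usual string bench
JSON layout, where each function carries an 'ifuncs' name list and
per-result 'timings' arrays parallel to it:

def drop_simple_ifuncs(bench, func):
    # Remove simple_* variants from one function's results in place.
    # Assumes bench['functions'][func] has an 'ifuncs' list of names
    # and 'results' whose 'timings' arrays are parallel to it.
    entry = bench['functions'][func]
    keep = [i for i, name in enumerate(entry['ifuncs'])
            if not name.startswith('simple_')]
    entry['ifuncs'] = [entry['ifuncs'][i] for i in keep]
    for res in entry['results']:
        res['timings'] = [res['timings'][i] for i in keep]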
Siddhesh
@@ -79,7 +79,7 @@ def draw_graph(f, v, ifuncs, results):
pylab.savefig('%s-%s.png' % (f, v), bbox_inches='tight')
-def process_results(results, attrs, base_func, graph):
+def process_results(results, attrs, base_func, graph, throughput):
""" Process results and print them
Args:
@@ -110,6 +110,8 @@ def process_results(results, attrs, base_func, graph):
if i != base_index:
base = res['timings'][base_index]
diff = (base - t) * 100 / base
+ if throughput:
+ diff = -diff
sys.stdout.write (' (%6.2f%%)' % diff)
sys.stdout.write('\t')
i = i + 1
@@ -132,7 +134,7 @@ def main(args):
attrs = args.attributes.split(',')
results = parse_file(args.input, args.schema)
- process_results(results, attrs, base_func, args.graph)
+ process_results(results, attrs, base_func, args.graph, args.throughput)
if __name__ == '__main__':
@@ -152,6 +154,8 @@ if __name__ == '__main__':
help='IFUNC variant to set as baseline.')
parser.add_argument('-g', '--graph', action='store_true',
help='Generate a graph from results.')
+ parser.add_argument('-t', '--throughput', action='store_true',
+ help='Treat results as throughput and not time.')
args = parser.parse_args()
main(args)
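To see why the negation in process_results is needed: with latency
numbers a smaller new timing t is an improvement, so
(base - t) * 100 / base comes out positive, while with throughput a
larger t is the improvement and the same formula comes out negative. A
small standalone illustration (the numbers are made up):

def percent_diff(base, t, throughput=False):
    # Positive result = improvement, matching compare_strings.py.
    diff = (base - t) * 100 / base
    return -diff if throughput else diff

# Latency: the new run took 80 units instead of 100 -> 20% faster.
assert percent_diff(100, 80) == 20.0
# Throughput: the new run moved 120 units/s instead of 100 -> 20%
# better, but the raw formula yields -20, hence the flip.
assert percent_diff(100, 120, throughput=True) == 20.0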