Changed methodology for benchmarks

Merged: Dariusz Kędzierski requested to merge dk-benchmarks-ver-2 into develop
5 files changed: +144 −13
Commit 7d59a053:
    * Usage of pytest-benchmark is dropped, since it calls tests
      sequentially and takes a very long time to complete; the original
      tavern tests are used instead
    * Time measurement is done via the durations switch passed to pytest
      and tavern
    * During benchmarking, validate_response.py is disabled using the
      newly introduced TAVERN_DISABLE_COMPARATOR environment variable
      (see the first sketch after this list)
    * Each run creates a JUnit XML file with timing data
    * After all runs, the data from these files is combined and a report
      file is generated (see the second sketch after this list)
    * Tests above the time threshold are marked in red
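These changes touch two Python scripts that are not shown in this view.
First, a minimal sketch of how validate_response.py could honour the new
environment variable; only the TAVERN_DISABLE_COMPARATOR name comes from
the shell script below, while the function name and signature here are
hypothetical:

import os

def validate_response(response, expected):
    # During benchmarking, skip the response comparison entirely.
    if os.environ.get("TAVERN_DISABLE_COMPARATOR", "").lower() == "true":
        return
    # ... normal comparison logic would follow here

Second, a rough sketch of what scripts/xml_report_parser.py might do: read
the per-iteration benchmarks-*.xml JUnit files, average each test's time
across iterations, and emit a report in which tests above the threshold are
rendered in red. The threshold value, output file name and HTML layout are
assumptions, not taken from the merge request:

#!/usr/bin/env python3
import glob
import statistics
import xml.etree.ElementTree as ET

THRESHOLD_S = 1.0  # assumed time threshold

def collect(pattern="benchmarks-*.xml"):
    """Gather per-test times from every iteration's JUnit XML file."""
    times = {}
    for path in glob.glob(pattern):
        for case in ET.parse(path).getroot().iter("testcase"):
            name = f'{case.get("classname")}::{case.get("name")}'
            times.setdefault(name, []).append(float(case.get("time", 0)))
    return times

def write_report(times, out="benchmark_report.html"):
    """Write one table row per test, red when the mean time exceeds the threshold."""
    rows = []
    for name, samples in sorted(times.items()):
        mean = statistics.mean(samples)
        color = "red" if mean > THRESHOLD_S else "black"
        rows.append(f'<tr style="color:{color}"><td>{name}</td>'
                    f'<td>{mean:.3f}s</td><td>{max(samples):.3f}s</td></tr>')
    with open(out, "w") as f:
        f.write("<table><tr><th>test</th><th>mean</th><th>max</th></tr>"
                + "".join(rows) + "</table>")

if __name__ == "__main__":
    write_report(collect())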
Benchmark runner script (+20 −0):
#!/bin/bash
set -e

# Install tox for the current user.
pip3 install tox --user

# Target hivemind instance: address and port are the first two arguments.
export HIVEMIND_ADDRESS=$1
export HIVEMIND_PORT=$2
# Disable response validation (validate_response.py) while benchmarking.
export TAVERN_DISABLE_COMPARATOR=true

echo "Attempting to start benchmarks on hivemind instance listening on: $HIVEMIND_ADDRESS port: $HIVEMIND_PORT"

ITERATIONS=$3
for (( i=0; i<$ITERATIONS; i++ ))
do
  echo "About to run iteration $i"
  # Run the tavern test suite in parallel, writing timing data to a per-iteration JUnit XML file.
  tox -e tavern-benchmark -- -W ignore::pytest.PytestDeprecationWarning -n auto --junitxml=../../../../benchmarks-$i.xml
  echo "Done!"
done

# Combine the per-iteration XML files into a single benchmark report.
./scripts/xml_report_parser.py . ./tests/tests_api/hivemind/tavern
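Assuming the script above is saved as, for example, scripts/run_api_benchmarks.sh
(the actual file name is not shown in this view), a typical invocation against a
local hivemind instance with five benchmark iterations would be:

./scripts/run_api_benchmarks.sh 127.0.0.1 8080 5

Each iteration then leaves its own benchmarks-<i>.xml file, which
xml_report_parser.py combines into the final report.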