Sequoia
Extension of the testing framework for performance testing.
#include "sequoia/TestFramework/RegularTestCore.hpp"
#include "sequoia/Maths/Statistics/StatisticalAlgorithms.hpp"
#include "sequoia/TestFramework/FileEditors.hpp"
#include <chrono>
#include <random>
#include <future>
#include <thread>
Classes

class sequoia::testing::performance_extender< Mode >
    Class template for plugging into the checker class template.

class sequoia::testing::basic_performance_test< Mode >
    Class template from which all concrete tests should derive.

struct sequoia::testing::is_parallelizable< T >
Typedefs

using sequoia::testing::performance_test = basic_performance_test< test_mode::standard >
using sequoia::testing::performance_false_positive_test = basic_performance_test< test_mode::false_positive >
using sequoia::testing::performance_false_negative_test = basic_performance_test< test_mode::false_negative >
Functions

template<std::invocable Task>
std::chrono::duration< double > sequoia::testing::profile(Task task)

template<test_mode Mode, std::invocable F, std::invocable S>
bool sequoia::testing::check_relative_performance(std::string_view description, test_logger< Mode >& logger, F fast, S slow, const double minSpeedUp, const double maxSpeedUp, const std::size_t trials, const double num_sds, const std::size_t maxAttempts)
    Function for comparing the performance of a fast task to a slow task.

template<class T, class Period>
std::chrono::duration< T, Period > sequoia::testing::calibrate(std::chrono::duration< T, Period > target)

std::string_view sequoia::testing::postprocess(std::string_view testOutput, std::string_view referenceOutput)
using sequoia::testing::performance_test = basic_performance_test< test_mode::standard >
bool sequoia::testing::check_relative_performance(std::string_view description,
                                                  test_logger< Mode >& logger,
                                                  F fast,
                                                  S slow,
                                                  const double minSpeedUp,
                                                  const double maxSpeedUp,
                                                  const std::size_t trials,
                                                  const double num_sds,
                                                  const std::size_t maxAttempts)
Function for comparing the performance of a fast task to a slow task.
Parameters:
    minSpeedUp   the minimum predicted speed-up of fast over slow; must be > 1
    maxSpeedUp   the maximum predicted speed-up of fast over slow; must be > minSpeedUp
    trials       the number of trials used for the statistical analysis
    num_sds      the number of standard deviations used to define a significant result
    maxAttempts  the number of times the entire test should be re-run before accepting failure
For each trial, both the supposedly fast and slow tasks are run, in random order. Once all trials are complete, the mean and standard deviation are computed for each task; denote these by m_f, sig_f and m_s, sig_s.
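The trial loop and summary statistics just described can be sketched as follows. This is illustrative only; the names summarize and run_trials, and the use of std::bernoulli_distribution to randomize the ordering, are assumptions rather than the library's actual code.

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <numeric>
#include <random>
#include <utility>
#include <vector>

// Illustrative sketch of the procedure described above; not sequoia's code.
struct summary { double mean{}, sd{}; };

// Population mean and standard deviation of a set of timings.
summary summarize(const std::vector<double>& timings)
{
    const auto n{static_cast<double>(timings.size())};
    const double m{std::accumulate(timings.begin(), timings.end(), 0.0) / n};
    double var{};
    for(const double t : timings) var += (t - m) * (t - m);
    return {m, std::sqrt(var / n)};
}

// Run `trials` paired timings of fast/slow, randomizing which runs first.
// `time` is any callable returning a double-valued timing for a task.
template<class F, class S, class Timer>
std::pair<summary, summary> run_trials(F fast, S slow, Timer time, std::size_t trials)
{
    std::mt19937 rng{std::random_device{}()};
    std::bernoulli_distribution fastFirst{0.5};
    std::vector<double> fastTimes{}, slowTimes{};
    for(std::size_t i{}; i < trials; ++i)
    {
        if(fastFirst(rng)) { fastTimes.push_back(time(fast)); slowTimes.push_back(time(slow)); }
        else               { slowTimes.push_back(time(slow)); fastTimes.push_back(time(fast)); }
    }
    return {summarize(fastTimes), summarize(slowTimes)};
}
```

Randomizing the order within each trial guards against systematic bias, for example cache warming or frequency scaling consistently favouring whichever task happens to run second.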
if (m_f - sig_f > m_s + sig_s)
then it is concluded that the purportedly fast task is actually slower than the slow task, and so the test fails. Otherwise, the analysis branches depending on which standard deviation is bigger.
if (sig_f >= sig_s)
then we multiply m_f by both the min and max predicted speed-ups and compare to the range of values around m_s defined by the number of standard deviations. In particular, the test is taken to pass if
(minSpeedUp * m_f <= (m_s + num_sds * sig_s)) && (maxSpeedUp * m_f >= (m_s - num_sds * sig_s))
which is essentially saying that the range of predicted speed-ups must fall within the specified number of standard deviations of m_s.
On the other hand
if (sig_s > sig_f)
then we divide m_s by both the min/max predicted speed-up and compare to the range of values around m_f defined by the number of standard deviations. In particular, the test is taken to pass if
(m_s / maxSpeedUp <= (m_f + num_sds * sig_f)) && (m_s / minSpeedUp >= (m_f - num_sds * sig_f))
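Putting the two branches together, the acceptance criterion described above can be condensed into a single predicate. This is a hypothetical free function for illustration; in the library this logic lives inside check_relative_performance, interleaved with logging and retries.

```cpp
#include <cassert>

// Illustrative condensation of the criterion described above;
// not the library's actual code.
bool passes_sketch(double m_f, double sig_f, double m_s, double sig_s,
                   double minSpeedUp, double maxSpeedUp, double num_sds)
{
    // The test fails outright if the 'fast' task is distinguishably slower.
    if(m_f - sig_f > m_s + sig_s) return false;

    if(sig_f >= sig_s)
    {
        // Scale m_f by the predicted speed-ups; the scaled range must fall
        // within num_sds standard deviations of m_s.
        return (minSpeedUp * m_f <= m_s + num_sds * sig_s)
            && (maxSpeedUp * m_f >= m_s - num_sds * sig_s);
    }

    // sig_s > sig_f: divide m_s by the predicted speed-ups instead and
    // compare against the band around m_f.
    return (m_s / maxSpeedUp <= m_f + num_sds * sig_f)
        && (m_s / minSpeedUp >= m_f - num_sds * sig_f);
}
```

Branching on the larger standard deviation means the comparison band is always built around the less noisy measurement, which makes the test more robust when one task's timings fluctuate much more than the other's.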