Grigorii authored
Part of the big issue #608

Main introduced abstractions:

* Objective -- Represents objective functions; encapsulates a set of metrics and the kind of objective (single/multi), and can provide some info about them (e.g. metric names).
* ObjectiveEvaluate -- Responsible for the specific evaluation policy of (a) an Objective and (b) Graphs. It hides the domain specifics of what the graphs are and what is additionally required for evaluating the objective. For example, Pipelines are evaluated by DataObjectiveEvaluate, which encapsulates the necessary pipeline.fit on the train data and the objective evaluation on the test data.
* Evaluate (introduced in a previous PR #639) is renamed to EvaluateDispatcher -- Responsible for how the computation of objective evaluations must be distributed over processes.

Following these abstractions, the main changes in the API are:

* Now an Objective must be used instead of a plain list of metrics with the boolean flag is_multi_objective. All usages of metric lists are dropped from composers, optimisers, etc.
* GraphOptimiser.optimise now accepts an ObjectiveEvaluate as an argument. For tests and ad-hoc usages there is a way to construct a trivial ObjectiveEvaluate with a trivial Objective from a simple Callable: e.g. see run_custom_example.py (and the sketch after this list).
* Correspondence between old and new code for metric calculation:
  1. `calc_metric_for_folds` -> `ObjectiveEvaluate.evaluate` (for a data_producer with K folds)
  2. `composer.compose_metric` -> `ObjectiveEvaluate.evaluate` (for a trivial data_producer with 1 fold)
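A minimal, self-contained sketch of how these abstractions are meant to fit together. The class bodies below are toy stand-ins written for this note, not the actual implementations from this commit; all signatures shown are assumptions, and the `node_count` metric and the example "graph" (a plain list) are hypothetical.

```python
# Toy sketch only: mirrors the described responsibilities, not the real API.
from typing import Callable, List, Sequence


class Objective:
    """Encapsulates a set of metrics and the kind of objective (single/multi)."""

    def __init__(self, metrics: Sequence[Callable], is_multi_objective: bool = False):
        self.metrics = metrics
        self.is_multi_objective = is_multi_objective

    @property
    def metric_names(self) -> List[str]:
        return [metric.__name__ for metric in self.metrics]

    def __call__(self, graph) -> List[float]:
        # Apply every metric to the graph; multi-objective just means
        # downstream code keeps all values instead of a single scalar.
        return [metric(graph) for metric in self.metrics]


class ObjectiveEvaluate:
    """Evaluation policy: how an Objective is applied to a graph.

    A domain-specific subclass (like DataObjectiveEvaluate for Pipelines)
    would override evaluate() to fit on train data before scoring on test data.
    """

    def __init__(self, objective: Objective):
        self.objective = objective

    def evaluate(self, graph) -> List[float]:
        # Trivial policy: evaluate the objective on the graph as-is.
        return self.objective(graph)


def node_count(graph) -> float:
    """Hypothetical metric over a toy 'graph' (here just a list of nodes)."""
    return float(len(graph))


# Trivial usage, as for tests and ad-hoc runs: a simple Callable becomes
# a single-metric Objective wrapped in a trivial ObjectiveEvaluate, which
# is then the single argument an optimiser needs for fitness evaluation.
evaluator = ObjectiveEvaluate(Objective([node_count]))
print(evaluator.evaluate(['scaling', 'ridge']))  # -> [2.0]
```

The point of the split is that the optimiser only ever sees `ObjectiveEvaluate.evaluate`, so swapping a trivial callable-based objective for a K-fold pipeline evaluation changes the policy object, not the optimiser.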
3a6f06b2