Abstract notion of Objective, its evaluation, and refactored usages of metrics
Created by: gkirgizov
Main introduced abstractions:
- `Objective` -- represents objective functions; encapsulates a set of metrics and the kind of objective (single/multi), and can provide some info about them (metric names).
- `ObjectiveEvaluate` -- responsible for the specific evaluation policy of (a) an Objective and (b) Graphs. It hides the domain specifics of what the graphs are and what is additionally required for evaluating the objective.
  - For example, Pipelines are evaluated by `DataObjectiveEvaluate`, which encapsulates the necessary `pipeline.fit` on the train data and the objective evaluation on the test data.
- `Evaluate` (introduced in a previous PR #639) is renamed to `EvaluateDispatcher` -- responsible for how computing `ObjectiveEvaluation` must be distributed over processes.
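The split between the first two abstractions can be sketched roughly as follows. This is a minimal illustration with simplified signatures, not the actual classes from the codebase; method and attribute names here are assumptions:

```python
from dataclasses import dataclass
from typing import Any, Callable, Dict, List


@dataclass
class Objective:
    """Sketch: encapsulates a set of named metrics and the kind of objective."""
    metrics: Dict[str, Callable[[Any], float]]
    is_multi_objective: bool = False

    @property
    def metric_names(self) -> List[str]:
        # Info about the metrics that the Objective can provide.
        return list(self.metrics.keys())

    def __call__(self, graph: Any) -> Dict[str, float]:
        # Evaluate every metric on the given graph.
        return {name: metric(graph) for name, metric in self.metrics.items()}


class ObjectiveEvaluate:
    """Sketch: an evaluation policy for an Objective on domain graphs.

    A domain-specific subclass (like DataObjectiveEvaluate for Pipelines)
    would hide extra steps such as fitting on train data before evaluating
    the objective on test data.
    """

    def __init__(self, objective: Objective):
        self.objective = objective

    def evaluate(self, graph: Any) -> Dict[str, float]:
        # Trivial policy: evaluate the objective directly on the graph.
        return self.objective(graph)
```

The point of the split is that optimisers only need the evaluation policy, while the Objective itself stays a plain, domain-agnostic bundle of metrics.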
So, following these abstractions, the main changes in the API:
- Now `Objective` must be used instead of just a list of metrics with the boolean flag `is_multi_objective`. All usages of the `metrics` list are dropped from composers, optimisers, etc.
- `GraphOptimiser.optimise` now accepts `ObjectiveEvaluate` as an argument. For tests and ad-hoc usages there is a way to construct a trivial `ObjectiveEvaluate` with a trivial `Objective` from a simple Callable: e.g. see `run_custom_example.py`.
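The new `optimise` signature and the "trivial evaluator from a simple Callable" path could look roughly like this. All names below (`objective_evaluate_from_callable`, the toy optimiser internals) are illustrative assumptions, not the actual API of `run_custom_example.py`:

```python
from typing import Any, Callable, Dict, List

# An evaluator is just: graph -> {metric_name: value}.
Evaluator = Callable[[Any], Dict[str, float]]


def objective_evaluate_from_callable(metric: Callable[[Any], float],
                                     name: str = 'custom') -> Evaluator:
    """Hypothetical helper: wrap a plain Callable as a trivial evaluator,
    as one might do in tests or ad-hoc scripts."""
    return lambda graph: {name: metric(graph)}


class GraphOptimiser:
    """Sketch: the optimiser receives the evaluation policy,
    not a raw list of metrics plus an is_multi_objective flag."""

    def __init__(self, initial_graphs: List[Any]):
        self.initial_graphs = initial_graphs

    def optimise(self, objective_evaluate: Evaluator) -> Any:
        # Toy 'optimisation': return the graph minimising the first metric.
        def first_metric(graph: Any) -> float:
            return next(iter(objective_evaluate(graph).values()))
        return min(self.initial_graphs, key=first_metric)


# Ad-hoc usage: a simple Callable becomes the whole objective.
evaluator = objective_evaluate_from_callable(lambda g: float(len(g)), name='size')
optimiser = GraphOptimiser(initial_graphs=[[1, 2, 3], [1], [1, 2]])
best = optimiser.optimise(evaluator)  # -> [1], the smallest graph
```

This keeps the optimiser's contract narrow: it only ever sees a callable evaluation policy, so swapping in a real domain evaluator requires no changes on the optimiser side.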
Part of the big issue #608