ITMO-NSS-team / FEDOT · Issue #779 · Closed
Created Jul 17, 2022 by Elizaveta Lutsenko (@LizLutsenko, Owner)

Avoid failing optimization when all pipelines are evaluated with errors

Created by: gkirgizov

The problematic behavior is in the method evaluate_with_cache in evaluation.py:

    def evaluate_with_cache(self, population: PopulationT) -> PopulationT:
        reversed_population = list(reversed(population))
        self._remote_compute_cache(reversed_population)
        evaluated_population = self.evaluate_population(reversed_population)
        self._reset_eval_cache()
        if not evaluated_population and reversed_population:
            # every individual failed evaluation, so the whole run is aborted here
            raise AttributeError('Too many fitness evaluation errors. Composing stopped.')
        return evaluated_population

Failing right away seems unnecessary. I have the following thoughts:

  • Making the return type Optional seems more logical, as the Optimizer can recover from the situation in most cases.
  • For example, the Optimizer could mutate (in a different way) the last successfully evaluated population.
  • Yet the bad case must still be handled: when it's the very first population, or when evaluation has failed too many times in a row.

The task is to design a solution with these points in mind, so that the Optimizer can do something when evaluation goes wrong.
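
A minimal sketch of how this could look. The names max_failed_evals, _mutate_differently, and _last_evaluated are illustrative assumptions, not part of FEDOT's actual API; only evaluate_with_cache and the methods it calls come from the snippet above.

    from typing import List, Optional

    PopulationT = List['Individual']  # placeholder for FEDOT's population type

    def evaluate_with_cache(self, population: PopulationT) -> Optional[PopulationT]:
        reversed_population = list(reversed(population))
        self._remote_compute_cache(reversed_population)
        evaluated_population = self.evaluate_population(reversed_population)
        self._reset_eval_cache()
        if not evaluated_population and reversed_population:
            return None  # signal total failure; let the Optimizer decide what to do
        return evaluated_population

    # Caller side, somewhere in the Optimizer's evolution loop:
    def _evaluate_or_recover(self, population: PopulationT) -> PopulationT:
        for _ in range(self.max_failed_evals + 1):
            evaluated = self.evaluator.evaluate_with_cache(population)
            if evaluated is not None:
                self._last_evaluated = evaluated
                return evaluated
            if self._last_evaluated is None:
                # the very first population failed: there is nothing to recover from
                break
            # recover by mutating the last successfully evaluated population differently
            population = self._mutate_differently(self._last_evaluated)
        raise RuntimeError('Too many fitness evaluation errors. Composing stopped.')

This keeps the hard-failure path only for the two bad cases from the list above (no previously evaluated population, or max_failed_evals consecutive failures) while letting the optimization continue otherwise.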

It's a more general version of issue https://github.com/nccr-itmo/FEDOT/issues/767, because this error sometimes appears in several tests and, in principle, can arise at any time due to the stochastic nature of optimization.
