Prototype of GPU evaluation
New modifications include:
- initial GPU support via RAPIDS. Instructions for running FEDOT on GPU using Docker are coming soon.
- a Dockerfile for basic usage of FEDOT.
- modified OperationTypesRepository logic: the behaviour of the repository is now defined by the mode parameter, e.g. OperationTypesRepository('model'). The built-in modes are 'model' and 'data_operation'. To assign a custom repository to any mode, call the class method OperationTypesRepository.assign_repo('mode', 'custom_repo_file.json').
Example:

    repository = OperationTypesRepository().assign_repo('model', 'gpu_models_repository.json')
    available_operations = repository.suitable_operation(task_type=task.task_type)

Note: after calling assign_repo, the behaviour (in other words, the assigned repository) is saved for the chosen mode until assign_repo is called again.
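The persistence of the mode-to-repository mapping can be sketched as follows. This is a simplified illustration of the behaviour described above, not FEDOT's actual implementation: the class name, default file names, and attributes here are hypothetical.

```python
# Sketch (not FEDOT's actual code) of a mode-keyed repository registry:
# assign_repo stores a repository file per mode, and the mapping persists
# for later instances until assign_repo is called again for that mode.
class OperationTypesRepositorySketch:
    # class-level mapping: mode -> repository file (hypothetical defaults)
    _repos = {
        'model': 'model_repository.json',
        'data_operation': 'data_operation_repository.json',
    }

    def __init__(self, mode='model'):
        self.mode = mode

    @classmethod
    def assign_repo(cls, mode, repo_file):
        # override the repository for the given mode until the next call
        cls._repos[mode] = repo_file
        return cls(mode)

    @property
    def repo_file(self):
        return self._repos[self.mode]


# the assignment is remembered by any later instance created with the same mode
OperationTypesRepositorySketch.assign_repo('model', 'gpu_models_repository.json')
```

The key design point is that the mapping lives on the class, not the instance, which is why the chosen behaviour "sticks" across separately created repository objects.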
Currently:
- GPU evaluation is controlled by a separate JSON repository (see repository/data/gpu_models.json)
- models for classification and regression are supported; time series support (ARIMA) is coming soon
- file to run: gpu_example.py
Extra manuals:
- https://github.com/rapidsai/cuml
- https://medium.com/rapids-ai/running-rapids-on-microsoft-windows-10-using-wsl-2-the-windows-subsystem-for-linux-c5cbb2c56e04
Output on Win10/WSL2: