Adaptation of API for multimodal data
Created by: andreygetmanov
Global purpose of the PR: to make the Fedot API work with multimodal data.
Changes:
- multimodal datasets can now be run via the Fedot API as easily as unimodal ones (see the API sketch after this list)
- the initial pipeline is now built according to the type of data (for example, a text data source is always followed by a vectorizer node; see the pipeline sketch below)
- slightly changed the logic of the operation filter: operations are now chosen not only by task but also by data type (each data source will get its own list of operations in future PRs; see the filter sketch below)
- MultiModalStrategy now works more efficiently and reliably
- AssumptionsBuilder now supports multimodal data and creates a sub-builder for each data source
- added pretrained embeddings as vectorization models; the embedding model can be chosen automatically during pipeline tuning (see the embedding sketch below)
- other parameters of text models are now tunable as well
- added tests for the new functionality
- added a new example based on a text+table dataset
- deleted the deprecated IMDB case
- fixed #630: Fix composer and tuner work in the multimodal case
- fixed #626: Adapt multimodal example to run by API
- fixed #627: Fix CNN work in the multimodal case
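
API sketch. A minimal sketch of running a text+table dataset through the API: the loader call (`MultiModalData.from_csv` with a `text_columns` argument), the column name and the file paths are assumptions for illustration; the point is that the `Fedot` calls themselves stay the same as in the unimodal case.

```python
from fedot.api.main import Fedot
from fedot.core.data.multi_modal import MultiModalData

# load a CSV where the 'review' column holds free text and the rest are tabular
# features (the loader name and its arguments are assumptions, see the note above)
train = MultiModalData.from_csv('train.csv', text_columns=['review'])
test = MultiModalData.from_csv('test.csv', text_columns=['review'])

# the API calls are identical to the unimodal case
model = Fedot(problem='classification', timeout=5)
model.fit(features=train)
prediction = model.predict(features=test)
```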
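
Pipeline sketch. A rough picture of the initial assumption built for text+table data; the concrete operation names (`tfidf`, `scaling`, `rf`) and the data source suffix are illustrative, the actual operations picked by `AssumptionsBuilder` may differ.

```python
from fedot.core.pipelines.node import PrimaryNode, SecondaryNode
from fedot.core.pipelines.pipeline import Pipeline

# text branch: the text data source is always followed by a vectorizer node
text_source = PrimaryNode('data_source_text/review')
vectorizer = SecondaryNode('tfidf', nodes_from=[text_source])

# table branch: the table data source goes through standard preprocessing
table_source = PrimaryNode('data_source_table')
scaling = SecondaryNode('scaling', nodes_from=[table_source])

# both branches are merged in a single final model
pipeline = Pipeline(SecondaryNode('rf', nodes_from=[vectorizer, scaling]))
```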
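
Filter sketch. A toy illustration of the new filtering logic only; the dictionaries below are made up and do not reflect the actual repository contents.

```python
# toy "repositories": operations allowed per task and per data type
OPERATIONS_BY_TASK = {'classification': {'rf', 'logit', 'cnn', 'tfidf', 'scaling'}}
OPERATIONS_BY_DATA_TYPE = {'text': {'tfidf', 'cnn'}, 'table': {'rf', 'logit', 'scaling'}}

def suitable_operations(task: str, data_type: str) -> set:
    """An operation must suit both the task and the data type of its source."""
    return OPERATIONS_BY_TASK[task] & OPERATIONS_BY_DATA_TYPE[data_type]

print(suitable_operations('classification', 'text'))   # {'tfidf', 'cnn'}
print(suitable_operations('classification', 'table'))  # {'rf', 'logit', 'scaling'}
```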
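
Embedding sketch. How a pretrained-embedding vectorizer can be placed into the text branch and parameterized by hand; during tuning the same parameter is varied automatically. The operation name `word2vec_pretrained`, the `model_name` value and the use of `custom_params` are assumptions for illustration.

```python
from fedot.core.pipelines.node import PrimaryNode, SecondaryNode
from fedot.core.pipelines.pipeline import Pipeline

text_source = PrimaryNode('data_source_text/review')

# pretrained embeddings instead of tf-idf; the tuner can switch the embedding
# model through the same parameter (names are assumptions, see the note above)
embedding = SecondaryNode('word2vec_pretrained', nodes_from=[text_source])
embedding.custom_params = {'model_name': 'glove-twitter-25'}

pipeline = Pipeline(SecondaryNode('logit', nodes_from=[embedding]))
```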