Bump tensorflow from 2.4.0 to 2.5.0
Created by: dependabot[bot]
Bumps `tensorflow` from 2.4.0 to 2.5.0.
Release notes
Sourced from tensorflow's releases.
TensorFlow 2.5.0
Release 2.5.0
Major Features and Improvements
- Support for Python 3.9 has been added.
tf.data:
- tf.data service now supports strict round-robin reads, which is useful for synchronous training workloads where example sizes vary. With strict round-robin reads, users can guarantee that consumers get similar-sized examples in the same step.
- tf.data service now supports optional compression. Previously data would always be compressed, but now you can disable compression by passing `compression=None` to `tf.data.experimental.service.distribute(...)`.
- `tf.data.Dataset.batch()` now supports `num_parallel_calls` and `deterministic` arguments. `num_parallel_calls` indicates that multiple input batches should be computed in parallel. With `num_parallel_calls` set, `deterministic` indicates whether outputs may be obtained in a non-deterministic order (a sketch follows this list).
- Options returned by `tf.data.Dataset.options()` are no longer mutable.
- tf.data input pipelines can now be executed in debug mode, which disables any asynchrony, parallelism, or non-determinism and forces Python execution (as opposed to trace-compiled graph execution) of user-defined functions passed into transformations such as `map`. Debug mode can be enabled through `tf.data.experimental.enable_debug_mode()`.
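For illustration, a minimal sketch of the new `Dataset.batch()` arguments; the dataset contents and batch size here are arbitrary:

```python
import tensorflow as tf

# Optional: force eager, sequential execution of the whole pipeline for
# debugging. Must be called before the pipeline is built.
# tf.data.experimental.enable_debug_mode()

ds = tf.data.Dataset.range(1_000)

# Compute multiple batches in parallel; deterministic=False additionally
# allows batches to be yielded out of order for extra throughput.
ds = ds.batch(32, num_parallel_calls=tf.data.AUTOTUNE, deterministic=False)

for batch in ds.take(2):
    print(batch.shape)  # (32,)
```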
tf.lite
- Enabled the new MLIR-based quantization backend by default.
  - The new backend is used for 8-bit full-integer post-training quantization.
  - The new backend removes redundant rescales and fixes some bugs (shared weight/bias, extremely small scales, etc.).
  - Set `experimental_new_quantizer` in `tf.lite.TFLiteConverter` to `False` to disable this change.
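A sketch of the opt-out, assuming a SavedModel at a hypothetical path:

```python
import tensorflow as tf

# Hypothetical path; substitute your own SavedModel directory.
converter = tf.lite.TFLiteConverter.from_saved_model("/tmp/my_saved_model")
converter.optimizations = [tf.lite.Optimize.DEFAULT]

# Fall back to the old quantization backend if the new MLIR-based one
# misbehaves for your model.
converter.experimental_new_quantizer = False

tflite_model = converter.convert()
```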
tf.keras
- `tf.keras.metrics.AUC` now supports logit predictions.
- Enabled a new supported input type in `Model.fit`, `tf.keras.utils.experimental.DatasetCreator`, which takes a callable, `dataset_fn`. `DatasetCreator` is intended to work across all `tf.distribute` strategies, and is the only input type supported for Parameter Server strategy (a sketch combining both features follows this list).
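A minimal sketch combining both features; the model, synthetic data, and the sharding inside `dataset_fn` are illustrative, not the only supported pattern:

```python
import tensorflow as tf

def dataset_fn(input_context):
    # DatasetCreator passes a tf.distribute.InputContext to the callable.
    batch_size = input_context.get_per_replica_batch_size(64)
    features = tf.random.normal([256, 10])
    labels = tf.random.uniform([256], maxval=2, dtype=tf.int32)
    ds = tf.data.Dataset.from_tensor_slices((features, labels))
    return ds.shard(input_context.num_input_pipelines,
                    input_context.input_pipeline_id).batch(batch_size).repeat()

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(10,))])
model.compile(
    optimizer="adam",
    loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
    # AUC can now consume raw logits directly.
    metrics=[tf.keras.metrics.AUC(from_logits=True)],
)

# DatasetCreator requires steps_per_epoch, since the dataset is built lazily.
model.fit(tf.keras.utils.experimental.DatasetCreator(dataset_fn),
          epochs=1, steps_per_epoch=4)
```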
tf.distribute
- `tf.distribute.experimental.ParameterServerStrategy` now supports training with Keras `Model.fit` when used with `DatasetCreator`.
- Creating `tf.random.Generator` under `tf.distribute.Strategy` scopes is now allowed (except for `tf.distribute.experimental.CentralStorageStrategy` and `tf.distribute.experimental.ParameterServerStrategy`). Different replicas will get different random-number streams, as the sketch below demonstrates.
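A minimal sketch, assuming a MirroredStrategy; other supported strategies work the same way:

```python
import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()

# Previously this raised an error; each replica now gets its own stream.
with strategy.scope():
    gen = tf.random.Generator.from_seed(1234)

@tf.function
def sample():
    return gen.normal([2])

# With more than one replica, per-replica results differ.
print(strategy.run(sample))
```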
- TPU embedding support:
  - Added `profile_data_directory` to `EmbeddingConfigSpec` in `_tpu_estimator_embedding.py`. This allows embedding lookup statistics gathered at runtime to be used in embedding layer partitioning decisions.
- PluggableDevice:
  - Third-party devices can now connect to TensorFlow as plug-ins through the StreamExecutor C API and the PluggableDevice interface.
    - Add custom ops and kernels through the kernel and op registration C API.
    - Register custom graph optimization passes with the graph optimization C API.
- oneAPI Deep Neural Network Library (oneDNN) CPU performance optimizations from Intel-optimized TensorFlow are now available in the official x86-64 Linux and Windows builds.
  - They are off by default. Enable them by setting the environment variable `TF_ENABLE_ONEDNN_OPTS=1`, as shown below.
  - We do not recommend using them on GPU systems, as they have not been sufficiently tested with GPUs yet.
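A sketch of enabling the optimizations from Python; the flag must be set before TensorFlow initializes, so it is exported before the import (setting it in the shell works equally well):

```python
import os

# Opt in to the oneDNN kernels; off by default in the official builds.
os.environ["TF_ENABLE_ONEDNN_OPTS"] = "1"

import tensorflow as tf  # import after setting the flag

print(tf.__version__)
```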
- TensorFlow pip packages are now built with CUDA 11.2 and cuDNN 8.1.0.
Breaking Changes
- The `TF_CPP_MIN_VLOG_LEVEL` environment variable has been renamed to `TF_CPP_MAX_VLOG_LEVEL`, which correctly describes its effect.

Bug Fixes and Other Changes
tf.keras:
- Preprocessing layers API consistency changes:
  - `StringLookup` added `output_mode`, `sparse`, and `pad_to_max_tokens` arguments with the same semantics as `TextVectorization` (see the sketch after this list).
  - `IntegerLookup` added `output_mode`, `sparse`, and `pad_to_max_tokens` arguments with the same semantics as `TextVectorization`. Renamed `max_values`, `oov_value`, and `mask_value` to `max_tokens`, `oov_token`, and `mask_token` to align with `StringLookup` and `TextVectorization`.
  - `TextVectorization` default for `pad_to_max_tokens` switched to `False`.
  - `CategoryEncoding` no longer supports `adapt`; `IntegerLookup` now supports equivalent functionality. `max_tokens` argument renamed to `num_tokens`.
  - `Discretization` added `num_bins` argument for learning bin boundaries through calling `adapt` on a dataset. Renamed `bins` argument to `bin_boundaries` for specifying bins without `adapt`.
- Improvements to model saving/loading:
  - `model.load_weights` now accepts paths to saved models.
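A minimal sketch of the new `StringLookup` arguments; the vocabulary and inputs are illustrative:

```python
import tensorflow as tf

# output_mode, sparse, and pad_to_max_tokens now mirror TextVectorization.
layer = tf.keras.layers.experimental.preprocessing.StringLookup(
    vocabulary=["a", "b", "c"], output_mode="int")

# Out-of-vocabulary strings map to the reserved OOV index.
print(layer(tf.constant([["a", "c", "z"]])))
```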
... (truncated)
Changelog
Sourced from tensorflow's changelog.
Release 2.5.0
Breaking Changes
- The `TF_CPP_MIN_VLOG_LEVEL` environment variable has been renamed to `TF_CPP_MAX_VLOG_LEVEL`, which correctly describes its effect.

Known Caveats
Major Features and Improvements
TPU embedding support
- Added `profile_data_directory` to `EmbeddingConfigSpec` in `_tpu_estimator_embedding.py`. This allows embedding lookup statistics gathered at runtime to be used in embedding layer partitioning decisions.
- `tf.keras.metrics.AUC` now supports logit predictions.
- Creating `tf.random.Generator` under `tf.distribute.Strategy` scopes is now allowed (except for `tf.distribute.experimental.CentralStorageStrategy` and `tf.distribute.experimental.ParameterServerStrategy`). Different replicas will get different random-number streams.
tf.data:
- tf.data service now supports strict round-robin reads, which is useful for synchronous training workloads where example sizes vary. With strict round-robin reads, users can guarantee that consumers get similar-sized examples in the same step.
- tf.data service now supports optional compression. Previously data would always be compressed, but now you can disable compression by passing `compression=None` to `tf.data.experimental.service.distribute(...)`.
- `tf.data.Dataset.batch()` now supports `num_parallel_calls` and `deterministic` arguments. `num_parallel_calls` indicates that multiple input batches should be computed in parallel. With `num_parallel_calls` set, `deterministic` indicates whether outputs may be obtained in a non-deterministic order.
- Options returned by `tf.data.Dataset.options()` are no longer mutable.
- tf.data input pipelines can now be executed in debug mode, which disables any asynchrony, parallelism, or non-determinism and forces Python execution (as opposed to trace-compiled graph execution) of user-defined functions passed into transformations such as `map`. Debug mode can be enabled through `tf.data.experimental.enable_debug_mode()`.
tf.lite
- Enabled the new MLIR-based quantization backend by default.
  - The new backend is used for 8-bit full-integer post-training quantization.
  - The new backend removes redundant rescales and fixes some bugs (shared weight/bias, extremely small scales, etc.).
... (truncated)
Commits
- `a4dfb8d` Merge pull request #49124 from tensorflow/mm-cherrypick-tf-data-segfault-fix-...
- `2107b1d` Merge pull request #49116 from tensorflow-jenkins/version-numbers-2.5.0-17609
- `16b8139` Update snapshot_dataset_op.cc
- `86a0d86` Merge pull request #49126 from geetachavan1/cherrypicks_X9ZNY
- `9436ae6` Merge pull request #49128 from geetachavan1/cherrypicks_D73J5
- `6b2bf99` Validate that a and b are proper sparse tensors
- `c03ad1a` Ensure validation sticks in banded_triangular_solve_op
- `12a6ead` Merge pull request #49120 from geetachavan1/cherrypicks_KJ5M9
- `b67f5b8` Merge pull request #49118 from geetachavan1/cherrypicks_BIDTR
- `a13c0ad` [tf.data][cherrypick] Fix snapshot segfault when using repeat and prefetch
- Additional commits viewable in compare view
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
Dependabot commands and options
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
- `@dependabot use these labels` will set the current labels as the default for future PRs for this repo and language
- `@dependabot use these reviewers` will set the current reviewers as the default for future PRs for this repo and language
- `@dependabot use these assignees` will set the current assignees as the default for future PRs for this repo and language
- `@dependabot use this milestone` will set the current milestone as the default for future PRs for this repo and language
You can disable automated security fix PRs for this repo from the Security Alerts page.