Bump tensorflow from 2.3.1 to 2.4.0
Created by: dependabot[bot]
Bumps `tensorflow` from 2.3.1 to 2.4.0.
Release notes
Sourced from tensorflow's releases.
TensorFlow 2.4.0
Major Features and Improvements
- `tf.distribute` introduces experimental support for asynchronous training of models via the [`tf.distribute.experimental.ParameterServerStrategy`](https://www.tensorflow.org/api_docs/python/tf/distribute/experimental/ParameterServerStrategy) API. Please see the tutorial to learn more.
- `MultiWorkerMirroredStrategy` is now a stable API and is no longer considered experimental. Some of the major improvements involve handling peer failure and many bug fixes. Please check out the detailed tutorial on [Multi-worker training with Keras](https://www.tensorflow.org/tutorials/distribute/multi_worker_with_keras).
- Introduces experimental support for a new module named [`tf.experimental.numpy`](https://www.tensorflow.org/api_docs/python/tf/experimental/numpy), a NumPy-compatible API for writing TF programs. See the [detailed guide](https://www.tensorflow.org/guide/tf_numpy) to learn more. Additional details below.
- Adds support for TensorFloat-32 on Ampere based GPUs. TensorFloat-32, or TF32 for short, is a math mode for NVIDIA Ampere based GPUs and is enabled by default.
- A major refactoring of the internals of the Keras Functional API has been completed, which should improve the reliability, stability, and performance of constructing Functional models.
- The Keras mixed precision API [`tf.keras.mixed_precision`](https://www.tensorflow.org/api_docs/python/tf/keras/mixed_precision?version=nightly) is no longer experimental and allows the use of 16-bit floating point formats during training, improving performance by up to 3x on GPUs and 60% on TPUs (see the sketch after this list). Please see below for additional details.
- TensorFlow Profiler now supports profiling `MultiWorkerMirroredStrategy` and tracing multiple workers using the [sampling mode API](https://www.tensorflow.org/guide/profiler#profiling_apis).
- TFLite Profiler for Android is available. See the detailed [guide](https://www.tensorflow.org/lite/performance/measurement#trace_tensorflow_lite_internals_in_android) to learn more.
- TensorFlow pip packages are now built with CUDA 11 and cuDNN 8.0.2.
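For reference, a minimal sketch of the now-stable mixed precision API and the new NumPy-compatible module; the layer sizes, policy, and shapes here are arbitrary illustrations, not part of the release notes:

```python
import tensorflow as tf
import tensorflow.experimental.numpy as tnp

# Mixed precision is no longer experimental in 2.4: set a global policy so
# layers compute in float16 while keeping float32 variables.
tf.keras.mixed_precision.set_global_policy("mixed_float16")

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(16,)),
    tf.keras.layers.Dense(10),
    # Keep the final softmax in float32 for numeric stability.
    tf.keras.layers.Activation("softmax", dtype="float32"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# tf.experimental.numpy: NumPy-style code executed on TF tensors/devices.
x = tnp.ones((3, 4))
print(tnp.sum(x * 2, axis=1))  # ND array backed by a tf.Tensor
```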
Breaking Changes
- TF Core:
  - Certain float32 ops run in lower precision on Ampere based GPUs, including matmuls and convolutions, due to the use of TensorFloat-32. Specifically, inputs to such ops are rounded from 23 bits of precision to 10 bits of precision. This is unlikely to cause issues in practice for deep learning models. In some cases, TensorFloat-32 is also used for complex64 ops. TensorFloat-32 can be disabled by running `tf.config.experimental.enable_tensor_float_32_execution(False)` (see the sketch after this list).
  - The byte layout for string tensors across the C-API has been updated to match TF Core/C++; i.e., a contiguous array of `tensorflow::tstring`/`TF_TString`s.
  - C-API functions `TF_StringDecode`, `TF_StringEncode`, and `TF_StringEncodedSize` are no longer relevant and have been removed; see `core/platform/ctstring.h` for string access/modification in C.
  - `tensorflow.python`, `tensorflow.core` and `tensorflow.compiler` modules are now hidden. These modules are not part of the TensorFlow public API.
  - `tf.raw_ops.Max` and `tf.raw_ops.Min` no longer accept inputs of type `tf.complex64` or `tf.complex128`, because the behavior of these ops is not well defined for complex types.
  - XLA:CPU and XLA:GPU devices are no longer registered by default. Use `TF_XLA_FLAGS=--tf_xla_enable_xla_devices` if you really need them, but this flag will eventually be removed in subsequent releases.
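A minimal sketch of opting out of TensorFloat-32 using the calls named above; the `train.py` launch line is a hypothetical example:

```python
import tensorflow as tf

# TF32 is enabled by default on Ampere GPUs in 2.4; opt out if you need the
# full 23-bit float32 mantissa for matmuls and convolutions.
tf.config.experimental.enable_tensor_float_32_execution(False)

# Verify the setting took effect.
print(tf.config.experimental.tensor_float_32_execution_enabled())  # False

# XLA:CPU/XLA:GPU devices are no longer registered by default; if you still
# need them, set the flag from the notes before launching the process:
#   TF_XLA_FLAGS=--tf_xla_enable_xla_devices python train.py
```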
- `tf.keras`:
  - The `steps_per_execution` argument in `model.compile()` is no longer experimental; if you were passing `experimental_steps_per_execution`, rename it to `steps_per_execution` in your code. This argument controls the number of batches to run during each `tf.function` call when calling `model.fit()`. Running multiple batches inside a single `tf.function` call can greatly improve performance on TPUs or small models with a large Python overhead (see the sketch after this list).
  - A major refactoring of the internals of the Keras Functional API may affect code that relies on certain internal details:
    - Code that uses `isinstance(x, tf.Tensor)` instead of `tf.is_tensor` when checking Keras symbolic inputs/outputs should switch to using `tf.is_tensor`.
    - Code that is overly dependent on the exact names attached to symbolic tensors (e.g. assumes there will be ":0" at the end of the inputs, treats names as unique identifiers instead of using `tensor.ref()`, etc.) may break.
    - Code that uses full path for `get_concrete_function` to trace Keras symbolic inputs directly should switch to building matching `tf.TensorSpec`s directly and tracing the `TensorSpec` objects.
    - Code that relies on the exact number and names of the op layers that TensorFlow operations were converted into; these may have changed.
    - Code that uses `tf.map_fn`/`tf.cond`/`tf.while_loop`/control flow as op layers and happens to work before TF 2.4. These will explicitly be unsupported now. Converting these ops to Functional API op layers was unreliable before TF 2.4, and prone to erroring incomprehensibly or being silently buggy.
    - Code that directly asserts on a Keras symbolic value in cases where ops like `tf.rank` used to return a static or symbolic value depending on whether the input had a fully static shape. Now these ops always return symbolic values.
    - Code already susceptible to leaking tensors outside of graphs becomes slightly more likely to do so now.
    - Code that tries directly getting gradients with respect to symbolic Keras inputs/outputs. Use `GradientTape` on the actual Tensors passed to the already-constructed model instead.
    - Code that requires very tricky shape manipulation via converted op layers in order to work, where the Keras symbolic shape inference proves insufficient.
    - Code that tries manually walking a `tf.keras.Model` layer by layer and assumes layers only ever have one positional argument. This assumption doesn't hold true before TF 2.4 either, but is more likely to cause issues now.
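A minimal migration sketch for the renamed compile argument and the recommended symbolic-tensor check; the model, shapes, and step count are illustrative only:

```python
import tensorflow as tf

inputs = tf.keras.Input(shape=(8,))
outputs = tf.keras.layers.Dense(1)(inputs)
model = tf.keras.Model(inputs, outputs)

# Before 2.4: model.compile(..., experimental_steps_per_execution=50)
# In 2.4 the argument is stable and renamed; 50 batches now run per
# tf.function call during model.fit().
model.compile(optimizer="adam", loss="mse", steps_per_execution=50)

# Prefer tf.is_tensor over isinstance for Keras symbolic inputs/outputs.
print(tf.is_tensor(inputs))           # True for Keras symbolic tensors too
print(isinstance(inputs, tf.Tensor))  # may be False after the refactor
```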
... (truncated)
Commits
- `582c8d2` Merge pull request #44220 from tensorflow-jenkins/relnotes-2.4.0rc0-18048
- `c16387f` Update RELEASE.md
- `4cf406c` Update RELEASE.md
- `3f35ef2` Update RELEASE.md
- `3647e8e` Update RELEASE.md
- `281c7d5` Update RELEASE.md
- `91ec75f` Update RELEASE.md
- `ed5ad82` Update RELEASE.md
- `1267bba` Update RELEASE.md
- `13a4067` Update RELEASE.md
- Additional commits viewable in compare view
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
Dependabot commands and options
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
- `@dependabot use these labels` will set the current labels as the default for future PRs for this repo and language
- `@dependabot use these reviewers` will set the current reviewers as the default for future PRs for this repo and language
- `@dependabot use these assignees` will set the current assignees as the default for future PRs for this repo and language
- `@dependabot use this milestone` will set the current milestone as the default for future PRs for this repo and language
You can disable automated security fix PRs for this repo from the Security Alerts page.