Changes the layout of `tensor`.

```python
tf.experimental.dtensor.relayout(
    tensor: tf.Tensor,
    layout: tf.experimental.dtensor.Layout,
    name: Optional[str] = None
) -> tf.Tensor
```
Changes the layout of `tensor` to `layout`. This is used to fine-tune the behavior of ops following or connected to `tensor`, such as choosing one SPMD expansion pattern over another. It works by forward-propagating `layout` to the connected TensorFlow computation graphs during layout propagation.
Currently, only converting a layout from replicated to sharded or from sharded to replicated is supported, applied independently per mesh dimension. That is, "x, y" -> "unsharded, y" is supported, while "x, y" -> "z, y" is not.
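A minimal sketch of a supported replicated-to-sharded relayout. It assumes a single-host setup where two logical CPU devices are configured so a two-device mesh can be built; the mesh dimensions and tensor shape are illustrative, not prescribed by the API.

```python
import tensorflow as tf
from tensorflow.experimental import dtensor

# Expose two logical CPU devices so a 2-device mesh fits on one host.
cpu = tf.config.list_physical_devices("CPU")[0]
tf.config.set_logical_device_configuration(
    cpu, [tf.config.LogicalDeviceConfiguration()] * 2)

# A 2-D mesh: dimension "x" has size 2, dimension "y" has size 1.
mesh = dtensor.create_mesh([("x", 2), ("y", 1)])

# Start fully replicated, i.e. layout "unsharded, unsharded".
replicated = dtensor.copy_to_mesh(
    tf.ones([4, 4]), dtensor.Layout.replicated(mesh, rank=2))

# Relayout to "x, unsharded": the first tensor dimension is split
# across the two devices of mesh dimension "x".
sharded = dtensor.relayout(
    replicated, dtensor.Layout(["x", dtensor.UNSHARDED], mesh))
```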
We also support a special "match" sharding spec, which instructs the relayout to act as an identity operation with respect to any existing sharding on the corresponding mesh dimensions.
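Continuing the sketch above, the "match" spec can be expressed with the `dtensor.MATCH` constant in a `Layout`; here the first dimension keeps whatever sharding the input already has, while the second is explicitly unsharded.

```python
# dtensor.MATCH keeps the input's existing sharding on that dimension;
# only the second dimension is forced to be unsharded here.
match_layout = dtensor.Layout([dtensor.MATCH, dtensor.UNSHARDED], mesh)
out = dtensor.relayout(sharded, match_layout)
```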
Relayout is internally lowered to a set of Split and/or AllToAll ops. When a tensor layout is converted from replicated to sharded, the cost is comparatively low because only Split ops are inserted and no cross-device communication is needed. However, when a tensor layout is converted from sharded to replicated, cross-device communication may occur, which can hurt performance.
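The reverse direction, reusing the objects from the sketch above, shows the potentially costlier case:

```python
# Sharded -> replicated: each device needs the shards held by its
# peers, so this direction can involve cross-device communication.
replicated_again = dtensor.relayout(
    sharded, dtensor.Layout.replicated(mesh, rank=2))
```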
| Args | |
|---|---|
| `tensor` | A DTensor to specify a new layout for. |
| `layout` | A `Layout` object specifying a new sharding spec. |
| `name` | Name of the Op. |
| Returns |
|---|
| A DTensor output from the Relayout op. |