bias_act: Fused bias and activation function
Description

Adds bias b to activation tensor x, evaluates activation function act, and scales the result by gain. Each of the steps is optional. In most cases, the fused op is considerably more efficient than performing the same calculation using standard PyTorch ops. It supports first- and second-order gradients, but not third-order gradients.
Usage

bias_act(
  x,
  b = NULL,
  dim = 2,
  act = "linear",
  alpha = NULL,
  gain = NULL,
  clamp = NULL,
  impl = if (cuda_is_available() & x$device$type == "cuda") "cuda" else "ref"
)
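A minimal usage sketch, assuming the torch package is attached and bias_act is called as documented above; the shapes and the "lrelu" activation are illustrative choices, not requirements:

library(torch)

# Batch of 16 activations with 512 channels; one bias per channel (dim = 2).
x <- torch_randn(16, 512)
b <- torch_zeros(512)

# Fused bias + leaky ReLU; the default impl falls back to "ref" on CPU.
y <- bias_act(x, b, dim = 2, act = "lrelu")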
Arguments

x
    Input activation tensor. Can be of any shape.

b
    Bias vector, or NULL to disable. Must be a 1D tensor of the same type as x. The shape must be known, and it must match the dimension of x corresponding to dim.

dim
    The dimension in x corresponding to the elements of b. The value of dim is ignored if b is not specified.

act
    Name of the activation function to evaluate, or "linear" to disable. Can be e.g. "relu", "lrelu", "tanh", "sigmoid", "swish", etc. NULL is not allowed.

alpha
    Shape parameter for the activation function, or NULL to use the default.

gain
    Scaling factor for the output tensor, or NULL to use the default. Each activation function has its own default scaling; if unsure, consider specifying 1.

clamp
    Clamp the output values to [-clamp, +clamp], or NULL to disable clamping (default).

impl
    Name of the implementation to use. Can be "ref" or "cuda"; the default selects "cuda" when CUDA is available and x is on a CUDA device, and "ref" otherwise.
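For intuition, the steps correspond roughly to the following unfused torch ops. This is a sketch of the semantics for the "lrelu" case only, not the package's actual implementation; the helper name bias_act_ref, the alpha = 0.2 default, and the sqrt(2) default gain are assumptions made for illustration:

# Unfused sketch of bias_act(x, b, dim, act = "lrelu", alpha, gain, clamp)
bias_act_ref <- function(x, b = NULL, dim = 2, alpha = 0.2,
                         gain = sqrt(2), clamp = NULL) {
  if (!is.null(b)) {
    shape <- rep(1, x$dim())
    shape[dim] <- -1
    x <- x + b$reshape(shape)                     # broadcast bias along `dim`
  }
  x <- nnf_leaky_relu(x, negative_slope = alpha)  # evaluate the activation
  x <- x * gain                                   # scale the output
  if (!is.null(clamp)) x <- x$clamp(-clamp, clamp)
  x
}

The fused op exists precisely because running these steps separately materializes intermediate tensors and launches several kernels, which the fused CUDA path avoids.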
Value

A torch_tensor of the same shape and datatype as x.
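Continuing the usage sketch above, the result preserves the input's shape and dtype:

y <- bias_act(torch_randn(8, 256), torch_zeros(256), act = "lrelu")
y$shape  # 8 256
y$dtype  # torch_Float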
Note

This function uses code from the StyleGAN3 project, which is copyright of NVIDIA 2021 and is redistributed in the torch package under its original license, available at https://github.com/NVlabs/stylegan3/blob/main/LICENSE.txt. Under that license, use is restricted to non-commercial purposes. If you use this function, please make sure your use is acceptable under the license linked above.