Activation functions
Customized activation functions used to support various models in 🤗 Diffusers.
GELU
class diffusers.models.activations.GELU
( dim_in: int, dim_out: int, approximate: str = 'none', bias: bool = True )
Parameters
- dim_in (int) — The number of channels in the input.
- dim_out (int) — The number of channels in the output.
- approximate (str, optional, defaults to "none") — If "tanh", use tanh approximation.
- bias (bool, defaults to True) — Whether to use a bias in the linear layer.
GELU activation function, with optional tanh approximation when approximate="tanh".
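A minimal usage sketch, assuming the module projects the last dimension from dim_in to dim_out before applying GELU; the dimensions and input shape below are illustrative, not prescribed by the API:

```python
import torch
from diffusers.models.activations import GELU

# Project 320-dim features to 1280 dims, then apply the tanh-approximated GELU.
act = GELU(dim_in=320, dim_out=1280, approximate="tanh")

hidden_states = torch.randn(2, 77, 320)  # (batch, sequence, dim_in)
out = act(hidden_states)
print(out.shape)  # torch.Size([2, 77, 1280])
```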
GEGLU
class diffusers.models.activations.GEGLU
( dim_in: int, dim_out: int, bias: bool = True )
Parameters
- dim_in (int) — The number of channels in the input.
- dim_out (int) — The number of channels in the output.
- bias (bool, defaults to True) — Whether to use a bias in the linear layer.
A variant of the gated linear unit activation function.
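A minimal usage sketch, assuming the gated projection internally produces two halves and multiplies one by the GELU of the other, so the output keeps dim_out channels; the shapes here are illustrative:

```python
import torch
from diffusers.models.activations import GEGLU

act = GEGLU(dim_in=320, dim_out=1280)

hidden_states = torch.randn(2, 77, 320)  # (batch, sequence, dim_in)
out = act(hidden_states)                 # gated output has dim_out channels
print(out.shape)  # torch.Size([2, 77, 1280])
```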
ApproximateGELU
class diffusers.models.activations.ApproximateGELU
( dim_in: int, dim_out: int, bias: bool = True )
Parameters
- dim_in (int) — The number of channels in the input.
- dim_out (int) — The number of channels in the output.
- bias (bool, defaults to True) — Whether to use a bias in the linear layer.
The approximate form of the Gaussian Error Linear Unit (GELU). For more details, see section 2 of the GELU paper (https://arxiv.org/abs/1606.08415).
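For reference, the sigmoid-based approximation from that paper replaces the exact GELU with x * sigmoid(1.702 * x). The short sketch below, using plain PyTorch rather than the diffusers module, compares that form against the exact GELU on a random tensor:

```python
import torch
import torch.nn.functional as F

x = torch.randn(1000)
approx = x * torch.sigmoid(1.702 * x)  # sigmoid-based approximation from the paper
exact = F.gelu(x)                      # exact GELU for comparison
print((approx - exact).abs().max())    # small but nonzero deviation
```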
SwiGLU
class diffusers.models.activations.SwiGLU
( dim_in: int, dim_out: int, bias: bool = True )
Parameters
- dim_in (int) — The number of channels in the input.
- dim_out (int) — The number of channels in the output.
- bias (bool, defaults to True) — Whether to use a bias in the linear layer.
A variant of the gated linear unit activation function. It’s similar to GEGLU but uses SiLU / Swish instead of GELU.
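A minimal usage sketch, assuming the same gated structure as GEGLU with SiLU as the gate nonlinearity, so the output again keeps dim_out channels; the shapes below are illustrative:

```python
import torch
from diffusers.models.activations import SwiGLU

act = SwiGLU(dim_in=320, dim_out=1280)

hidden_states = torch.randn(2, 77, 320)  # (batch, sequence, dim_in)
out = act(hidden_states)
print(out.shape)  # torch.Size([2, 77, 1280])
```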
FP32SiLU
class diffusers.models.activations.FP32SiLU
( )
SiLU activation function with the input upcast to torch.float32.
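A short sketch, assuming the module upcasts a half-precision input to float32 for the SiLU computation; whether the result is cast back to the input dtype is an assumption here, not something stated above:

```python
import torch
from diffusers.models.activations import FP32SiLU

act = FP32SiLU()

x = torch.randn(4, 320, dtype=torch.float16)
out = act(x)
# The SiLU itself runs in float32 for numerical stability; the output dtype
# shown here depends on the (assumed) cast-back behavior.
print(out.dtype)
```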
LinearActivation
class diffusers.models.activations.LinearActivation
( dim_in: int, dim_out: int, bias: bool = True, activation: str = 'silu' )
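The signature suggests a linear projection from dim_in to dim_out followed by a configurable activation (defaulting to SiLU). A minimal usage sketch under that assumption, with illustrative dimensions:

```python
import torch
from diffusers.models.activations import LinearActivation

# Assumed behavior: Linear(dim_in, dim_out) followed by the named activation.
act = LinearActivation(dim_in=320, dim_out=1280, activation="silu")

hidden_states = torch.randn(2, 77, 320)  # (batch, sequence, dim_in)
out = act(hidden_states)
print(out.shape)  # torch.Size([2, 77, 1280])
```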