deepbox/nn

Activation Layers

Module-wrapped activation functions for use in Sequential and custom Module classes. These are stateless layers (no parameters).

ReLU

extends Module

f(x) = max(0, x). The default activation for hidden layers.

Sigmoid

extends Module

σ(x) = 1/(1+e⁻ˣ). Output layer for binary classification.
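
A naive 1/(1+e⁻ˣ) can overflow in Math.exp for large negative inputs. The sketch below is plain TypeScript, independent of the deepbox API, showing the numerically stable form of the same elementwise math; it is an illustration, not the library's implementation.

stable-sigmoid.ts
// Reference sketch of the sigmoid math (not the deepbox implementation).
// Branching on the sign of x keeps Math.exp's argument non-positive,
// so it can never overflow.
function sigmoid(x: number): number {
  if (x >= 0) {
    return 1 / (1 + Math.exp(-x));
  }
  const ex = Math.exp(x); // safe: x < 0, so ex is in (0, 1)
  return ex / (1 + ex);
}

console.log(sigmoid(0));  // 0.5
console.log(sigmoid(4));  // ≈ 0.982
console.log(sigmoid(-4)); // ≈ 0.018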

Tanh

extends Module

tanh(x). Output in (−1, 1), zero-centered. Used in classic RNNs and in LSTM cell/candidate updates (the gates themselves use Sigmoid).

GELU

extends Module

x·Φ(x), where Φ is the standard normal CDF. Default in Transformer models.
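
JavaScript has no built-in erf, so the plain-TypeScript sketch below uses the common tanh approximation of x·Φ(x); whether deepbox uses the exact or the approximate form is not specified here.

gelu-approx.ts
// Reference sketch: tanh approximation of GELU, x·Φ(x).
// (Exact GELU uses erf; this is the widely used approximation.)
function gelu(x: number): number {
  const c = Math.sqrt(2 / Math.PI);
  return 0.5 * x * (1 + Math.tanh(c * (x + 0.044715 * x ** 3)));
}

console.log(gelu(0));  // 0
console.log(gelu(1));  // ≈ 0.841
console.log(gelu(-1)); // ≈ -0.159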

LeakyReLU

extends Module

f(x) = x if x > 0, else αx. Prevents dying neurons.
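
The negative slope α is a hyperparameter; 0.01 is the conventional default, but deepbox's actual default (and how its constructor exposes it) should be checked against the API. A plain-TypeScript sketch of the math:

leaky-relu.ts
// Reference sketch of the LeakyReLU math (not the deepbox implementation).
// alpha = 0.01 is the conventional default; the library's default may differ.
function leakyRelu(x: number, alpha = 0.01): number {
  return x > 0 ? x : alpha * x;
}

console.log(leakyRelu(2.0));  // 2.0
console.log(leakyRelu(-2.0)); // -0.02 (small but non-zero, so gradients still flow)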

ELU

extends Module

f(x) = x if x > 0, else α(eˣ−1). Smooth negative part.

Softmax

extends Module

Converts logits to a probability distribution (non-negative, sums to 1). Output layer for multi-class classification.
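
In practice softmax is computed with the maximum logit subtracted from every entry so the exponentials cannot overflow; the shift cancels out and leaves the probabilities unchanged. A plain-TypeScript sketch of that computation, independent of the deepbox API:

softmax-math.ts
// Reference sketch: numerically stable softmax over a 1-D array of logits.
function softmax(logits: number[]): number[] {
  const max = Math.max(...logits); // shift for stability
  const exps = logits.map((z) => Math.exp(z - max));
  const sum = exps.reduce((a, b) => a + b, 0);
  return exps.map((e) => e / sum);
}

console.log(softmax([1, 2, 3]));
// ≈ [0.090, 0.245, 0.665] — non-negative, sums to 1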

LogSoftmax

extends Module

log(softmax(x)), computed in a numerically stable fused form. Pair with NLL loss for cross-entropy training.
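
Taking log(softmax(x)) literally would exponentiate and then take a log, losing precision for large-magnitude logits; the stable form folds the two steps into a log-sum-exp. A plain-TypeScript sketch, again separate from the deepbox API:

log-softmax-math.ts
// Reference sketch: stable log-softmax without the round trip through exp.
// logSoftmax(x)_i = x_i - (max + log Σ_j exp(x_j - max))
function logSoftmax(logits: number[]): number[] {
  const max = Math.max(...logits);
  const logSumExp =
    max + Math.log(logits.reduce((s, z) => s + Math.exp(z - max), 0));
  return logits.map((z) => z - logSumExp);
}

// NLL loss for one example is the negative log-probability of the true class:
const logProbs = logSoftmax([1, 2, 3]);
const nll = -logProbs[2]; // true class = index 2
console.log(nll);         // ≈ 0.408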

Mish

extends Module

f(x) = x·tanh(softplus(x)). Self-regularizing, smooth activation.
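
Mish is just a composition of softplus and tanh; the sketch below spells that out in plain TypeScript (see Softplus below for its overflow-safe form).

mish-math.ts
// Reference sketch: mish(x) = x · tanh(softplus(x)).
// Naive softplus here for brevity; see the stable form under Softplus.
const softplus = (x: number): number => Math.log(1 + Math.exp(x));

function mish(x: number): number {
  return x * Math.tanh(softplus(x));
}

console.log(mish(1));  // ≈ 0.865
console.log(mish(-1)); // ≈ -0.303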

Swish

extends Module

f(x) = x·σ(x). Also called SiLU. Default in EfficientNet.

Softplus

extends Module

f(x) = log(1 + eˣ). Smooth approximation of ReLU.
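
Computed naively, log(1 + eˣ) overflows once eˣ exceeds the float range; the usual rewrite below stays finite for any input. Whether deepbox uses exactly this form is an assumption; the sketch only illustrates the math.

softplus-math.ts
// Reference sketch: numerically stable softplus.
// log(1 + e^x) = max(x, 0) + log(1 + e^(-|x|)), which never overflows.
function softplus(x: number): number {
  return Math.max(x, 0) + Math.log1p(Math.exp(-Math.abs(x)));
}

console.log(softplus(0));     // log(2) ≈ 0.693
console.log(softplus(1000));  // ≈ 1000 (naive form would return Infinity)
console.log(softplus(-1000)); // ≈ 0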

activation-layers.ts
import { Sequential, Linear, ReLU, GELU, Sigmoid } from "deepbox/nn";

// Use activation layers in Sequential
const model = new Sequential(
  new Linear(10, 32),
  new GELU(),      // Transformer-style activation
  new Linear(32, 1),
  new Sigmoid()    // Binary classification output
);
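
For a multi-class head, the same pattern applies with Softmax, or LogSoftmax paired with an NLL loss. The sketch below assumes the no-argument constructor conventions shown in the example above; check the deepbox API for any dimension or options parameters.

multiclass-head.ts
// Hypothetical multi-class classifier head, assuming the same
// Sequential/Linear constructor conventions as the example above.
import { Sequential, Linear, ReLU, LogSoftmax } from "deepbox/nn";

const classifier = new Sequential(
  new Linear(64, 128),
  new ReLU(),          // hidden-layer activation
  new Linear(128, 10), // 10 output classes
  new LogSoftmax()     // pair with NLL loss during training
);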