deepbox/nn
Layers
Core neural network layers for building models.
Linear
extends Module
Fully connected (dense) layer: y = xWᵀ + b. The most fundamental building block. Weights use Kaiming/He initialization; the bias is initialized to zeros.
Conv1d
extends Module
1D convolution over sequences. Input shape: (batch, inChannels, length). Useful for time series and 1D signal processing; see the construction sketch after these layer descriptions.
Conv2d
extends Module
2D convolution over images/grids. Input shape: (batch, inChannels, height, width). The standard layer for CNNs.
MaxPool2d
extends Module
2D max pooling. Reduces spatial dimensions by taking the maximum in each pooling window. Provides translation invariance.
AvgPool2d
extends Module
2D average pooling. Reduces spatial dimensions by averaging values in each window. Smoother than max pooling.
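The Conv1d constructor is not documented on this page; the sketch below assumes it mirrors Conv2d's signature (inChannels, outChannels, kernelSize, { stride?, padding? }), so treat the exact parameters as an assumption.

import { Conv1d } from "deepbox/nn";

// Assumed signature: (inChannels, outChannels, kernelSize, { stride?, padding? })
// 1 input channel → 8 filters, 5-wide kernel; padding 2 preserves the sequence length
// Input (batch, 1, length) → output (batch, 8, length)
const temporal = new Conv1d(1, 8, 5, { padding: 2 });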
Linear Constructor
- new Linear(inFeatures, outFeatures, { bias?: boolean })
- inFeatures: Size of each input sample
- outFeatures: Size of each output sample
- bias: If true (default), adds a learnable bias
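A minimal construction sketch based on the Linear constructor documented above; the layer sizes (784, 256, 10) are illustrative only.

import { Linear } from "deepbox/nn";

// 784 → 256 hidden layer; weights get Kaiming/He init, bias starts at zeros (default bias: true)
const hidden = new Linear(784, 256);

// 256 → 10 output layer with the bias disabled
const logitsLayer = new Linear(256, 10, { bias: false });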
Conv2d Constructor
- new Conv2d(inChannels, outChannels, kernelSize, { stride?, padding? })
- inChannels: Number of input channels (e.g., 3 for RGB)
- outChannels: Number of output channels (filters)
- kernelSize: Size of the convolving kernel (e.g., 3 for 3×3)
- stride: Stride of the convolution (default: 1)
- padding: Zero-padding added to both sides (default: 0)
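Conv2d's spatial output size follows standard convolution arithmetic. The helper below is an illustrative sketch, not a deepbox API; it reproduces the shape comments in the example that follows.

// Standard formula: floor((size + 2·padding − kernel) / stride) + 1
function convOutSize(size: number, kernel: number, stride = 1, padding = 0): number {
  return Math.floor((size + 2 * padding - kernel) / stride) + 1;
}

convOutSize(28, 3, 1, 1); // 28 — a 3×3 kernel with padding 1 preserves the spatial size
convOutSize(28, 3, 2, 1); // 14 — stride 2 roughly halves each spatial dimension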
layers.ts
import { Linear, Conv2d, MaxPool2d, AvgPool2d, Sequential, ReLU } from "deepbox/nn";
import { tensor } from "deepbox/ndarray";

// Simple CNN
const cnn = new Sequential(
  new Conv2d(1, 16, 3, { padding: 1 }),  // [B, 1, 28, 28] → [B, 16, 28, 28]
  new ReLU(),
  new MaxPool2d(2),                      // [B, 16, 28, 28] → [B, 16, 14, 14]
  new Conv2d(16, 32, 3, { padding: 1 }), // [B, 16, 14, 14] → [B, 32, 14, 14]
  new ReLU(),
  new MaxPool2d(2),                      // [B, 32, 14, 14] → [B, 32, 7, 7]
  // Note: the [B, 32, 7, 7] feature map must be flattened to [B, 32 * 7 * 7]
  // before the Linear layer (e.g. via a flatten/reshape step).
  new Linear(32 * 7 * 7, 10)             // [B, 32 * 7 * 7] → [B, 10]
);
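A hypothetical forward pass through the model above. Both the input construction and the call syntax are assumptions: it presumes tensor() accepts nested arrays and that Sequential exposes a forward() method, so check the Module documentation for the actual call convention.

// Hypothetical: build a [1, 1, 28, 28] input from nested arrays (assumes tensor() accepts them)
const image = Array.from({ length: 28 }, () =>
  Array.from({ length: 28 }, () => Math.random())
);
const x = tensor([[image]]);

// Assumes Sequential exposes forward(); shapes follow the comments in the example above
const logits = cnn.forward(x); // expected: [1, 10]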