deepbox/ndarray
Autograd (GradTensor)
Automatic differentiation via reverse-mode backpropagation. GradTensor wraps a Tensor and builds a computational graph during the forward pass, then computes gradients via backward().
How Autograd Works
- Create parameters with parameter() — these track gradients
- Perform operations using GradTensor methods (.add(), .mul(), .matmul(), etc.)
- Each operation records itself in a computational graph
- Call .backward() on the final scalar to propagate gradients
- Read gradients from each parameter's .grad property
- Call .zeroGrad() before each training iteration to reset accumulated gradients
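Putting these steps together, a single training iteration looks roughly like the sketch below. It uses only the functions and methods documented on this page (detach(), described further down, marks constants); the shapes, values, and the commented-out optimizer step are illustrative assumptions, not deepbox API.

import { parameter } from "deepbox/ndarray";

const W = parameter([[0.1, 0.2], [0.3, 0.4]]); // trainable weights
const x = parameter([[1, 2]]).detach();        // input treated as a constant
const target = parameter([[0, 1]]).detach();   // constant regression target

const pred = x.matmul(W);                      // forward pass: (1x2) @ (2x2) -> (1x2)
const diff = pred.sub(target);
const loss = diff.mul(diff).mean();            // mean squared error, reduced to a scalar

loss.backward();                               // backpropagate through the recorded graph
console.log(W.grad);                           // dLoss/dW as a Tensor

// ...apply W.grad to W with the optimizer of your choice...

W.zeroGrad();                                  // reset before the next iteration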
parameter
parameter(data: TensorLike, opts?: GradTensorOptions): GradTensor
Create a GradTensor with requiresGrad=true. This is the entry point for automatic differentiation. Parameters track gradients during forward/backward passes.
Parameters:
data: TensorLike - Initial values (nested array, TypedArray, or scalar)
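A quick sketch of creating parameters (values are arbitrary; GradTensorOptions is not shown here):

import { parameter } from "deepbox/ndarray";

const w = parameter([[0.5, -0.2], [0.1, 0.3]]); // nested array
const b = parameter(0);                         // scalar
console.log(w.requiresGrad);                    // true: parameters track gradients
console.log(w.tensor);                          // the underlying Tensor holding the values
console.log(w.grad);                            // null until backward() has run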
noGrad
noGrad<T>(fn: () => T): T
Execute a function with gradient tracking disabled. Operations inside will not build computational graphs. Use for inference, evaluation, or any code that doesn't need gradients.
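For example, evaluating without recording a graph (a minimal sketch; the computation itself is arbitrary):

import { parameter, noGrad } from "deepbox/ndarray";

const W = parameter([[0.5, 0.3], [0.2, 0.8]]);
const x = parameter([[1, 2]]).detach();

const evalLoss = noGrad(() => {
  // Nothing inside this callback is recorded in a computational graph.
  const pred = x.matmul(W);
  return pred.mul(pred).mean();
});

console.log(W.grad); // still null: no gradients were tracked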
GradTensor Methods & Properties
- .add(other) — Element-wise addition with gradient tracking
- .sub(other) — Element-wise subtraction
- .mul(other) — Element-wise multiplication
- .neg() — Negation: −x
- .sum(axis?, keepdims?) — Sum reduction
- .mean(axis?, keepdims?) — Mean reduction
- .matmul(other) — Matrix multiplication
- .transpose(axes?) — Transpose dimensions
- .relu() — ReLU activation
- .sigmoid() — Sigmoid activation
- .softmax(axis?) — Softmax activation
- .slice(ranges) — Slice with gradient support
- .gather(indices, axis) — Gather with gradient support
- .reshape(shape) — Reshape with gradient support
- .backward(grad?) — Compute all gradients via backpropagation
- .zeroGrad() — Reset .grad to null
- .detach() — Create a GradTensor disconnected from the graph
- .tensor — Access the underlying Tensor
- .grad — Read computed gradients (Tensor | null)
- .requiresGrad — Whether this tensor tracks gradients
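Gradients flow through shape and activation ops just like arithmetic ones. A short sketch (values are illustrative):

import { parameter } from "deepbox/ndarray";

const a = parameter([[1, -2], [3, -4]]);
const out = a.relu()        // zeroes the negative entries
  .reshape([4])             // reshape participates in the graph
  .sum();                   // reduce to a scalar so backward() needs no argument
out.backward();
console.log(a.grad);        // 1 where a > 0, 0 elsewhere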
autograd.ts
import { parameter, noGrad } from "deepbox/ndarray";

// Basic gradient computation
const x = parameter([2, 3]);
const y = x.mul(x).sum(); // y = x₁² + x₂² = 4 + 9 = 13
y.backward();
console.log(x.grad); // tensor([4, 6]) — dy/dx = 2x

// Matrix gradients
const W = parameter([[0.5, 0.3], [0.2, 0.8]]);
const inp = parameter([[1, 2]]);
const out = inp.matmul(W).sum();
out.backward();
console.log(W.grad); // gradients w.r.t. W
console.log(inp.grad); // gradients w.r.t. input

// Disable gradients for inference
const result = noGrad(() => {
  return x.mul(x).sum(); // No graph built
});

// Reset before next training step
x.zeroGrad();
W.zeroGrad();

When to Use
- Training neural networks — parameter() for weights, backward() for gradient computation
- Custom loss functions — compose GradTensor operations and call backward()
- noGrad() — Inference, evaluation, or any code where you want to save memory by not tracking gradients
- detach() — Stop gradient flow at a specific point (e.g., target networks in RL)
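As a sketch of that last point, detach() acts as a stop-gradient: everything behind the detached value is treated as a constant during backward(). Values here are illustrative.

import { parameter } from "deepbox/ndarray";

const w = parameter([2]);
const target = w.mul(w).detach(); // [4], treated as a constant from here on
const loss = w.mul(target).sum(); // only the non-detached path contributes
loss.backward();
console.log(w.grad);              // [4]: gradient of w * const, not 3w^2 = [12] for w^3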