Example 14
intermediate
14
Autograd
Gradients
Backpropagation

Automatic Differentiation (Autograd)

Automatic differentiation (autograd) is the engine behind neural network training. This example shows how parameter() creates GradTensors that record every operation (add, mul, matmul, relu, etc.) into a directed acyclic graph. When you call .backward() on a scalar loss, Deepbox traverses this graph in reverse to compute ∂loss/∂param for every parameter. You will compute gradients of simple functions (y = sum(x²), out = sum(xW)), verify them against hand-calculated values, and learn to use noGrad() to disable tracking during inference. The example also touches on chaining operations, gradient accumulation, and .zeroGrad() to reset gradients before each training step (sketched just below).
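
The main listing below concentrates on computing gradients, so here is a minimal sketch of the accumulate/reset cycle. It assumes Deepbox follows the usual autograd convention of adding each new gradient onto .grad across repeated .backward() calls, and that .zeroGrad() is called on the GradTensor itself; both points are assumptions to check against the Deepbox docs, not verified behaviour.

import { parameter } from "deepbox/ndarray";

// Assumed accumulate/reset cycle: gradients add up across backward()
// calls until zeroGrad() clears them (the convention of most autograd engines).
const p = parameter([2, 3]);

p.mul(p).sum().backward(); // d(sum(p²))/dp = 2p
console.log(p.grad?.toString()); // expected [4, 6]

p.mul(p).sum().backward(); // second backward without a reset
console.log(p.grad?.toString()); // expected [8, 12] if gradients accumulate

p.zeroGrad(); // reset before the next training step
console.log(p.grad?.toString()); // expected to be cleared / all zeros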

Deepbox Modules Used

deepbox/ndarray

What You Will Learn

  • parameter() creates GradTensors that track computation graphs
  • .backward() computes gradients via reverse-mode autodiff
  • Gradients follow the chain rule through all operations (see the toy sketch after this list)
  • noGrad() disables tracking — use for inference to save memory
  • .zeroGrad() resets accumulated gradients before each training step
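
To make "build a graph, then walk it backwards" concrete, the toy below implements reverse-mode autodiff for plain scalars. It is deliberately independent of Deepbox: the Scalar class, its recursive backward(), and the per-path propagation are illustrative only (a real engine such as Deepbox works on tensors and typically traverses a topologically sorted graph rather than recursing per path).

// Toy scalar reverse-mode autodiff: each operation records its inputs
// and the local derivatives d(output)/d(input); backward() then applies
// the chain rule from the output back to every leaf.
class Scalar {
  grad = 0;
  constructor(
    public value: number,
    private parents: Scalar[] = [],
    private localGrads: number[] = [], // d(this)/d(parents[i])
  ) {}

  add(other: Scalar): Scalar {
    return new Scalar(this.value + other.value, [this, other], [1, 1]);
  }

  mul(other: Scalar): Scalar {
    return new Scalar(this.value * other.value, [this, other], [other.value, this.value]);
  }

  relu(): Scalar {
    return new Scalar(Math.max(0, this.value), [this], [this.value > 0 ? 1 : 0]);
  }

  // Propagate seed = d(final output)/d(this node) along every path;
  // contributions from different paths accumulate in .grad.
  backward(seed = 1): void {
    this.grad += seed;
    for (let i = 0; i < this.parents.length; i++) {
      this.parents[i].backward(seed * this.localGrads[i]);
    }
  }
}

// y = x * x → dy/dx = 2x
const xs = new Scalar(3);
const ys = xs.mul(xs);
ys.backward();
console.log(ys.value, xs.grad); // 9 6

// z = relu(w * x) → dz/dw = x while w * x > 0
const ws = new Scalar(0.5);
const zs = ws.mul(new Scalar(2)).relu();
zs.backward();
console.log(zs.value, ws.grad); // 1 2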

Source Code

14-autograd/index.ts
import { parameter, noGrad } from "deepbox/ndarray";

console.log("=== Automatic Differentiation ===\n");

// Simple gradient: y = x², dy/dx = 2x
const x = parameter([2, 3]);
const y = x.mul(x).sum(); // y = 4 + 9 = 13
y.backward();
console.log("x =", x.tensor.toString());
console.log("y = x² → sum =", Number(y.tensor.data[0]));
console.log("dy/dx = 2x =", x.grad?.toString()); // [4, 6] ✓

// Matrix gradients
const W = parameter([[0.5, 0.3], [0.2, 0.8]]);
const inp = parameter([[1, 2]]);
const out = inp.matmul(W).sum();
out.backward();
console.log("\nW.grad:", W.grad?.toString());
console.log("inp.grad:", inp.grad?.toString());

// Chain rule: z = relu(x·w)
const x2 = parameter([[1, -1]]);
const w2 = parameter([[0.5], [-0.3]]);
const z = x2.matmul(w2).relu().sum();
z.backward();
console.log("\nAfter ReLU:");
console.log("w2.grad:", w2.grad?.toString());

// Disable gradient tracking for inference
const result = noGrad(() => {
  return x.mul(x).sum();
});
console.log("\nnoGrad result:", Number(result.tensor.data[0]));

Console Output

$ npx tsx 14-autograd/index.ts
=== Automatic Differentiation ===

x = [2, 3]
y = x² → sum = 13
dy/dx = 2x = [4, 6]

W.grad: [[1, 1], [2, 2]]
inp.grad: [[0.8, 1]]

After ReLU:
w2.grad: [[1], [-1]]

noGrad result: 13
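
These numbers can be reproduced by hand. For out = sum(inp · W), every W[i][j] is multiplied by inp[0][i] exactly once, so ∂out/∂W[i][j] = inp[0][i], and every inp[0][i] touches the whole i-th row of W, so ∂out/∂inp[0][i] = W[i][0] + W[i][1]. For z = sum(relu(x2 · w2)), the pre-activation is 1·0.5 + (-1)·(-0.3) = 0.8 > 0, so ReLU acts as the identity and ∂z/∂w2[i][0] = x2[0][i]. The snippet below recomputes those expected values with plain arrays and no autograd; it is a cross-check, not part of the example's source.

// Hand-check of the printed gradients using plain arrays (no Deepbox).
const W = [[0.5, 0.3], [0.2, 0.8]];
const inp = [[1, 2]];

// d(out)/dW[i][j] = inp[0][i]
const expectedWGrad = W.map((row, i) => row.map(() => inp[0][i]));
// d(out)/d(inp[0][i]) = sum of row i of W
const expectedInpGrad = [W.map((row) => row[0] + row[1])];

console.log(JSON.stringify(expectedWGrad));   // [[1,1],[2,2]]
console.log(JSON.stringify(expectedInpGrad)); // [[0.8,1]]

// d(z)/d(w2[i][0]) = x2[0][i] because the ReLU input (0.8) is positive
const x2 = [[1, -1]];
const expectedW2Grad = x2[0].map((v) => [v]);
console.log(JSON.stringify(expectedW2Grad));  // [[1],[-1]]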