Distributions
uniform(low, high, shape)
Sample from the continuous uniform distribution U(low, high). Every value in [low, high) is equally likely. PDF: f(x) = 1/(high − low) for x ∈ [low, high). Mean = (low + high)/2, Variance = (high − low)²/12.
Parameters:
- low: number - Lower bound (inclusive)
- high: number - Upper bound (exclusive)
- shape: Shape - Output tensor shape

normal(mean, std, shape)
Sample from the normal (Gaussian) distribution N(μ, σ²). Uses the Box-Muller transform for generation. The bell curve — 68% of values fall within ±1σ of the mean, 95% within ±2σ, 99.7% within ±3σ.
Parameters:
- mean: number - Mean (μ) — center of the distribution
- std: number - Standard deviation (σ) — spread. Must be > 0.

rand(shape)
Random floats uniformly distributed in [0, 1). Shorthand for uniform(0, 1, shape). The most basic random tensor — use it for random masks, dropout, and Monte Carlo sampling.

randn(shape)
Random floats from the standard normal distribution N(0, 1). Shorthand for normal(0, 1, shape). Used for Gaussian noise injection, weight initialization (Xavier/He), and generative models.
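normal() is documented above as using the Box-Muller transform. For reference, here is a minimal standalone sketch of that classic transform in plain TypeScript; it illustrates the textbook algorithm and is not deepbox's internal implementation.

```ts
// Box-Muller transform: two independent Uniform(0, 1) draws become two
// independent N(0, 1) draws. Reference sketch only, not deepbox internals.
function boxMuller(): [number, number] {
  const u1 = 1 - Math.random(); // avoid u1 === 0 so Math.log stays finite
  const u2 = Math.random();
  const r = Math.sqrt(-2 * Math.log(u1));
  const theta = 2 * Math.PI * u2;
  return [r * Math.cos(theta), r * Math.sin(theta)];
}

// Shift and scale to N(mean, std²), matching the normal(mean, std, shape) semantics.
function normalSample(mean: number, std: number): number {
  return mean + std * boxMuller()[0];
}

const z = normalSample(5, 2); // one draw from N(μ=5, σ=2)
```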
binomial(n, p, shape)
Sample from the binomial distribution B(n, p). Each sample counts the number of successes in n independent Bernoulli trials, each with success probability p. PMF: P(k) = C(n,k) · pᵏ · (1−p)ⁿ⁻ᵏ. Mean = np, Variance = np(1−p). When n=1, this is a Bernoulli distribution.
Parameters:
- n: number - Number of trials (positive integer)
- p: number - Probability of success per trial, in [0, 1]
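Since B(1, p) is a Bernoulli distribution, a dropout-style keep/drop mask can be drawn directly from binomial. The sketch below uses the binomial(n, p, shape) and setSeed calls shown in the example further down; applying the mask to activations needs elementwise tensor arithmetic that this page does not cover, so that step is only noted in a comment.

```ts
import { binomial, setSeed } from "deepbox/random";

setSeed(7);

// Dropout-style keep mask: each entry is 1 with probability keepProb, else 0.
// binomial(1, keepProb, shape) is exactly a Bernoulli(keepProb) tensor.
const keepProb = 0.8;
const mask = binomial(1, keepProb, [32, 128]);

// To apply the mask, multiply it elementwise with the activations and rescale
// by 1 / keepProb (inverted dropout); those tensor ops live in deepbox/ndarray
// and are not documented on this page.
```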
poisson(lambda, shape)
Sample from the Poisson distribution with rate λ. Models the number of events occurring in a fixed time interval when events happen independently at a constant average rate. PMF: P(k) = λᵏe⁻λ / k!. Mean = Variance = λ. Approximates B(n, p) when n is large and p is small.
Parameters:
- lambda: number - Average rate of events (λ > 0)
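The approximation noted above (Poisson with λ = np for large n and small p) can be checked without sampling at all. The sketch below compares the two PMFs using plain TypeScript arithmetic; the chosen n, p, and k values are only illustrative.

```ts
// Compare the B(n, p) and Poisson(λ = np) PMFs at a few points, in log space
// to keep the factorials and binomial coefficients numerically stable.
function binomialPmf(n: number, p: number, k: number): number {
  let logC = 0; // log C(n, k) = Σ log((n - k + i) / i)
  for (let i = 1; i <= k; i++) logC += Math.log(n - k + i) - Math.log(i);
  return Math.exp(logC + k * Math.log(p) + (n - k) * Math.log(1 - p));
}

function poissonPmf(lambda: number, k: number): number {
  let logFact = 0; // log k!
  for (let i = 2; i <= k; i++) logFact += Math.log(i);
  return Math.exp(k * Math.log(lambda) - lambda - logFact);
}

const n = 1000, p = 0.003; // large n, small p, so λ = np = 3
for (const k of [0, 1, 2, 3, 5, 8]) {
  console.log(k, binomialPmf(n, p, k).toFixed(5), poissonPmf(n * p, k).toFixed(5));
}
```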
exponential(lambda, shape)
Sample from the exponential distribution with rate λ. Models the waiting time until the next Poisson event. The only memoryless continuous distribution — P(X > s+t | X > s) = P(X > t). PDF: f(x) = λe⁻λˣ for x ≥ 0. Mean = 1/λ, Variance = 1/λ².
Parameters:
- lambda: number - Rate parameter (λ > 0). Higher λ → shorter waiting times.
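Memorylessness follows directly from the survival function S(t) = e⁻λᵗ: P(X > s+t | X > s) = S(s+t)/S(s) = e⁻λᵗ = P(X > t). A tiny numeric check in plain TypeScript, with λ, s, and t picked arbitrarily:

```ts
// Memorylessness check for Exp(λ): P(X > s + t | X > s) equals P(X > t).
// Survival function of the exponential distribution: S(t) = e^(−λt).
const lambda = 0.5;
const S = (t: number) => Math.exp(-lambda * t);

const s = 2, t = 3;
const conditional = S(s + t) / S(s); // P(X > s + t | X > s)
const unconditional = S(t);          // P(X > t)
console.log(conditional, unconditional); // both ≈ e^(−1.5) ≈ 0.2231
```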
gamma(alpha, beta, shape)
Sample from the gamma distribution Γ(α, β). Generalizes the exponential (α=1) and chi-squared (α=k/2, β=0.5) distributions. PDF: f(x) = βᵅ xᵅ⁻¹ e⁻βˣ / Γ(α) for x > 0. Mean = α/β, Variance = α/β². Used as a conjugate prior in Bayesian statistics.
Parameters:
- alpha: number - Shape parameter (α > 0). Controls the skewness.
- beta: number - Rate parameter (β > 0). Inverse of the scale.
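One concrete conjugate-prior use: a Gamma(α, β) prior on a Poisson rate combined with observed counts k₁, …, k_n gives a Gamma(α + Σkᵢ, β + n) posterior. The sketch below performs that update analytically and then draws posterior samples with the gamma(alpha, beta, shape) call from the example further down; the counts are made up for illustration.

```ts
import { gamma, setSeed } from "deepbox/random";

setSeed(123);

// Gamma(α, β) prior on a Poisson rate λ, with illustrative observed counts.
const priorAlpha = 2;
const priorBeta = 1;
const counts = [4, 2, 5, 3, 4]; // e.g. arrivals per hour (made-up data)

// Conjugate update: posterior is Gamma(α + Σk, β + n).
const postAlpha = priorAlpha + counts.reduce((sum, k) => sum + k, 0); // 2 + 18 = 20
const postBeta = priorBeta + counts.length;                           // 1 + 5 = 6

console.log("posterior mean:", postAlpha / postBeta); // α/β ≈ 3.33

// Posterior samples of λ for downstream Monte Carlo summaries.
const lambdaSamples = gamma(postAlpha, postBeta, [10_000]);
```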
beta(alpha, beta, shape)
Sample from the beta distribution Beta(α, β). Values are always in (0, 1), making it natural for modeling probabilities, proportions, and rates. PDF: f(x) = xᵅ⁻¹(1−x)ᵝ⁻¹ / B(α,β). Mean = α/(α+β). When α=β=1, this is uniform; when α=β>1, it is bell-shaped; when α≠β, it is skewed.
Parameters:
- alpha: number - Shape parameter α > 0. Larger α → more weight near 1.
- beta: number - Shape parameter β > 0. Larger β → more weight near 0.
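For the A/B-testing use case listed later, a uniform Beta(1, 1) prior on a conversion rate updates to Beta(1 + successes, 1 + failures) after observing binomial data. A sketch of that update with made-up counts, drawing posterior samples via the beta(alpha, beta, shape) call from the example further down:

```ts
import { beta, setSeed } from "deepbox/random";

setSeed(99);

// Beta-binomial update for a conversion rate (illustrative counts).
const successes = 120;
const trials = 400;

// Uniform Beta(1, 1) prior, so the posterior is Beta(1 + successes, 1 + failures).
const postAlpha = 1 + successes;           // 121
const postBeta = 1 + (trials - successes); // 281

console.log("posterior mean:", postAlpha / (postAlpha + postBeta)); // ≈ 0.301

// Posterior samples, e.g. for estimating P(rate_A > rate_B) between two variants.
const rateSamples = beta(postAlpha, postBeta, [10_000]);
```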
randint(low, high, shape)
Random integers uniformly distributed in [low, high). Each integer in the range is equally likely with probability 1/(high − low). Output dtype is int32. Useful for random indexing, label generation, and discrete sampling.
Parameters:
- low: number - Lower bound (inclusive, integer)
- high: number - Upper bound (exclusive, integer)

permutation(x)
Generate a random permutation. If x is a number n, returns a random permutation of [0, 1, ..., n−1]. If x is a 1D Tensor, returns a new tensor with elements shuffled in random order (does not mutate the input). Uses the Fisher-Yates algorithm for O(n) uniform shuffling.
Parameters:
- x: Tensor | number - Length of permutation or tensor to shuffle
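Both permutation and the in-place shuffle documented below are described as using Fisher-Yates. For reference, here is the textbook version of that algorithm in plain TypeScript; it is an illustration, not the deepbox implementation.

```ts
// Textbook Fisher-Yates: a uniform in-place shuffle in O(n) time and O(1) extra space.
function fisherYates<T>(items: T[]): T[] {
  for (let i = items.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1)); // uniform index in [0, i]
    [items[i], items[j]] = [items[j], items[i]];   // swap
  }
  return items;
}

const order = fisherYates([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]); // analogous to permutation(10)
```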
choice(a, size, replace, p)
Draw a random sample from a 1D tensor. With replace=true (default), the same element can appear multiple times (bootstrap sampling). With replace=false, elements are drawn without replacement (like dealing cards). Provide p for weighted sampling — p must sum to 1.
Parameters:
- a: Tensor - 1D source tensor to sample from
- size: number - Number of items to draw
- replace: boolean - Sample with replacement (default: true)
- p: Tensor - Probability weights for each element (must sum to 1)
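The three sampling modes map directly onto the parameters above. A short sketch using the choice(a, size, replace, p) signature shown in the example further down; computing a statistic per bootstrap resample would need tensor reductions that this page does not document, so that step is only described in a comment.

```ts
import { tensor } from "deepbox/ndarray";
import { choice, setSeed } from "deepbox/random";

setSeed(2024);

const observations = tensor([10, 20, 30, 40, 50]);

// One bootstrap replicate: draw n = 5 items WITH replacement. Repeating this
// many times and computing a statistic per replicate builds a bootstrap
// distribution (the statistic step needs reductions not covered here).
const replicate = choice(observations, 5, true);

// Without replacement: a 3-element subsample, like dealing cards.
const subsample = choice(observations, 3, false);

// Weighted draw: p gives one weight per element and must sum to 1.
const weighted = choice(observations, 2, true, tensor([0.4, 0.3, 0.2, 0.05, 0.05]));
```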
In-place shuffle
Shuffle elements of a tensor in place using the Fisher-Yates algorithm. This is the ONLY mutating operation in deepbox/ndarray — all other tensor operations return new tensors. O(n) time, O(1) extra space.
Parameters:
- x: Tensor - Tensor to shuffle in place

Normal PDF
f(x) = (1 / (σ√(2π))) · e^(−(x−μ)² / (2σ²))
Where:
- μ = Mean (center)
- σ = Standard deviation (spread)
Binomial PMF
P(k) = C(n,k) · pᵏ · (1−p)ⁿ⁻ᵏ
Where:
- n = Number of trials
- p = Success probability
Poisson PMF
P(k) = λᵏe⁻λ / k!
Where:
- λ = Rate parameter (mean = variance = λ)
Exponential PDF
f(x) = λe⁻λˣ for x ≥ 0
Where:
- λ = Rate parameter (mean = 1/λ)
Gamma PDF
f(x) = βᵅ xᵅ⁻¹ e⁻βˣ / Γ(α) for x > 0
Where:
- α = Shape (controls skewness)
- β = Rate (inverse scale)
Beta PDF
f(x) = xᵅ⁻¹(1−x)ᵝ⁻¹ / B(α,β) for x ∈ (0, 1)
Where:
- B(α,β) = Beta function (normalizing constant)
Example

```ts
import {
  rand, randn, uniform, normal, binomial, poisson, exponential,
  gamma, beta, randint, permutation, choice, setSeed,
} from "deepbox/random";
import { tensor } from "deepbox/ndarray";

setSeed(42); // All calls below are deterministic

// ── Quick random tensors ──
const r = rand([3, 3]);  // Uniform [0, 1)
const n = randn([100]);  // Standard normal N(0, 1)

// ── Parametric continuous distributions ──
const u = uniform(0, 10, [3, 3]);     // U(0, 10)
const g = normal(5, 2, [1000]);       // N(μ=5, σ=2)
const exp = exponential(0.5, [100]);  // Exp(λ=0.5), mean = 2
const gam = gamma(2, 1, [100]);       // Γ(α=2, β=1)
const b = beta(2, 5, [100]);          // Beta(2, 5), mean ≈ 0.286

// ── Discrete distributions ──
const coin = binomial(10, 0.5, [100]); // 10 coin flips × 100
const events = poisson(3, [100]);      // Poisson arrivals, λ=3
const ints = randint(0, 10, [5]);      // Random integers [0, 10)

// ── Sampling utilities ──
const perm = permutation(10);          // Random order of 0..9
const data = tensor([10, 20, 30, 40, 50]);
const sample = choice(data, 3, false); // 3 unique elements
const weighted = choice(data, 3, true, tensor([0.5, 0.3, 0.1, 0.05, 0.05]));
```

When to Use
- uniform / rand — Weight initialization (uniform Glorot), random masks, Monte Carlo integration (see the sketch after this list)
- normal / randn — Gaussian noise injection, He/Xavier weight init, variational autoencoders, diffusion models
- binomial — A/B testing simulation, dropout mask generation, counting success events
- poisson — Modeling count data (website visits, particle decay, customer arrivals per hour)
- exponential — Waiting times, survival analysis, queuing theory, reliability engineering
- gamma — Bayesian conjugate prior for rates, modeling right-skewed positive data (income, insurance claims)
- beta — Modeling probabilities and proportions, Bayesian prior for Bernoulli/binomial parameters, A/B test posterior
- permutation — Shuffling dataset indices for cross-validation splits
- choice — Bootstrap resampling (replace=true), subsampling without replacement (replace=false), weighted sampling via p
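Monte Carlo integration, from the uniform / rand bullet above, works by averaging a function over uniform samples. Because reading individual values back out of a tensor is not covered on this page, the sketch below uses Math.random as a stand-in for rand to show the idea only.

```ts
// Monte Carlo integration of f over [0, 1): average f at uniform sample points.
// Math.random stands in for rand here, since extracting values from a deepbox
// tensor is not documented on this page.
function monteCarlo(f: (x: number) => number, samples: number): number {
  let sum = 0;
  for (let i = 0; i < samples; i++) sum += f(Math.random());
  return sum / samples;
}

console.log(monteCarlo((x) => x * x, 100_000)); // ≈ 1/3, since ∫₀¹ x² dx = 1/3
```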