Neural Network Playground
A fully interactive neural network implementation in pure JavaScript with zero ML libraries. Draw digits, watch predictions, correct mistakes, and observe the network learn through backpropagation in real time.
What makes this different:
Real backpropagation – hand-coded matrix operations, gradient computation, and weight updates (no TensorFlow/PyTorch black boxes).
Interactive teaching mode – draw a digit → the network predicts → click the correct label → watch gradients flow backward and weights update instantly (the core update is sketched after this list).
Auto-correction – when enabled, automatically runs a backprop step whenever a prediction is wrong.
Complete algorithm visibility – visualize hidden activations, weight matrices, gradients (∂L/∂W₁, ∂L/∂W₂), and back-projected class templates.
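To ground the "no black boxes" claim, here is a minimal sketch of the output-layer update that a single teaching click triggers (names like `updateOutputLayer`, `W2`, and `h` are illustrative, not the app's actual identifiers). With softmax plus cross-entropy, the gradient at each output logit collapses to (predicted probability − one-hot target):

```js
// One output-layer SGD update. h: hidden activations (length 64),
// probs: softmax output (length 10), label: the digit the user clicked,
// W2: 64x10 nested array, b2: length-10 bias, lr: learning rate.
function updateOutputLayer(W2, b2, h, probs, label, lr) {
  for (let k = 0; k < probs.length; k++) {
    const err = probs[k] - (k === label ? 1 : 0); // dL/dlogit_k = p_k - y_k
    for (let j = 0; j < h.length; j++) W2[j][k] -= lr * h[j] * err;
    b2[k] -= lr * err;
  }
}
```

The full chain back through the ReLU hidden layer is written out in the architecture deep-dive below.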
Technical implementation:
ReLU hidden layer + Softmax output (cross-entropy loss).
Xavier/He weight initialization options (sketched after this list).
Mini-batch SGD with configurable learning rate, batch size, and epoch count.
Warm-start pretraining on synthetic digits (using canvas font rendering with jitter).
Live loss curve tracking via Chart.js.
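A minimal sketch of the two initialization schemes, assuming weights are stored as plain nested arrays (`randn` and `initWeights` are hypothetical names, not the app's actual API). He scaling suits the ReLU hidden layer; Xavier (Glorot) suits the softmax output layer:

```js
// Box-Muller: turn two uniform samples into one standard Gaussian sample.
function randn() {
  const u = 1 - Math.random(); // in (0, 1], so Math.log(u) is finite
  return Math.sqrt(-2 * Math.log(u)) * Math.cos(2 * Math.PI * Math.random());
}

// W[i][j] = weight from input i to unit j.
function initWeights(fanIn, fanOut, scheme) {
  const std = scheme === 'he'
    ? Math.sqrt(2 / fanIn)              // He: preserves variance through ReLU
    : Math.sqrt(2 / (fanIn + fanOut));  // Xavier: balances fan-in and fan-out
  return Array.from({ length: fanIn }, () =>
    Array.from({ length: fanOut }, () => randn() * std));
}

// e.g. const W1 = initWeights(784, 64, 'he');
//      const W2 = initWeights(64, 10, 'xavier');
```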
Architecture deep-dive:
784 input neurons (28×28 pixel grid).
Configurable hidden layer (default 64 neurons).
10 output neurons (digits 0-9).
Forward pass: H = ReLU(XW₁ + b₁), then Ŷ = Softmax(HW₂ + b₂).
Backward pass: chain rule through the softmax, the ReLU derivative, and matrix transposes (both passes are written out in the sketch below).
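A single-example sketch of both passes, written with plain loops so every chain-rule term is visible. All names (`step`, `pre1`, `dLogits`, …) are illustrative, and the app's real code may batch these as matrix operations:

```js
// x: length-784 input, y: digit label 0-9, W1: 784x64, b1: 64,
// W2: 64x10, b2: 10, lr: learning rate. Mutates weights in place.
function step(x, y, W1, b1, W2, b2, lr) {
  const H = W1[0].length, K = b2.length;

  // Forward: pre1 = xW1 + b1, h = ReLU(pre1)
  const pre1 = b1.slice(), h = new Array(H);
  for (let j = 0; j < H; j++) {
    for (let i = 0; i < x.length; i++) pre1[j] += x[i] * W1[i][j];
    h[j] = Math.max(0, pre1[j]);
  }

  // Forward: logits = hW2 + b2, p = softmax(logits) (max-shifted for stability)
  const logits = b2.slice();
  for (let k = 0; k < K; k++)
    for (let j = 0; j < H; j++) logits[k] += h[j] * W2[j][k];
  const m = Math.max(...logits);
  const exps = logits.map(z => Math.exp(z - m));
  const Z = exps.reduce((a, b) => a + b, 0);
  const p = exps.map(e => e / Z);

  // Backward: for softmax + cross-entropy, dL/dlogits = p - onehot(y)
  const dLogits = p.slice();
  dLogits[y] -= 1;

  // dL/dW2[j][k] = h[j] * dLogits[k]; dL/dh[j] = sum_k W2[j][k] * dLogits[k]
  const dH = new Array(H).fill(0);
  for (let j = 0; j < H; j++)
    for (let k = 0; k < K; k++) {
      dH[j] += W2[j][k] * dLogits[k]; // read old W2 before updating it
      W2[j][k] -= lr * h[j] * dLogits[k];
    }
  for (let k = 0; k < K; k++) b2[k] -= lr * dLogits[k];

  // Through ReLU: derivative is 1 where pre1 > 0, else 0
  for (let j = 0; j < H; j++) {
    const dPre = pre1[j] > 0 ? dH[j] : 0;
    for (let i = 0; i < x.length; i++) W1[i][j] -= lr * x[i] * dPre;
    b1[j] -= lr * dPre;
  }
  return -Math.log(p[y] + 1e-12); // cross-entropy loss for the live loss curve
}
```

The SGD update is folded into the backward loops; returning the loss makes it easy to feed the Chart.js loss curve after each step.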
Gradient visualization modes:
Hidden activations – see which neurons fire for your drawing.
W₁ weights – 28×28 heatmaps showing what each hidden neuron "looks for" (a canvas rendering sketch follows this list).
Back-projected templates – what the network thinks each digit (0-9) should look like.
|∂L/∂W₁| heatmaps – which input pixels have the strongest gradients.
‖∂L/∂W₂‖₁ bars – gradient magnitudes per output class.
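A plausible rendering sketch for the 28×28 heatmaps, using only standard canvas APIs (the helper name and the red/blue diverging palette are assumptions, not necessarily the app's actual choices):

```js
// Render the incoming weights of one hidden neuron (column j of W1,
// a 784-vector) as a 28x28 heatmap on a canvas sized 28x28.
function drawWeightHeatmap(canvas, W1, j) {
  const ctx = canvas.getContext('2d');
  const img = ctx.createImageData(28, 28);
  // Symmetric range so a zero weight maps to black.
  let maxAbs = 1e-8;
  for (let i = 0; i < 784; i++) maxAbs = Math.max(maxAbs, Math.abs(W1[i][j]));
  for (let i = 0; i < 784; i++) {
    const v = W1[i][j] / maxAbs;             // normalized to [-1, 1]
    const p = i * 4;
    img.data[p]     = v > 0 ? 255 * v : 0;   // positive weights -> red
    img.data[p + 2] = v < 0 ? -255 * v : 0;  // negative weights -> blue
    img.data[p + 1] = 0;
    img.data[p + 3] = 255;                   // fully opaque
  }
  ctx.putImageData(img, 0, 0);
}
```

Keeping the canvas at its native 28×28 and upscaling with CSS (`image-rendering: pixelated`) keeps each weight a crisp square; the same loop works for the |∂L/∂W₁| heatmaps by substituting gradient magnitudes for weights.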