
Gradient Descent from Scratch in Python

Vanilla SGD, momentum, RMSProp, and Adam implemented from scratch in NumPy, with a shared loss-surface visualisation so you can watch how each algorithm approaches the minimum.

machine-learning · optimization · deep-learning

Four optimisers, one loss surface — SGD, momentum, RMSProp, and Adam in plain NumPy with convergence plots.
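To set the baseline the other three optimisers build on, here is a minimal sketch of the vanilla gradient-descent update in NumPy. The toy quadratic loss, the function names, and the hyperparameters are illustrative assumptions, not the article's own code.

```python
import numpy as np

def loss(theta):
    # Toy quadratic bowl, minimum at the origin (illustrative only;
    # the article's shared loss surface may differ).
    return 0.5 * np.sum(theta ** 2)

def grad(theta):
    # Analytic gradient of the quadratic bowl above.
    return theta

def sgd_step(theta, lr=0.1):
    # Vanilla gradient descent: theta <- theta - lr * grad(theta).
    return theta - lr * grad(theta)

theta = np.array([2.0, -1.5])
for _ in range(50):
    theta = sgd_step(theta)

print(theta)  # approaches the minimum at [0, 0]
```

Momentum, RMSProp, and Adam each modify this single update line, by accumulating past gradients, rescaling per-parameter step sizes, or doing both, which is what the shared loss-surface plots make visible.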