"๐—•๐˜‚๐—ถ๐—น๐—ฑ๐—ถ๐—ป๐—ด ๐˜€๐˜๐—ฟ๐—ผ๐—ป๐—ด ๐—ณ๐—ผ๐˜‚๐—ป๐—ฑ๐—ฎ๐˜๐—ถ๐—ผ๐—ป๐˜€ ๐—ฏ๐—ฒ๐—ณ๐—ผ๐—ฟ๐—ฒ ๐—ฏ๐˜‚๐—ถ๐—น๐—ฑ๐—ถ๐—ป๐—ด ๐—ฏ๐—ถ๐—ด ๐—บ๐—ผ๐—ฑ๐—ฒ๐—น๐˜€"

In September 2025, I completed Luis Serrano's Mathematics for Machine Learning specialization, which gave me a solid mathematical foundation and sparked my interest in building a more research-oriented understanding of machine learning.

After that, I started exploring classical ML algorithms, from linear regression to k-Means, from decision boundaries to clustering, from training models to questioning how learning itself actually works. But something felt incomplete.

With libraries like scikit-learn, it's very easy to apply an algorithm, and just as easy to remain unaware of what is actually happening under the hood.

๐—™๐—ผ๐—ฟ ๐—ฒ๐˜…๐—ฎ๐—บ๐—ฝ๐—น๐—ฒ, when someone applies ๐—ธ-๐— ๐—ฒ๐—ฎ๐—ป๐˜€ ๐—ฐ๐—น๐˜‚๐˜€๐˜๐—ฒ๐—ฟ๐—ถ๐—ป๐—ด ๐˜‚๐˜€๐—ถ๐—ป๐—ด ๐˜€๐—ฐ๐—ถ๐—ธ๐—ถ๐˜-๐—น๐—ฒ๐—ฎ๐—ฟ๐—ป, they often donโ€™t realize that the algorithm is fundamentally solving a constrained optimization problem: it repeatedly updates cluster centroids to ๐—บ๐—ถ๐—ป๐—ถ๐—บ๐—ถ๐˜‡๐—ฒ ๐˜๐—ต๐—ฒ ๐˜๐—ผ๐˜๐—ฎ๐—น ๐˜„๐—ถ๐˜๐—ต๐—ถ๐—ป-๐—ฐ๐—น๐˜‚๐˜€๐˜๐—ฒ๐—ฟ ๐˜€๐˜‚๐—บ ๐—ผ๐—ณ ๐˜€๐—พ๐˜‚๐—ฎ๐—ฟ๐—ฒ๐—ฑ ๐—ฑ๐—ถ๐˜€๐˜๐—ฎ๐—ป๐—ฐ๐—ฒ๐˜€, a process grounded in ๐—น๐—ถ๐—ป๐—ฒ๐—ฎ๐—ฟ ๐—ฎ๐—น๐—ด๐—ฒ๐—ฏ๐—ฟ๐—ฎ, ๐—ฐ๐—ฎ๐—น๐—ฐ๐˜‚๐—น๐˜‚๐˜€, and ๐—ถ๐˜๐—ฒ๐—ฟ๐—ฎ๐˜๐—ถ๐˜ƒ๐—ฒ ๐—ผ๐—ฝ๐˜๐—ถ๐—บ๐—ถ๐˜‡๐—ฎ๐˜๐—ถ๐—ผ๐—ป.

Most high-level libraries collapse all of this mathematics into a single line of code, but real understanding lives in the math.

That's when I discovered the book "Mathematics of Machine Learning" by Tivadar Danka. This book is a goldmine that beautifully balances theoretical depth with practical machine learning intuition.

Iโ€™ve just completed Chapter 1, and to stay consistent and accountable, I created a public GitHub repository:

🔗 ML-Math-Bridge: https://github.com/msami-ullah-ai/ML-Math-Bridge

There Iโ€™ll be posting:

- Chapter-wise notes
- Python implementations of the mathematics
- Parallel Python projects

I'll be sharing this journey openly. If you're interested in the mathematics behind ML, feel free to follow along. Let's learn together!


This really resonates. A lot of people reach that point where using ML libraries starts to feel a bit like magic you donโ€™t fully control. Your shift toward understanding whatโ€™s happening beneath the abstractions makes total sense, especially if youโ€™re aiming for research-level depth. Framing algorithms like k-means as optimization problems grounded in math is where things truly click. Sharing your notes and code publicly is a great way to stay disciplined, and itโ€™ll definitely help others who want to bridge that same gap between theory and practice.


Math really isn't scary at all; I don't know why people run away from it.