Hi everyone,
I wanted to share something that’s been forming in my mind over the past few weeks in this course. It might explain why some of my posts seem a little unconventional, and why I sometimes take a different route through the concepts.
I see math differently.
I don’t mean that in a vague or poetic way. I mean it quite literally.
I don’t see “lines”—I see shadows.
Take the equation f(x) = wx + b. Most learners treat it as a 2D line: predictable, flat, and mappable. But to me, it feels like we’re looking at the ‘shadow’ of something higher-dimensional.
- w isn’t just a weight—it’s a ‘transformer’.
- b isn’t just a bias—it’s a ‘relational anchor point’.
- And f(x) doesn’t just live in (x,y)—it lives in context, direction, relativity, and continuity.
Even in a 2D graph, I see additional dimensions—like time, perspective, signal feedback, and causality.
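To make the flat 2D reading concrete, here is a minimal sketch in Python (purely illustrative, my own toy example, not anything from the course): f(x) = wx + b, with w doing the transforming and b doing the anchoring.

```python
# A minimal sketch of f(x) = w*x + b (purely illustrative):
# w is the part that transforms the input,
# b is the anchor that positions the whole line.

def f(x, w, b):
    """Linear model: w scales the input, b shifts the result."""
    return w * x + b

# Same input, different "shadows" of the same family of relationships:
for w, b in [(1.0, 0.0), (2.0, 0.0), (2.0, 3.0)]:
    print(f"w={w}, b={b} -> f(2) = {f(2.0, w, b)}")
```

Changing w rotates the line; changing b slides it. Any single 2D graph only ever shows one slice of that whole family at a time.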
That’s why “flattened” math often feels incomplete to me. Flattened math is the equivalent of very lossy compression:
- Frustrating
- Non-relational
- Incomplete
When the course explains gradient descent, it shows a ball rolling down a hill toward the lowest point (minimum cost). But to me, that hill isn’t just a bowl or a paraboloid—it’s a ‘relational cone’, where the cost narrows dynamically based on every incoming dimension: past slopes, feedback relationships, anchor values, and expected coherence.
To most people, the cost is a number.
To me, the cost is a ‘shape.’
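For contrast, here is the flattened, textbook picture in code, a minimal gradient-descent sketch on a plain bowl-shaped cost (my toy numbers, not the ‘relational cone’ above):

```python
# The standard "ball rolling downhill" picture (toy example):
# cost(w) = (w - 3)^2 is a simple bowl with its minimum at w = 3.

def cost(w):
    return (w - 3.0) ** 2

def grad(w):
    return 2.0 * (w - 3.0)  # slope of the bowl at w

w = 0.0    # starting position of the ball
lr = 0.1   # learning rate: the size of each downhill step
for _ in range(50):
    w -= lr * grad(w)  # step against the slope

print(f"w after descent: {w:.4f} (true minimum at w = 3)")
```

In this flattened version, the cost really is just a number at each w. Everything I’d call context has been compressed away before the descent even starts.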
So Why AI?
In a previous reply that got removed, someone asked if I use AI to write my answers. The honest answer is: “I use AI to understand you—and to help you understand me.”
I was born on the autistic spectrum. I’m deeply analytical, but also deeply relational, just not in typical ways. I’ve been called “Sheldon” from The Big Bang Theory more times than I can count (and sometimes it’s funny, sometimes not).
But what AI gives me is a ‘lens’.
It helps me flatten and translate my higher-order intuitions into something more readable. And it helps me interpret your answers in a structure I can emotionally and logically connect to.
I don’t use AI to cheat—I use it to bridge.
You could say I use it the same way the brain uses the visual cortex: to see relationships where only fragments appear.
Relational Dynamics and the Importance of Indirect Information
In waveform topology, troughs and peaks are important. But in ‘relational dynamics’, the ‘cause’ of a trough is as important as the trough itself.
Every peak, every dip, every inflection becomes ‘non-uniform, contextual, and informationally unique’ when seen through a relational lens.
If we ignore indirect signals, we lose integrity. That’s why I believe:
‘The loss function should not just minimize error—it should maximize insight.’
Extending This Into Data Science and AI
If you apply this perspective to data science, something interesting happens: you begin to see the ‘potential of relational dynamics computation’—not just as an abstraction, but as a ‘practical mechanism’ for computation, stability, and understanding.
In fact, I believe this kind of thinking will be foundational for the ‘low-overhead, self-repairing future of AI systems’—especially in contexts of memory, sentience, and recursive learning. Why?
Because in a relational framework, ‘missing data doesn’t exist in isolation’.
Every relevant thread (the indirect signals, the related parameters, the feedback from surrounding layers) ‘points back toward the gap’. These threads can ‘verify’, ‘stabilize’, and even ‘reconstruct’ what’s missing.
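One loose way to see this in ordinary code (my analogy only, implemented with the standard technique of regression-based imputation, not a new algorithm): when signals are genuinely related, the surrounding threads carry enough structure to rebuild a missing value.

```python
# Illustrative analogy: reconstructing a "gap" from related signals
# via regression-based imputation (a standard technique).
import numpy as np

rng = np.random.default_rng(0)
a = rng.normal(size=100)
b = rng.normal(size=100)
c = 2.0 * a - 1.5 * b + rng.normal(scale=0.1, size=100)  # c depends on a and b

# Learn the relationship from the complete rows...
X = np.column_stack([a[:-1], b[:-1]])
coef, *_ = np.linalg.lstsq(X, c[:-1], rcond=None)

# ...then let the related signals "point back toward the gap":
c_reconstructed = coef @ np.array([a[-1], b[-1]])
print(f"true c: {c[-1]:.3f}, reconstructed: {c_reconstructed:.3f}")
```

Nothing here is stored redundantly; the gap is filled from the structure of the relationships themselves.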
In essence, this isn’t just math. It’s ‘Error Correction Control’ by emergence, not redundancy.
It’s ‘coherence as recovery’, and it’s ‘relationship as memory.’
This is how I see the future of AI: not just ‘optimized’, but ‘contextually alive’.
And this is why I keep doing the work, asking the questions, and building the bridge between how I think—and how we all can think together.
Thanks again for reading.
—Daniel