Hello, I am Mo, MSc Econometrics

Hello, I am Mo, MSc Econometrics. I am currently fumbling around with AI applications and want to know more about how they work, since I use AI a lot to be more productive in my workflow.

Very into federated learning, differential privacy, and digital twin technology. I would really love to take those technologies and turn them into something good, not harmful, since generative AI has been ruining a lot of what makes the internet so special.

Could you say a bit more about what these three items are?

Sorry for my vague intro, but let me elaborate in the context of behavioural economics and how this field fits into deep learning.

Federated Learning is like when four different study groups in the same class have to submit one final project. Each group has their own unique research materials and class notes they’ve built up over the semester. Rather than sharing all their materials directly (which they want to keep private), each group works on a shared Google Doc, but they only add their conclusions and key findings - never their raw notes or sources. Each group updates the shared document with what they’ve learned, then the next group sees those insights and builds on them with their own findings, and so on. By the end, they’ve created a strong final project that benefits from everyone’s work, but each group kept their original materials private.
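
If it helps to see the same idea in code, here is a minimal FedAvg-style toy sketch in the spirit of that analogy. Everything in it (the `local_update` function, the data, the numbers) is made up for illustration: each client fits a small model on its own private data, and only the fitted coefficients ever leave the client to be averaged.

```python
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])  # the "answer" all groups are trying to learn

def local_update(n_samples, global_w, lr=0.1, steps=50):
    # Client-side: the raw data X, y is generated and stays here (private notes).
    X = rng.normal(size=(n_samples, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=n_samples)
    w = global_w.copy()
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / n_samples  # squared-error gradient
        w -= lr * grad
    return w  # only the model coefficients leave the client

global_w = np.zeros(2)
for round_ in range(5):
    # Four "study groups" each train locally, then the server averages them.
    client_models = [local_update(100, global_w) for _ in range(4)]
    global_w = np.mean(client_models, axis=0)

print("aggregated coefficients:", global_w)  # close to the true [2, -1]
```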

Differential Privacy is like when you’re trying to figure out what your friend wants for their birthday by asking other friends, but in a way that nobody can trace back who said what. It’s about getting useful information while making sure nobody can figure out exactly where it came from. However, in the real world this is quite costly and sometimes infeasible.
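
As a rough illustration of the mechanics, here is a simplified Laplace-mechanism sketch (not production-grade differential privacy; the `dp_count` function and the data are invented): an aggregate question gets answered with calibrated noise, so the overall signal survives but no single answer can be pinned down.

```python
import numpy as np

rng = np.random.default_rng(1)

def dp_count(values, epsilon):
    # Counting query: e.g. 1 = "friend suggested a book", 0 = otherwise.
    true_count = sum(values)
    sensitivity = 1  # one person can change the count by at most 1
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

answers = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
print(dp_count(answers, epsilon=0.5))  # noisy, but still a useful aggregate
```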

When a government or an entity like, for example, the EU decides to introduce “new privacy laws” such as the GDPR, it has to optimize society’s privacy while considering two things (a stylized version of this trade-off is sketched just after the list):

  1. Opportunity costs - the relationship between privacy levels and economic output (GDP per capita)
  2. Implementation costs of providing privacy protections
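
To make that trade-off a little more concrete, here is one stylized way to write it down. The functions B, O, and C are my own illustrative notation, not anything taken from the regulation itself: the regulator picks the privacy level where the marginal benefit of extra privacy equals its marginal opportunity plus implementation cost.

```latex
% Illustrative only: a stylized regulator's objective.
% p = chosen privacy level, B(p) = social benefit of privacy,
% O(p) = forgone output (opportunity cost), C(p) = implementation cost.
\max_{p}\; W(p) = B(p) - O(p) - C(p)
\qquad\Longrightarrow\qquad
B'(p^{*}) = O'(p^{*}) + C'(p^{*})
```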

The challenge isn’t just technical - it has become more complex with the rise of noisy data. Social media platforms like Reddit (which have become a sort of safe haven for genuine opinion and non-GAI content, given Google’s heavy advertising practices on its search engine) can amplify uninformed opinions, creating increasingly noisy datasets. This poses problems for companies like Google whose business models rely on high-quality data. The deteriorating quality of data creates a negative feedback loop where it becomes harder to distinguish signal from noise, whether that comes from social media, chatbots, or other sources. Meanwhile, simplistic narratives about “evil tech companies” from daily Redditors miss the deeper economic trade-offs that governments face in balancing societal benefits, individual rights, and practical implementation costs.

Digital Twins are like having a video game version of something real - a virtual copy of a factory or engine. It’s connected to the real thing through sensors, so whatever happens to the real thing happens to the virtual copy too. This makes it great for testing changes or spotting problems before they affect the actual system - like having a practice version before modifying the real thing. A very basic example of this would be running a virtual sandbox on a desktop and browsing “naked” inside it. Once you’re done browsing for whatever you were trying to find or consume, you can close the sandbox instance and all the data is forgotten, since the sandbox is isolated from the real system and its state is simply discarded.
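
A minimal sketch of the core idea, with a made-up `EngineTwin` class (names and numbers are purely illustrative): the twin mirrors sensor readings from the physical engine and lets you trial a change virtually before touching the real system.

```python
class EngineTwin:
    def __init__(self):
        # Virtual state of the engine, kept in sync with the physical one.
        self.state = {"rpm": 0.0, "temperature": 20.0}

    def sync(self, sensor_reading):
        # Mirror whatever the physical sensors report.
        self.state.update(sensor_reading)

    def simulate(self, rpm_increase):
        # Trial a change on the twin; the real engine is untouched.
        return self.state["temperature"] + 0.01 * rpm_increase

twin = EngineTwin()
twin.sync({"rpm": 3000.0, "temperature": 85.0})
if twin.simulate(rpm_increase=500) < 95.0:
    print("Predicted safe: apply the change to the real engine.")
else:
    print("Predicted overheating: do not apply the change.")
```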

I would also be interested to know - in another life I trained to be an economist and took econometrics in 2004 and 2006, and aside from advanced regression, the furthest we got was GMM (which, honestly, I hope I never have to see again). Since then, has the field started to incorporate elements of deep learning/ML, or not yet?

Good question! Having studied at Erasmus (Netherlands), I’d say econometrics as a field still maintains its core foundation in economics and statistics - this hasn’t changed. The rigorous statistical methods and economic theory that form its backbone remain essential. However, what’s interesting is how it has been evolving with new technological capabilities.

Over the past decade or so (as new courses and electives were introduced at our faculty), we’ve seen machine learning gradually make its way into econometric analysis, though perhaps not as rapidly as in fields like computer science or pure data science. During my Master’s program in quantitative marketing and business analytics, I got to experience this integration firsthand in a few key ways:

During an internship I worked extensively with machine learning methods for prediction problems, particularly in cases where traditional econometric approaches might struggle. Think about situations with high-dimensional data or really complex nonlinear relationships - like when you’re trying to predict consumer behavior across thousands of products with countless variables, or when you’re analyzing financial market movements that don’t follow nice, neat linear patterns. Random forests and neural networks proved incredibly useful for these kinds of challenges. Hence my interest in deep learning (primarily neural networks).
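
For a flavour of what that looks like in practice, here is a small example with scikit-learn on made-up data (not anything from the internship): a nonlinear, threshold-style relationship that a plain linear regression would struggle with, handled comfortably by a random forest.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
# Five arbitrary "drivers" (think price, promotion intensity, etc.) and a
# response that mixes a sine wave, an interaction, and a threshold effect.
X = rng.uniform(0, 10, size=(2000, 5))
y = np.sin(X[:, 0]) * X[:, 1] + (X[:, 2] > 5) * 3 + rng.normal(scale=0.5, size=2000)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print("test R^2:", round(model.score(X_test, y_test), 3))  # nonlinear signal captured
```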

But what’s really fascinating is how the field is developing techniques that bridge machine learning with causal inference - it’s this bridge that lets me understand the application behind deep learning. While pure machine learning - as in computer science - is great at finding patterns and making predictions in digitized data, it often doesn’t address the “why” questions that are so central to economics. We want to know not just what will happen, but why it happens and how different factors cause specific outcomes.
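
One concrete version of that bridge is the “partialling-out” idea behind double machine learning. The sketch below uses simulated data and a bare-bones setup (cross-fitting via out-of-fold predictions, no standard errors), so treat it as illustrative rather than a faithful implementation: flexible ML strips the confounders out of both the treatment and the outcome, and the causal effect is read off a residual-on-residual regression.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_predict

# Simulated data: X are confounders, D is a "treatment" (e.g. a price change),
# Y is the outcome. The true causal effect of D on Y is 2.
rng = np.random.default_rng(7)
n = 5000
X = rng.normal(size=(n, 10))
D = np.sin(X[:, 0]) + 0.5 * X[:, 1] + rng.normal(size=n)
Y = 2.0 * D + np.cos(X[:, 0]) + X[:, 2] ** 2 + rng.normal(size=n)

# Partial out the confounders with out-of-fold (cross-fitted) random forests,
# then regress the outcome residual on the treatment residual.
D_hat = cross_val_predict(RandomForestRegressor(random_state=0), X, D, cv=2)
Y_hat = cross_val_predict(RandomForestRegressor(random_state=0), X, Y, cv=2)
theta = LinearRegression().fit((D - D_hat).reshape(-1, 1), Y - Y_hat).coef_[0]
print("estimated causal effect:", round(theta, 2))  # should land near 2
```

Packages such as DoubleML and EconML implement this idea properly, with full cross-fitting and valid inference.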

When it comes to AI, I’ve come to interpret it not as a standalone science but rather as a convergence of different scientific disciplines. It’s essentially - speaking from my own flawed interpretation - applying sophisticated linguistics and statistical methods through computational systems. It’s like a melting pot where computer science, linguistics, statistics, and even elements of cognitive science all come together.

To stay current with all these developments, I make it a point to read academic articles and share my insights on communities like deeplearning.ai.

This is interesting and I will have to think more about this to provide you with a better response. I did my undergraduate studies at UMass Amherst and always heard ‘rumors’ that they were ‘crazy communists’.

I had no idea what this meant until I tried grad school - much more in the MIT model - and I saw the difference. It becomes about the ‘equations’, and ‘people’ become irrelevant. It feels more like herding cattle. Personally, I feel there must be a balance and mix, and in the modern age I feel there is actually some way we can achieve that.

Until I am able to come up with a better response to your inquiry, I’d highly recommend you read ‘Weapons of Math Destruction’ by Cathy O’Neil. Some of the points are very obvious to people like us, but I feel it is also great instruction in what not to do, which sometimes we only realize after we have failed.

TTYL.

Thank you for the recommendation. Quite a horror story.

I too have worries about the future of AI. The technology has its limitations, and the law of diminishing returns applies to it as well.

This is the result of abstracting everything away from technology in the pursuit of convenience. There’s always a trade-off.