Hello, I am Mo, MSc Econometrics. I am currently fumbling around with AI applications and want to know more about how they work, since I use AI a lot to be more productive in my workflow.
Very into federated learning, differential privacy, and digital twin technology. I would really love to take those technologies and turn them into something good - not harmful - since generative AI (GAI) has been ruining a lot of what makes the internet so special.
Sorry for the vague intro, but let me elaborate in the context of behavioural economics and how this field fits into deep learning.
Federated Learning is like when four different study groups in the same class have to submit one final project. Each group has their own unique research materials and class notes they've built up over the semester. Rather than sharing all their materials directly (which they want to keep private), each group works on a shared Google Doc, but they only add their conclusions and key findings - never their raw notes or sources. Each group updates the shared document with what they've learned, then the next group sees those insights and builds on them with their own findings, and so on. By the end, they've created a strong final project that benefits from everyone's work, but each group kept their original materials private.
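To make the analogy a bit more concrete, here is a minimal sketch of one standard flavor of this idea (federated averaging) in plain NumPy. All data and numbers are simulated purely for illustration: each "study group" (client) fits a tiny linear model on its own private data and shares only the learned weights, and the server averages those weights into a global model.

```python
# Minimal federated-averaging (FedAvg) sketch with simulated data.
# Each client trains locally on private data; only weight vectors are shared.
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])

def make_client_data(n=50):
    X = rng.normal(size=(n, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=n)
    return X, y

clients = [make_client_data() for _ in range(4)]  # four private datasets

def local_update(w, X, y, lr=0.1, epochs=20):
    """Plain gradient descent on the client's private data."""
    w = w.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

w_global = np.zeros(2)
for round_ in range(10):
    # Each client starts from the current global weights and trains locally.
    local_ws = [local_update(w_global, X, y) for X, y in clients]
    # The server only sees the weight vectors, never the raw data.
    w_global = np.mean(local_ws, axis=0)

print("federated estimate:", w_global)  # converges near the true [2, -1]
```

Real systems add things like secure aggregation and weighting by client data size, but the core loop is the same: local training, share updates, average.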
Differential Privacy is like when you're trying to figure out what your friend wants for their birthday by asking other friends, but in a way that nobody can trace back who said what. It's about getting useful information while making sure nobody can figure out exactly where it came from. However, in the real world this is quite costly and sometimes infeasible.
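As a rough illustration, here is a minimal Laplace-mechanism sketch (the budget numbers and bounds are invented): we release the group's average answer with calibrated noise, so the aggregate stays useful while no single friend's answer can be pinned down. It also shows the cost mentioned above: more privacy (smaller epsilon) means a noisier, less useful answer.

```python
# Minimal Laplace-mechanism sketch: release a differentially private mean.
import numpy as np

rng = np.random.default_rng(42)
budgets = np.array([20.0, 35.0, 50.0, 25.0, 40.0])  # private answers (made up)

def dp_mean(values, epsilon, lower=0.0, upper=100.0):
    """Differentially private mean via the Laplace mechanism.

    With values clipped to [lower, upper], changing one person's answer
    moves the mean by at most (upper - lower) / n, so that is the sensitivity.
    """
    clipped = np.clip(values, lower, upper)
    sensitivity = (upper - lower) / len(clipped)
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
    return clipped.mean() + noise

print("epsilon=1.0:", dp_mean(budgets, epsilon=1.0))  # useful, mildly noisy
print("epsilon=0.1:", dp_mean(budgets, epsilon=0.1))  # more private, much noisier
```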
When a government or entity such as the EU decides to introduce new privacy laws like the GDPR, it must optimize society's privacy while considering (see the toy sketch after this list):
Opportunity costs - the relationship between privacy levels and economic output (GDP per capita)
Implementation costs of providing privacy protections
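Purely as a toy illustration of that tradeoff - every functional form and number below is invented, not an actual policy model - the choice can be framed as picking the privacy level that maximizes the benefit of privacy net of both kinds of cost:

```python
# Toy welfare-maximization sketch of the privacy tradeoff (illustrative only).
import numpy as np

privacy = np.linspace(0.0, 1.0, 101)        # 0 = no protection, 1 = maximal

benefit = 100 * np.log1p(privacy)           # societal value of privacy, diminishing returns
opportunity_cost = 40 * privacy**2          # foregone economic output as rules tighten
implementation_cost = 10 * privacy          # cost of actually enforcing the rules

welfare = benefit - opportunity_cost - implementation_cost
best = privacy[np.argmax(welfare)]
print(f"welfare-maximizing privacy level (toy model): {best:.2f}")
```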
The challenge isn't just technical - it's become more complex with the rise of noisy data. Social media platforms like Reddit (which has become a sort of safe haven for genuine opinion and non-GAI content, given Google's heavy advertising practices on its search engine) can amplify uninformed opinions, creating increasingly noisy datasets. This poses problems for companies like Google whose business models rely on high-quality data. The deteriorating quality of data creates a negative feedback loop where it becomes harder to distinguish signal from noise, whether that's from social media, chatbots, or other sources. Meanwhile, simplistic narratives about "evil tech companies" from daily Redditors miss the deeper economic tradeoffs that governments face in balancing societal benefits, individual rights, and practical implementation costs.
Digital Twins are like having a video game version of something real - a virtual copy of a factory or engine. It's connected to the real thing through sensors, so whatever happens to the real thing happens to the virtual copy too. This makes it great for testing changes or spotting problems before they affect the actual system - like having a practice version before modifying the real thing. A very basic example of this would be running a virtual sandbox on a desktop and browsing "naked" inside that sandbox. Once you are done browsing for whatever you were trying to find or consume, you can close the sandbox instance and all of its data is discarded, since the sandbox's state is kept isolated and never persisted to the host system.
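Here is a toy sketch of that "virtual copy synced by sensors" idea. The class names and the crude thermal model are invented for illustration: the twin mirrors readings from a simulated engine, and we test a risky setting on the twin before touching the "real" one.

```python
# Toy digital-twin sketch: mirror sensor readings, run what-if tests on the copy.
import copy

class Engine:
    """Stands in for the physical asset; in practice this is real hardware."""
    def __init__(self):
        self.rpm = 1000
        self.temp_c = 70.0

    def step(self, load):
        # crude thermal model: more load -> higher rpm and temperature
        self.rpm = 1000 + 500 * load
        self.temp_c += 5.0 * load - 0.5  # cools slightly when lightly loaded
        return {"rpm": self.rpm, "temp_c": self.temp_c}

class DigitalTwin:
    """Keeps a virtual copy in sync with sensor readings from the real engine."""
    def __init__(self):
        self.state = {"rpm": 1000, "temp_c": 70.0}

    def sync(self, sensor_reading):
        self.state = dict(sensor_reading)

    def simulate(self, load, steps=10):
        """Run what-if steps on a copy of the mirrored state only."""
        sim = copy.deepcopy(self.state)
        for _ in range(steps):
            sim["temp_c"] += 5.0 * load - 0.5
        return sim

real = Engine()
twin = DigitalTwin()

for _ in range(5):                      # normal operation: twin stays in sync
    twin.sync(real.step(load=0.2))

forecast = twin.simulate(load=0.9)      # test a risky setting on the twin first
print("predicted temp at high load:", round(forecast["temp_c"], 1), "C")
print("real engine untouched at:", round(real.temp_c, 1), "C")
```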
I would also be interested to know -- in another life I trained to be an economist and took econometrics in 2004 and 2006, and aside from advanced regression the furthest we got was GMM (which, honestly, I hope I never have to see again). Since then, has the field started to incorporate elements of deep learning/ML, or not yet?
Good question!! Speaking from my time studying at Erasmus (Netherlands)... Econometrics as a field still maintains its core foundation in economics and statistics - this hasn't changed. The rigorous statistical methods and economic theory that form its backbone remain essential. However, what's interesting is how it's been evolving with new technological capabilities.
Over the past decade or so (as new courses and electives were introduced at our faculty), we've seen machine learning gradually making its way into econometric analysis, though perhaps not as rapidly as in fields like computer science or pure data science. During my Master's program in quantitative marketing and business analytics, I got to experience this integration firsthand in a few key ways:
I worked extensively during an internship with machine learning methods for prediction problems, particularly in cases where traditional econometric approaches might struggle. Think about situations with high-dimensional data or really complex nonlinear relationships - like when you're trying to predict consumer behavior across thousands of products with countless variables, or when you're analyzing financial market movements that don't follow nice, neat linear patterns. Random forests and neural networks proved incredibly useful for these kinds of challenges. Hence my interest in deep learning (primarily neural networks) has emerged.
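A small sketch of that point, on simulated data rather than anything from the internship: when the outcome depends on interactions and thresholds instead of a neat linear index, a random forest can out-predict a plain linear regression out of sample.

```python
# Simulated comparison: OLS vs. random forest on a nonlinear outcome.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(2000, 5))
# nonlinear "demand": an interaction and a threshold, not a linear index
y = np.sin(X[:, 0]) * X[:, 1] + (X[:, 2] > 1) * 2.0 + rng.normal(scale=0.3, size=len(X))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

ols = LinearRegression().fit(X_tr, y_tr)
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)

print("OLS R^2:", round(r2_score(y_te, ols.predict(X_te)), 3))
print("RF  R^2:", round(r2_score(y_te, rf.predict(X_te)), 3))
```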
But what's really fascinating is how the field is developing techniques that bridge machine learning with causal inference - it's this bridge which lets me understand the application behind deep learning. While pure machine learning - as in computer science - is great at finding patterns and making predictions in digitalized data, it often doesn't address the "why" questions that are so central to economics. We want to know not just what will happen, but why it happens and how different factors cause specific outcomes.
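One way to see that bridge in miniature, in the spirit of double/debiased machine learning: use a flexible learner to partial observed confounders out of both the treatment and the outcome, then regress residual on residual to recover the causal effect. The data below is simulated, the nuisance model choice is arbitrary, and a serious application would involve more care (e.g. inference, sample splitting schemes), but the orthogonalization idea is the core of it.

```python
# Sketch of a double/debiased-ML style estimate on simulated data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(7)
n = 5000
X = rng.normal(size=(n, 5))                 # observed confounders
g = np.sin(X[:, 0]) + X[:, 1] ** 2          # nonlinear confounding
treatment = g + rng.normal(size=n)          # e.g. a price or policy exposure
true_effect = 1.5
outcome = true_effect * treatment + g + rng.normal(size=n)

# Step 1: cross-fitted ML predictions of treatment and outcome from confounders.
rf = RandomForestRegressor(n_estimators=100, random_state=0)
t_hat = cross_val_predict(rf, X, treatment, cv=5)
y_hat = cross_val_predict(rf, X, outcome, cv=5)

# Step 2: regress the outcome residual on the treatment residual.
t_res = (treatment - t_hat).reshape(-1, 1)
y_res = outcome - y_hat
effect = LinearRegression().fit(t_res, y_res).coef_[0]
print("estimated causal effect:", round(effect, 2))
print("simulated true effect  :", true_effect)
```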
When it comes to AI, I've come to interpret it not as a standalone science but rather as a convergence of different scientific disciplines. It's essentially - speaking from my own flawed interpretation - applying sophisticated linguistics and statistical methods through computational systems. It's like a melting pot where computer science, linguistics, statistics, and even elements of cognitive science all come together.
To stay current with all these developments, I make it a point to read academic articles and share my insights on communities like deeplearning.ai.
This is interesting and I will have to think more about this to provide you with a better response. I did my undergraduate studies at UMass Amherst and always heard "rumors" that they were "crazy communists".
I had no idea what this meant until I tried grad school for this -- much more in the MIT model -- and I saw the difference. It becomes about the "equations" and "people" become irrelevant. It feels more like herding cattle. Personally, I feel there must be a balance and mix, and in the modern age I feel there is actually some way we can achieve that.
Until I am able to come up with a better response to your inquiry, I'd highly recommend you read "Weapons of Math Destruction" by Cathy O'Neil. Some of the points are very obvious to people like us, but I feel it is also great instruction in what not to do, which sometimes we only realize after we have failed.