Removing Bias in Supervised Learning vs. Unsupervised Learning

I just completed the first 3 weeks of coursework on an AI course from Professor Andrew Ng. One of the chapters in Week 4 is about bias. An example was shared about an analogy between men and women (i.e. "Man is to Engineer as Woman is to ...", where, based on the live data, the machine can pick "Homemaker"). What the course does not go into in depth is how deep learning researchers and engineers remove such bias from the ongoing learning cycle, so that these deep learning applications improve and produce unbiased outputs.
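For context, the analogy in question is typically computed with word-vector arithmetic: take the vector for "engineer", subtract "man", add "woman", and return the nearest remaining word. A toy sketch with made-up 2-dimensional vectors (real embeddings have hundreds of dimensions, and these numbers are purely illustrative):

```python
import numpy as np

# Toy word vectors, invented for illustration only. The first dimension
# loosely encodes a gender direction, the second a "profession" direction.
vecs = {
    "man":       np.array([ 1.0, 0.0]),
    "woman":     np.array([-1.0, 0.0]),
    "engineer":  np.array([ 0.9, 1.0]),
    "homemaker": np.array([-0.9, 1.0]),
    "doctor":    np.array([ 0.0, 1.0]),
}

def analogy(a, b, c):
    """Solve 'a is to b as c is to ?': compute b - a + c, then return the
    nearest other word by cosine similarity."""
    target = vecs[b] - vecs[a] + vecs[c]
    return max(
        (w for w in vecs if w not in (a, b, c)),
        key=lambda w: vecs[w] @ target
                      / (np.linalg.norm(vecs[w]) * np.linalg.norm(target)),
    )

analogy("man", "engineer", "woman")  # picks "homemaker" with these toy vectors
```

The biased answer here comes entirely from the geometry the training data produced, which is why debiasing efforts target either the data or the learned vectors.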

I thought I would ask the community here for some additional guidance. By the way, Professor Andrew Ng's course is amazing for newbies and non-technical students.


Can you share the link or the video you are referring to? You didn't mention which specialisation or course you are talking about.



It was a Coursera course called "AI for Everyone" by Professor Andrew Ng.


At the core of training any supervised machine learning system is comparing a system's prediction to what is labeled as correct and measuring the error, or distance, between the two. The designer has control over both what is offered as the correct value and the method for computing the error. Each of these can be used to influence bias.
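A minimal sketch of this idea, with invented numbers: the designer supplies the labels and the error function, and can also reweight examples so that errors on an under-represented group count for more during training.

```python
import numpy as np

def weighted_mse(predictions, labels, weights):
    """Mean squared error where each example carries a designer-chosen weight."""
    errors = (predictions - labels) ** 2
    return np.average(errors, weights=weights)

preds  = np.array([0.9, 0.2, 0.6, 0.4])
labels = np.array([1.0, 0.0, 1.0, 0.0])  # the designer decides what counts as "correct"

# Uniform weights: every example counts equally.
uniform = weighted_mse(preds, labels, np.ones(4))       # 0.0925

# Upweight the last two examples (say, an under-represented group)
# so their errors pull harder on the model during training.
reweighted = weighted_mse(preds, labels, np.array([1.0, 1.0, 3.0, 3.0]))  # 0.12625
```

The reweighted loss is larger here because the upweighted examples happen to have the biggest errors; an optimizer minimizing this loss would therefore work hardest on exactly those examples.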


Hi @ujuthani

Sorry for the late reply. One of the mentors has already responded with one way in which bias arising from the data is addressed.

As far as I remember, the specialisation "AI for Everyone" is taught from a beginner's point of view and only covers some important aspects of AI and deep learning. To understand how bias is addressed between two datasets, or from input to output, one first needs to understand whether the data in hand is being used and applied correctly in a machine learning algorithm or a neural network.

This starts with the training output. Take the example you used: "Man is to Engineer as Woman is to Homemaker", based on whatever live data was used. Chances are the live-data audience contained more males than females, or more people born in the 1960s and 1970s, who would have their own mindset.

In such a scenario, to get an unbiased output for "Woman is to ...", the live data collection can take the following steps:

  1. Balance the male-to-female ratio in the live data, or increase the amount of live audience data irrespective of gender, race, or age group.
  2. Collect live survey data across all age groups, and then make the prediction for "Woman is to ..." from every class, creed, and community.

This is one way to address it, but deep learning tries to aggregate all of this live data and these considerations to get the best accuracy while keeping the outcome realistic and grounded (which in deep learning terms means keeping the loss minimal).
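Step 1 above can be sketched in code. This is a hypothetical illustration, not the course's method: before training, oversample the under-represented group so every group contributes equally to the dataset.

```python
import random

def balance_by_group(records, group_key):
    """Oversample minority groups so every group reaches the size of the largest."""
    groups = {}
    for rec in records:
        groups.setdefault(rec[group_key], []).append(rec)
    target = max(len(members) for members in groups.values())
    balanced = []
    for members in groups.values():
        balanced.extend(members)
        # Randomly duplicate members until this group reaches the target size.
        balanced.extend(random.choices(members, k=target - len(members)))
    return balanced

# Invented skewed dataset: 8 male records, 2 female records.
data = ([{"gender": "M", "job": "engineer"}] * 8
        + [{"gender": "F", "job": "engineer"}] * 2)
balanced = balance_by_group(data, "gender")
# Both groups now contribute 8 records each (16 total).
```

Oversampling is the simplest choice here; in practice one might instead undersample the majority group or apply per-example loss weights, which avoids duplicating records.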

I highly encourage you to try our Deep Learning Specialisation, which you will surely not regret.

Welcome to deep learning!