One of the questions on the Reinforcement Learning introduction quiz in C3W3 of the ML Specialization asks the following:

You are using reinforcement learning to fly a helicopter. Using a discount factor of 0.75, your helicopter starts in some state and receives rewards -100 on the first step, -100 on the second step, and 1000 on the third and final step (where it has reached a terminal state). What is the return?

Based on the description, the following is my interpretation of it:

If you keep watching until 1:25, you will find that 4 rewards are listed. And if you map the narration to each of those rewards, you should see the following mappings:

0 → from state 4 you go to the left; we saw that the reward you get would be zero on the first step from state 4
0 → zero from state 3
0 → zero from state 2
100 → 100 at state 1, the terminal state

Now, going back to the quiz, “your helicopter starts in some state and receives rewards -100 on the first step”: following the logic above, the grid cell for “some state” should have a -100, and the grid cell for the next state should have another -100. I think this addresses the difference from your understanding.
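To make this interpretation concrete, here is a minimal sketch (not from the course materials) of the return as it is being read here: the reward of the starting state gets exponent 0, so it is not discounted, and each later reward picks up one more power of the discount factor.

```python
# Sketch: return G = sum over t of gamma^t * r_t,
# where t = 0 is the starting ("some") state, so its reward is undiscounted.
def discounted_return(rewards, gamma):
    return sum((gamma ** t) * r for t, r in enumerate(rewards))

# Quiz numbers: rewards -100, -100, 1000 with gamma = 0.75
print(discounted_return([-100, -100, 1000], 0.75))  # -100 - 75 + 562.5 = 387.5
```

The helper name `discounted_return` is just illustrative; the point is that the -100 of the starting state enters the sum with coefficient 0.75^0 = 1.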

Also, I want to point out that the explanation is actually quite a good one. Let me explain. We assign increasing powers of the discount factor as coefficients to the sequence of rewards. The increasing powers make the discounting stronger in later steps of the sequence, and this already achieves our goal: we want to penalize future rewards so that, among all possible paths to the destination, the shortest one wins. However, is it useful at all to discount the first reward? At least in the examples shown in the lectures and in this quiz, it is not useful at all. Of course I understand we want to stick with the definition, and my response before this paragraph has done that; this paragraph is just trying to give you a different perspective.
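The “shortest path wins” point can be checked numerically. This is a hedged sketch with made-up reward sequences, not a course example: two paths reach the same terminal reward, but the longer one picks up two extra powers of the discount factor.

```python
# Sketch: with gamma < 1, the same terminal reward is worth less
# when it is reached in more steps, so the shorter path wins.
def discounted_return(rewards, gamma):
    return sum((gamma ** t) * r for t, r in enumerate(rewards))

gamma = 0.75
short_path = [0, 0, 100]         # hypothetical: reward reached at step t = 2
long_path  = [0, 0, 0, 0, 100]   # same reward, reached two steps later

print(discounted_return(short_path, gamma))  # 100 * 0.75^2 = 56.25
print(discounted_return(long_path, gamma))   # 100 * 0.75^4 = 31.640625
```

Note that multiplying both returns by any single constant (such as an extra factor of gamma applied to every term) would not change which path wins, which is the perspective offered above about discounting the first reward.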

Hi @nauman
Welcome to the community!
In addition to what @rmwkwok said:

The reward on the first step is not multiplied by the discount factor; in other words, the power of the discount factor on step 1 is zero, so the factor equals 1, like in this image:

Correct me if I’m wrong, but I now understand that the question applies a -100 to the “some state”, which is assumed to be the starting state. I’m assuming that’s also why the starting state’s reward is not discounted. However, that leads me to two questions:

When it mentions “the first reward is always discounted”, does that refer to the reward associated with the starting state, or the state following the action?

If it is the former, then what is the purpose of a reward given to the starting state? Isn’t that redundant, considering that regardless of which action you take, that reward will always be included in every possible return?

The quiz says the correct solution is -100 - 0.75*100 + 0.75^2*1000. But if we’re moving to a “first step” that has a reward of -100, shouldn’t that reward be discounted by 0.75?

Starting state (no reward, no discount) → first step → second step → third step
0 - 0.75*100 - 0.75^2*100 + 0.75^3*1000
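For comparison, here is a small sketch (my own, not from the quiz) computing both readings side by side: the quiz’s, where the -100 belongs to step t = 0 and is undiscounted, and the alternative above, where a reward-less starting state shifts every reward one step later.

```python
# Sketch: two interpretations of where the discounting starts.
def discounted_return(rewards, gamma):
    return sum((gamma ** t) * r for t, r in enumerate(rewards))

gamma = 0.75
# Quiz reading: the first -100 is the reward at t = 0, so it is undiscounted.
quiz_reading = discounted_return([-100, -100, 1000], gamma)      # 387.5
# Alternative reading: starting state has no reward; everything shifts by one step.
alt_reading = discounted_return([0, -100, -100, 1000], gamma)    # 290.625

print(quiz_reading, alt_reading)
```

The two readings differ only by an overall factor of gamma applied to every term, which is why they give different numbers but would rank any two policies the same way.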

Why is that an incorrect answer according to the quiz? I applied the same logic to the next question in the quiz and got the correct answer:

Why should the value of the first step be -100*(0.75^{0})? In the course video example starting at 6:27, when starting from state 4 and ending at state 6:

Please check this, and I will continue with you, @nauman.

The starting state is the state you start from (here he says “the first step”, meaning the current step). You can get a reward from it, but without discount.

“the first reward is always discounted” — it is actually discounted with power 0, since the exponent of the discount factor in the first state is 0, i.e. 0.75^0 * (-100) = 1 * (-100).

@nauman
The starting state (the first step, as mentioned here) has no discount, because the first step should be -100*0.75^{0}, and 0.75^{0} = 1; the second step = -100*0.75^{1}; the third step = 1000*0.75^{2}, etc.

Please review what I updated, and sorry for the mistake.
Cheers,
Abdelrahman

The reason is that effort is made to reach each later step, so we discount the reward we get there to account for that effort, and the discounting compounds at every step. At the starting step, though, we made no effort, since we didn’t move; we just collect the reward, so the discount is raised to the power zero.

Also, I personally recommend reading this topic written by @Christian_Simonis. It talks about the intuition behind the discount factor in reinforcement learning:

The discount factor, 𝛾, is a real value ∈ [0, 1] that controls how much the agent cares about rewards in the present versus the future. Let’s explore the two following cases:

If 𝛾 = 0, the agent cares only about its first (immediate) reward.

If 𝛾 = 1, the agent weighs future rewards as heavily as immediate ones.

A reward you get as soon as possible is simply worth more than a reward in the future, especially the longer the horizon is and the higher the uncertainty becomes.

Note that the discounting concept is also known from finance, e.g. to bring future cash flows to a present value, considering opportunity cost. This concept is quite similar. Feel free to take a look.
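The finance analogy can be sketched in a few lines. This is a hedged illustration with made-up numbers: a future cash flow is divided by (1 + r)^n to get its present value, so the RL discount factor 𝛾 plays the role of 1/(1 + r).

```python
# Sketch of discounting in finance: present value of a future cash flow.
# With discount rate r, a cash flow C arriving in n periods is worth
# C / (1 + r)^n today; in RL terms, gamma corresponds to 1 / (1 + r).
def present_value(cash_flow, rate, periods):
    return cash_flow / (1 + rate) ** periods

# Hypothetical example: 1000 received in 3 years at a 5% discount rate.
print(round(present_value(1000, 0.05, 3), 2))  # 863.84
```

Just as in the RL case, the further away the payoff, the less it is worth today, which is exactly the intuition behind preferring rewards that arrive sooner.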