So, this question might be a slight aside (as there seems to be nothing directly related in the lab)-- yet I was still curious.

I understand the graphs in the lecture were used mostly to visually explain the concept. Still-- and also to make sure I understand the topic-- I was curious how they might be made programmatically.

I mean, for one, theta in the temperature-in-London example seems to be a scalar:

So, we have a relationship to Y in the graph; however, though the relationship to the particular time step is implied, X never fits directly into the equation (unless somehow the successive values of 'v' are our X? Though that would seem backward!?).

Further, if I am understanding this correctly, by successive applications of the formula:

If I am getting this right… what we are really forming is one *really, really* big polynomial-- though it could be programmatically a bit difficult to parse the whole resulting equation for every step of the graph.
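Writing out the first few steps of the recurrence V_t = \beta V_{t-1} + (1-\beta)\theta_t (assuming V_0 = 0), I believe the expansion would look something like:

V_t = (1-\beta)\,\theta_t + (1-\beta)\beta\,\theta_{t-1} + (1-\beta)\beta^2\,\theta_{t-2} + \cdots + (1-\beta)\beta^{t-1}\,\theta_1

i.e. a polynomial in \beta whose degree grows with every step.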

I know someone might just say "well, there are libraries that will do that for you"-- but if I wanted to do it myself, I am just trying to think through how that might be done with these sets of equations.


Well, you can work it out with a calculator or you can âŚ wait for it âŚ write a python program to compute that in a loop. Yes, it ends up being a high degree polynomial, but the point is youâre computing it one iteration at a time according to the algorithm they give you. You have a series of inputs and you compute the EWA at each step in the process.
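E.g., a minimal version of that loop might look like this (the temperature values here are just made up for illustration):

```python
# Illustrative values only -- not the actual dataset discussed in this thread
beta = 0.9
thetas = [65.0, 58.0, 60.0, 72.0, 70.0]

v = 0.0
ewa = []
for theta in thetas:
    # One iteration of the recurrence: v_t = beta * v_{t-1} + (1 - beta) * theta_t
    v = beta * v + (1 - beta) * theta
    ewa.append(v)

print(ewa)
```

No symbolic polynomial ever needs to be built; each step only needs the previous average and the current input.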


And of course the real high-level point here is that \beta is a value between 0 and 1, and we know what happens when we compute \beta^n for increasing values of n. Set \beta = 0.7 and give it a try: play it out on your calculator and watch what happens. And if memory serves, Prof Ng did an example like that in the lectures, didn't he?
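Or, instead of a calculator, a two-line loop makes the decay obvious:

```python
beta = 0.7
for n in range(1, 11):
    # Each successive power shrinks toward zero
    print(n, beta ** n)
```

Past n = 10 or so, the contribution of the oldest terms is already negligible.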


@paulinpaloalto, thanks.

And yes, he does.

In part I guess I am just not accustomed to graphing functions where your X and Y are not explicitly stated in the function. I mean, your 'Y' here is, and your X is easily inferred. But I just wasn't sure if I was 'missing' the X in the equation.

So with a Python program performing this, you'd have to force it out 'implicitly', I guess-- so \beta^0 = T_1 = X_1, \beta^1 = T_2 = X_2, etc.

Since this is somewhat of a 'feed-forward' process, I am also presuming you don't necessarily have to parse the entire dataset to start graphing-- or rather, the future state depends on the past, but not the other way around.
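As a sanity check for myself, a toy streaming version (my own sketch, not from the lab) would only ever need the current observation and the previous average:

```python
def ewa_stream(observations, beta=0.9):
    """Yield the running exponentially weighted average, one observation at a time."""
    v = 0.0
    for theta in observations:
        v = beta * v + (1 - beta) * theta
        yield v

# Each point can be plotted as soon as its observation arrives
for v in ewa_stream([65.0, 58.0, 60.0]):
    print(v)
```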


@paulinpaloalto so I decided to take your suggestion and, when I had a free moment, whipped something up in Python (I need to learn/practice Pandas anyway. I've done a *lot* of work with DataFrames in R, but Pandas is new to me-- similar, just different syntax).

My approach:

```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
# Specify year of weather to examine
# Note: Important !
# Due to the sequential nature of Exponentially Weighted Average algorithm
# some years in the dataset are missing data points-- Which will cause big
# problems ! How to deal with such cases is an interesting thought for perhaps
# a later improvement on the algorithm application.
# Set global options
year = 2018 # The database contains years 1979 - 2020 (2020 is missing data)
beta = 0.98
# Weather measurements recorded by a weather station near Heathrow airport in London, UK
# Source: https://www.kaggle.com/datasets/emmanuelfwerr/london-weather-data
url="https://drive.google.com/file/d/17L1frc514nVZrHmCBVoVQWp_52u_cmmx/view?usp=drive_link"
url='https://drive.google.com/uc?id=' + url.split('/')[-2]
london_weather = pd.read_csv(url)
# Add column converting daily mean temp to F
london_weather['Daily_Temp_F'] = london_weather['mean_temp'] * (9/5) + 32
# Convert 'date' column to Pandas Datetime format
london_weather['date'] = pd.to_datetime(london_weather['date'], format = "%Y%m%d")
# Subset out desired year to examine (copy to avoid SettingWithCopyWarning when writing Vt below)
london_weather = london_weather[london_weather['date'].dt.year == year].copy()
# Create a float column to store Vt values
london_weather['Vt'] = 0.0
# Set V to zero for first round
Vt_minus_one = 0
for i in range(london_weather['date'].count()):
# Get theta (aka daily temp)
theta = london_weather.iloc[i, london_weather.columns.get_loc('Daily_Temp_F')]
# Calc Vt
Vt = beta * Vt_minus_one + (1 - beta) * theta
# Store Vt (kept as a float -- casting to int would discard the rounded decimals)
london_weather.iloc[i, london_weather.columns.get_loc('Vt')] = np.round(Vt, 3)
# Set Vt_minus_one current value of Vt for next run
Vt_minus_one = Vt
# Chart stuff
fig, ax = plt.subplots()
ax.set_title("Daily Temps in London: " + str(year) + " - β: " + str(beta))
fig.suptitle("DeepLearning.AI: Exponentially Weighted Average Example")
london_weather.plot.scatter(x = "date", y = "Daily_Temp_F", ax = ax)
ax.set_ylabel("Daily Temp °F")
london_weather.plot(x = "date", y = "Vt", color = "red", ax = ax)
ax.set_xlabel("Date")
plt.show()
```

I understand now the answer to my own question. Watching Prof. Ng's example in the video, the curve seemed so smooth I thought some sort of interpolation was going on-- but no, it is just a side effect of having a very high β on a small enough graph with enough data points to give that illusion.

Note: I did not bother to implement bias correction-- not trying to reinvent the wheel here; there are libraries for that. Just wanted to understand.
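(Though for reference, if I am reading the lecture right, the correction would only be one extra line-- dividing each V_t by 1 - β^t. A toy sketch with made-up temperatures, not the Kaggle data:)

```python
beta = 0.98
v = 0.0
raw, corrected = [], []
for t, theta in enumerate([60.0, 62.0, 58.0], start=1):
    v = beta * v + (1 - beta) * theta
    raw.append(v)
    # Bias correction from the lecture: divide by (1 - beta^t)
    corrected.append(v / (1 - beta ** t))
print(raw)
print(corrected)
```

It mostly matters early on, when the uncorrected values are dragged toward the V_0 = 0 starting point.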

One interesting thing that did come up-- at first I thought I had an error in my formula, because all of a sudden (say with year = 2020) I was getting NaNs partway through. It turns out there are some points in this dataset with missing values.

Of course, for this type of calculation that throws a wrench into the machine. I keep thinking about creative ways to handle it, but it is worth knowing about.


Very nice! The higher resolution graphs really give us a nice picture. Thanks for actually implementing this and sharing the results.

In terms of how to handle incomplete data samples, there is no real magic here. I don't claim to have thought deeply about this, but my memory of what I think I heard Prof Ng say at some point is that you basically have two choices:

- Just drop (ignore) the incomplete samples.
- Fill in the missing values with values either from adjacent samples or the average over several adjacent samples.
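In pandas terms, either choice is roughly a one-liner (a toy Series here, not the actual Kaggle data):

```python
import pandas as pd

temps = pd.Series([60.0, None, 58.0, 61.0, None, 63.0])

dropped = temps.dropna()               # option 1: ignore the incomplete samples
interpolated = temps.interpolate()     # option 2: fill from adjacent samples
filled = temps.fillna(temps.mean())    # option 2 variant: fill with the overall mean
```

Note that dropping rows shortens the series, which slightly changes what "adjacent in time" means for the EWA; the fill-based options keep the time axis intact.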

One minor implementation suggestion: you can compute `Vt_minus_one` much more simply and efficiently as:

`Vt_minus_one = Vt`

That way you don't have to evaluate the big indexed expression twice. Of course it is probably sitting in a register already and we hope the JIT compiler is clever enough to figure that out, so it may not actually make a difference in terms of execution speed. But it would definitely make the code simpler and clearer to the reader's eye.

I guess for that matter, you don't really need a separate variable for that value: it's just Vt. As in:

```
# Calc Vt
Vt = beta * Vt + (1 - beta) * theta
```

Also, sorry, I can't help myself, but it's "Exponentially", not "Expontentially". Once you have the programmer's eye for detail, it's hard to disable it.


Oops! Well, at least *that* has now been corrected/updated. [At least one knows this program was written by a human-- if anything, ChatGPT/Gemini/etc. do not make *those* kinds of mistakes.]

I was an ESL teacher for a good number of years, so I usually tend to be pretty good at this sort of thing. Unfortunately, math is not my first language.

Not to say 'all math is easy'-- it is not; but in the applied case, if you understand what you are looking at and how to perform the operation, you are not trying to invent anything novel. This is where I particularly like Prof. Ng's approach of explaining things intuitively.

I feel, also, that when reading papers and the like, just translating the 'hieroglyphics' is where I struggle most. To that end, I placed an order yesterday for this text to see if it helps:

https://www.ams.jhu.edu/ers/books/mathematical-notation/

It is short, cheap, and has really positive reviews on GoodReads.

I understand/follow and agree with your points regarding program structure. I actually considered at first that this whole algorithm could just be written recursively, but I wasn't seeking style points.

This was kind of 'quick and dirty', as I am trying to spend the majority of my free time on actually finishing the specialization rather than on extracurriculars.

Further, I decided to structure it this way because it is more or less *exactly* how Prof. Ng steps through the algorithm by hand in the lecture, so I wanted it to be clear in that sense for anyone else who looks at it, rather than their having to interpret *how* the program is doing what it is doing.


P.S. I did make that change as well; you were correct:

```
# Set Vt_minus_one current value of Vt for next run
Vt_minus_one = Vt
```

I don't remember what I was thinking at the time, or why I thought I'd actually have to pull the value from the DataFrame…