Help building a model

Hi there,
I would like to write a simple NN model to predict housing prices. My dataset is simple: X_train has a single feature, the house's surface area, and I want to predict the price.
Here is the code. Any hints?

import numpy as np
import tensorflow as tf

Xt = np.array([[1.0], [2.0]], dtype=np.float32)  # size in 1000 square feet
Yt = np.array([[300.0], [500.0]], dtype=np.float32)  # price in 1000s of dollars

norm_l = tf.keras.layers.Normalization(axis=-1)
norm_l.adapt(Xt)  # learns mean and variance
Xn = norm_l(Xt)   # apply the normalization
Xt = np.tile(Xn, (1000, 1))  # enlarge the tiny dataset by repetition
Yt = np.tile(Yt, (1000, 1))

tf.random.set_seed(1234)
model = tf.keras.Sequential(
    [
        tf.keras.layers.Dense(units=1, activation='relu', name='L1'),
        tf.keras.layers.Dense(units=2, activation='relu', name='L2'),
        tf.keras.layers.Dense(units=1, activation='linear', name='L3'),
    ]
)
model.compile(
    loss=tf.keras.losses.MeanSquaredError(),
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.01),
)

model.fit(
    Xt, Yt,
    epochs=100,
)

# inputs must be normalized the same way as the training data
result = model.predict(norm_l(np.array([[2.0]], dtype=np.float32)))
print(result)

Thank you for your help 🙂

Please show what results you’re getting. Does your code throw syntax or runtime errors, or does it just not make good predictions?

Hi TMosh, thank you for your time. I am not getting any errors; the results are just far from good.

I modified the code, and the updated version works fine: for x = 1.0 I am getting 299.74, close to my target of 300. But I had to increase the number of epochs to 20,000. Is increasing the number of epochs that much a good solution?
PS: my dataset is now composed of just 2 samples; I got rid of the dataset-enlarging code and of the normalization too. Here is the code:
Xt = np.array([[1.0], [2.0]], dtype=np.float32)  # size in 1000 square feet
Yt = np.array([[300.0], [500.0]], dtype=np.float32)  # price in 1000s of dollars

#norm_l = tf.keras.layers.Normalization(axis=-1)
#norm_l.adapt(Xt) # learns mean, variance
#Xn = norm_l(Xt)
#Xt = np.tile(Xn,(1000,1))
#Yt= np.tile(Yt,(1000,1))

tf.random.set_seed(1234)
model = tf.keras.Sequential(
    [
        tf.keras.layers.Dense(units=1, activation='relu', name='L1'),
        tf.keras.layers.Dense(units=25, activation='relu', name='L2'),
        tf.keras.layers.Dense(units=1, activation='linear', name='L3'),
    ]
)
model.compile(
    loss=tf.keras.losses.MeanAbsoluteError(),  # no optimizer given, so Keras uses its default (RMSprop)
)

model.fit(
    Xt, Yt,
    epochs=20000,
)
result = model.predict(np.array([[1.0]], dtype=np.float32))
print(result)

A single ReLU unit isn't a very good hidden layer: one unit only computes max(0, wx + b), so the layer can bend the line at just one point. ReLU units are very inefficient.

Normalizing the data set always helps.

This data set doesn’t really need two hidden layers, and it for sure doesn’t need 25 ReLU units in a 2nd hidden layer.

20,000 epochs is way too many.
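
If you don't want to hand-tune the epoch count, one common Keras pattern (not something your code used, just a sketch) is an EarlyStopping callback:

# Sketch: stop training once the loss plateaus instead of guessing a fixed epoch count.
early_stop = tf.keras.callbacks.EarlyStopping(
    monitor='loss',              # no validation split here, so watch the training loss
    patience=50,                 # tolerate 50 epochs without improvement
    restore_best_weights=True,
)
model.fit(
    Xt, Yt,
    epochs=20000,                # now just an upper bound; training stops much earlier
    callbacks=[early_stop],
)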

I realize you’re probably just experimenting, which is a good thing.

This example (a dataset in one variable) doesn’t really need an NN. Linear regression would fit it nicely.
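
For instance, a plain least-squares fit with NumPy (just a sketch, using the Xt/Yt from your last post) recovers the line directly:

# Ordinary least squares: fit price = w * size + b.
w, b = np.polyfit(Xt.ravel(), Yt.ravel(), deg=1)
print(w, b)            # ~200.0 and ~100.0 for this data
print(w * 2.0 + b)     # ~500, the price for a 2000 sq ft house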

Which machine learning courses have you attended?

Hey TMosh,
I am following the Coursera Deeplearning.AI Machine Learning Specialization.
Yes, I am experimenting, and I think I found the “best” solution for my NN to fit that very small dataset:


Xt = np.array([[1.0], [2.0], [3.0], [4.0], [5.0], [6.0], [7.0], [8.0], [9.0], [10.0]], dtype=np.float32)  # size in 1000 square feet
Yt = np.array([[300.0], [500.0], [700.0], [900.0], [1100.0], [1300.0], [1500.0], [1700.0], [1900.0], [2100.0]], dtype=np.float32)  # price in 1000s of dollars

tf.random.set_seed(1234)
model = tf.keras.Sequential(
    [
        tf.keras.layers.Dense(units=1, activation='relu', name='L1'),
        tf.keras.layers.Dense(units=10, activation='linear', name='L2'),
        tf.keras.layers.Dense(units=5, activation='linear', name='L3'),
        tf.keras.layers.Dense(units=1, activation='linear', name='L5'),
    ]
)
model.compile(
    loss='mse',
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.1),
)

history = model.fit(
    Xt, Yt,
    epochs=270,
)

Thank you again for your reply 🙂

Your dataset is exactly a straight line: price = 200 × size + 100.

Solving this doesn’t require a NN with four layers.

I recommend you experiment further. Did you try a very simple linear model, without ReLU and with only one layer? Something like the sketch below.
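
Here is a minimal sketch (my own example, not course code) using the Xt/Yt from your last post; a single Dense unit with a linear activation is exactly linear regression:

# Normalize the input, then fit one linear unit.
norm_l = tf.keras.layers.Normalization(axis=-1)
norm_l.adapt(Xt)  # learns the mean and variance of the sizes

model = tf.keras.Sequential([
    norm_l,                                              # normalization as the first layer
    tf.keras.layers.Dense(units=1, activation='linear'),
])
model.compile(
    loss='mse',
    optimizer=tf.keras.optimizers.SGD(learning_rate=0.1),
)
model.fit(Xt, Yt, epochs=200, verbose=0)
print(model.predict(np.array([[2.0]], dtype=np.float32)))  # should print close to 500

With normalized inputs this is a well-conditioned least-squares problem, so plain gradient descent converges in a couple hundred epochs: no ReLU, no extra layers, no 20,000 epochs.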