Can a neural network approximate non-linear and periodic functions?

I recently found a few assertions in forums (not this one) saying that neural networks cannot approximate non-linear functions and, in particular, periodic functions.

I wondered if I could answer this on my own by using what we’ve seen so far in the course. So I extended the “Hello world” example and obtained these results.

Quadratic approximation

[chart: quadratic approximation using two hidden layers]

Sinusoidal approximation (in the domain of the training)

[chart: sinusoidal approximation using two hidden layers]

Sinusoidal approximation (predicting further from training points)

[chart: sinusoidal approximation far from the training points]

I think that the results speak for themselves:

  1. neural networks can easily learn non-linear functions,
  2. they can be quite accurate as long as the points they predict lie close to the training data.

The sources used for the charts can be found here.
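For anyone who wants to reproduce something similar, here is a minimal sketch of the kind of setup described above. This is my own illustration, not the original source code: the layer sizes, epoch count, and training interval are arbitrary choices.

```python
import numpy as np
import tensorflow as tf

# Train on sin(x) over a bounded interval.
x = np.linspace(-np.pi, np.pi, 512).reshape(-1, 1).astype("float32")
y = np.sin(x)

# Two small hidden layers with non-linear (ReLU) activations.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(1,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(x, y, epochs=200, verbose=0)

# Inside the training interval the approximation is close; far outside it
# (say x = 20) it degrades, because a ReLU network becomes piecewise-linear
# beyond the data it has seen and cannot keep oscillating.
```

The same script, evaluated well outside [-π, π], reproduces the third chart’s behaviour: the fit stops tracking the sinusoid once it leaves the training domain.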


That claim is false. Non-linear functions are a particular specialty of neural networks, precisely because they have a non-linear activation in the hidden layer.
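To illustrate why the hidden non-linearity matters: without an activation, stacked linear layers collapse into a single linear map, so depth adds nothing. A small numpy sketch of my own:

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3))  # first "layer" weights
W2 = rng.normal(size=(2, 4))  # second "layer" weights
x = rng.normal(size=(3,))

# Without an activation, two stacked linear layers collapse into one:
two_layer = W2 @ (W1 @ x)
collapsed = (W2 @ W1) @ x
print(np.allclose(two_layer, collapsed))  # True

# A non-linearity between the layers (here ReLU) breaks that collapse,
# which is what lets the network represent non-linear functions.
relu = lambda z: np.maximum(0.0, z)
nonlinear = W2 @ relu(W1 @ x)
```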


Thanks TMosh! You are absolutely right. I’m happy that I’ve already developed the tools to figure it out on my own. What you point out is what I imagined in the first place, but I don’t yet have enough theoretical knowledge to answer with confidence.

By the way, as a (former) physics student, I wanted to see if I could use a neural network to recover the explicit parameters (phase and angular velocity) of a sinusoidal signal, and it seems quite easy with a custom layer:

import tensorflow as tf

class Cosine(tf.keras.layers.Layer):
    """Dense-style layer whose output is cos(inputs · W + b)."""

    def __init__(self, units=32, input_dim=32):
        super().__init__()
        # Angular-velocity-like weights, one column per unit.
        self.w = self.add_weight(
            shape=(input_dim, units),
            initializer="random_normal",
            trainable=True,
        )
        # Phase-like offsets, one per unit.
        self.b = self.add_weight(
            shape=(units,),
            initializer="zeros",
            trainable=True,
        )

    def call(self, inputs):
        return tf.math.cos(tf.matmul(inputs, self.w) + self.b)

I still have to do more testing, but it seems to work quite well even for predicting points pretty far from the training set. :heart_eyes:
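In case anyone wants to try the same experiment, here is a rough end-to-end sketch. This is my own toy setup, not the original code: the target signal cos(2x + 0.5), the learning rate, and the epoch count are all arbitrary, and whether the learned weight and bias actually converge to the true angular velocity and phase depends on the initialization, since the loss surface over frequencies is non-convex.

```python
import numpy as np
import tensorflow as tf

# The Cosine layer from above, repeated so this snippet runs on its own.
class Cosine(tf.keras.layers.Layer):
    def __init__(self, units=1, input_dim=1):
        super().__init__()
        self.w = self.add_weight(shape=(input_dim, units),
                                 initializer="random_normal", trainable=True)
        self.b = self.add_weight(shape=(units,),
                                 initializer="zeros", trainable=True)

    def call(self, inputs):
        return tf.math.cos(tf.matmul(inputs, self.w) + self.b)

# Hypothetical target: cos(2x + 0.5). Ideally w -> ±2 and b -> ∓0.5
# (cosine is even, so sign-flipped solutions are equivalent).
x = np.linspace(0.0, 4.0 * np.pi, 256).reshape(-1, 1).astype("float32")
y = np.cos(2.0 * x + 0.5)

cos_layer = Cosine(units=1, input_dim=1)
model = tf.keras.Sequential([tf.keras.Input(shape=(1,)), cos_layer])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.05),
              loss="mse")
model.fit(x, y, epochs=200, verbose=0)

# Inspect the learned "angular velocity" and "phase".
print(cos_layer.w.numpy(), cos_layer.b.numpy())
```

Because the model is a cosine by construction, a good fit extrapolates periodically, which is exactly why it can keep predicting well far from the training set.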

Too much to learn yet! Too much fun to have!


I just said pretty much the exact same thing to a (lay) friend while discussing the very issue Carlos raises! I signed up to this community while reading Carlos’s article (they should pay you, bro :slight_smile:). This is too much unrealistic exactness for one day… I need a cup of tea and a lie-down… Hahaha.

Thank you for confirming my approximately-intermediate-level views!

Bro. Well done! And thank you for your efforts and kudos for your self-investigative diligence. I too come from a physics (undergraduate) background, but more likely resemble a Data Scientist now; professionally at least (even they might not accept me!..). Physics is an awesome background for this specialty, and it looks like you are already kicking @rse, so keep doin’ wha’ choo doin’ my man! :metal:

I absolutely agree with your elegant, simple analysis, as well as TMosh’s erudite input. In a discussion with my friend, who is gaining interest in the topic, I was cautioning against believing everything written about AI in the mainstream media. My example was that some guy, somewhere, on a thread like ‘jobs which will not be replaced by AI’, ASSERTED (an appropriate word you used in the OP :ok_hand:) that “…AI can only solve linear problems; it can’t solve non-linear problems.” This is a crock of you-know-Watt, as YOU have just shown beautifully with simple examples, and as TMosh corroborates matter-of-factly. The counterexample I offered was that one of the simplest activation functions, ReLU, is non-linear and introduces non-linearity by design.
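The ReLU point above is easy to check directly: a linear function f must satisfy f(a + b) = f(a) + f(b), and ReLU violates this. A two-line demonstration (my own example values):

```python
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

# A linear map would give equal results here; ReLU does not.
a, b = 1.0, -2.0
print(relu(a + b))        # relu(-1.0) -> 0.0
print(relu(a) + relu(b))  # 1.0 + 0.0  -> 1.0
```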

What’s the moral of the story? Most journalists - whom most people learn their “facts” from - don’t understand what “AI” is! Does this mean that most people won’t ever know?..

Anyway. :joy: Sorry to babble. Thank you for your beautiful demonstration! This is fit for the publishing office in my view. Cheers.