Autogenerative Neural Network Possible?

So I had an idea I might experiment with and research later, after I finish one of my other projects.

Considering neural networks have many layers that can be run with different configurations (units, embedding lengths, etc.), what would happen if you gave the program the ability to experiment with its own configuration, given an objective and a dataset? I imagine it would stumble along by adding more layers until it hit a resource limit (assuming you programmed it to do that, of course), but it would be interesting to watch it experiment with its own layers until it reached a given objective. Heck, you could even introduce occasional randomness into the layer and config choices, much like the randomness already used within deep learning, until it finds an optimal model.
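
To make the idea concrete, here is a minimal sketch of that trial-and-error loop: randomly mutate the layer configuration, train, and keep whatever scores best on held-out data. Everything here is a placeholder assumption, not a prescription: the digits dataset, the 20-trial budget, and the layer-width choices are illustrative, and scikit-learn's `MLPClassifier` just stands in for whatever framework you would actually use.

```python
import random
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)

best_cfg, best_score = None, -1.0
for trial in range(20):                       # resource limit: 20 trials
    n_layers = random.randint(1, 4)           # random depth
    cfg = tuple(random.choice([32, 64, 128])  # random width per layer
                for _ in range(n_layers))
    model = MLPClassifier(hidden_layer_sizes=cfg, max_iter=300,
                          random_state=0).fit(X_tr, y_tr)
    score = model.score(X_val, y_val)         # "objective" = val accuracy
    if score > best_score:
        best_cfg, best_score = cfg, score

print(f"best architecture: {best_cfg}, val accuracy: {best_score:.3f}")
```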

So my question is: has this been done and researched before? And what field of study does it fall under?

Yes, it sounds possible. It’s already quite common to use a grid search to explore the number of hidden layers and units, and the amount of regularization.
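
For concreteness, here is a hedged sketch of that kind of grid search using scikit-learn's `GridSearchCV`, sweeping depth/width via `hidden_layer_sizes` and the amount of L2 regularization via `alpha`. The dataset and grid values below are illustrative assumptions, not recommendations.

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import GridSearchCV
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
grid = {
    "hidden_layer_sizes": [(32,), (64,), (64, 64), (128, 64, 32)],
    "alpha": [1e-4, 1e-3, 1e-2],  # amount of L2 regularization
}
search = GridSearchCV(MLPClassifier(max_iter=300, random_state=0),
                      grid, cv=3, n_jobs=-1)  # exhaustive, cross-validated
search.fit(X, y)
print(search.best_params_, search.best_score_)
```

Note that the grid grows multiplicatively with every hyperparameter you add, which is exactly why making the search efficient becomes the central problem.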

Extended further, I believe it could ultimately consume all of the electrical power of the universe. :grinning:

Yes, it’s an interesting idea and, as Tom suggests, the problem may well revolve around making the search relatively efficient. This topic came up on another recent thread, and here’s a post that is relevant to your idea.

Grid search would be so cool as well. It adds a dynamic-programming feel, finding optimal paths through the search space.

I want to code this now :grin: but I’m too busy doing grassroots research on ethics for AI.

Thank you for the connection. Tagging the link here as well, to save the click:

Neural architecture search - Wikipedia