So I had an idea I might experiment with and research later, after I finish one of my other projects.
Considering neural networks have many layers that can be run with different configurations (units, embedding lengths, etc.), what would happen if you gave the program the ability to experiment with its own configuration, given an objective and a dataset? I imagine it would stumble along by adding more layers until it hits a resource limit (assuming you programmed it to do that), but it would be interesting to watch it experiment with its own layers until it reaches a given objective. Heck, you could even introduce occasional randomness into the layers and config, similar to the stochasticity already used in deep learning, until it finds an optimal model.
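To make the idea concrete, here's a minimal sketch of the kind of loop I'm imagining. It uses scikit-learn's MLPClassifier and the digits toy dataset purely as placeholders; the objective threshold, trial count, and layer-size choices are all arbitrary illustration values:

```python
import random

from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

objective = 0.95   # target accuracy to stop at
max_layers = 8     # crude "resource limit" on depth
best_score, best_config = 0.0, None

for trial in range(50):
    # Randomly pick a configuration: number of layers and units per layer.
    n_layers = random.randint(1, max_layers)
    config = tuple(random.choice([32, 64, 128, 256]) for _ in range(n_layers))

    model = MLPClassifier(hidden_layer_sizes=config, max_iter=300)
    model.fit(X_train, y_train)
    score = model.score(X_test, y_test)

    if score > best_score:
        best_score, best_config = score, config
    if best_score >= objective:  # stop once the objective is reached
        break

print(f"best config: {best_config}, accuracy: {best_score:.3f}")
```

Obviously a real system would search smarter than uniform random sampling, but that's the spirit of what I mean by the program experimenting with its own configuration.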
So my question is: has this been done and researched? And what field of study does this fall under?