Is there a path to AGI?

If we can define N tasks that all humans can do, create N strong ANIs (artificial narrow intelligences) for those tasks, and then connect them using a gating function, would we not be able to build an AGI (artificial general intelligence)?

What's the challenge here? Is defining the N tasks the main difficulty, since general intelligence is not well-defined, or does the challenge lie somewhere else, e.g. in learning the gating function?
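To make the "gating function" idea concrete, one common reading is a mixture-of-experts style router: a small network that scores each pre-trained model for a given input and combines their outputs by those scores. The sketch below is only a toy numpy illustration under that assumption; the "experts" and the gate are made-up linear stand-ins, not actual ANI systems or anything from the course.

```python
import numpy as np

rng = np.random.default_rng(0)
N_EXPERTS, INPUT_DIM, OUTPUT_DIM = 4, 8, 3

# Hypothetical "experts": stand-ins for N pre-trained narrow models,
# each mapping the same input vector to an output vector.
expert_weights = [rng.normal(size=(INPUT_DIM, OUTPUT_DIM)) for _ in range(N_EXPERTS)]

def expert(i, x):
    """Stand-in for the i-th pre-trained narrow model (just a linear map here)."""
    return x @ expert_weights[i]

# Gating network: one layer that scores each expert for a given input,
# then softmaxes the scores into mixture weights.
gate_W = rng.normal(size=(INPUT_DIM, N_EXPERTS)) * 0.01

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def gated_prediction(x):
    """Weight each expert's output by the gate's score for this input."""
    gate_scores = softmax(x @ gate_W)                              # shape (N_EXPERTS,)
    outputs = np.stack([expert(i, x) for i in range(N_EXPERTS)])   # (N_EXPERTS, OUTPUT_DIM)
    return gate_scores @ outputs                                   # weighted combination

x = rng.normal(size=INPUT_DIM)
print(gated_prediction(x))
```

In a real system the gate would itself have to be learned, which is part of the question: deciding which "skill" a never-before-seen problem calls for may be as hard as the general intelligence we were trying to assemble in the first place.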

The NNs required to accomplish tasks with human-level skill can be extremely complex. For example, the simple NNs discussed in this course have no concept of time sequences.


I think the key question is how you define "General Intelligence". I would say it's more than just the sum of a bunch of particular problem-solving skills. E.g. if you're going to claim some entity is "generally intelligent", it should be able to tackle a problem it has never seen before and still come up with some kind of solution, or at least make progress toward one. How does that fit with your example of knitting together a bunch of pre-existing NNs? Maybe you need to say more about what you mean by a "gating function"? :nerd_face:

But stepping back just a bit here, please note that this is a huge topic and a lot of very smart people have discussed it in quite a bit of detail over the last 10 or 20 years. A few minutes of googling will turn up lots of TED talks and YouTube videos of discussions by people like Andrew Ng, Max Tegmark, Nick Bostrom, Stuart Russell, Ray Kurzweil and many, many more. Have a look!