From my understanding, AI will get better as our knowledge of things gets better over time; after all, it's learning from the info we feed it!
@gent.spah Strong point, friend, though the point of the article is that we are running out of written knowledge to give it. Note the article makes that argument strictly in the context of LLMs.
IMHO it's a bit like Moore's law: at some point we found you can only cram so many transistors so close together before the heat of the process melts everything, which is why clock speeds have basically stalled. Yes, we swerved and went parallel instead, and that sort of works.
My suspicion is that, if anything, we will have to do something similar with AI. I post it here because students should not just learn, but remain creative and keep thinking.
This also requires that the "better knowledge" be available for AI training.
Over time, the publicly available knowledge may increasingly consist of data generated by AI models. So the quality of the ground truth may degrade over time, and this would impact AI performance.
Unless the training sets are kept up to date, the risk is that the quality of the truth data will become stale, and the AI will increasingly be trained on the output of other LLMs instead of human knowledge.
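That feedback loop can be sketched with a toy simulation (hypothetical vocabulary, pure resampling, no real model): each "generation" of text is sampled from the previous generation's output, and once a rare word drops out of a generation, no later generation can ever recover it.

```python
import random

random.seed(42)

# Generation 0: "human" text drawn from a 100-word vocabulary.
vocab = [f"word{i}" for i in range(100)]
corpus = random.choices(vocab, k=300)

# Each later generation is sampled from the previous generation's
# empirical distribution -- i.e. "trained" on model output alone.
distinct = [len(set(corpus))]
for _ in range(15):
    corpus = random.choices(corpus, k=300)
    distinct.append(len(set(corpus)))

# The count of distinct words can never increase: a word absent
# from one generation is absent from every generation after it.
print(distinct)
```

This is of course a cartoon, not how LLM training works, but it shows why diversity in the truth data can only be preserved by continuing to inject fresh human-written material.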
It will always keep improving, as there is a lot still to come in the field of AI.
Knowledge is never-ending: the universe is infinitely large or infinitely small, depending on which direction you travel. What is crucial to remember is that AI has no consciousness, no intelligence; it's just a "memory system". Maybe a more efficient one, but still a "machine".
Therefore it can never replace intelligence, because it has no "life force", no synchronicity with existence! Now it's up to us what it learns, where it learns from, and how it's ultimately used…
Here again, human intelligence is irreplaceable, as always!