AGI is a wrong goal
Many people are talking about artificial general intelligence (AGI), especially after the launch of ChatGPT by OpenAI in November 2022.
General intelligence targets solving multiple tasks optimally, or at least above some bar. Recently popular approaches are forms of learning to learn, such as transfer learning, few-shot learning, multi-task learning, and meta-learning. This boils down to finding an optimal or feasible solution under multiple objectives and/or multiple constraints.
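In standard notation, this is a constrained multi-objective problem. The formulation below is a generic sketch (the symbols are assumptions for illustration, not taken from the text): each task contributes an objective f_i to optimize and/or a constraint g_j (its "bar") to satisfy, all over one shared solution x.

```latex
% Generic constrained multi-objective formulation (notation assumed for illustration):
% x is the shared model/solution, f_i scores task i, g_j encodes the bar task j must clear.
\[
\min_{x \in \mathcal{X}} \; \bigl( f_1(x),\, f_2(x),\, \dots,\, f_m(x) \bigr)
\quad \text{subject to} \quad
g_j(x) \le b_j, \quad j = 1, \dots, k.
\]
```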
Let’s review two basic principles from optimization.
Multi-objective optimization usually cannot optimize all objectives simultaneously; improving one objective typically worsens another, so we get Pareto-optimal trade-offs rather than a single joint optimum.
For a multi-constraint problem, the more constraints there are and the tighter they are, the smaller the feasible set, and the lower the chance of finding a feasible solution, as the toy example below illustrates.
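Both principles can be seen in a toy numeric sketch (the objectives and constraints here are illustrative assumptions, not from the text): two conflicting objectives over one scalar decision have no joint minimizer, and stacking two tight constraints empties the feasible set.

```python
import numpy as np

# Two conflicting objectives over one scalar decision x:
#   f1(x) = x^2        -> minimized at x = 0
#   f2(x) = (x - 1)^2  -> minimized at x = 1
# No single x minimizes both; every x in [0, 1] is a Pareto compromise.
xs = np.linspace(0.0, 1.0, 6)
for x in xs:
    f1, f2 = x**2, (x - 1.0) ** 2
    print(f"x={x:.1f}  f1={f1:.2f}  f2={f2:.2f}")

# Constraints: each added constraint shrinks the feasible set.
# x <= 0.2 alone is feasible, x >= 0.8 alone is feasible,
# but imposing both leaves no feasible point at all.
feasible = [x for x in xs if x <= 0.2 and x >= 0.8]
print("feasible under both constraints:", feasible)  # prints []
```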
These two basic principles imply that a single model usually cannot solve multiple tasks optimally at the same time, and may not even find a solution that clears the bar for all tasks simultaneously. General intelligence achieved by a single model is therefore about approximation and compromise.
Previous studies show that, in pursuing optimality for an AI system, bounded computational rationality forces approximations. Moreover, with negative transfer, previously acquired knowledge can interfere with later learning.
We should group together only those tasks that can be handled together effectively, rather than asking one model to handle all tasks. That is, divide tasks into groups of related sub-tasks, conquer each group separately, and then let the resulting solvers collaborate, as sketched below. This resembles how human society works: division of labor and cooperation.
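A minimal sketch of this divide-and-cooperate idea, under assumed names (the specialist functions, task labels, and router are hypothetical illustrations, not a method given in the text): each specialist handles one group of sub-tasks, a router dispatches work to it, and specialists cooperate by composing their outputs.

```python
from typing import Callable, Dict

# Hypothetical specialists: each handles one group of related sub-tasks well,
# instead of a single monolithic model handling every task.
def translate(text: str) -> str:
    return f"[translated] {text}"

def summarize(text: str) -> str:
    return f"[summary] {text}"

SPECIALISTS: Dict[str, Callable[[str], str]] = {
    "translate": translate,
    "summarize": summarize,
}

def route(task: str, payload: str) -> str:
    """Division: dispatch each sub-task to the specialist for its group."""
    if task not in SPECIALISTS:
        raise ValueError(f"no specialist registered for task: {task}")
    return SPECIALISTS[task](payload)

def summarize_foreign_document(text: str) -> str:
    """Cooperation: specialists collaborate by composing their outputs."""
    return route("summarize", route("translate", text))

print(route("translate", "bonjour le monde"))
print(summarize_foreign_document("a long foreign-language report"))
```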
In short, AGI (by a single model) is a wrong goal.