True AI
True AI is the goal for many people.
People may define “True AI” differently.
My definition of True AI is: superhuman AI.
There are at least two aspects.
One aspect is performance: better than the best human, not merely better than an average human or some particular human.
The other aspect is efficiency: the performance should be achieved with energy consumption similar to that of a human.
There are other aspects, equally or more important, e.g., alignment with human values and readiness for commercialization.
The focus here is on performance and efficiency.
The AlphaGo series is superhuman w.r.t. performance, but not yet w.r.t. efficiency.
Large language models (LLMs) have impressive performance.
However, LLMs are not regarded as superhuman w.r.t. either performance or efficiency, considering their innate inaccuracy and their requirement for huge amounts of (compute) resources.
It is interesting to see stunning results in competitive maths and coding from LLMs or LLM-based AI.
Hopefully, however, researchers will pay more attention to both accuracy and efficiency.
Can a neural network perfectly solve arithmetic problems, matching a calculator or a programming language like Python, w.r.t. both performance and efficiency?
Why is it important? And why not pursue it?
It is a prerequisite for more complex tasks.
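The question above suggests a concrete evaluation: use Python's exact integer arithmetic as the ground truth and measure a model's accuracy against it. Below is a minimal sketch under that assumption; model_answer is a hypothetical stub standing in for a neural network's output, not any real system.

```python
import random

def ground_truth(a: int, b: int, op: str) -> int:
    # Python's exact integer arithmetic serves as the ground truth.
    return {"+": a + b, "-": a - b, "*": a * b}[op]

def model_answer(a: int, b: int, op: str) -> int:
    # Hypothetical stand-in for a neural network's answer.
    # A real evaluation would query the network here instead.
    return ground_truth(a, b, op)

def arithmetic_accuracy(n_trials: int = 1000, seed: int = 0) -> float:
    # Fraction of randomly sampled problems the model answers exactly.
    rng = random.Random(seed)
    correct = 0
    for _ in range(n_trials):
        a = rng.randint(-10**6, 10**6)
        b = rng.randint(-10**6, 10**6)
        op = rng.choice("+-*")
        correct += model_answer(a, b, op) == ground_truth(a, b, op)
    return correct / n_trials
```

Perfect performance here means accuracy exactly 1.0, with no tolerance; efficiency could be measured separately, e.g., as energy or operations per answer.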
Can an AI match or exceed the maths expertise of a top maths professor, e.g., in solving maths problems and conducting novel research, after learning from 100 (maths) books and 1000 (maths) papers, and with energy consumption similar to what the professor consumes?
Current LLMs may be viewed as the ENIAC; we look forward to iPhone-level AI. Innovations are called for.
What may be first principles for further progress in True AI?
Ground truth and iterative improvements.
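As an illustrative analogy for these two principles (not a claim about how True AI will be built), Newton's method for square roots improves a guess iteratively, and the residual against the ground-truth equation g*g = x tells us exactly how far off we are. A minimal sketch:

```python
def newton_sqrt(x: float, tol: float = 1e-12) -> float:
    # Iteratively improve a guess g for sqrt(x).
    # Ground truth: g*g should equal x; the residual g*g - x
    # is a verifiable signal that drives each improvement.
    g = x if x > 1.0 else 1.0
    while abs(g * g - x) > tol * x:
        g = 0.5 * (g + x / g)  # Newton (Babylonian) update
    return g
```

Each step is accepted only because a checkable ground-truth criterion improves; the learner never relies on its own unverified output.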
In Mandarin, AI sounds the same as LOVE.
To True AI, my True Love.