Over-claim then Correct: A New Norm of Research in the Era of LLMs
In the era of LLMs, a pattern has become the norm: people make many bold, general claims, and then, after a little while, others correct them one by one.
There have been many such over-claims, e.g., about reasoning, agents, and P!=NP, as well as the so-called scaling laws, emergent abilities, and general intelligence.
The following papers are examples of refutation and correction:
Faith and Fate: Limits of Transformers on Compositionality
Can Large Language Models Really Improve by Self-critiquing Their Own Plans?
GPT-4 Doesn't Know It's Wrong: An Analysis of Iterative Prompting for Reasoning Problems
Large Language Models Cannot Self-Correct Reasoning Yet
Rigorously Assessing Natural Language Explanations of Neurons
Usually, one group of people makes the over-claims, and another group makes the corrections. For the community as a whole, this is iterative improvement from feedback.
For academia, this cycle creates plenty of opportunities.
For industry, be careful.