Incorporating human knowledge into AI systems often hinders long-term progress, despite providing short-term benefits and personal satisfaction to researchers.

At its core, the fundamental reason for this is that general methods which leverage computation ultimately outperform methods built around human knowledge, and the amount of computation available to AI research keeps growing.

“Human knowledge” refers to the practice of explicitly encoding human understanding, insights, and strategies into AI systems. This can include incorporating knowledge about:

  1. Domain-specific information (e.g., rules of chess or Go)
  2. Human perception and reasoning (e.g., edge detection in computer vision; see the sketch after this list)
  3. Structure and features of the problem space (e.g., phonemes in speech recognition)
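As a concrete illustration of the second item, here is a minimal sketch (using NumPy and SciPy; the function name and sizes are illustrative, not taken from the source) of what explicitly encoded perceptual knowledge looks like in practice. The Sobel filters embody a researcher’s prior belief that edges are the features worth extracting, fixed before the system ever sees data:

```python
import numpy as np
from scipy.signal import convolve2d

# Hand-designed filters: the researcher, not the data, decides what matters.
SOBEL_X = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def handcrafted_features(image: np.ndarray) -> np.ndarray:
    """Edge-magnitude map computed from fixed, human-chosen filters."""
    gx = convolve2d(image, SOBEL_X, mode="same", boundary="symm")
    gy = convolve2d(image, SOBEL_Y, mode="same", boundary="symm")
    return np.hypot(gx, gy)

# The encoded knowledge stays fixed no matter how much data or compute is available.
image = np.random.rand(32, 32)
edges = handcrafted_features(image)
```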

Researchers often attempt to leverage this knowledge to improve the performance of their AI systems in the short term. However, this approach hinders long-term progress, fundamentally because machine and biological intelligences think differently.

Human Knowledge Limits Scalability

By explicitly encoding human knowledge into AI systems, researchers create methods that are less able to leverage computation effectively and that add complexity to the underlying machinery. As computational power increases, these human-centric approaches often fail to scale proportionally, limiting the potential for long-term improvement.

The inherent complexity of the problem domain should be discovered by the AI system itself, rather than being explicitly encoded by researchers.
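To make that concrete, here is a minimal sketch of the alternative, assuming PyTorch (the function name and sizes are hypothetical): the filters are parameters discovered from data, and when more computation becomes available the extractor is simply made wider or deeper rather than redesigned around new human insights:

```python
import torch
import torch.nn as nn

def learned_features(width: int = 16, depth: int = 2) -> nn.Sequential:
    """Feature extractor whose filters are learned from data; its capacity
    scales with width and depth instead of with hand-crafted rules."""
    layers, in_channels = [], 1
    for _ in range(depth):
        layers += [nn.Conv2d(in_channels, width, kernel_size=3, padding=1), nn.ReLU()]
        in_channels = width
    return nn.Sequential(*layers)

# Scaling up means spending more compute (wider, deeper), not re-encoding
# human intuitions about which image structures matter.
model = learned_features(width=64, depth=4)
x = torch.randn(1, 1, 32, 32)   # a batch of one grayscale image
features = model(x)             # features come from learnable filters, not hand design
```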

Human Knowledge Hinders the Development of Autonomous Discovery

Focusing on incorporating human knowledge into AI systems can bias research efforts towards short-term, incremental improvements that satisfy researchers’ intuitions about how the problem should be solved. This bias can divert attention and resources away from exploring more general, computation-based approaches that may initially seem less intuitive but ultimately prove more effective in the long run.


Source: The Bitter Lesson