In the Matrix movies, artificial intelligence (AI) is a powerful force that has taken over the world and uses humans as a source of energy. The map has swallowed the terrain, as in the famous short story “On Exactitude in Science” by Jorge Luis Borges. Logically speaking, this seems possible. Alan Turing, considered the father of modern computing, showed that a universal machine can in principle carry out any computation that can be precisely specified. Yet despite these seemingly endless possibilities, the creative nature of consciousness may never be replicated by AI. The first argument comes from Kurt Gödel, the logician whose incompleteness theorems imply that for any consistent formal system, and hence for any AI built on one, there will always be true statements it can neither prove nor disprove. The second argument stems from Chris Langton, a computer scientist known for his work on artificial life, whose research suggests that AI systems do not possess consciousness or self-awareness.
One example of the limitations of AI is the Reber grammar, proposed by the psychologist Arthur Reber in the 1960s. A Reber grammar is a small set of rules for generating strings of letters that are highly structured yet artificial and meaningless. Reber demonstrated that humans learn to recognize strings of this artificial language with surprising ease: after mere exposure to examples, they pick up its regularities implicitly, much as they absorb the patterns of natural language. AI systems, by contrast, have often struggled to learn and generalize such grammars, even when given explicit rules and examples, because doing so requires a kind of flexible, creative pattern learning. This suggests that some tasks and abilities remain uniquely human, and that AI systems may never fully replicate or surpass them.

Importantly, in Reber's original experiments, participants who were explicitly taught the rules of the grammar could correctly recognize and generate strings that followed those rules. However, they were outperformed by participants who had simply memorized example strings, without necessarily understanding the underlying rules at all. Humans, in other words, can learn complex patterns and structures in language even when they cannot articulate the rules or algorithms that generate them. This finding has important implications for the development of AI: it is not enough for a system to memorize examples and follow rules; it must also be able to understand and generate new examples that follow those rules, the way humans do. That calls for a level of creative, flexible thinking that is currently beyond most AI systems but is an essential component of human cognition.
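The grammar itself is nothing mysterious: it is a small finite-state machine. Here is a minimal sketch in Python; the state numbering and transition table below follow a common textbook convention for the Reber grammar, not Reber's original notation:

```python
import random

# Transition table for a classic Reber grammar finite-state machine.
# Each state maps to the (letter, next_state) choices available from it.
# State 5 is the accepting state; strings are framed by B ... E.
TRANSITIONS = {
    0: [("T", 1), ("P", 2)],
    1: [("S", 1), ("X", 3)],
    2: [("T", 2), ("V", 4)],
    3: [("X", 2), ("S", 5)],
    4: [("P", 3), ("V", 5)],
}

def generate(rng=random):
    """Generate one grammatical string by walking the automaton."""
    state, letters = 0, ["B"]
    while state != 5:
        letter, state = rng.choice(TRANSITIONS[state])
        letters.append(letter)
    letters.append("E")
    return "".join(letters)

def is_grammatical(s):
    """Check a string by simulating the same automaton."""
    if len(s) < 3 or s[0] != "B" or s[-1] != "E":
        return False
    state = 0
    for letter in s[1:-1]:
        for choice_letter, next_state in TRANSITIONS.get(state, []):
            if choice_letter == letter:
                state = next_state
                break
        else:
            return False  # no legal transition for this letter
    return state == 5
```

The rules fit in a dozen lines, yet Reber's participants who merely memorized strings like `BTSSXXTVVE` internalized them better than those who studied the transition table, which is exactly the implicit-learning effect described above.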
Such findings highlight important strengths and limitations of both humans and AI, and combining their respective strengths seems optimal.
In his book "Thinking, Fast and Slow," Daniel Kahneman, a Nobel laureate in economics, describes two systems of thought that we use to make judgments and decisions. Humans excel at System 1 thinking, which is fast, automatic, and intuitive, and is responsible for perception, attention, language, and memory. It is often described as the “gut reaction” or “automatic pilot”. System 2 is slower, more deliberate, more logical, and more effortful; it handles tasks such as problem-solving, decision-making, and planning. It is slower because it demands mental effort and concentration, and it is more likely to be accurate and reliable because it rests on careful analysis and reasoning. However, it is also prone to mental fatigue and can be overwhelmed by complex or demanding tasks. While human working memory is typically limited to around seven items at a time, and deliberate human thought proceeds largely serially rather than in parallel, AI has an almost endless capacity for such System 2 thinking. Conversely, humans are better at recognizing complex patterns even when they do not fully understand the rules that create them, whereas AI systems still struggle with such patterns even when given rules and examples. By combining the strengths of both humans and AI, we can achieve better outcomes and solve complex problems more efficiently, for example in medical diagnostics. In the future, merging the intuitive pattern recognition and creativity of human minds with the speed, accuracy, and capacity of AI systems may be a key step in our evolution.
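As a loose analogy, and not an example from Kahneman, the division of labour between the two systems can be sketched in code: a cheap heuristic "screen" that answers almost instantly, paired with a slower exhaustive check that is effortful but exact. The primality task here is purely illustrative:

```python
def fast_screen(n):
    """'System 1' analogue: an instant heuristic that rejects
    obvious composites cheaply. It can say 'no' with confidence,
    but 'yes' only means 'looks plausible'."""
    if n < 2:
        return False
    return all(n == p or n % p != 0 for p in (2, 3, 5, 7))

def slow_verify(n):
    """'System 2' analogue: deliberate, exhaustive trial division.
    Slower, but its answer is exact."""
    if n < 2:
        return False
    return all(n % d != 0 for d in range(2, int(n ** 0.5) + 1))

def is_prime(n):
    """Combine both: the cheap screen filters out most cases,
    and the careful check confirms the rest."""
    return fast_screen(n) and slow_verify(n)
```

The point of the sketch is the architecture, not the arithmetic: the fast path handles the bulk of inputs at negligible cost, and the slow path is invoked only where care is actually needed, which is roughly how the essay proposes human intuition and machine deliberation could divide the work.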