Artificial intelligence (AI) has come a long way in recent years, driven by the development of ever-larger models and advances in learning methods. Despite this progress, however, AI is still far from matching the intelligence of the human brain. In this post, we will explore how close we are to reaching human-brain capabilities with AI and discuss the challenges we face in achieving that goal.
Comparing Neurons and Transistors
To begin with, let’s look at the number of neurons that make up the human brain: it is estimated at about 100 billion. On the hardware side, the transistor count of integrated circuits doubles roughly every two years, according to Moore’s Law. For example, the NVIDIA A100 GPU has 54 billion transistors, making it one of the largest chips ever made, and the Intel Xe-HPC “Ponte Vecchio” GPU is said to have over 100 billion transistors, exceeding the number of neurons in the human brain.
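To put these figures side by side, here is a quick back-of-the-envelope calculation. The constants come from the numbers quoted above; the `doublings_needed` helper is just an illustrative name:

```python
import math

# Figures quoted in the text above.
NEURONS = 100e9            # estimated neurons in the human brain
A100_TRANSISTORS = 54e9    # NVIDIA A100 GPU
PONTE_VECCHIO = 100e9      # Intel Xe-HPC "Ponte Vecchio" (over 100 billion)

print(A100_TRANSISTORS / NEURONS)   # ≈ 0.54: the A100 is about halfway there

def doublings_needed(start, target):
    """How many two-year Moore's Law doubling periods to reach target?"""
    return math.ceil(math.log2(target / start))

print(doublings_needed(A100_TRANSISTORS, NEURONS))  # 1, i.e. roughly two years
```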
While this may seem like a fair comparison, it is important to note that a neuron performs a far more complex function than a transistor. Neurons operate at roughly 1 kHz, while processor clocks run at 3 GHz or more, a difference of about three million times. That raw speed advantage does not settle the matter, though: a transistor merely switches, so simulating the behavior of even a single neuron requires many transistors working together, and we still need to find ways to make them work far more efficiently.
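As a sanity check on that ratio, and to illustrate that even a heavily simplified neuron costs several arithmetic operations per update, here is a small sketch. The leaky integrate-and-fire (LIF) model and all of its constants are textbook illustrations, not something from this post:

```python
# The ~3-million-fold speed gap, as plain arithmetic.
neuron_rate_hz = 1e3   # neurons update on the order of 1 kHz
cpu_clock_hz = 3e9     # processor clocks run at 3 GHz or more
print(cpu_clock_hz / neuron_rate_hz)  # 3000000.0

def lif_step(v, input_current, dt=1e-3, tau=0.02, v_rest=-65.0,
             v_thresh=-50.0, v_reset=-70.0, r=10.0):
    """One Euler step of a leaky integrate-and-fire neuron.

    Even this toy model needs several multiplies, adds, and a
    comparison per step; a biological neuron is far richer still.
    """
    v = v + dt * (-(v - v_rest) + r * input_current) / tau
    if v >= v_thresh:
        return v_reset, True   # spike, then reset
    return v, False

v, spiked = lif_step(-65.0, 2.0)   # membrane rises from rest, no spike yet
```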
The Role of Model Parameters in AI
Another way to compare human intelligence and AI is by looking at the number of basic units in a neural network. For example, the GPT-2 deep learning model has 1.5 billion parameters, while GPT-3 has 175 billion. By this count, the number of basic units in GPT-3 (175 billion parameters) already exceeds the number of neurons in the human brain (roughly 100 billion).
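The raw counts are easy to compare directly. The synapse figure discussed later in the post (about 100 trillion connections) is included here for contrast:

```python
# Parameter and neuron counts quoted in the text.
GPT2_PARAMS = 1.5e9
GPT3_PARAMS = 175e9
NEURONS = 100e9     # neurons in the human brain
SYNAPSES = 100e12   # connections between neurons (~100 trillion)

print(GPT3_PARAMS / NEURONS)    # 1.75: parameters already exceed neurons
print(GPT3_PARAMS / SYNAPSES)   # ≈ 0.00175: but far below the connection count
```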
However, simply increasing the number of parameters in a model does not necessarily mean that it is becoming more intelligent. Even if the number of model parameters matches the number of neurons in the human brain, we still need to find ways to make the model more efficient in terms of its operation.
Challenges in Achieving Human Brain Capabilities with AI
One of the challenges in achieving human-brain capabilities with AI is that the human brain has far more connections between neurons than current processor architectures can support. The number of connections is estimated at around 100 trillion, about 1,000 times the number of neurons, and all of them can operate in parallel, which no current processor architecture can match.
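To get a feel for the scale gap, compare those connection counts with the parallelism of a modern GPU. The A100's count of 6,912 FP32 CUDA cores is a widely published spec, used here only for rough scale:

```python
SYNAPSES = 100e12        # ~100 trillion connections, all potentially parallel
NEURONS = 100e9          # ~100 billion neurons
A100_FP32_CORES = 6912   # parallel FP32 cores on an NVIDIA A100 (published spec)

print(SYNAPSES / NEURONS)          # 1000.0 connections per neuron
print(SYNAPSES / A100_FP32_CORES)  # ≈ 1.4e10 synapses per parallel core
```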
Another challenge is that current neural network models are typically designed as a layered structure, with layers connected in series. This approach is less efficient than a structure that would allow all neurons to interconnect and operate in parallel.
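The serial nature of a layered network can be seen in a minimal forward pass. This sketch is an assumed illustration, not code from the post: each layer's input is the previous layer's output, so the layers cannot run at the same time no matter how parallel the hardware is.

```python
import numpy as np

rng = np.random.default_rng(0)
layers = [rng.standard_normal((8, 8)) for _ in range(4)]  # 4 stacked layers

x = rng.standard_normal(8)
for w in layers:               # strictly serial: layer i waits on layer i-1
    x = np.maximum(w @ x, 0)   # linear map followed by a ReLU

print(x.shape)   # (8,)
```

Within a single layer the matrix multiply is fully parallel, which is exactly what GPUs exploit; it is the depth-wise dependency between layers that stays sequential.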
In conclusion, while we have made significant progress in AI in recent years, we are still a long way from matching the intelligence of the human brain. Simply increasing the number of model parameters or the number of transistors in processors is not enough. We need to make AI models operate more efficiently, and we need to develop new processor architectures that can handle the enormous number of connections between neurons. It will take time and effort, but we will eventually get there.