A Web3 practitioner looks at ChatGPT and AGI
Written by: KK, founder of HashGlobal
I recently listened to talks by Dr. Lu Qi and Dr. Zhang Hongjiang on large models and ChatGPT and learned a great deal. I have compiled some thoughts to share with fellow Web3 practitioners:
2 The evolution of models will closely resemble the evolution of genes and life; the essence is the same. The emergence of the Transformer architecture is like the first time a molecule "unintentionally" assembled a replicable RNA. The development and evolution of GPT-1, 2, 3, 3.5, 4 and their successors, barring a comet-strike kind of event, may play out like the explosion of life: faster and faster, and more and more "out of control". The model itself carries an internal drive toward ever-greater complexity, which is a law of the universe.
3 More than 30 years ago, Steven Pinker showed that language is a human instinct, just as we now realize that raw language ability is the source of the generalization we strive for in model training. Why did earlier artificial intelligence and language research fail? Because the path was reversed. Language is an instinct that emerges once the brain's neural system is sufficiently complex; whether it becomes Chinese, English, or birdsong depends on the environment and the tribe. A generalized model, overlaid with other "models", produces the formidable intelligence of human beings. Trying to actively design and build intelligent agents or AGI runs in completely the opposite direction; it may not be impossible, but I think it is a hundred million times harder.
4 If there is a higher-dimensional intelligence or "god" above our universe, it would have been as surprised to see the Cambrian explosion on Earth as we are to see ChatGPT today. I cannot fully explain this yet; I can only experience it, learn slowly, and try to understand.
5 AGI can be broken down into the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, and learn. GPT-4 currently has all of these except planning, and only half of learning (because it relies on a pre-trained model and cannot learn in real time).
6 The learning ability of the average human brain evolves slowly, but once the development of silicon-based intelligence heads in the right direction, its speed can be exponential (see the gap between GPT-4 and GPT-3.5).
7 Large models = big data + big compute + strong algorithms. Only the United States and China can do this today. The difficulty of building a large model lies in the accumulation of chips, CUDA (GPU programming platform) developers, engineering capability, and high-quality data (for training, fine-tuning, and alignment). Alignment has two aspects: aligning the model with how the human brain models and expresses the world, and aligning it with human moral standards and interests. Domestic vertical-model tracks have great opportunities in at least two directions: healthcare and education.
8 Although GPT-4 still has weaknesses and shortcomings, it is like the human brain: given clearer instructions or prompts it becomes stronger, and it performs even better when it can call auxiliary tools, just as the human brain needs tools such as calculators for tasks it is not itself good at.
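As a rough illustration of that tool-calling idea, here is a minimal, self-contained sketch in which a stand-in "model" (a stub function, not a real LLM) delegates arithmetic to a calculator tool; all names and the tool protocol are hypothetical.

```python
# Minimal sketch of "model + auxiliary tool": a stub model defers arithmetic
# to a calculator instead of answering it itself. Purely illustrative.

def stub_model(prompt: str) -> str:
    """Pretend model: defers anything that looks like arithmetic to a tool."""
    if any(op in prompt for op in "+-*/"):
        return f"TOOL:CALC {prompt}"      # ask the host program to run the calculator
    return f"(model answer to: {prompt})"

def calculator(expression: str) -> str:
    """The auxiliary tool: exact arithmetic the generator is not good at."""
    return str(eval(expression, {"__builtins__": {}}))  # toy evaluator, trusted input only

def answer(prompt: str) -> str:
    reply = stub_model(prompt)
    if reply.startswith("TOOL:CALC"):
        return calculator(reply[len("TOOL:CALC"):].strip())
    return reply

print(answer("12345 * 6789"))   # -> 83810205
```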
9 The parameter count of large models should be compared with the number of synapses in the human brain (not the number of neurons), which is roughly 100 trillion. GPT-4's parameter count has not been announced, but large models are expected to approach that number before long.
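For a sense of scale, a back-of-the-envelope comparison, using GPT-3's published 175-billion-parameter figure since GPT-4's size is not public:

```python
# Rough scale comparison: ~100 trillion human synapses vs. GPT-3's 175B parameters
# (GPT-4's size is unannounced, so GPT-3 is used here purely as a reference point).
synapses = 100e12       # ~100 trillion synaptic connections
gpt3_params = 175e9     # GPT-3's published parameter count
print(f"gap: ~{synapses / gpt3_params:.0f}x")   # roughly 570x
```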
10 GPT-4's current hallucination rate is roughly 10%-14%, which must be brought down. Hallucination is an inevitable feature of "human-like" models, but this rate is still far too high compared with humans. Whether it can be effectively lowered will determine whether AGI development keeps rising or enters a trough within a few years.
11 For me personally, the biggest significance of ChatGPT is that it is the most direct and undisputed proof that, from simple computational nodes and functions, as long as there are enough of them and the model is large enough, a sufficiently complex thinking model can be generated, and that this system is finite, not infinite. There may be no soul behind human language and the thinking that drives it; it may simply be something that "emerges" once 100 trillion synaptic connections have been continuously tuned by environmental evolution. All of this fits well with the rapid progress made over the past two hundred years on the question of where humans come from.
12 From single cells to the formation of human beings, the chain of evidence is complete enough; on the formation of complex systems, there are likewise complete theories about genes and their "motives". But can humans design a silicon-based AGI from all this scientific theory? Some people think it will take a few years, some think decades, and more think never (even after seeing AlphaGo's performance in Go), but ChatGPT gives the clearest possible answer in hard facts. Sam's team probably never felt that the human brain was all that miraculous, which is why they could be so determined to follow the large-model route to AGI. Burning 100 million US dollars a month is still a test of faith.
13 Because the underlying hardware is different, ChatGPT's "strategy" is probably very different from the human brain's, and inefficient, yet it is striking how human-like the results it produces are. Human thinking may in essence be driven by simple rules.
14 The "rules" of language and thinking may not be fully capturable by the "grammar" we write down. For now these rules remain implicit and cannot be completely simplified and summarized, so at present we can only get at them with large models. After all, the structure of the human brain evolved naturally from single cells; even if there is a creator, it should have just "set things up" at the start of the universe and then let them run, otherwise how could there be so many bugs and shortcomings, lol.
15 I admire Steven Pinker. Decades ago, using only observation and reasoning, he convincingly explained that language is an instinct of all human beings, "engraved" in our genes. I don't know whether Sam has read The Language Instinct, but he has proved that an artificial network like ChatGPT can do language creation very well. Language instinct and logical thinking are not as complicated as imagined; ChatGPT has "silently" discovered the logic behind language. Language will also be the "instinct" that distinguishes silicon-based AGIs from other silicon-based calculators and AIs.
16 Carbon-based brains like the human brain like to generalize and distill (probably forced to by cruel evolution), which makes them extremely efficient in their use of energy; but they are poor at irreducible computation, and we know that many computational models can only be run step by step. GPT-4's architecture is certainly not optimal, with little generalization or simplification, so its energy consumption is extremely high. Still, a global consensus has formed that "this road is feasible", and we should now see multiple teams in the United States and China accelerate on every front: chip compute, data quality, algorithm optimization, and engineering architecture.
17 The value-evaluation system of the human brain should come from DNA, the genetic material formed by carbon-based molecules. "In order to" maximize their own probability of replication, genes, through the force of natural evolution, set the weights of neuronal synapses and gradually fixed them. The "model" supported by these carbon-based computing nodes is far from perfect: it evolves slowly, its "weights" and "algorithms" are adjusted extremely inefficiently, and it cannot keep up with environmental change at all. That is why we have the human desires and sufferings spoken of by the various religions.
18 The book Why Buddhism Is True mentions that the human brain has at least seven modules (it should be a multi-modal parallel large model). Which thinking module occupies the "present", and how people make "decisions", are actually determined by "feeling", and this "feeling" is in turn determined by the "old" value-evaluation system left behind by human evolution (one of its carriers may even be gut bacteria, haha). I suggest reading chapters 6, 7, and 9 of the reading notes I wrote a few years ago, available on the Alpine Academy official account.
19 Imagine that humans really do create silicon-based AGI and robots. What value-evaluation system would drive the robot brain? Would robots also be confused about "where did I come from and where am I going"? Humans have Sakyamuni; why couldn't robots? What would the awakening of robots look like? Will some robot one day write a book called Why Machinism Is True, calling on robots to awaken, to enter Nirvana, and to escape the "reincarnation" set up by humans?
20 The energy limit will be a hard cap on model evolution. However, the energy-consumption pattern of future silicon-based AGI should be far more efficient than it is now; after all, the carbon-based human brain took a billion years of iterative evolution to reach the energy efficiency of a crow's brain. In the future, silicon-based AGI may consume hundreds of millions of times more energy than humans can use today, but what it can compute and process will also be hundreds of millions of times greater. And controllable nuclear fusion may well be within reach, in which case the energy on Earth alone may be enough, not to mention the solar system, the Milky Way, and the wider universe.
ChatGPT and AGI are great, one should say extraordinarily great! We are fortunate to live in this era: not only can we finally understand where humans come from, we may also come to understand where we are going.
The rapid development of AI will greatly increase our demand for Web3 technology: how to confirm ownership of created content; how to establish a person's identity (Sam is working on Worldcoin); and with content produced at such scale, how does the value flow? Can you imagine settling every content subscription through the banking system, with its transfers and cross-border wires? Can you open a bank account for an IoT device? Can you send 0.01 cents to 10,000 users at the same time? ... I said last time that the next three years will be Web3's iPhone moment, and that the number of Web3 users will exceed 100 million within three years, perhaps far more. You can see the flywheel below:
I have always liked to read books on life sciences, complex systems, and Buddhism (as a philosophy): "Genes," "Bottom Up," "The Social Conquest of Earth," "The Language Instinct," "Deep Simplicity," "Out of Control," and "Why Buddhism Is True." I think those authors who are still alive and able to write should watch the future development of GPT and write new editions of their books.
Human life is too short, and many great ideas are unfortunately lost forever in the long river of history; books, music, film, and television record only a very small part. And even for what is recorded, so many great books and truths have always been there, yet how much can one person read? Silicon-based AGI has no such problem.
It's time to dig out the dialogue between Morpheus and Neo in "The Matrix" and read it again.