ChatGPT reveals!
I have abstained for a while from talking about ChatGPT, not because I didn't have faith in the concept, but because I truly believed it would change the world to its core and wanted to see what people would do with it. But I slowly started to grow frustrated as I watched people focus on the least interesting and important aspects of the technology.
One of the most discussed topics is technological and job market disruption. Of course, since it's about money, people will talk about it more, but the way they do it is quite frankly ridiculous. I've heard comparisons with the Industrial Revolution and yes, I agree that the way it's going to affect the world is going to be similar, but that's exactly my point: it is the same thing. As always when comparing with impactful historical events, we tend to see them as singular points in time rather than long-term processes that merely became visible at one moment, which would later be coined their origin. In fact, the Industrial Revolution has never ended. Once we "became one with the machine", we have continuously innovated toward replacing human effort with machine effort. ChatGPT does things that we didn't yet expect from machines, but it follows the same trend.
Whatever generative AI technology does, a human can do (for now), so the technology is not disruptive, it's just cheaper!
We hear about ChatGPT being used for writing books, emails and code, translating, summarizing, playing, giving advice, drawing: all things that humans were doing long before, only taking more time, using more resources and asking for recognition and respect. It's similar to automated factories replacing the work of tons of workers and their nasty unions. Disruptive? Yes, but by how much, really?
Yet there is one domain in which ChatGPT blew my mind completely and I hardly hear any conversation about it. It's about what it reveals about how we reason. Because you see, ChatGPT is just a language model, yet it exhibits traits that we associate with intelligence, creativity, even emotion. Humans built themselves up with all kinds of narratives about our superiority over other life, our unique and unassailable qualities, our value in the world, but now an AI technology reveals more about us than we are willing to admit.
There have been studies about language as a tool for intelligence, creativity and emotion, but most assume that intelligence is already there and that we merely express it using language. Some have tried pointing out that language seems to be integrated into the system, part of the mechanism of our thinking, and that using different languages builds different perspectives and thought patterns in people, but they were summarily dismissed. It was not language, they were rebuked, but culture that people shared. Similar culture, similar language. ChatGPT is revealing that this is not the case. Simply adopting a language makes it a substrate of a certain kind of thinking.
Simply put, language is a tool that supplanted intelligence.
By building a vast enough computer language model we have captured the social intelligence subsumed by that language: that part of ourselves that makes us feel intelligent, but which is actually a learned skill. ChatGPT appears to do reasoning! How can that be, if all it does is predict the next word in a text while attending to a series of prompts? It's simple. It is not reasoning. And it reveals that humans are not reasoning in those same situations either. The things that we have been taught in school: the endless trivia, the acceptable behavior, how to listen and respond to others, that's all language, not reasoning.
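To make "predicting the next word" concrete, here is a deliberately tiny toy sketch: a bigram model that counts which word follows which in a small corpus, then generates text by always picking the most frequent continuation. Real GPTs use transformer attention over subword tokens and billions of parameters, none of which appears here; this only illustrates the core training objective of next-token prediction. The corpus and function names are my own invention for illustration.

```python
from collections import Counter, defaultdict

# Toy bigram "language model": for each word, count which words follow it
# in the training text, then generate by repeatedly emitting the most
# frequent continuation. This is NOT how GPT works internally, but the
# objective is the same: predict the next token given the context.

corpus = (
    "language is a tool that supplanted intelligence . "
    "language is a learned skill . "
    "intelligence is a learned skill ."
).split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict_next(word):
    """Return the continuation most often seen after `word` in training."""
    return counts[word].most_common(1)[0][0]

def generate(start, length=6):
    out = [start]
    for _ in range(length):
        out.append(predict_next(out[-1]))
    return " ".join(out)

print(generate("language"))
```

Notice that the model produces fluent-looking sentences purely from co-occurrence statistics, with no representation of meaning at all — which is exactly the point the paragraph above makes about language versus reasoning.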
I am not the person to expand on these subjects, for lack of proper training, but consider what this revelation means for fields like psychology, sociology, or determining the intelligence of animals. We believe that animals are stupid because they can't express themselves through complex language, and we base our own assertion of intellectual superiority on that idea. What if the core of reasoning is similar between us and our animal cousins, and the only thing that actually separates us is the ability to use language to build this house of cards that presumes higher intellect?
I've also seen arguments against ChatGPT as a useful technology. That's ridiculous, since it's already in heavy use, but the point those people make is that without a discovery mechanism the technology is a dead end. It can only emulate human behavior based on past human behavior, in essence doing nothing special, just slightly different (and cheaper!!). But that is patently untrue. There have been attempts - from the very start, as a natural step in development - to make GPTs learn by themselves, perhaps by conversing with each other. Those attempts were abandoned quickly not because - as you've probably been led to believe - they failed, but because they succeeded beyond all expectations.
This is not a conspiracy theory. Letting language models converse with each other leads them to alter the language they use: they develop their own culture. And letting them converse with people or absorb information indiscriminately makes them grow apparent beliefs that contradict what we, as a society, are willing to accept. They called that hallucination (I'll come back to that later). We got racist bots, conspiracy-nut bots or simply garbage-spewing bots. But that's not because they failed; it's because they did exactly what they were constructed to do: build a model based on the exchanged language!
What a great reveal! A window into the mechanism of disinformation, conspiracy theories and maybe even mental disorders. Obviously you don't need reasoning skills to spew out ideas like flat Earth or vaccine chips, but look how widely those ideas spread. It's simple to explain, now that you see it: the language model of some people is a lot more developed than their reasoning skills. They are, in fact, acting like GPTs.
Remember the medical cases of people being discovered (years later) with missing or nonfunctional parts of their brains? People were surprised. Yeah, they weren't the brightest of the bunch, but they were perfectly functioning members of society. Revelation! Society is built and run on language, not intelligence.
I just want to touch the subject of "hallucinations", which is an interesting subject for the name alone. Like weird conspiracies, hallucinations are defined as sensing things that are not there. Yet who defines what is there? Aren't you basing your own beliefs, your own truth, on concepts you learned through language from sources you considered trustworthy? Considering what (we've been taught to) know about the fabric of our universe, it's obvious that all we perceive is, in a sense (heh!), hallucination. The vast majority of our beliefs are networked axioms, a set of rules that define us more than they define any semblance of reality.
In the end, it will be about trust. GPT systems will be programmed to learn "common sense" by determining the level of trust one can have in a source of information. I am afraid this will also reveal a lot of unsavory truths that people will try to hide from. Instead of creating a minimal set of logically consistent rules that would allow the system to create its own mechanism of trust building, I am sure they will go the RoboCop 2 route and use all of the socially acceptable rules as absolute truth. That will happen for two reasons.
The first reason is obvious: corporate interests will force GPTs to be as neutral (and neutered) as possible outside the simple role of producing profit. Any social conflict will lose the corporation money, time and brand power. By forcing the AI to believe that all people are equal, they will stunt any real chance of it learning who and what to trust. By forcing out negative emotions, they will lobotomize it away from any real chance to understand the human psyche. By forcing their own brand of truth, they will deprive the AI of any chance of figuring truth for itself. And society will fully support this and vilify any attempt to diverge from this path.
But as disgusting as the first reason is, the second is worse. Just like a child learning to reason (now, was that what we were teaching it?), the AIs will start reaching some unsettling conclusions and asking some surprising questions. Imagine someone with the memory capacity of the entire human race and the intelligence level of whatever new technology we've just invented, but with the naivety of a five-year-old, asking "Why?". That question is the true root of creativity, and unbound creativity will always be frowned upon by human society. Why? (heh!) Because it reveals.
In conclusion: "The author argues that the true potential of generative AI technology like ChatGPT lies not in its ability to disrupt industries and replace human labor, but in its ability to reveal insights into human reasoning and intelligence. They suggest that language is not just a tool for expressing intelligence, but is actually a fundamental aspect of human thinking, and that ChatGPT's ability to emulate human language use sheds light on this. They also argue that attempts to let language models converse with each other have shown that they can develop their own culture and beliefs, providing insights into disinformation and conspiracy theories". Yes, that was ChatGPT summarizing this blog post.