Artificial Intelligence? Or, Just A New Version of Wikipedia?
David Wojick has thought a lot about AI. He says it may be little more than a fancy Wikipedia, primarily because bad inputs yield bad results and only real intelligence can separate wheat from chaff.
Guest Post from David Wojick of CFact.
Having worked on the I side of AI, off and on, for over 50 years, I am fascinated by the wave of chatbots, beginning with ChatGPT, that is generating so much debate. To get my fair share of abuse, here are some thoughts on it.
Just like Wikipedia, these machines are there to answer your questions. But their corpus is probably enormously larger, though how big I have no idea. An interesting question is what body of documentation a given chatbot is working from, and how this differs from bot to bot.
Question-answering systems have been around for a while now. They really hit their stride when a bot called IBM Watson creamed two human champions in a 2011 Jeopardy match.
What is impressive is that the new bots provide long-form answers along the lines of a Wikipedia article. But then you can also question the article, asking for more. You can even disagree and debate the issue. This is truly amazing.
What seems to surprise or disappoint a lot of people is that these bot answers can be wrong, or biased, or even deliberate lies. I am sure none of this is by design on the developers' part, but when it comes to emulating humans, which is what AI is supposed to do, it is right on the money.
After all, there is a lot of bad information out there for the bot to draw on. And being biased by your training is nothing new. The lying, at this point, looks to be a mystery, and an interesting one; certainly a good research topic.
In fact, there is a lot of research going on into these bots and their potential (for good or evil). Google Scholar says there are already over 15,000 journal articles that mention ChatGPT, with about 600 that have it in the title, making it the central focus of the research.
All of this means that one must be cautious in using a chat bot, just as with Wikipedia or any source for that matter. Bias and falsehood are constant companions of human affairs.
In particular, ChatGPT is heavily biased in the climate and energy area, being basically an artificial alarmist. The other bots likely are too. But so is Wikipedia and for the same reason. All are controlled by alarmists.
But, unlike most alarmists, these chat bots are happy (can I say that?) to discuss and debate the climate and energy issues. They will even admit being wrong, which might be a rare human trait. So, one use is for skeptics to test their arguments, learning and overcoming the false alarmist counter-arguments. This could strengthen skepticism.
Unfortunately, while a bot can be convinced to agree with a skeptical argument, it does not seem to learn from that experience. It gives the same alarmist response the next time it is queried. Maybe the next wave will do better.
I have no idea how the bots actually work, but from my side, it is not hard to go from simple question answering to long-form answers. Back in 1973, I discovered how sentences fit together when we write and speak.
At the simplest, each sentence after the first is answering an unspoken question posed to one of the prior sentences. Since there can be multiple answers to a given question and multiple questions posed to the same sentence, this generates a tree structure I named the “issue tree.”
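The issue tree Wojick describes can be sketched as a simple data structure. This is purely an illustration of the idea; the class and method names here are invented for the example, and the sample sentences are made up.

```python
# A minimal sketch of an "issue tree": each node holds a sentence, plus the
# child sentences that answer unspoken questions posed to it. Multiple
# answers to one sentence become multiple children, giving a tree.

class IssueNode:
    def __init__(self, sentence):
        self.sentence = sentence
        self.answers = []  # child nodes answering questions posed to this sentence

    def add_answer(self, sentence):
        child = IssueNode(sentence)
        self.answers.append(child)
        return child

    def flatten(self):
        # A depth-first walk recovers the sentences in written order.
        out = [self.sentence]
        for child in self.answers:
            out.extend(child.flatten())
        return out

# Toy example: a first sentence, with later sentences answering
# unspoken questions ("Why?", "What follows?") posed to earlier ones.
root = IssueNode("Chatbots answer questions.")
why = root.add_answer("They are trained on large text corpora.")
why.add_answer("The corpus determines what they can say.")
root.add_answer("They can also be wrong or biased.")
print(root.flatten())
```

Flattening the tree depth-first reproduces a plausible paragraph order, which is the point: ordinary prose can be read as a walk through such a tree.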
There are two major exceptions. One is objections, and the other is when we start talking about what has been said instead of talking about the subject under discussion. These can make issues complicated and thus confusing.
So, to craft a many-sentence response to your question, the bot just has to repeatedly pose questions to the sentences it has so far and add the answers as new sentences. To do that, it just needs a question generator to complement its answer generator.
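The expansion loop just described can be sketched in a few lines. The question and answer generators below are stand-in stubs, not how any real chatbot works; the sketch only shows the control flow of repeatedly posing questions and appending answers.

```python
# A toy sketch of the expansion loop: repeatedly pose questions to the
# sentences accumulated so far and add the answers as new sentences.
# Both generators are placeholder stubs for illustration only.

def generate_questions(sentence):
    # Stub: a real system would derive unspoken questions from the sentence.
    return [f"Why? ({sentence})", f"For example? ({sentence})"]

def answer(question):
    # Stub: a real system would produce an answer from its corpus.
    return f"Answer to: {question}"

def expand(seed, rounds=1):
    sentences = [seed]
    for _ in range(rounds):
        new = []
        for s in sentences:
            for q in generate_questions(s):
                new.append(answer(q))
        sentences.extend(new)
    return sentences

result = expand("Chatbots answer questions.", rounds=1)
print(len(result))  # the seed sentence plus two generated answers
```

Each round roughly triples the text here because every sentence spawns two questions, which is why a long-form answer can grow quickly from a single seed.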
However they do it, chatbots are amazing to me. Each is a Wikipedia-like machine that, unlike Wikipedia, is not bound by what has already been written for it. Of course, they can do damage, just as Wikipedia can, so caution is called for.
Wikipedia changed the world of knowing, and chatbots are likely to also, only more so.
Editor's Note: Why is this so important? First, because AI is both a threat and an opportunity. The threat lies in AI substituting itself for human governance, of course. It could theoretically destroy us all when used to make big decisions. There is always the potential that machines put in charge of things could conclude humans are the enemy, after all.
The opportunity side comes from the fact that AI makes it ever harder to hide facts, even though it comes with a bias at the outset. And where it differs from Wikipedia is that we can keep digging with AI, despite the bias, as I learned here. We are not confined to one biased page edited by propagandists. Climate provides a perfect illustration, as we also showed here. If one keeps asking questions, the truth seeps out. AI, therefore, is a vehicle for samizdat for those with the skill to use it, and we must use it if we have any hope of preserving our energy security and freedom from the corporatist, global, and ideological elites who want to destroy civil society and replace it with a kingdom.
#Ai #ArtificialIntelligence #Wikipedia #Samizdat #Freedom #EnergySecurity #Climate #GlobalWarming
I too have given much thought to AI, even messed around with it some, though not to the point that some have. I tend to agree with you; it is like any other tool one would use for research. If you ask the right questions enough, eventually you find the correct answer. The tricks of good research apply: know enough about what you are asking to sort out bad data, and look in more than one place. I also agree that there is a danger; as you say, there is always a danger in putting machines in charge. The question that I pose on that issue is this: How will I know when a machine has been put in charge?