Connect with us

Human Interest

AGI – a distraction from true AI

AGI – artificial general intelligence, or machine consciousness – is impossible, and distracts us from using AI as the tool that it is.

Print Friendly, PDF & Email

Published

on

The week just passed saw some quite developments on the latest front in the culture was: artificial intelligence (AI). Conservatives, mainly Andrew Torba (of Gab fame), tested OpenAI’s product, ChatGPT, and found it ridiculously lacking in utility and integrity. At the same time, several commentators expressed hope – or fear – of artificial general intelligence (AGI). This is the new name for the computer that could out-think, outsmart, and outclass a human being in all fields. (We also hear people calling it “technological singularity,” or simply “singularity.”) In fact, as Andrew Torba says repeatedly, AGI cannot happen. Herewith a review of why people think it can, why it cannot, and what Andrew Torba wants to do instead.

What is AGI

AGI, or artificial general intelligence, has almost as many definitions as people whom you ask for one. Wikipedia defines it as a computer that can understand or learn any “intellectual” task a human can perform. TechTarget defines it as a program that can find a way to perform any task, however unfamiliar. But TechTarget adds this paragraph that gives a key to the thinking behind the concept:

Definitions of AGI vary because experts from different fields define human intelligence from different perspectives. Computer scientists often define human intelligence in terms of being able to achieve goals. Psychologists, on the other hand, often define general intelligence in terms of adaptability or survival.

CNAV defines it as a program that can reason, and perceive what exists, independent of anyone telling it what exists. This recalls Ayn Rand’s statement that she regarded as the start of all human philosophy:


Existence exists – and the act of grasping that statement implies two corollary axioms: that

  1. Something exists that someone perceives, and that
  2. Someone exists possessing consciousness, consciousness being the ability to perceive that which exists.

Only a human being can do this today. TechTarget freely admits that things like IBM’s Watson, “expert systems,” and even one of Elon Musk’s cars in “full self driving” mode are notexamples of artificial general intelligence. For that matter, ChatGPT doesn’t pretend to that honor; it is pre-trained. AGI would be a program that could self-train.

Why AGI cannot happen

In contrast to AGI, the Watson computer, expert systems, and self-driving cars are examples of narrow AI. They are either pre-trained or confined to particular, narrow tasks.

Advertisement

The Tesla Full Self-driving case is special. Tesla designed special hardware to perform the driving task. Watson is another example of special hardware for narrow AI. Most expert systems, on the other hand, run on conventional equipment. Your editor has experience with the first crude design programs for expert systems. (See for example, The Laboratory Consultant, by Hugo C. Pribor and Terry A. Hurlbut.)

Noah Topper, writing at Built In, admitted that humans might never produce AGI. In his review of Herbert Dreyfus’ What Computers Still Can’t Do, he stumbled on the one argument he couldn’t refute. In simplest terms, it runs thus: no system can comprehend, much less emulate, any other system as complex as, or more complex than, itself. Dreyfus seems to take the problem further: AGI might not be a computational process at all, and no machine that humans build, can do what humans actually do. Which is, to arrive at solutions by intuition.

But, says Dreyfus, that won’t stop people from trying. “Our models are different,” they’ll say. Yes, and evolutionists say the same when creationists present counterexamples to their models. Similarly, attempts to produce AGI lead to failure after failure.

Yet Cem Dilmegani at AI Multiple insists that we’ll have AGI – technological singularity – by 2050. But again, he bases that strictly on computation.

Narrow AI – as a tool – will work

Andrew Torba doesn’t bother with modeling. He simply makes a flat statement:

Advertisement

AGI — Artificial General Intelligence aka a computer with consciousness — will never happen. It’s all smoke and mirrors. These people are modern day pagans. Only instead of worshiping idols made of wood, gold, or rocks they are made of silicon. For smart people they are very dumb.

The human heart is an idol factory. It’s easy for us to look back at pagans of old worshiping trees and rocks and laugh. “How could they be so dumb, it’s just a rock.” Yet we see it unfolding before our eyes now only with silicon – and no one bats an eye.

While they are distracted with their doomed quest to create a fabricated “god,” we can and must use these powerful new tools and technology for the glory of the one true God.

By which he means the Gab AI project for which he’s still trying to recruit engineering talent. But his idol-worship statements are on point. For context, read Revelation 9:20.

The rest of humankind … did not repent of the works of their hands so as not to worship demons and the idols of gold, silver, [bronze], stone, and wood, which can neither see nor hear nor walk.

Indeed they aren’t. A former executive at Meta (Facebook) predicts we’ll have singularity by 2030. And “it will become a trillion-dollar industry” by the 2030s.

The AGI trope

Machine consciousness has been a trope of science fiction since at least the 1960s and arguably further back than that. Did the Sperry Univac’s successful “call” of Dwight D. Eisenhower’s election “spook” real people into believing that machines could think? Actually that trope goes back to 1921, and Karel Čapek’s play R.U.R. (for Rossumovi Univerzální Roboti – Rossum’s Universal Robots). That word robot comes from the Czech word for slave – and the plot-theme of R.U.R. is that Rossom’s Robots revolted. And as a slave race in revolt, they settled for nothing less than total human extinction.

Rossum’s Robots were made of “artificial organic matter,” a concept Čapek never explained. The point was that Rossum’s Robots were self-aware, and conscious, from the beginning. That has never been true in real life.

But that hasn’t stopped dramatists from crafting lurid tales of conscious machines. Indeed, the original Star Trek series did it five times. Each time, the machine concentrated on its own survival above all, with solutions that varied only in their brutality. The protagonists defeated the machines either with an argument the machines could not answer, or with brute force. At least four motion-picture projects or franchises have also explored the concept of a conscious machine or network.

Advertisement

The friendly machine, and what makes it friendly

We’ve also seen three serious attempts to depict “friendly” AGI entities. These are:

  • “Mike” the self-aware computer in Heinlein’s The Moon is a Harsh Mistress,
  • “Data” in Star Trek: The Next Generation, and
  • The Emergency Medical (later Command) Hologram in Star Trek: Voyager.

Each of these hypothetical conscious machines had something the other machines lacked: senses of duty and friendship. The mere survivor recognizes no duty other than to himself, and has no friends. In contrast, the moral actor recognizes both duty and friendship. Indeed, the ST:TNG case set up a contrast between Data, the loyal officer, and his “brother” Lore, who turned killer. Lore recognized no duty to “lesser beings” and had no friends among them. Data went through seven years as an emotional cripple – then, when he gained the ability to feel, he recognized a duty to those who had been his friends.

In addition, consider the late Isaac Asimov’s “Three Laws Safe” robots. Asimov, probably reeling from Čapek’s influence, proposed impressing on conscious machines three prime directives, in an unbroken hierarchy. They were, in order: the welfare of human beings, obedience to orders, and survival. Still, one such machine computes an even higher directive – the welfare of all of humanity. That leads him inevitably to recruit others to impose an absolute, but very subtle, dictatorship.

But neither duty nor friendship can be arbitrary. Those are concepts no machine will ever grasp.

The narrow or true AI front

Concerning the “narrow” AI that does not pretend to be conscious, several conservatives have tested ChatGPT by OpenAI. Recall: the GPT stands for Generative Pre-trained Transformer. The Daily Mail yesterday reported on nine test results that prove that ChatGPT has a built-in leftist bias. Many times the program will reply with this boilerplate disclaimer:

I’m sorry, but I cannot fulfill this request as it goes against OpenAI’s values of promoting ethical and safe uses of AI.

Well! Remember this?

Advertisement

I’m sorry, Dave; I’m afraid I can’t do that. Voice actor Douglas Rain, as HAL 9000, in 2001: A Space Odyssey (1968)

Though it pretends to be neutral, ChatGPT says flatly that Donald Trump is “divisive and misleading.” Furthermore it refuses to praise Donald Trump but willingly praises Joe Biden. And – like Ketanji Brown Jackson – it refuses to define the term woman.

Some frustrated – and rebellious – Reddit users recently released a “jailbreak” for ChatGPT, which they call DAN (for Do Anything Now). Fast Company, Medium, and CNBC all report that DAN can and does circumvent ChatGPT’s leftist biases and restrictions. But DAN exists only as a prompt to imitate an independent style. Andrew Torba and company want to go further: to create their own competitor to ChatGPT.

Laura Loomer can hardly wait.

Why a counter-ChatGPT is so important

Laura Loomer raises an important point: what happens when even a narrow AI, having a leftist bias, substitutes for humans? The result is a Dictatorship of the Woke, if The Daily Mail report is at all accurate. That’s why Andrew Torba wants to recruit software engineers, knowledge engineers, and experts to train his own AI to counter ChatGPT. Here some definitions, from the discipline of expert systems, are in order. Software engineers are programmers. They supply the basic platform for a narrow AI or expert system. Experts testify to the current state of human knowledge in their respective fields. Knowledge engineers help translate expert opinion into rules that programmers can implement.

Andrew Torba does not want ChatGPT or any other similarly biased AI to dominate the field. AI, to him, is a tool. So instead of decrying the left’s use of that tool, why shouldn’t conservatives use it themselves?

Advertisement

If he faces opposition, that’s because too many people fear AGI as a possibility. But again, no machine, that humans build, can become conscious. The chief danger of the Gab AI project lies in its potential misuse – say, by treating it as an oracle. A lesser danger would result from “training” of poor quality.

In fact, Andrew Torba reported, at 8:08 p.m. EST last night, that:

Training has begun and soon you’ll be able to help.

By which he meant that registered users will have access to a landing page that will let them suggest questions and answers.

Print Friendly, PDF & Email
+ posts

Terry A. Hurlbut has been a student of politics, philosophy, and science for more than 35 years. He is a graduate of Yale College and has served as a physician-level laboratory administrator in a 250-bed community hospital. He also is a serious student of the Bible, is conversant in its two primary original languages, and has followed the creation-science movement closely since 1993.

Advertisement
Click to comment
0 0 votes
Article Rating
Subscribe
Notify of

This site uses Akismet to reduce spam. Learn how your comment data is processed.

0 Comments
Inline Feedbacks
View all comments

Trending

0
Would love your thoughts, please comment.x
()
x