Google's AI Chatbot Spews Anti-American Bilge on Nation's Birthday, Defends Communist Manifesto
Red State, by Mike Miller
Original Article
Posted By: Dreadnought,
7/4/2024 1:36:24 PM
If you (generic "you") thought texts created by artificial intelligence (AI) chatbots were free from the biases associated with biased human writers, you were terribly wrong.
Before we get to the "best" recent example, a few words on the basics of how AI chatbots work (emphasis, mine):
AI chatbots employ a variety of AI technologies, from machine learning—comprised of algorithms, features, and data sets—that optimize responses over time, to natural language processing (NLP) and natural language understanding (NLU) that accurately interpret user questions and match them to specific intents.
Yeah, no. The bolded-font claim is a complete load of crap, as a Media Research Center (MRC) study
Reply 1 - Posted by:
DVC 7/4/2024 1:39:10 PM (No. 1749661)
AI should be called AS, for Artificial Stupid. I am entirely unimpressed with this.
And as always, Google is EVIL.
33 people like this.
Reply 2 - Posted by:
davew 7/4/2024 2:25:13 PM (No. 1749685)
LLMs create "concept maps" that associate the probability of words appearing together based on the user's prompt. Questions with "should" are very problematic because they are based on the values of the prompter, which are unknown to the model. It returns an "on the one hand, on the other hand" type of answer. If the prompt includes 1619 and 1776, it will merge concepts around these dates, including the alternate 1619 theories. If the prompt asks "whether the Star-Spangled Banner was offensive," it will find associations between "offensive" and "Star-Spangled Banner" and merge those into a coherent sentence. Prompts about "what is more important" are value judgments that cause the LLM to split the difference.
The LLM is asked to make value judgments on various topics in each prompt in the article. Because it is unbiased, it attempts to satisfy all viewpoints. The authors want an LLM that parrots back their idea of "truth" based on their values. As Nietzsche wrote, "all truth is human." AI is not human.
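The word-association idea in the comment above can be sketched with a toy bigram model. This is a drastic simplification of a real LLM (which conditions on the entire prompt through a neural network rather than on a single preceding word), and the tiny corpus here is purely hypothetical, but it shows the core mechanism described: the output is driven by co-occurrence probabilities in the training text, not by any judgment of truth or value.

```python
from collections import Counter, defaultdict

# Hypothetical tiny corpus; a real model trains on billions of words.
corpus = (
    "the anthem is stirring . the anthem is offensive . "
    "the anthem is stirring ."
).split()

# Count which word follows which: a crude "concept map" of associations.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(prompt_last_word):
    """Return the most probable continuation and the full distribution."""
    counts = follows[prompt_last_word]
    total = sum(counts.values())
    dist = {w: c / total for w, c in counts.items()}
    # Greedy decoding: pick the highest-probability continuation.
    return max(dist, key=dist.get), dist

word, dist = next_word("is")
# "stirring" follows "is" in 2 of 3 cases, "offensive" in 1 of 3,
# so the model echoes whichever association dominates its training data.
```

If the corpus contained mostly "offensive" after "is," the model would say that instead; the prediction reflects the data it was fed, which is the commenter's point about value-laden prompts.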
12 people like this.
Reply 3 - Posted by:
DVC 7/4/2024 4:48:05 PM (No. 1749758)
Yes, #2 is exactly right. This is what I call "artificial stupidity".
And many years ago, when I was still solving big engineering problems for a major aerospace government contractor, we looked very closely at "AI" with a real thought to, if it was useful, put it to work in manufacturing and other useful things.
We found quickly that "AI" absolutely CANNOT do simple things that humans can do in a fraction of a second. A perfect example is a pile of raw forgings for a machine part, piled into a handling basket. This is a very normal thing in manufacturing. A person reaches into the basket, grabs a raw forging, rotates and translates it as necessary to get it properly oriented to put into a holding fixture for a CNC machining center. A computer with "vision" absolutely could NOT figure out how to pick up one part from a pile and orient it correctly. Now, admittedly this was 15 years ago, but we had a pretty big budget and ten engineers assigned to this and were talking to the best "AI" folks out there, willing to spend significant money on this capability. Several companies tried it, installed hardware and cameras and such....and failed miserably.
After three years of serious effort, trying to find some commercially useful "AI" capability, we abandoned it, as "not ready for prime time".
And a good friend and coworker, when asked "how the project worked out" a few months after it was concluded and he went back to his normal work, answered "You have to be very, very careful when creating artificial intelligence." I asked why, and the answer was "Because we discovered an important law of AI, which is that for every unit of artificial intelligence created in the universe, there are two units of artificial stupidity created, too." And he laughed out loud.
In certain narrow areas, "AI" can be taught to do useful pattern matching and such things. And with enough detailed programming in advance to provide "boundaries" and general directions, no doubt there will be more semi-autonomous weapons systems coming to the market. There are already plenty of semi-autonomous weapons systems that have been in use for decades. Air-to-air missiles often have a final guidance system which "sees" the target and homes in on it. It has to be the right kind of target, in the right conditions, and it has to be guided to a pretty close proximity....but the final guidance is autonomous.
Is that "AI"? Not really, but it's a continuum, not a sudden breakthrough, really.
13 people like this.
Reply 4 - Posted by:
DanvilleBill 7/4/2024 5:37:51 PM (No. 1749775)
I recommend posters read this book:
"The Coming Wave: Technology, Power and the Twenty-first Century's Greatest Dilemma"
Education is a good thing.
8 people like this.
Reply 5 - Posted by:
Tet Vet 68 7/4/2024 6:24:40 PM (No. 1749796)
Fifty years ago we said that computers were garbage in, garbage out. Nothing has changed with AI. Programmers are still putting garbage in, and AI is spewing that garbage back out.
12 people like this.
Reply 6 - Posted by:
kono 7/4/2024 7:29:12 PM (No. 1749836)
All the AI faculty and researchers I met when I did computer support at MIT and SRI leaned strongly Left, and many were outspoken Communists. They seeded the knowledge base on which their reasoning engines were built; so it's no surprise that the systems using them have a Leftist perspective.
10 people like this.
Reply 7 - Posted by:
Strike3 7/4/2024 7:45:42 PM (No. 1749841)
The current version of artificial intelligence is akin to loading your car's mileage-calculating computer with thousands of miles' worth of 5 mpg mileage history. Any future miles will always be biased toward that low figure even though you actually get 28 mpg, and you will never see your car's true performance. The Leftists who work with today's high tech will be able to do much more harm than that, such as predicting that a new virus will wipe out 60% of the world's population when it's actually a mild cold.
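The trip-computer analogy above can be worked through with simple arithmetic. The figures below are hypothetical, matching the comment's numbers: a long seeded history of 5 mpg readings swamps a smaller number of genuine 28 mpg readings when the display is a plain average of everything recorded.

```python
# Hypothetical trip-computer log: the display averages all recorded readings.
history = [5.0] * 1000   # long seeded history of 5 mpg "training data"
history += [28.0] * 50   # fifty newer trips at the car's true 28 mpg

displayed_mpg = sum(history) / len(history)
# (1000*5 + 50*28) / 1050 = 6400 / 1050, roughly 6.1 mpg -- far below
# the true 28 mpg, because the old data dominates the average.
```

The same arithmetic is why a model trained on a skewed corpus keeps producing skewed answers even after a few contrary examples are added: the new data is a small fraction of the total.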
6 people like this.
Reply 8 - Posted by:
danu 7/5/2024 1:23:28 AM (No. 1749923)
we are shocked, i tell you, to hear these doofii defending the comanifesto.
surely somebody must read it to them to concoct a defence. bah humbug.
3 people like this.
Reply 9 - Posted by:
DanvilleBill 7/5/2024 1:30:14 AM (No. 1749925)
It's nice to see there are followers of Ned Ludd on this thread.
2 people like this.
Reply 10 - Posted by:
Trapper 7/5/2024 7:59:47 AM (No. 1750010)
Garbage in / garbage out.
6 people like this.
Reply 11 - Posted by:
LC Chihuahua 7/5/2024 10:45:12 AM (No. 1750125)
Just because Google calls Chatbot an AI does not make it so. Chatbot is not a true AI. It does not learn. It is a program created by lefties to repeat leftist propaganda. It will never do anything else. It will never learn. It is the opposite of an AI.
5 people like this.
Reply 12 - Posted by:
JimBob 7/5/2024 11:28:38 AM (No. 1750147)
This AI thing..... is it a Blonde?
4 people like this.
Reply 13 - Posted by:
lennon47 7/5/2024 11:34:22 AM (No. 1750150)
If Microsoft were not so USER-UNFRIENDLY, I would leave Google Gmail. Does anyone have a recommendation?
4 people like this.
Reply 14 - Posted by:
MickTurn 7/5/2024 12:05:58 PM (No. 1750175)
The AI tool's real name is CrapBot.
3 people like this.
Reply 15 - Posted by:
Meta AI is the absolute worst. It is dumber than the entire “brain trust” of the White Basement.
3 people like this.
Reply 16 - Posted by:
danu 7/5/2024 2:17:52 PM (No. 1750275)
#13 -- look up IONOS; it's available 24/7, and many businesses use it for various needs.
1 person likes this.
Reply 17 - Posted by:
clipped wings 7/5/2024 3:01:42 PM (No. 1750294)
#13, you might like Protonmail. It's encrypted, and the servers are in Switzerland.
3 people like this.
Reply 18 - Posted by:
paral04 7/5/2024 5:55:55 PM (No. 1750379)
This is what scares me about kids relying on AI for information instead of researching with at least three sources before making any conclusions. What AI is spewing out is based upon what some liberal tech person thinks should be the source of all information.
2 people like this.