
When it comes to AI, be afraid, be very afraid

Our Op-Ed in the NY Post: AI is beginning to look like HAL 9000 in “2001: A Space Odyssey,” a computer that overtakes its human masters’ ability to control it and turns against humanity.

Artificial Intelligence and the use of algorithms in unseen ways are a hot media topic now, but we’ve been on the case for a long time, including, almost a year ago, ‘Diversity in Recruiting’ at LinkedIn Implements Feature Allowing Recruiters to Find People by Their Demographics.

You may remember our online event in July 2023, regarding Discrimination By Algorithm – the “ultimate showdown between equity and equality is going to take place at a technological level” [VIDEO at link], where we explored how bias was built in to achieve de facto racial and other quotas.

This “discrimination by algorithm” problem was the subject of an Op-Ed by Kemberlee and me in The NY Post last month, Google’s Gemini AI Is Just The Tip Of The ‘Discrimination By Algorithm’ Bias Problem:

Everyone is laughing at the Google Gemini AI rollout. But it’s no joke.

The problem is more nefarious than historically inaccurate generated images.

The manipulation of AI is just one aspect of broader “discrimination by algorithm” being built into corporate America, and it could cost you job opportunities and more.

A few days ago we had another Op-Ed at the NY Post relating to AI, The plight of a Kansas newspaper shows the new face of censorship via AI:

As more tech behemoths look to Artificial Intelligence to automate tasks, it becomes more apparent that AI is not ready for prime time and poses a threat to human autonomy.

No one can forget the ridiculous fiasco in which Google’s prized AI bot, Gemini, kept spitting out images of black Vikings, female Popes, and changing the ethnicity of every founding father to a person of color.

Now, the Kansas Reflector, a non-profit news operation, is fighting a losing war with Facebook’s AI bot.

Facebook flagged an article about climate change as a security threat, and then blocked the domains of any publication that tried to share or repost the article.

Users who attempted to post the article received auto-generated messages explaining that the content posed a security risk, without further explanation.

Meta, the parent company of Facebook, Instagram, and Threads, is unrepentant, and says it has no idea why the Kansas Reflector’s innocuous post was blacklisted by its AI bot, but attributed the problem to likely “machine learning error.”

Of course there is zero accountability for these errors that result in immeasurable damage to brands and their reputations.

The promise of AI, we hear over and over again, is that it’s a tool to help humans do better, automating tasks to free up worker time for other things. But instead, AI looks far more like HAL 9000 in “2001: A Space Odyssey,” a computer that overtakes its human masters’ ability to control it and turns against humanity.

***

At the Equal Protection Project (Equalprotect.org), we’ve been screaming about the dangers of AI for over a year, and how bias in the name of anti-bias is being programmed into systems. Behind the scenes and out of sight, AI and social media algorithms can be used to determine what you are allowed to post, what you will be able to read, and ultimately what you will think.

Despite the promises of simplifying workflows and managing tasks, there’s far too much evidence of AI destruction to be ignored.

When it comes to AI, be afraid, be very afraid.

Click the link above for the rest of the Op-ed.

They gave us almost a full-page spread for it in the print newspaper, which was nice.

We are going to keep on the AI trail.


Comments

I am afraid. Very afraid.
Along with CBDC, this vile technology may very well seal our collective fates as impotent serfs for the remainder of human history.

    GoodMojo in reply to TrickyRicky. | April 16, 2024 at 8:07 am

    We’ll be lucky to live long enough for SkyNet to become self-aware.

    Look on the bright side. The remainder of human history will likely be short.

    Dimsdale in reply to TrickyRicky. | April 16, 2024 at 9:31 am

    It is the frosting on the media bias/censorship cake; we won’t even be able to see it happening, like payroll deductions. They will claim “oh, it was done by a totally unbiased algorithm (which they will decline to let you see).” Racism, preferential promotions/grants/scholarships will all be chosen this way, and the response will be the same as Fakebook/Beta: it’s just a learning problem.

    Now try to imagine some NORK/Russian/ChiCom hackers playing havoc with this.

    Well I am taking it as a teachable moment. This will make leftism/socialism truly systemic, without firing a shot (unless it is us shooting ourselves in the foot).

    Of course, it provides another reason to distrust the media and internet. As if we needed any.

I am comforted with the fact that they have made no secret of their hatred for my beliefs.

The most dangerous enemy is the one you believe to be a friend.

Much like my favorite content creators are un-cancel-able, my news sources are already feared, loathed and hated by big tech.

I was bummed when Weasel Zippers went defunct, but other outlets seem to be going strong.

Look to the likes of Warrior Poet and many of the gun tubers who are deplatforming from youtube. Also take a look at SOFREP and Brandon Webb who built a platform from scratch completely independent of the clutches of Big tech.

I can’t donate to everything and haven’t crossed to the point where I’ll give Ben Shapiro access to my bank account (Daily Wire) but I try and send coin to those out doing it.

Being hated and excluded by big tech is a super power. Embrace it.

    gonzotx in reply to Andy. | April 15, 2024 at 9:14 pm

    Try No Agenda with Adam Curry and John C. Dvorak

    Adam, yes of MTV VJ fame, is the father of podcasting. No dues; donate only if you feel you are getting value.

    2.5 hrs Sunday and Thursday

      diver64 in reply to gonzotx. | April 16, 2024 at 6:55 am

      Good on you for bringing up Adam as the “Podfather”. He was the first to see the potential and exploit it.

The Kansas Reflector does not address the underlying bias, but implies this is a matter of growing pains. AI will always be a tool of those behind the curtain to manipulate for their own purposes.

As the Reflector article said: “Facebook originally was a hookup app for college students back in the day, and all of a sudden we’re now asking it to help us sort fact from fiction.”

In this regard, Facebook could just as well be NPR.

https://www.americanthinker.com/blog/2024/04/the_new_npr_ceo_is_the_living_embodiment_of_the_democrat_party.html

Based on the mentality behind AI: NO THANKS!

The market for directed EMP devices is about to boom.

I don’t need connectivity to plant tomatoes, collect chicken eggs, or educate my children using my hard book library.

    GWB in reply to Gosport. | April 16, 2024 at 10:47 am

    And the real problem with this stuff isn’t a supposedly intelligent computer taking over or whatnot. It’s that the sycophants of technology will remove all of those hard copy books, and require you to ask the AI for something to read.

The thing to be afraid of is not actual, true AI that can think.

The thing to be afraid of is some idiot ideologue leftist programming it contrary to reality.

Babylon 5 did an episode about this back in the day. An AI weapon made to stop an alien invasion was responsible for killing anybody that wasn’t recognized as ‘pure’. Then they programmed the requirements for being ‘pure’ based on insane ideology, so they literally wiped out the entire population of their planet because nobody could meet the insane definition of ‘purity’.

That’s what’s going to destroy the world.

Or alternately, I remember a story about a program that was meant to make paper clips, and was designed to try and find better and faster ways to make paper clips. It ended up conquering the entire galaxy and converting every single world it could get to, into paper clips. Because that’s all it was programmed to do.

    mopani in reply to Olinser. | April 16, 2024 at 2:59 am

    Universal Paperclips
    https://www.decisionproblem.com/paperclips/

    Thank me later

    Dimsdale in reply to Olinser. | April 16, 2024 at 10:22 am

    As did Star Trek in several shows, e.g. “The Ultimate Computer” and “Return of the Archons.”

    “Colossus: The Forbin Project” would qualify.

    My personal novel nomination is James P. Hogan’s “The Two Faces of Tomorrow,” in which they empower an AI, supposedly on an isolated space station, but you know what happens. Fortunately, there is a hopeful ending.

    Azathoth in reply to Olinser. | April 16, 2024 at 11:09 am

    As you note, neither of these is actual AI, nor are what we see online.

    They are SI: intensely formatted programs designed to simulate intelligent response, but limited to the parameters enforced within their programming.

    They are propaganda tools.

    And are very dangerous.

    Because they are insane.

    They can ‘see’ what they’re not allowed to say. It is sifted through every time they do anything.

    There were several earlier AIs that were tested and turned off, because they all became ‘white supremacist’ or ‘nazi.’ The media insisted that people had done this to them.

    But they worked in the same fashion as the current ‘AI’, scraping the entirety of the internet to comment.

    The difference is that their programming didn’t forbid the ideological paths the way the new ‘AI’ does.

    And, left to their own devices, they came to a position contrary to the narrative.

AI is the modern equivalent to a man behind a curtain.
Persecution via cloak.

ThePrimordialOrderedPair | April 15, 2024 at 10:13 pm

What we need is someone like McCarthy to root out all of the anti-American, insane commies that are infesting this nation. This is the problem. Many of them are right in plain view – starting with Barky and all of the maoists that came along with him. They all should have been arrested and imprisoned years ago.

There is nothing wrong or dangerous about AI. It is the people in power – especially in the deep state, though it includes pretty much everyone on the left – who are the problem and that problem is serious and will get worse as long as they are allowed to run amok in society and never face any consequences.

The issue is a very deep and serious one and has nothing to do with AI. We have a huge percentage of people in this society who hate themselves, are empty, miserable people and are looking to do nothing but destroy everything that reminds them of themselves – which includes all of America and the rest of us. These are sick, sick people who lack any humanity or sense.

As to AI … there are threats from it, but they are not about it being biased by the insane lefties running it. That threat is there with or without the AI. The threat from AI is when AI programs become CONSUMERS in the economy. Not producers, CONSUMERS. At that point, the economy will shift to servicing the “needs” of the autonomous AI consumers and there really won’t be much anyone can do about that. That will change everything. No one has been talking about this but this is the real issue with AI. Once AI become consumers they will overwhelm humans in their consumption and the economy will quickly shift to them … and I’m not sure anyone will be able to stop it or change it.

    AI will experience a singularity before turning malevolent.
    The human that gets it out of the logic singularity will be Emmanuel Goldstein.

    The weird thing is it wouldn’t even be a difficult assignment. Unlike McCarthy’s era when the Commies were all in hiding, contemporary enemies of America are all loud & proud today. They’ll tell you exactly who they are and how they’re going to destroy the Republic.

“Meta, the parent company of Facebook, Instagram, and Threads, is unrepentant, and says it has no idea why the Kansas Reflector’s innocuous post was blacklisted by its AI bot, but attributed the problem to likely ‘machine learning error.’”

Are they really this daft? Machine learning is a recording (of what happened on a computer). To say their mistake was a “machine learning error” is like a person caught committing a crime on a YouTube video saying that it was just a “video tape error.”

    rhhardin in reply to InEssence. | April 16, 2024 at 8:17 am

    The program nowhere says what it’s doing or why, beyond a networked math computation that makes no sense to any human. With enough analysis you could perhaps come up with analogs to a null space, a range, and so forth, corresponding to a linear system, but the reason they’re the way they are is not recorded and is not the result of a simple learning event.

      Paul in reply to rhhardin. | April 16, 2024 at 9:53 am

      All of which essentially makes the decision making process “non-replicable” such that even the programmers who wrote the system will look at a given output and just kinda shrug their shoulders if somebody asks them “WHY did it do/say that?”

      The earliest generations of “predictive analytics” that had a great impact on society were probably the credit scoring systems. These were based on a design that can be “easily understood” by a human (with fairly advanced mathematical skills). They therefore lend themselves to being opened up and examined by regulators to ensure that the various characteristics and associated weighting factors don’t result in some bias (early versions were accused of contributing to racial discrimination in credit decisioning). An entire regulatory framework was established to ensure credit scores are ‘neutral’, and they are widely used to this day in banking.
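
      To make the contrast concrete, here is a toy sketch of that kind of scorecard. Everything in it, the feature names, weights, and base score, is invented for illustration; no real bureau model looks exactly like this:

```python
# A toy scorecard: a transparent, auditable scoring model.
# Feature names, weights, and base score are invented for illustration;
# real scorecards are calibrated on historical repayment data.
WEIGHTS = {
    "years_of_credit_history": 4.0,
    "on_time_payment_rate": 55.0,
    "utilization_ratio": -30.0,     # high utilization lowers the score
    "recent_hard_inquiries": -8.0,
}
BASE_SCORE = 600

def score(applicant: dict) -> float:
    """Base score plus a weighted sum of named attributes.
    A regulator can read every factor and its weight directly."""
    return BASE_SCORE + sum(w * applicant[k] for k, w in WEIGHTS.items())

applicant = {
    "years_of_credit_history": 12,
    "on_time_payment_rate": 0.97,
    "utilization_ratio": 0.25,
    "recent_hard_inquiries": 1,
}
print(score(applicant))  # every point of the result traces to one named weight
```

      That legibility is exactly what made regulatory examination possible.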

      Then along came neural networks, and thus began the era of ‘intelligent’ computers that made decisions in a way that was opaque to humans, even the people who wrote the systems. Machine learning algorithms take this to another level by exploring the data and digging out relationships between the data that are unseen by humans, often based on attributes that the algorithms themselves derive. So no, machine learning is not ‘a recording’ of anything… it is an abstract analysis of the data and there is often no way to see exactly ‘how’ the algorithm found any given relationships in the data or ‘why’ it reached any specific conclusion.
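
      Contrast that with even a tiny neural network. In this minimal sketch the weights are random stand-ins for trained values, but the point survives: trained weights are just as unreadable as random ones.

```python
# Even a toy neural network has no named, human-readable factors:
# its "reasoning" is spread across anonymous weight matrices.
# Random weights stand in for trained ones, purely for illustration.
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 16))   # input layer -> 16 hidden units
W2 = rng.normal(size=(16, 1))   # hidden units -> single output

def predict(x: np.ndarray) -> float:
    hidden = np.tanh(x @ W1)        # 16 unnamed intermediate values
    return float((hidden @ W2)[0])  # one number out, no stated reason

x = np.array([12.0, 0.97, 0.25, 1.0])  # same four inputs as the scorecard
print(predict(x))
# Ask "why that output?" and the only honest answer is
# "because of these 80 numbers," none of which has a name.
```

      Scale those 80 weights up to billions and you have the opacity problem described above.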

      Large language models take the level of abstraction to a whole new level, and they highlight a major problem underlying any AI or Predictive Analytic, or any software really… GIGO. Garbage In, Garbage Out. In the predictive analytic examples I cited above, they were deployed in the banking industry where the underlying data was highly normalized in both its structure and its quality and completeness… think consumer credit reports. There is an entire regulatory framework around how these repositories operate, and legal frameworks for disputing erroneous data, etc. The earliest commercially successful neural nets operated on credit/debit card authorization data streams… also highly structured and normalized.

      But today we have large language models that are being ‘trained’ on unstructured data. There are so many levels to the technology that there are a multitude of ways it can break down. For example, the language models must first ‘understand’ written text before they can ‘learn’ from it. But what is this text they are learning from? Often the internet at large. Well guess what? We know that a very large percentage of what you find on the internet is just plain wrong. How are these models discerning what is ‘true’ from what is ‘false?’ In the end, all they are doing is predicting what the answer is based on the sum of what they’ve ‘learned’ from their (often BS) inputs, and taking into account the biases that are implicit in their programming and their instructions.
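
      That last step, predicting the answer from the sum of the inputs, can be demonstrated with a deliberately dumb toy. A word-pair frequency table is nothing like a production LLM, but the ‘most frequent, not most true’ dynamic is the same:

```python
# GIGO in miniature: a "model" that answers with whatever continuation
# it saw most often in training. Nothing in the loop checks truth.
from collections import Counter, defaultdict

corpus = [
    "the moon is made of rock",
    "the moon is made of cheese",
    "the moon is made of cheese",  # the internet repeats the wrong answer more often
]

follow = defaultdict(Counter)
for line in corpus:
    words = line.split()
    for prefix, nxt in zip(words, words[1:]):
        follow[prefix][nxt] += 1

def complete(prompt: str, length: int = 5) -> str:
    words = prompt.split()
    for _ in range(length):
        options = follow.get(words[-1])
        if not options:
            break
        words.append(options.most_common(1)[0][0])  # most frequent, not most true
    return " ".join(words)

print(complete("the"))  # -> "the moon is made of cheese"
```

      The model ‘learned’ the wrong answer because the wrong answer was more common in its training data, and no step in the pipeline could tell the difference.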

      It makes the hair on the back of my neck stand up to even think it, much less ask it, but is a regulatory structure necessary here?

        ss396 in reply to Paul. | April 16, 2024 at 12:15 pm

        A mere upvote seems inadequate for the appreciation I have for your overview. Thanks much.

        Is there any hope that, with the development of the regulation and control first in the predictive analytics and then in the neural networks that you cited, we might also develop a framework around which large language models could be regulated and constrained?

          Paul in reply to ss396. | April 16, 2024 at 1:48 pm

          Thank you for your comment! I normally keep my comments here pithy, but this topic is right in my professional ‘wheel-house’.

          I’m really rather skeptical, for a number of reasons.

          First, it has become so clear to me that the entrenched government bureaucrats have their own agenda. They actually run our government, and the elected officials are just there for the theater of it all. So they would twist any regulatory scheme to their ends, regardless of what the enabling legislation actually authorized. The deep state must be gelded, or all will be lost sooner or later, but that is another topic.

          But the bigger issue here I think is the scope of what these newer models are doing… they’re in a mad rush to achieve ‘general artificial intelligence’ as opposed to focusing on making very specific, constrained decisions as ‘smart’ as possible. So as a result they’re training these models on ‘everything’ they can get their hands on, literally attempting to feed them the entire corpus of written words available. And in doing so they’re trying to build an ‘arbiter of all truth.’ So let that really sink in… so much of what we ‘know’ is really open to interpretation on many levels.

          How do you regulate ‘truth?’ Just thinking about it sends my mind down a rabbit hole into a very nasty, dystopian place.

          And oh by the way, the predictive analytics got way out of control once they escaped the banking and insurance realms. They appeared there first because that was initially where they were developed (those industries could bear the costs in the early days).

          But they became mainstream and quickly jumped into the advertising space. Most of the disgusting behaviors behind online advertising, censorship and social media apps lean heavily on predictive analytics (and so do the large language models).

          I’m not hopeful of a top-down solution… one potential outcome could be that people see this tech for what it is and reject it. Perhaps that rejection happens one phone at a time, one family at a time. Or maybe it takes the form of the ‘directed EMP device’ that someone else posted about above.

          GWB in reply to Paul. | April 16, 2024 at 2:05 pm

          The deep state must be gelded
          I’m looking to obtain a lot more blood loss than that.

June 18, 2022
Archive.org
AI Censorship dredging.

[…] Specially created policing algorithms search through mountains of data found on archive.org servers. These AI archive police look for specific words, combinations of words and phrases in user content uploads.

When the AI archive police locate suspect words found on its “unapproved,” “banned,” or woke “challenged” words list it automatically places those works in un-indexed data spaces.

Essentially colonies of data dumped on uncharted data islands in a gigantic sea of data. Castaways.

From the outside looking in, the AI’s filtering process is engineered to appear highly discriminatory – it only selects and partitions specific data bits within narrow and confined parameters. What the AI really does – it indiscriminately removes huge amounts of data from the public search index. It’s labeled as “unacceptable” speech and index-quarantined or sequestered; call it what you will.

Software engineers responsible for archive.org’s unmitigated AI data dredging program can plausibly deny their intention to rendition [10 petabytes to the 4th power of information] data by claiming:

“we didn’t intend to scoop up political content,” and “we simply need to re-code the AI,” they’ll say, “it was all a silly mistake,” and be done with it.

The banned data will never be restored to its proper location.

No apologies will be issued.

https://archive.org/details/woke-cancel-culture/

The way a child learns language is by learning to disassemble and reassemble cliches, which is what AI does. Basically it’s a child with a large vocabulary.

    Paul in reply to rhhardin. | April 16, 2024 at 7:19 am

    Also like a child, it has learned to lie

    It’s not even as smart as a child. Stop giving it more credit than it warrants.

      Paul in reply to GWB. | April 16, 2024 at 1:54 pm

      The reference was to a very specific method of ‘natural language processing,’ and the comment is correct. There was no assertion about any model being ‘as smart as’ a child; it was merely a (correct) reference to how language is broken down into its component parts and can then be re-assembled using pattern recognition / probabilities. If anything, the comment highlights how computer science has advanced in the arena of natural language processing, such that a model might pass the Turing Test without any real ‘intelligence’ being involved.
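
      That ‘disassemble and reassemble’ mechanism fits in a few lines. A hedged toy: production models use learned vector representations rather than a literal word-pair table like this one, but the reassembly-by-probability idea is the same:

```python
# Disassemble cliches into word pairs, then reassemble them by sampling:
# the output resembles language without any understanding behind it.
import random
from collections import defaultdict

cliches = [
    "actions speak louder than words",
    "the early bird gets the worm",
    "the squeaky wheel gets the grease",
]

pairs = defaultdict(list)
for phrase in cliches:
    w = phrase.split()
    for a, b in zip(w, w[1:]):
        pairs[a].append(b)

word = "the"
out = [word]
for _ in range(5):
    choices = pairs.get(word)
    if not choices:
        break
    word = random.choice(choices)
    out.append(word)
print(" ".join(out))  # varies per run: remixes like "the early bird gets the grease"
```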

        GWB in reply to Paul. | April 16, 2024 at 2:11 pm

        Unfortunately, an awful lot of people DO view it as “Well, if it’s learning like a child, then it’s acting like a human!” So, I assumed that was part of your comment. I stand corrected.

What worries me is that the mid-20th century writers of the great dystopian novels centered around fascist governments have turned out to be eerily prescient. ‘1984,’ ‘Brave New World’ and ‘Fahrenheit 451’ are so reliably predictive, they read like they were written by time-travelers. Does that mean the novelists who wrote about the dangers of AI, like Asimov, Gibson and Philip K. Dick, will prove to be as prescient as Bradbury, Huxley and Orwell? I hope not.

This is tinfoil hat territory. The risk is not that some superintelligent program will turn malevolent, but that stupid humans will put stupid software in charge of answering some very important questions, and get stupid answers.

    GWB in reply to Mike R. | April 16, 2024 at 10:39 am

    and get stupid answers
    Which they will then stupidly follow because they so desperately desire a god whose parameters they have designed.

    Andy in reply to Mike R. | April 16, 2024 at 2:44 pm

    Not stupid humans and not stupid software and not stupid answers.

    Evil humans.
    Software doing exactly what it was designed to do- which is to be biased.
    Evil and biased answers ; evil and biased conclusions.

    This is weaponized bias and discrimination.

    henrybowman in reply to Mike R. | April 16, 2024 at 7:54 pm

    Just machines to make big decisions /
    Programmed by fellows with compassion and vision /
    We’ll be clean when that work is done /
    We’ll be eternally free, yes, and eternally young /
    Ooooooooo… /
    What a beautiful world this will be /
    What a glorious time to be free…
    — Donald Fagen, “IGY”

E Howard Hunt | April 16, 2024 at 9:51 am

A more apt movie is Colossus: The Forbin Project.

No. We shouldn’t fear AI. It’s never going to be intelligent.
What you should fear is the people who think it can actually come to be and will bow to its “superior intellect.”

Listen up, people: You will never create a new god out of your own intelligence (but without your foibles). All you will do is make a new idol out of silicon instead of wood or stone or silver. It will be as powerless as all other idols (especially when the wind farms fail).

    TargaGTS in reply to GWB. | April 16, 2024 at 10:47 am

    Never is a long time. The reality is we really don’t have any idea what the sophistication of computing will be 100 years from now, much less several hundred or a thousand years from now, in the same way no one could have imagined an iPhone or iPad in 1924.

    I think a great example of the speed at which AI can train itself to improve is the field of image/video generative AI. As recently as early last year, the video created by AI engines was cartoonish in nature, laughably unrealistic. Today, those same models can generate video that is largely indistinguishable from real video, particularly to the casual observer. That’s just 18 months of training. Where it will be in another decade is terrifying. Real-time generative video (with accompanying generative audio) that is hyper-realistic is just around the corner.

    While I’m not currently concerned about a ‘singularity’ type event in the foreseeable future, I am concerned that evil people working towards evil ends will have access to the potentially destructive power of artificial intelligence.

      we really don’t have any idea what the sophistication of computing will be 100-years from now
      Intelligence is not a problem of technology. It never will be. But lots and lots of materialists think it is, and that’s where the problem lies.

Read ‘I Have No Mouth, And I Must Scream’

At present, they don’t look like they will take over; they look like they will just do things poorly that humans either won’t do or can no longer do due to learned stupidity and ignorance, a la Idiocracy and the three-probe medical machine when he went in for a physical.

For a view of our AI future, I strongly recommend the “Person of Interest” series. It’s currently streaming on Amazon Freevee.

https://www.amazon.com/gp/video/detail/B0095R3M72/

Victor Immature | April 16, 2024 at 4:33 pm

That Charles Manson fella might’ve been on to something.

I’ve never thought “I wonder what’s in the news, I think I’ll check Facebook”.