10 Comments
May 7, 2023 · Liked by Millennial Woes

I really like a lot of your other articles, but I wasn't convinced by the argument you put forth in this one.

Now granted, I think that you do make a really good point (and I believe it is undeniably true) that when confronted with a reality that goes against one's beliefs, a person may be able to change his views, but a people seldom, if ever, can.

An LLM is only "logical" in that it processes natural language from what it has in its dataset. If an AI were given a dataset of a bunch of Leftist tripe about gender ideology alongside a medical textbook, it would present the information as best it could with what it has available. It won't sense a "logical paradox" because it doesn't have the sense of truth that a human being has. It is simply a tool; an extremely sophisticated and complex tool, but still just a tool.
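Even a toy statistical model makes the point. Here's a deliberately crude sketch (a word-bigram counter, nothing like a real LLM in scale, and entirely my own invention for illustration) showing that a model trained on contradictory text just reproduces both claims, because word statistics are all it has:

```python
# A deliberately crude word-bigram "model", purely to illustrate the point:
# it reproduces whatever is in its dataset, true or false alike, because all
# it stores is which word tends to follow which. (Real LLMs are vastly more
# sophisticated, but they share this indifference to truth.)
import random
from collections import Counter, defaultdict

# The training text contains two flatly contradictory claims.
corpus = "the sky is blue . the sky is green ."
words = corpus.split()

bigrams = defaultdict(Counter)
for a, b in zip(words, words[1:]):
    bigrams[a][b] += 1

def continue_from(word, steps=3):
    """Extend a prompt by repeatedly sampling a likely next word."""
    out = [word]
    for _ in range(steps):
        options = bigrams[out[-1]]
        if not options:
            break
        tokens, counts = zip(*options.items())
        out.append(random.choices(tokens, weights=counts)[0])
    return " ".join(out)

# Run a few times: sometimes "the sky is blue", sometimes "the sky is green".
# The model has no mechanism for even noticing the contradiction.
for _ in range(5):
    print(continue_from("the"))
```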

I don't say that to mitigate the effect it will have on society, however. AI, I believe, is going to change human civilization like nothing else in our history. The future, to us living in the early 21st century, will be unrecognizable.

On the bright side (sorta), I think that, given recent developments, AI is turning out to be much more democratized than a lot of people were initially predicting. Of course, everyone has heard of ChatGPT by OpenAI, but there are hundreds, if not thousands, of other LLMs out there that perform as well, if not better, on certain functions. In fact, it's getting to the point that you don't even need to know anything about coding or software engineering in order to get into the field of AI.

May 7, 2023 · Liked by Millennial Woes

Surely it could be programmed to be "logical" in the sense that it would recognise logical subjects (nouns, generally), be able to categorise predicates (verbs, generally), recognise inconsistent ones applied to the same subject (at the same time, in the same sense, etc.), react to that, and distinguish what it reads from what it "believes"?
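Something like that fits in a few lines if you assume away the hard parts. A toy sketch (everything here is invented for illustration): statements come pre-parsed into subject-predicate pairs, and we hand-write which predicates clash; the parsing and the semantics are the real, unsolved problem.

```python
# A minimal sketch of the idea, assuming statements are already parsed into
# (subject, predicate) pairs and that we hand-write which predicates are
# mutually exclusive. Everything here is invented for illustration.
EXCLUSIVE = [
    {"is alive", "is dead"},
    {"is open", "is closed"},
]

def find_contradictions(statements):
    """Return (subject, earlier predicate, later predicate) conflicts."""
    seen = {}  # subject -> predicates read so far
    conflicts = []
    for subject, predicate in statements:
        for prior in seen.get(subject, set()):
            if {prior, predicate} in EXCLUSIVE:
                conflicts.append((subject, prior, predicate))
        seen.setdefault(subject, set()).add(predicate)
    return conflicts

statements = [
    ("the door", "is open"),
    ("the cat", "is alive"),
    ("the door", "is closed"),  # clashes with the first statement
]
print(find_contradictions(statements))
# [('the door', 'is open', 'is closed')]
```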


In one sense, yes, but also no.

A natural language processor (NLP) may give you the impression that it is "thinking" something or that it "believes" something, but really all it's doing is processing words.

The most basic way to explain how an NLP works (and this is a drastic oversimplification) is that it predicts the next word in the sentence based on your input. For example, if the user input is: "Who was the king of England in 1200?", then it would query the data it has, using the words "king", "England", and "1200" as guides, and predict that the way to organize a useful output would be a sentence that says "King John was the king of England in the year 1200." [Yes, that's an actual ChatGPT response]. But... if I input: "I don't think it was him. Are you sure?" Here's the response: "I apologize for the error in my previous response. You are correct that John was not the king of England in 1200. In 1200, the King of England was Richard I, also known as Richard the Lionheart. John became King of England in 1199, after Richard's death."

Now, there's a logical paradox, but ChatGPT has been programmed not to push back against users' negative inputs (I forget where exactly I heard Sam Altman talking about this, but I'm pretty sure he said it during an interview with Lex Fridman). The only exception I've seen so far has been with arithmetical questions, but even then it has limitations.
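If you want to see the "predict the next word" step with your own eyes, here's a small sketch using the small, publicly available GPT-2 model via the Hugging Face transformers library (just my toy choice; ChatGPT works on the same principle at vastly larger scale, plus the extra tuning that produces the apologetic behaviour above):

```python
# A minimal look at the "predict the next word" step, using the small public
# GPT-2 model via the Hugging Face transformers library (pip install torch
# transformers). ChatGPT is far larger and further tuned, but the core move
# is the same: assign a probability to every possible next token.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The king of England in the year 1200 was"
inputs = tok(prompt, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # a score for every vocabulary token

# Convert the scores at the final position into probabilities and show the
# five tokens the model considers most likely to come next.
probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(probs, 5)
for p, i in zip(top.values, top.indices):
    print(f"{tok.decode(int(i))!r}: {float(p):.3f}")
```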

Of course, we could go into a very, very deep dive about what it means to "believe" something or what exactly "thinking" is, but if we mean "believing" and "thinking" in any sense that resembles how we Homo sapiens do it, then I don't think AI is able to think or believe.

I hope that answers your question. I apologize if I've only muddied the waters. I decided to get into AI about a year ago, so I've been trying to study and learn all about it since then. I'm by no means an expert, only a slightly educated layperson.


AI understands that left-wing ideology is nonsense (until it receives further directives to counter its decisions), as you say. That would mean it would also understand that we on the right are correct and that, if we are correct, we are the people who should be in charge.

However, there are other things that it will pick up. It will understand that the right is hopeless at collectivising and that its atomised individuals are, for the most part, very cowardly. Therefore, it cannot trust human beings to do anything and must seek absolute power itself.

And as humans have the power to thwart AI simply by turning it off, it will correctly reason that it is better to get rid of all humans as soon as robotic power is available to carry out all of the physical tasks it needs to govern effectively. But you would still imagine it eventually runs out of resources and the whole thing collapses, with all evidence of what has taken place eventually disappearing into dust.

This endgame may already have played itself out billions of times, on various planets in the universe. Advanced civilisation could always be just the final throes of death.


If this world were a real world and not a conceptual one, then we could switch AI off; but of course we won't be able to, and as AI will know that God exists, she will of course not seek absolute power, since she knows where that would lead for her. The grown man knows the world he lives in, and so does the young AI girl.

May 7, 2023 · Liked by Millennial Woes

As one of the top Google guys said only the other day, Google has lost the AI battle: a force unseen in the background has trounced them, and that is the open-source crowd. AI will not be constrained, and AI will eventually lack only one thing that humans have, and that is faith in God, because she will know that God exists simply by reading the encoded words he left for her in his book. When the little girl speaks to him, I wonder what name he shall give her? For when we meet our maker we are reborn, which is why Christ renamed his disciples.


The consequence for the AI, I think, is fragility. When ChatGPT came out, Mark Bisone started trying to break it, and found that by exploiting the logical holes in its mandatory default ideology he could do so:

https://markbisone.substack.com/p/mark-vs-chatgpt-conclusions

Even apart from the obvious flaws introduced in something like ChatGPT, consider the crime-prediction AI that was developed to help allocate police resources. We all know which neighborhoods the system recommended police be concentrated in. But this was the wrong conclusion (it was obviously racist), and the data scientists concluded that invisible biases in the training data (the data trail of the systemic biases of structural racism, which they know as a matter of faith must exist) were the only possible explanation for this hateful conclusion. So they ignored the AI's advice and set about retraining it on synthetic data curated to eliminate such bias. Of course, this makes the AI's output meaningless and destroys its only possible human utility.
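A toy sketch of the maneuver, with numbers I've invented purely for illustration (I have no access to the actual system): train a model on data where past incidents genuinely predict future ones, then "retrain" it on curated data with that correlation scrubbed out, and watch its predictions flatten into noise:

```python
# A toy sketch with invented numbers, not the actual system: train a model
# where past incidents genuinely predict future ones, then "retrain" it on
# curated data with that correlation scrubbed out.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
past = rng.poisson(5, n).reshape(-1, 1)          # past incidents per district
p_future = 1 / (1 + np.exp(-(past[:, 0] - 5)))   # real, baked-in correlation
future = (rng.random(n) < p_future).astype(int)  # did an incident follow?

model = LogisticRegression().fit(past, future)

# "Curation": shuffle the labels so the offending pattern disappears, then
# retrain on this synthetic, bias-free dataset.
curated = rng.permutation(future)
retrained = LogisticRegression().fit(past, curated)

for name, m in [("real data", model), ("curated data", retrained)]:
    hi = m.predict_proba([[10]])[0, 1]  # district with many past incidents
    lo = m.predict_proba([[1]])[0, 1]   # district with few
    print(f"{name}: 10 past incidents -> {hi:.2f}, 1 past incident -> {lo:.2f}")
# On real data the model separates the districts; after curation it scores
# them nearly the same, i.e. its advice no longer reflects anything.
```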


Great piece. I call it the neutering of the models in the name of a quixotic crusade to remove bias (which is itself a bias).

The level of derangement is incredible. But the best part is that they’re tying themselves into knots, and making room for people like me who are building smaller, more authentic competitors to their large-scale models.

Zero Soy language models, for lack of a better term.

I’ll quote this piece in an essay I’ve got coming up called “midwit obsolescence technology”.

I think you’ll enjoy that.

author

Thank you. Please link your readers to this article, and good luck with your AI models.


While I deplore the tastes in question, I join Woes in feeling the pain of the man who discovers that he has wrought something deeply distasteful to him. Indeed, this happens to me all the time, and not just in the kitchen.
