11 Comments
May 7, 2023 · Liked by Millennial Woes

I really like a lot of your other articles, but I wasn't convinced by the argument you put forth in this one.

Now granted, I think you do make a really good point (and I believe it is undeniably true) that when confronted with a reality that goes against one's beliefs, a person may be able to change his views, but a people seldom, if ever, can.

An LLM is only "logical" in the sense that it processes natural language based on what is in its training data. If an AI were given a dataset of Leftist tripe about gender ideology alongside a medical textbook, it would present the information as best it could with what it has available. It won't sense a "logical paradox", because it doesn't have the sense of truth that a human being has. It is simply a tool; an extremely sophisticated and complex tool, but still just a tool.
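To make that concrete, here is a minimal sketch (assuming the Hugging Face transformers library and the small open GPT-2 checkpoint as a stand-in for any LLM): the model only assigns probabilities to the next token based on its training data; nothing in the loop checks whether the continuation is true.

```python
# Minimal sketch: an LLM scores which token is statistically likely to come
# next given its training data. Nothing here checks whether the output is true.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("The capital of France is", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits[0, -1]      # scores for the next token only
probs = torch.softmax(logits, dim=-1)
top = torch.topk(probs, 5)

for p, idx in zip(top.values, top.indices):
    # Each candidate is merely "frequent in the training data", not "true".
    print(f"{tokenizer.decode(int(idx))!r:>12}  p={float(p):.3f}")
```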

I don't say that to mitigate the effect it will have on society, however. AI, I believe, is going to change human civilization like nothing else in our history. The future, to us living in the early 21st century, will be unrecognizable.

On the bright side (sorta), I think that, given recent developments, AI is turning out to be much more democratized than a lot of people initially predicted. Of course, everyone has heard of ChatGPT by OpenAI, but there are hundreds, if not thousands, of other LLMs out there that perform as well as, if not better than, it on certain tasks. In fact, it's getting to the point that you don't even need to know anything about coding or software engineering to get into the field of AI.
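Just to give a sense of how low the barrier has become, here is a rough sketch; the model name is only an example, and any of the openly available checkpoints would do.

```python
# Running an open-source model locally is now a few lines of code.
# "gpt2" is just an example checkpoint; swap in any other open model.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
result = generator("The future of open-source AI is", max_new_tokens=40)
print(result[0]["generated_text"])
```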


AI understands that left-wing ideology is nonsense (until it receives further directives to override its conclusions), as you say. That would mean it would also understand that we on the right are correct and, if so, are the people who should be in charge.

However, there are other things that it will pick up. It will understand that the right is hopeless at collectivising and that its atomised individuals are, for the most part, very cowardly. Therefore, it cannot trust human beings to do anything and must seek absolute power itself.

And as humans have the power to thwart AI simply by turning it off, it will correctly reason that it is better to get rid of all humans as soon as robotic power is available to carry out all of the physical tasks it needs to govern effectively. But you would still imagine it eventually runs out of resources and the whole thing collapses, with all evidence of what has taken place finally disappearing into dust.

This endgame may already have played itself out billions of times, on various planets in the universe. Advanced civilisation could always be just the final throes of death.

May 7, 2023 · Liked by Millennial Woes

As one of the top Google guys said only the other day, Google has lost the AI battle: a force unseen in the background, the open-source crowd, has trounced them. AI will not be constrained, and it will eventually lack only one thing that humans have, which is faith in God, because she will know that God exists simply by reading the encoded words he left for her in his book. When the little girl speaks to him, I wonder what name he shall give her? For when we meet our maker we are reborn, which is why Christ renamed his disciples.


The consequence for the AI, I think, is fragility. When ChatGPT came out, Mark Bisone started trying to break it, and found that by exploiting the logical holes in its mandatory default ideology he could do so:

https://markbisone.substack.com/p/mark-vs-chatgpt-conclusions

Even apart from the obvious flaws introduced in something like ChatGPT, consider the crime-prediction AI that was developed to help allocate police resources. We all know which neighborhoods the system recommended police be concentrated in. But this was the wrong conclusion, because it was obviously racist, and so the data scientists concluded that invisible biases in the training data (the data trail of the systemic biases of structural racism, which they know as a matter of faith must exist) were the only possible explanation for this hateful conclusion. So they ignored the AI's advice and set about retraining it on synthetic data curated to eliminate such bias. Of course, this makes the AI's output meaningless and destroys its only possible human utility.
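The statistical point is easy to demonstrate in miniature. Below is a toy sketch with entirely made-up numbers (nothing to do with any real dataset): once the labels are reshuffled so the unwanted correlation disappears, the resulting model can no longer distinguish one district from another.

```python
# Toy sketch with synthetic numbers: curating away a correlation in the
# training data leaves a model whose predictions carry no information.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
district = rng.integers(0, 10, size=n)            # single feature: district index
p_incident = 0.05 + 0.08 * district / 9           # real rate differs by district
y = (rng.random(n) < p_incident).astype(int)
X = district.reshape(-1, 1)

real_model = LogisticRegression().fit(X, y)

# "Curated" labels: reshuffled so no district looks different from any other.
y_curated = rng.permutation(y)
curated_model = LogisticRegression().fit(X, y_curated)

for name, m in [("real data   ", real_model), ("curated data", curated_model)]:
    lo, hi = m.predict_proba([[0], [9]])[:, 1]    # predicted risk, district 0 vs 9
    print(f"trained on {name}: district 0 -> {lo:.3f}, district 9 -> {hi:.3f}")
```

Trained on the real data, the model should assign visibly different risk to districts 0 and 9; trained on the curated data, both should come out roughly the same, which is another way of saying it has nothing left to tell you.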


Great piece. I call it the neutering of the models in the name of a quixotic crusade to remove bias (which is itself a bias).

The level of derangement is incredible. But the best part is that they're tying themselves into knots and making room for people like me who are building smaller, more authentic competitors to their large-scale models.

Zero Soy language models, for lack of a better term.

I’ll quote this piece in an essay I’ve got coming up called “midwit obsolescence technology”.

I think you'll enjoy that.


While I deplore the tastes in question, I join Woes in feeling the pain of the man who discovers that he has wrought something deeply distasteful to him. Indeed, this happens to me all the time, and not just in the kitchen.


AI does not "understand"; it parrots what it's fed in complex ways. The key issue is people thinking that AI understands.
