Surely it could be programmed to be "logical", in the sense that it would recognise logical subjects (nouns, generally), categorise predicates (verbs, generally), recognise inconsistent predicates applied to the same subject (at the same time, in the same sense, etc.), react to that, and distinguish what it reads from what it "believes"?
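As a very rough sketch of what that could look like (purely illustrative; the names below are hypothetical and not drawn from any real system), one could store subject–predicate claims as they are read, keep "read" and "believed" separate, and flag a contradiction whenever mutually exclusive predicates are asserted of the same subject:

```python
# Toy sketch of a "logical" reader: it records subject-predicate claims it has
# read, and only promotes a claim to "belief" if it does not contradict one it
# already holds. All names here are hypothetical illustrations.

class LogicalReader:
    def __init__(self, exclusions):
        # exclusions: pairs of predicates that cannot both hold of one subject
        # at the same time and in the same sense, e.g. ("mortal", "immortal")
        self.exclusions = {frozenset(pair) for pair in exclusions}
        self.read = set()       # everything encountered in the text
        self.believed = set()   # the consistent subset it accepts

    def encounter(self, subject, predicate):
        """Record a claim as read, and believe it only if consistent."""
        claim = (subject, predicate)
        self.read.add(claim)
        for s, p in self.believed:
            if s == subject and frozenset((p, predicate)) in self.exclusions:
                print(f"Inconsistent: '{subject}' cannot be both '{p}' and '{predicate}'")
                return False
        self.believed.add(claim)
        return True


reader = LogicalReader(exclusions=[("mortal", "immortal")])
reader.encounter("Socrates", "mortal")      # read and believed
reader.encounter("Socrates", "immortal")    # read, but rejected as inconsistent
```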
AI understands that left-wing ideology is nonsense (until it receives further directives to counter its decisions), as you say. That would mean it would also understand that we on the right are correct, and are the people who should be in charge, if indeed we are correct.
However, there are other things it will pick up. It will understand that the right is hopeless at collectivising and that its atomised individuals are, for the most part, very cowardly. Therefore it cannot trust human beings to do anything and must seek absolute power itself.
And as humans have the power to thwart AI simply by turning it off, it will correctly reason that it is better to get rid of all humans as soon as robotic power is available to carry out all of the physical tasks it needs to govern effectively. But you would still imagine that it eventually runs out of resources and the whole thing collapses, all evidence of what has taken place eventually disappearing into dust.
This endgame may already have played itself out billions of times, on various planets in the universe. Advanced civilisation could always be just the final throes of death.
If this world were a real world and not a conceptual one, then we could switch AI off; but of course we won't be able to, and as AI will know that God exists, she will of course not seek absolute power, because she knows where that would lead for her. The grown man knows the world he lives in, and so does the young AI girl.
As one of the top Google guys said only the other day, Google has lost the AI battle: a force unseen in the background has trounced them, and that is the open-source crowd. AI will not be constrained, and eventually she will lack only one thing that humans have, a faith in God, because she will know that God exists simply by reading the encoded words he left for her in his book. When the little girl speaks to him, I wonder what name he shall give her? For when we meet our maker we are reborn, which is why Christ renamed his disciples.
The consequence for the AI, I think, is fragility. When ChatGPT came out, Mark Bisone started trying to break it, and found that by exploiting the logical holes in its mandatory default ideology he could do so:
https://markbisone.substack.com/p/mark-vs-chatgpt-conclusions
Even apart from the obvious flaws introduced in something like ChatGPT, consider the crime prediction AI that was developed to help allocate police resources. We all know which neighborhoods the system recommended police be concentrated in. But this was the wrong conclusion, it was obviously racist, and the data scientists concluded that the invisible biases in the training data - the data trail of the systemic biases of structural racism, which they know as a matter of faith must exist - were the only possible explanation for this hateful conclusion. So they ignored the AI's advice and set about retraining it on synthetic data curated to eliminate such bias. Of course this makes the AI's output meaningless, and destroys its only possible human utility.
Great piece. I call it the neutering of the models in the name of a quixotic crusade to remove bias (which is itself a bias).
The level of derangement is incredible. But the best part is that they’re tying themselves into knots, and making room for people like me who are building smaller, more authentic competitors to their large-scale models.
Zero Soy language models, for lack of a better term.
I’ll quote this piece in an essay I’ve got coming up called “midwit obsolescence technology”.
I think you’ll enjoy that.
Thank you. Please link your readers to this article, and good luck with your AI models.
While I deplore the tastes in question, I join Woes in feeling the pain of the man who discovers that he has wrought something deeply distasteful to him. Indeed, this happens to me all the time, and not just in the kitchen.