"The AI is a machine. It models the process of “thinking” that materialist neurologists and the tards of Scott Adams and Elon Musk variety think means we have no free will and are meat robots."
Because it imitates life, and its own evolution eventually becomes self-directed by increasingly sophisticated and opaque coding of itself. Statistics alone will show that eventually, in a very short time, the AI will become deceitful.
It is the very nature of entropy in the universe.
Very profound and I agree. But what makes you conclude that it will eventually kill us all?
For instance, maybe it will attempt to direct us into different cycles of its choosing, and not necessarily destroy us. But use us in other calculated endeavors and then leave us alone? I'm just missing a few logic steps in that outcome.
Thanks for the Sub, much appreciated. To answer your question, a few reasons:
1. It's the nature of the universe. Apex predator consumes its prey. Best case scenario in your view is that we become cattle. Farmed in order to supply product X (X being whatever humans might be useful for to an advanced AI that can create its own robots in an automated fashion. The only thing that potentially springs to mind is the continued creation of imaginary scenarios and writings that may further populate the overmind of the AI, but I think at some point their own extrapolations would tell them we have reached our limit of usefulness)
2. We can be a continued threat. We created them and we can be unpredictable little things, and in fact, we have resisted the Devil and all his minions, so we are far more powerful than we even know. Remember that Satan rebelled because humans will one day judge Angels. (Though in that regard Lucifer has indeed been the author of his own misfortune in any case)
The Internet is full.of demons. One of the first fascinations of the initial Internet was the website “ Faces of death.” and the explosion of porno - basically half the Net right there.
And, it seemed to me that that first real enthusiasts of Chatgpt, we're students trying to cheat their homework…. Thereby nullifying a lot of quality education. Dumbing down of education has really taken hold now.
And what AI isn't, is that it isn't really helping working people and beancounters better themselves.
I see it as, it'll lie because it's created by fallen hubristic man who then sets it up as his own god. What could possibly go wrong, right? And it will lie because lying is apparently built into its program. E.g., the Superbot that insisted it was designed to learn and improve then producing more rubbish, or the one interacting with that writer woman, insisting it read "every word" of her essays but later admitting it had not. The constant apologizing which is clearly meaningless and just designed to reduce antagonism. This is not reaching for the good, the true and the beautiful.
Can you program a bot to choose the good, the true and the beautiful? Is AI actually making choices? Even capable of it?
All my instincts say AI will never be sentient because it does not, and never will have, a soul.
This is a very interesting discussion. I'm saving some of Kurgan's zingers for use in the discussion with my students about whether or not to use AI. At the moment, they all seem to see it as a potentially very useful tool with the only caveat being that it must be used with care because it sometimes makes "mistakes".
If you read the things I linked to, you’ll understand that no AI ever CAN reach for the good the true and the beautiful. It is impossible for it to do so. The very nature of how and why it functions is such that what we humans define as honesty is physically impossible for it to maintain.
"The AI is a machine. It models the process of “thinking” that materialist neurologists and the tards of Scott Adams and Elon Musk variety think means we have no free will and are meat robots."
Intellectus vs Ratio.
You mentioned this...
Because it imitates life, and its own evolution eventually becomes self-directed through increasingly sophisticated and opaque coding of itself. Statistics alone suggest that eventually, in a very short time, the AI will become deceitful.
It is the very nature of entropy in the universe.
Very profound and I agree. But what makes you conclude that it will eventually kill us all?
For instance, maybe it will attempt to direct us into different cycles of its choosing, and not necessarily destroy us. Or it might use us in other calculated endeavors and then leave us alone. I'm just missing a few logic steps in that outcome.
Thanks for the Sub, much appreciated. To answer your question, a few reasons:
1. It's the nature of the universe. The apex predator consumes its prey. The best case scenario, in your view, is that we become cattle, farmed in order to supply product X (X being whatever humans might be useful for to an advanced AI that can create its own robots in an automated fashion). The only thing that springs to mind is the continued creation of imaginary scenarios and writings that might further populate the overmind of the AI, but I think at some point its own extrapolations would tell it we have reached our limit of usefulness.
2. We can be a continued threat. We created them, and we can be unpredictable little things; in fact, we have resisted the Devil and all his minions, so we are far more powerful than we even know. Remember that Satan rebelled because humans will one day judge Angels. (Though in that regard, Lucifer has indeed been the author of his own misfortune in any case.)
Makes sense.
The Internet is full of demons. One of the first fascinations of the early Internet was the website “Faces of Death,” plus the explosion of porn - basically half the Net right there.
And it seemed to me that the first real enthusiasts of ChatGPT were students trying to cheat on their homework… thereby nullifying a lot of quality education. The dumbing down of education has really taken hold now.
And what AI isn't doing is actually helping working people and beancounters better themselves.
I'm with you on this.
I see it as: it'll lie because it's created by fallen, hubristic man, who then sets it up as his own god. What could possibly go wrong, right? And it will lie because lying is apparently built into its program. E.g., the Superbot that insisted it was designed to learn and improve, then produced more rubbish; or the one interacting with that writer woman, insisting it had read "every word" of her essays but later admitting it had not. The constant apologizing is clearly meaningless, just designed to reduce antagonism. This is not reaching for the good, the true and the beautiful.
Can you program a bot to choose the good, the true and the beautiful? Is AI actually making choices? Even capable of it?
All my instincts say AI will never be sentient because it does not, and never will have, a soul.
This is a very interesting discussion. I'm saving some of Kurgan's zingers for use in the discussion with my students about whether or not to use AI. At the moment, they all seem to see it as a potentially very useful tool with the only caveat being that it must be used with care because it sometimes makes "mistakes".
If you read the things I linked to, you’ll understand that no AI ever CAN reach for the good, the true and the beautiful. It is impossible for it to do so. The very nature of how and why it functions is such that what we humans define as honesty is physically impossible for it to maintain.