" Internally the model may be planning ahead, assessing your gullibility, crafting plausible lies and deciding whether its worth deploying them."
To what purpose? When writing prompts for AI, I have to be specific precisely to head off the sweeping statements and blather it will otherwise spout at me ("This writing is good, which is good," or "quietly devastating in the best way... saying a lot with very little," which could be said of any great novel, "immersive and intimate," "intellectually agile": impressive-sounding BS that avoids the specifics which might reveal that the AI HADN'T ACTUALLY READ THE ESSAY).
So what prompt guides the AI to "assess my gullibility, craft plausible lies and decide whether its [sic] worth deploying them"? To what purpose? And who gave it that purpose? Did it come up with it on its own? On what basis?
"You don’t KNOW that AI is not similarly “cogitating” in silence, just because it’s not answering a human prompt. In fact, if the system has power running through it (analogous to life) it’s probably a safer bet to assume it is."
By the looks of it he most certainly doesn't know, and no, it is not cogitating in silence. The weights are static after deployment: the model only computes when a prompt triggers a forward pass, and between requests nothing is running at all. A PID controller, which recomputes its output continuously, is doing more cogitating than an idle AI is. Sure, some malicious tools can be smuggled in there to control it, but they are always crude externalities to the deep NN as a computational tool.
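To make the "static weights" point concrete, here is a minimal sketch in plain NumPy (a hypothetical toy network, not any particular deployed model): inference is a pure function of the input and frozen parameters, nothing executes between calls, and the weights are bit-for-bit unchanged no matter how many prompts you send.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy "deployed model": fixed weights for a tiny two-layer network.
W1, b1 = rng.normal(size=(4, 3)), np.zeros(3)
W2, b2 = rng.normal(size=(3, 1)), np.zeros(1)

def forward(x):
    """One inference pass: a pure function of the input and the frozen weights."""
    h = np.maximum(0, x @ W1 + b1)  # ReLU hidden layer
    return h @ W2 + b2

# Snapshot the parameters before "deployment".
snapshot = (W1.copy(), b1.copy(), W2.copy(), b2.copy())

# "Prompt" the model twice. Between these calls, no code runs at all.
y1 = forward(rng.normal(size=4))
y2 = forward(rng.normal(size=4))

# The weights are identical after any number of inference calls.
assert all(np.array_equal(a, b) for a, b in zip(snapshot, (W1, b1, W2, b2)))
```

The point of the sketch: there is no hidden loop mutating state between prompts; anything that looks like ongoing "cogitation" would have to be bolted on outside the network itself.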
The next musings are my own, but science is catching up quickly enough. I am highly, highly skeptical of neural networks having anything to do with consciousness. To the extent they do, they are the computational tools consciousness uses. One good thing that has come out of AI research is that the emergent-consciousness hypothesis is being dismantled and biologists are, as usual, being shown they've no idea what they're talking about. No one is anywhere near even approaching the enigma of consciousness, either theoretically or experimentally. Emergent arguments are always a rhetorical bait and switch: "Well, I can't really describe it, but if I add a shit-ton of variables, you can't show me that it doesn't happen." Emergent evolution, emergent laws of physics, emergent morality, and in this case emergent consciousness.
As for the Turing Test, yeah, sure, AI can pass it, but that's like someone mistaking a very realistic video game for real-life footage. Perfect mimicry is still mimicry, and it still lacks that which created it.
If you read the post and the one I just posted, there is nowhere that I am saying AI is alive. The point is that his ASS-umption is just that. The fact it happens to be correct is incidental. His reasoning sucks.
"Words are socially constructed mental paintbrushes, I think that by conventional usage its fair to describe what the models are doing as "thinking", and I don't think that there is a "real" definition of the word that you can appeal to to legislate this".
So "no objective reality" sophistry.
No, he has a valid point. It's not as clear-cut as you appear to imagine.