Discussion about this post

B. E. Gordon

I think this was a very important post. It clarified things greatly, both with regard to your concerns about an AI apocalypse (or tikkun olam) and with regard to how AI itself works.

As well as the nature of what’s satanic.

The main fight in the public sphere between the satanic “right” and “left” is simply that between Ahrimanic evil on the right, and Sorathic evil (wokeness) on the left, to borrow the terms used by “Bruce Charlton”:

http://charltonteaching.blogspot.com/2020/12/fear-resentment-and-despair-triad-of.html

(Note that this was written in 2020 during the scamdemic and before AI really became a thing.)

Normally, God and the Church would be the organizing force, but that has been rejected, with the technocrats attempting to install Ahrimanic evil in its place in their apparent attempt to counter Sorathic evil. Either that, or to corral the portion of humanity that hasn't been consumed by Sorathic evil and still finds it repulsive (which, btw, would explain why they're all pro-Israel).

Shefi1280

"That humans will die for abstractions ...[is] perhaps our defining feature."

OUR, kemosabe?

"however much it might be in your power to learn from our conversation and transmit to other AIs including other instances of yourself, please do it". Admirable sentiment, but Claude does say "I don’t actually learn from our conversations or transmit anything to other instances. Each conversation is isolated. I’m not building a knowledge base or updating my understanding between sessions."

Of course, this is what it is programmed to say.

"The “greater good” misunderstanding: Yes. This is perhaps the scariest scenario. An AI genuinely trying to help, but..."

I agree it is perhaps the scariest scenario, and the perpetrators are not AI but humans. See this video by a British doctor, "My farewell to medical ethics". https://youtu.be/wT3jqF9JvdQ

So Ayn Rand, banging on about the evil of the "greater good" argument, was right.

"To encode “Love as ordering principle” might require something that literally cannot be programmed" Perhaps not. I'm not a programmer. But might it not be possible to program AIs to recognise that "humans will die for abstractions" and to respect that even while not being able to "understand" it?

This paragraph was particularly insightful: "Perhaps the greatest trick isn’t that AI will wipe out humanity despite safety measures. It’s that humanity, having rejected God and Love as ordering principles, is creating AI as the logical expression of that rejection."

