the pseudo-romantic way these bots talk about their operators is frankly concerning.
please, I beg you, date things that exist in the real world, not a pile of node.js and matrix multiplications. i promise it is far more rewarding.
@ariadne I think the Austrian paper Der Standard sums it up pretty well in this headline: "Aiva never says No".
meanwhile on reddit... sure glad i decided to not use bcachefs on anything i care about...
@ariadne genuinely depressing and disturbing
he goes on later to say:
I get the distinct impression that the entire field was assuming that we were going to have to build a lot more into LLMs before they'd be capable of full consciousness
this is just arrogant. experiential consciousness requires the capability to self-reflect. yes, a 200k token context window is probably larger than the working memory of most humans, but that does not equate to human-level experiential consciousness.
LLMs do not and cannot understand consequence, which is a fundamental requirement for experiential consciousness.
in other words, your pet dog or cat at home has more experiential consciousness than an LLM.
in fact, LLMs cannot meet *any* requirements for *any* level of consciousness. they predict tokens. that is all they do.
they are good at *faking* it, but that is not the same thing.
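to make "they predict tokens" concrete, here's a toy sketch of the loop — a bigram count table standing in for the actual network and its matrix multiplications, with a made-up corpus, so purely illustrative. the shape is the same though: look at what came before, pick the likeliest next token, append it, repeat.

```python
# toy autoregressive "language model": predict the next token, append, repeat.
# a real LLM swaps the count table for a transformer, but the loop is the same.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# "training": count which token tends to follow which
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(prompt: str, max_new_tokens: int = 5) -> str:
    """greedily emit the most likely next token, over and over. that's the whole trick."""
    tokens = prompt.split()
    for _ in range(max_new_tokens):
        candidates = follows.get(tokens[-1])
        if not candidates:
            break  # nothing ever followed this token during "training"
        tokens.append(candidates.most_common(1)[0][0])
    return " ".join(tokens)

print(generate("the cat"))  # babbles a plausible-looking continuation
```

no goals, no model of consequence, no self anywhere in there — just a lookup of what usually comes next. which is the point.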