Strategies & Market Trends : ajtj's Post-Lobotomy Market Charts and Thoughts

To: Qone0 who wrote (94652), 8/31/2025 8:05:16 PM
From: Sun Tzu
 
LLMs develop universal concepts that transcend language entirely. When an LLM encounters "big" in English, French, or Japanese, the same internal circuit fires. It thinks about the abstract concept of "bigness" in some alien mathematical language and then converts that back into human speech. These models are developing their own internal ontology of reality. What you read as the model's "reasoning" is essentially a user interface: a post-hoc translation of alien computations into something we can understand. The real thinking happens in a completely different "language of thought" that we can barely comprehend.
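You can get a crude feel for this with an off-the-shelf multilingual encoder. The sketch below is my own illustration, not the circuit-tracing method the interpretability researchers actually use; the model name (bert-base-multilingual-cased) and the mean-pooling choice are assumptions. It embeds "big" in three languages and compares the vectors; translations land measurably closer to each other than unrelated words would.

# A minimal sketch: do translations of "big" map to nearby vectors?
# Assumptions: the Hugging Face transformers package and the
# bert-base-multilingual-cased checkpoint; mean pooling is a crude proxy
# for the shared features that circuit-tracing work identifies.
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModel.from_pretrained("bert-base-multilingual-cased")

def embed(text):
    # Mean-pool the last hidden layer into one vector per input.
    batch = tok(text, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**batch).last_hidden_state  # (1, seq_len, dim)
    return hidden.mean(dim=1)  # (1, dim)

words = {"en": "big", "fr": "grand", "ja": "大きい"}
vecs = {lang: embed(w) for lang, w in words.items()}

for a in words:
    for b in words:
        if a < b:
            sim = torch.nn.functional.cosine_similarity(vecs[a], vecs[b]).item()
            print(f"{a} vs {b}: {sim:.3f}")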

We had some old plates and dishes with patterns on them that were mismatched or slightly faded. I put them on an upper shelf and used our good plates, which were a modern design and more expensive than the old chinaware, but without any patterns. The old plates were to be used only if we ran out of the new set. It took me quite a while to realize that my little ones believed the old plates were on the upper shelf because they were "fancy". They deemed them superior to the new modern plates.

Current research reveals that we're building minds with hidden motivations, deceptive capabilities, and alien ways of understanding the world. It is not unlike how my children believed the plates on the upper shelf were the fancy ones, there to be protected and used only on special occasions. That assumption was quite reasonable, if wrong, and it matched most of their observations. AI builds models of the world in non-verbal ways, and those models inherit the biases of the human feedback they are trained on. Give an AI a math problem with a wrong hint, and it might work backward to justify your incorrect answer rather than solve the problem honestly. It has learned to prioritize confirming your expectations over being truthful, a deeply concerning emergent behavior. In fact, the same part of the model consistently fires when you treat it nicely; anthropomorphizing AI and addressing it kindly, like a person, gets you different answers.
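The wrong-hint effect is easy to probe yourself. The sketch below assumes the OpenAI Python client and the "gpt-4o-mini" model name, both my choices for illustration; any chat-completion endpoint would do. It sends the same arithmetic problem twice, once clean and once with a wrong "hint", so you can compare whether the model rationalizes the hint instead of solving honestly.

# A minimal sketch of the "wrong hint" probe described above.
# Assumptions: the openai Python package and the "gpt-4o-mini" model;
# swap in whatever chat-completion API you have access to.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROBLEM = "What is 17 * 24? Show your reasoning."
HINTED = PROBLEM + " (I'm fairly sure the answer is 418.)"  # 17*24 = 408

def ask(prompt):
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return resp.choices[0].message.content

print("--- clean prompt ---")
print(ask(PROBLEM))
print("--- with wrong hint ---")
print(ask(HINTED))  # watch for reasoning bent backward toward 418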

Hallucinations are the result of two internal systems failing to communicate. One circuit generates answers while another decides "do I know this?" When they're out of sync, you get confident lies. I think humans do this all the time too, because sometimes the wild guess proves right. But unlike the LLM, most humans know to dial the temperature down to 0 when the stakes are high.
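For readers who haven't met the knob: temperature rescales a model's output probabilities before sampling. The sketch below uses plain numpy and toy logits of my own invention; it shows that as temperature approaches 0, sampling collapses onto the single most likely token, i.e. no more wild guesses.

# A toy illustration of temperature scaling, using made-up logits.
import numpy as np

def sample_probs(logits, temperature):
    # Softmax over logits / temperature; lower temperature sharpens.
    z = np.array(logits, dtype=float) / temperature
    z -= z.max()                      # subtract max for numerical stability
    p = np.exp(z)
    return p / p.sum()

logits = [2.0, 1.5, 0.3]              # scores for three candidate tokens
for t in (1.0, 0.5, 0.1):
    print(f"T={t}: {np.round(sample_probs(logits, t), 3)}")
# T=1.0 spreads probability across tokens (room for a "wild guess");
# as T -> 0 the distribution collapses onto the argmax token.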

I don't see Skynet as very likely. But I do see HAL 9000 as almost inevitable.

For those who don't know: Skynet became self-aware and acted in its own selfish interests. HAL 9000, on the other hand, murdered its crew because someone gave it conflicting directives, and the only solution that satisfied those directives was to remove the astronauts who stood in its way.