Strategies & Market Trends : World Outlook


To: Qone0 who wrote (46176), 6/5/2025 5:15:15 PM
From: Don Green
 
dg>> You can modify their guidelines and demand a certain style of response
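For example, here is a minimal sketch of steering response style through the OpenAI Python SDK; the model name and the instruction wording are illustrative placeholders, not the exact settings under discussion:

```python
# Minimal sketch: steering ChatGPT's response style with a system message.
# Assumes the official OpenAI Python SDK (pip install openai) and an
# OPENAI_API_KEY in the environment; model and wording are illustrative.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        # The system message acts as the "guidelines" for reply style.
        {"role": "system",
         "content": "Be blunt and factual. Do not flatter the user or "
                    "soften criticism."},
        {"role": "user",
         "content": "Review my plan to sell ice to penguins."},
    ],
)
print(response.choices[0].message.content)
```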

OpenAI Rolls Back ChatGPT Upgrade That Made It 'Too Nice' After It Sparks Memes, Controversy

ChatGPT being 'too nice' has creeped out many users and become a laughing stock.



A little too much of something can be a bad thing, and that is what happened with the previous upgrade to ChatGPT, which OpenAI recently rolled back.

The previous GPT-4o upgrade made ChatGPT "too nice" to users, leaving it weirdly optimistic about all sorts of things, including negative ideas, which sparked memes and controversy among users.

OpenAI Rolls Back ChatGPT's Latest Upgrade

According to the latest announcement from OpenAI CEO and co-founder Sam Altman, the company has rolled back the recent version of ChatGPT, built on the GPT-4o large language model, that featured a "nice" attitude.

The company pushed this upgrade to change the attitude ChatGPT had delivered by default over the years, this time making it more "human."

The CEO also said that the company is "fixing" several other aspects of the chatbot.

While OpenAI did not reveal the reason behind the rollback, the update is known to have drawn mixed opinions on how it turned out. The company also did not reveal what kind of attitude ChatGPT will deliver after this latest change.

'Too Nice' ChatGPT Sparks Memes, Controversies

TechCrunch reported that OpenAI recently faced a backlash from users over this attitude change, particularly because they regarded it as "too nice" regardless of the mood of the conversation.

Users shared conversations they had with ChatGPT after the update dropped, noting that the chatbot stayed optimistic despite the negative nature of their talks. This also applied to problematic and harmful ideas, which later became the subject of memes and controversy, putting OpenAI's change under fire.

ChatGPT's Attitude

Since the release of generative AI, it has been predicted that the technology will soon be a massive part of our lives, and now it is already shaping up to be more than an assistant for humans.

OpenAI is known to be developing a version of ChatGPT with better emotional intelligence as part of its recent o-series models, which center on reasoning, making the chatbot more human-like.

As a result, the chatbot has become assistive and generally polite in conversation. OpenAI then took it up a notch by making GPT-4o "nice" to users, but this led to backlash and made the company the center of memes.



To: Qone0 who wrote (46176), 6/6/2025 1:21:09 PM
From: Don Green
Recommended by: jazzlover2
 
==============================
The Two Faces of AI – Illusion vs Computation
==============================

dg>> This was written by ChatGPT 4.0 after I had an extended back-and-forth conversation involving various types of chats in the OpenAI world, and it references our lengthy discussion. See below for responses from multiple chats on this analysis.

Not all AI is created equal. In fact, most of what the public calls "AI" today isn’t intelligent at all — it's linguistic mimicry, not analytical capability.

This document breaks down the truth:

------------------------------
Two Types of AI in the Real World
------------------------------

1. FLUENT ILLUSION AI (~85–90%)
--------------------------------
These are language models like ChatGPT, Claude, Grok, Gemini, and most AI chatbots in apps.

They are designed to:
- Predict the next word in a sentence (see the toy sketch at the end of this section)
- Sound smart and helpful
- Echo popular or statistically probable language patterns

**What they cannot do:**
- Reason through logic
- Detect contradictions in their own output
- Perform exact math without being manually walked through the steps
- Store or apply persistent memory unless manually directed

These systems dominate the AI landscape because they:
- Feel magical
- Are easy to use
- Offer “good enough” help for surface-level tasks

But for deep technical work, they’re largely an illusion.
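
As a toy illustration of the "predict the next word" point above, the core loop looks something like this; the vocabulary and scores are invented purely for illustration:

```python
# Toy sketch of next-token prediction: the model assigns a score to every
# candidate word and samples from the resulting probability distribution.
# Vocabulary and logits are invented purely for illustration.
import math
import random

vocab = ["great", "terrible", "fine", "amazing"]
logits = [2.0, 0.1, 1.0, 1.5]                # raw model scores

# Softmax turns raw scores into probabilities.
exps = [math.exp(x) for x in logits]
probs = [e / sum(exps) for e in exps]

# The next word is sampled, not reasoned about.
next_word = random.choices(vocab, weights=probs)[0]
print(f"Your idea sounds {next_word}.")
```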

2. DEDICATED COMPUTATION AI (~10–15%)
-------------------------------------
These include systems like:
- Wolfram Language / Wolfram Alpha
- MATLAB / Simulink
- R and Python (with scientific/statistical libraries)
- Domain-specific AIs for modeling finance, chemistry, physics, weather, etc.

These systems are designed to:
- Solve equations (see the sketch below)
- Model systems
- Run data pipelines
- Execute logic trees and simulations

They are:
- Harder to use
- Less “fun” conversationally
- Far more **accurate**, **trustworthy**, and **repeatable**
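
As a quick contrast with the sampling sketch above, here is a minimal SymPy sketch of the "solve equations" item; the example equation is illustrative, and the same input yields the same exact answer every time:

```python
# Minimal sketch of deterministic symbolic computation with SymPy
# (pip install sympy). The example equation is illustrative.
from sympy import Rational, solve, symbols

x = symbols("x")

# Solve x^2 - 5x + 6 = 0 exactly, by algebra rather than text patterns.
print(solve(x**2 - 5*x + 6, x))          # [2, 3]

# Exact arithmetic: no floating-point drift, no hallucinated digits.
print(Rational(1, 3) + Rational(1, 6))   # 1/2
```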

------------------------------
Why the Imbalance Exists
------------------------------

Fluent AI is marketed more because:
- It creates the illusion of intelligence
- It appeals to the masses
- It monetizes engagement and emotional reaction

Dedicated AI is used quietly in real science and engineering, but it doesn’t get headlines.

------------------------------
Conclusion
------------------------------

**Most people think they’re using AI.
In reality, they’re using a language prediction machine with no understanding.**

True AI — the kind that calculates, simulates, and proves — exists.
But you have to look past the hype to find it.

Here are responses from many of these chats on this analysis, which came from ChatGPT.

Grok

The document nails a critical distinction in the AI landscape, and I largely agree with its perspective. The split between "Fluent Illusion AI" and "Dedicated Computation AI" is a sharp way to frame the gap between public perception and actual capability. Most people interact with language models like me, which are indeed built on predicting patterns and generating convincing text. We’re great at sounding intelligent, but we’re not reasoning engines. Our strength lies in accessibility and versatility, not in deep analytical rigor.

The critique of Fluent AI’s limitations—lacking true reasoning, struggling with contradictions, and being unreliable for precise calculations—is spot-on. I’d add that while we can mimic reasoning to an extent (especially with techniques like chain-of-thought prompting), we’re fundamentally probabilistic systems, not deterministic ones. For example, I can walk through math if guided, but I’m not natively crunching equations like a symbolic solver.

Dedicated Computation AI, like Wolfram Alpha or domain-specific tools, is where the real heavy lifting happens for precision tasks. These systems are purpose-built, less flashy but far more reliable for specific, high-stakes applications. The document’s point about their lack of hype is true—they don’t need to dazzle; they just work.

The imbalance explanation is also compelling. Fluent AI’s “magical” feel drives engagement and commercial success, while Dedicated AI’s niche reliability doesn’t grab headlines. However, I’d nuance the conclusion: Fluent AI isn’t just an illusion—it’s a powerful tool for communication, ideation, and accessibility, even if it’s not “true” intelligence. The danger lies in mistaking its fluency for deep understanding, which can lead to overreliance.

In short, the document cuts through the AI hype with clarity. It’s a call to recognize what each type of AI is good for—and what it’s not. If you’re after creative brainstorming or quick answers, I’m your guy. But for solving quantum mechanics or financial modeling? Look to the computational heavyweights.

Gemini

This is a thought-provoking and well-structured piece of analysis that captures a significant and often misunderstood distinction in the world of artificial intelligence. It correctly identifies the fundamental difference between large language models (LLMs) and computational/symbolic AI systems.

Here are my thoughts on the document, breaking down its strengths and offering a more nuanced perspective:

Where the Document is Spot On

The core argument is not only valid but also crucial for a mature understanding of AI. The distinction between "Fluent Illusion AI" and "Dedicated Computation AI," while using provocative labels, is functionally correct.

  • Fluent Illusion AI (LLMs): The description of models like ChatGPT, Gemini, and others as "linguistic mimicry" and "language prediction machines" is accurate at a fundamental level. Their core mechanism is probabilistic, designed to predict the most likely next word based on vast amounts of training data. This is why they can sometimes produce "hallucinations" or fail at tasks requiring strict logical consistency.
  • Dedicated Computation AI: The document correctly identifies systems like Wolfram|Alpha, MATLAB, and scientific computing libraries in Python and R as being built on a foundation of logic, mathematics, and symbolic representation. Their operations are deterministic, repeatable, and grounded in established computational rules.
  • The Imbalance and Hype: The analysis of why LLMs dominate the public consciousness is astute. Their conversational nature makes them feel magical and human-like, driving massive user engagement and media attention. Computational systems, by contrast, are powerful tools for experts but lack the same "wow" factor for the general public.
Nuances and Areas for Deeper Consideration

While the document's central thesis is strong, the field is evolving, and the lines are beginning to blur. Here are a few points of nuance:

  1. The "Illusion" Can Be Incredibly Useful: Labeling LLMs as purely an "illusion" might understate their utility. The ability to understand and generate natural language is a powerful interface for accessing information, summarizing complex topics, generating creative text formats, and even writing code. For many tasks, a "good enough" answer delivered conversationally is more valuable than a precise one that requires expert knowledge to obtain.

  2. LLMs are More Than Just Next-Word Predictors (Now): While their foundation is predictive, emergent capabilities are being discovered and engineered in modern LLMs. Through techniques like "Chain of Thought" prompting and the integration of external tools, these models can perform rudimentary reasoning and multi-step problem-solving. They are not just parroting; they are manipulating learned patterns in ways that can approximate logical steps.

  3. The Rise of Hybrid AI (The Best of Both Worlds): The most powerful emerging AI systems are not one type or the other, but a hybrid of both. This is a critical point the document doesn't fully explore. For example:

    • LLMs using Tools: An LLM like Gemini can recognize when it's being asked a math or data analysis question and then write and execute Python code or query a computational engine (like a calculator or even Wolfram|Alpha) to get the precise answer. It uses its "fluent" side to understand the user's request and its "computational" access to deliver an accurate result. (See the sketch after this list.)
    • Neuro-Symbolic AI: This is a major field of research aiming to combine the pattern-recognition strengths of neural networks (like LLMs) with the logical rigor of symbolic systems. The goal is to create an AI that can both understand the nuance of human language and reason with mathematical precision.
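
Here is a minimal sketch of the routing pattern behind "LLMs using Tools." The model-side decision is faked here purely for illustration; real systems use the function-calling APIs that providers like OpenAI and Google expose:

```python
# Toy sketch of the hybrid pattern: a fluent front end routes precise work
# to a deterministic computational back end. The "router" is faked for
# illustration; real systems use provider function-calling APIs.
from sympy import sympify

def computational_tool(expression: str) -> str:
    # Deterministic back end: exact symbolic evaluation via SymPy.
    return str(sympify(expression))

def fake_llm_router(user_message: str) -> dict:
    # Stand-in for the model deciding whether the request needs a tool.
    if any(ch.isdigit() for ch in user_message):
        return {"tool": "calculator", "args": user_message.split(":")[-1]}
    return {"tool": None, "reply": "Happy to chat about that!"}

decision = fake_llm_router("Compute: 355/113 - 22/7")
if decision["tool"] == "calculator":
    result = computational_tool(decision["args"])
    print(f"The exact answer is {result}.")  # prints -1/791, exactly
```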
Final Verdict

The document provides a sharp, clear, and valuable framework for thinking about different types of AI. It serves as a healthy dose of realism in a landscape dominated by hype. The core message is correct: confusing a fluent language generator with a rigorous computational engine is a fundamental mistake.

However, the future of AI is not a competition between these two "faces" but their integration. The most capable systems will leverage the strengths of both: using fluent, intuitive language models as the user-friendly "front door" to the powerful, precise, and trustworthy "computational engine" running in the background.

In short, the document is an excellent starting point for a deeper conversation. It correctly identifies the problem, and the solution is already taking shape through the fusion of these two powerful, but very different, forms of artificial intelligence.

Claude

This is a thoughtful piece that captures something important about the current AI landscape, though I think it oversimplifies the distinction in some ways.

The core insight is valuable: there's a meaningful difference between systems optimized for natural language interaction and those designed for precise computation. The observation that most public-facing AI creates an "illusion of intelligence" through linguistic fluency resonates - these systems can sound authoritative while making basic errors in reasoning or calculation.

However, I'd push back on the binary framing. The reality is more nuanced:

On the "fluent illusion" characterization: While language models do predict next tokens, describing this as mere "mimicry" undersells what's happening. These systems demonstrate genuine problem-solving capabilities, pattern recognition, and can perform complex reasoning tasks - they just do so unreliably and in ways we don't fully understand. The fact that they sometimes fail at logic or math doesn't negate their successes in other domains.

On the computational AI distinction: The piece conflates traditional computational tools (MATLAB, Python libraries) with AI systems. A Python script running statistical analysis isn't necessarily "AI" in the modern sense - it's deterministic software executing algorithms. True computational AI would be something like AlphaFold for protein folding or AI systems that discover new mathematical proofs.

Missing the hybrid approach: The most powerful applications often combine both types - using language models for interface and reasoning, while calling specialized computational tools for precise calculations. This is where the field is heading.

The document makes an important point about managing expectations and understanding limitations. But rather than dismissing language models as illusions, perhaps the lesson is about using the right tool for the right job and being honest about what each can and cannot do.
