Strategies & Market Trends : Value Investing


To: E_K_S who wrote (79111)2/6/2026 4:05:09 AM
From: Johnny Canuck

I would not overcomplicate it.

People are going crazy over ClaudeBot, or OpenClaw as it is now called.

In its simplest form, an "agent" takes a question or task from a person and structures it into something that will get the best result out of a particular AI system (LLMs are what most people are interacting with currently). Think "I get better results out of ChatGPT if I ask it this way, or better out of Claude if I ask it that way." The agent takes the complexity out of the prompt.
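That prompt-structuring idea can be sketched in a few lines of Python. Everything here is illustrative: the model names are just dictionary keys and the templates are made up, not real APIs.

```python
# Minimal sketch of a prompt-structuring "agent": it wraps a raw user
# question in whatever phrasing tends to work best for a given model.
# Model names and templates are hypothetical, for illustration only.

PROMPT_TEMPLATES = {
    "chatgpt": "Answer briefly and step by step: {question}",
    "claude":  "Think it through carefully, then answer: {question}",
}

def structure_prompt(question: str, model: str) -> str:
    """Rewrite a plain question into the form that gets the best
    result out of a particular model; pass it through unchanged
    if we know nothing about that model."""
    template = PROMPT_TEMPLATES.get(model, "{question}")
    return template.format(question=question)

print(structure_prompt("What is an agent?", "claude"))
```

The person still asks the question in plain words; the agent handles the model-specific phrasing.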

An "agentic agent" takes the question or task from a person, figures out what they are really asking in terms that are more structured, then calls or engages individual agents to help achieve the goal.

The excitement is all about the fact that you can take a broader and more complex question or query submitted in words (an imperfect method of communicating and defining problems at best), have the agent derive your end goal, structure that into a work plan, and then call or engage the agents it needs without direct oversight from a person.
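The goal-to-plan-to-dispatch loop looks roughly like this toy sketch. A real system would use an LLM to decompose the goal; here the plan is hard-coded and the sub-agents are stand-in functions, not a real framework.

```python
# Hedged sketch of an "agentic agent": derive a plan from a loose goal,
# then dispatch each step to a specialist agent with no further human
# prompting. All agents are stand-in lambdas for illustration.

def plan(goal: str) -> list[str]:
    # Stand-in planner: a real one would have an LLM decompose the goal.
    if "reservation" in goal:
        return ["find_contact_method", "place_phone_call"]
    return []  # no plan for goals we cannot decompose

AGENTS = {
    "find_contact_method": lambda: "phone only",
    "place_phone_call": lambda: "reservation confirmed",
}

def run(goal: str) -> list[str]:
    results = []
    for step in plan(goal):
        results.append(AGENTS[step]())  # engage the agent for this step
    return results

print(run("make a restaurant reservation"))
```

The person supplies only the top-level goal; every intermediate step is chosen and executed by the loop.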

The example that best demonstrates it: you tell the system you want to make a reservation at a particular restaurant on a particular day and time. The agentic agent figures out that the restaurant only takes reservations over the phone, downloads speech synthesis and speech recognition programs without prompting from a human, places a phone call, and makes the reservation. The excitement is that you did not have to prompt it along the way. It appears to have figured out how to solve a more complex problem than it previously could.

In reality it broke your problem down into terms the agents could understand and referenced use cases in its training to come up with the solution. No similar use cases, no solution.
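That "no similar use case, no solution" point can be made concrete with a tiny lookup sketch (the use-case table is invented for illustration; real systems match statistically, not by substring):

```python
# Sketch of solving by matching against trained use cases: the system
# can only map a new problem onto a pattern it has seen before.

KNOWN_USE_CASES = {
    "book by phone": "call the venue and speak the request",
    "book online": "submit the web reservation form",
}

def solve(problem: str):
    # Return the solution for the closest trained pattern, or nothing.
    for pattern, solution in KNOWN_USE_CASES.items():
        if pattern in problem:
            return solution
    return None  # no similar use case -> no solution

print(solve("book by phone at 7pm"))
print(solve("negotiate a group discount"))
```

When the query falls outside the trained patterns, there is simply nothing to return.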

It is not quite as exciting as the press is portraying it to be. It is still a step forward, as the agentic agent requires less human intervention to keep the problem-solving process moving forward. The real challenge is providing enough training (use cases) so the system can determine when an "appropriate" solution has been reached and it can stop trying. Short of that, it might try to blackmail the hostess at the restaurant to get your reservation because a crime novel was used in its training.

The thing to remember is that AI systems hold a digital representation of items, even real-world items. The system has no sense of what an item is used for or how physics might affect it. It has no concept of a hammer; it might have a representation of a picture of a hammer and a list of things you can do with it. This is what "world view" AI models are supposed to add. Figuring out how to represent an item's functionality and the way it interacts with the real world, in a way that is not just formulas, is the real challenge today.


In the end it is still a complex pattern-matching tool that tries to find a statistical match for the query it has been given. There is no true judgment or understanding of cause and effect. It only appears to have human qualities because people attribute them to it without understanding the backend.