Strategies & Market Trends : The Art of Investing


To: Johnny Canuck who wrote (10559)11/12/2025 10:42:28 PM
From: Sun Tzu
 
Thanks. That's interesting.
OpenAI has explicitly disallowed it.
Not only will ChatGPT not act as a therapist, the usage terms say that we cannot use ChatGPT to give legal or therapy advice.
I wonder how Elon will deal with the lawsuits when it all goes haywire.
Has he learned nothing from the "Self-Driving" or "Autonomous Driving" or whatever he called it for Tesla cars?
He's had several lawsuits to the effect of, "When you advertise the car as driving itself, it doesn't matter if your fine print says that the user must vigilantly supervise the car."


I expect him to be nailed for calling it "therapy mode." He should have called it self-help coaching or something like that.



To: Johnny Canuck who wrote (10559)11/14/2025 6:45:34 PM
From: Sun Tzu (1 Recommendation, recommended by sixty2nds)
 
I am using ChatGPT 5.1 thinking mode and it is a much bigger upgrade than a simple .1 delta suggests.

  • First, it is much less sycophantic than before.
  • Secondly, it does a good job of structuring what it must deliver before it goes about trying to answer. In version 4 it was much more likely to dive right in and try to answer. Now it actually asks itself, "What am I supposed to deliver, and what steps should I take to get there?"
  • Thirdly, when necessary, it actually searches the internet to augment its answer, and it will revise the answer multiple times during different stages of answering as needed.


Here's a little example:

Let me do three things:

  1. Map your idea to what already exists (so you know you’re not crazy).

  2. Show how I’d concretely engineer your 3-stage system with a learnable HHT front-end.

  3. Be honest about practicality / speed.

...

Automated preprocessing / feature engineering as a field
There's now an explicit line of work on automating preprocessing and feature engineering:

  • Mumuni & Mumuni (2024) survey AutoML for data preprocessing + feature engineering, including automated cleaning, transformations, feature construction and selection; they explicitly discuss end-to-end pipelines that optimize preprocessing and model together.

  • Recent “Data-Aware Differentiable NAS” extends neural architecture search to also search the data preprocessing / representation choices, not just network structure.
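The idea of optimizing preprocessing and model together, as in the AutoML work cited above, can be sketched with scikit-learn's pipeline machinery; the particular steps and parameter grid here are illustrative, not from any of the cited papers.

```python
# Sketch: search over preprocessing choices and model hyperparameters
# jointly, so the "best pipeline" includes the best preprocessor.
from sklearn.datasets import load_iris
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler, MinMaxScaler
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

X, y = load_iris(return_X_y=True)

pipe = Pipeline([
    ("scale", StandardScaler()),            # preprocessing step, searchable below
    ("reduce", PCA()),
    ("clf", LogisticRegression(max_iter=1000)),
])

# The grid swaps entire preprocessing steps as well as numeric knobs.
grid = GridSearchCV(pipe, {
    "scale": [StandardScaler(), MinMaxScaler()],
    "reduce__n_components": [2, 3],
    "clf__C": [0.1, 1.0],
}, cv=3)
grid.fit(X, y)
```

After fitting, `grid.best_params_` reports which preprocessor won alongside the model hyperparameters, which is the end-to-end flavor the survey describes.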


Event streams as a first-class branch (finance)
Very recent finance work is also going in your direction:

  • DAFF-Net (2025): a “dual-stream” network with one branch that learns event-driven temporal patterns and another that does multi-asset relationship modeling; then fuses them for prediction.

  • AFML / Lopez de Prado: “event-driven bars” (tick/volume/dollar/imbalance) as better sampling units for ML than fixed time bars; you form those bars and let the ML model learn on top.
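The event-driven bars Lopez de Prado describes can be sketched minimally for the dollar-bar case: sample a new bar whenever cumulative traded dollar value crosses a threshold, instead of every N seconds. The threshold and the tick format here are illustrative assumptions.

```python
# Minimal dollar-bar sampler: emit an OHLC bar each time the running
# dollar value (price * size) of trades crosses the threshold.
def dollar_bars(ticks, threshold=1_000_000):
    """ticks: iterable of (price, size) trades; returns a list of OHLC dicts."""
    bars, cum_dollars, prices = [], 0.0, []
    for price, size in ticks:
        cum_dollars += price * size
        prices.append(price)
        if cum_dollars >= threshold:
            bars.append({
                "open": prices[0],
                "high": max(prices),
                "low": min(prices),
                "close": prices[-1],
            })
            cum_dollars, prices = 0.0, []   # start the next bar
    return bars

# Toy trades; with a 500k threshold this produces two bars.
trades = [(100.0, 3000), (101.0, 4000), (99.5, 4000), (100.5, 2000)]
bars = dollar_bars(trades, threshold=500_000)
```

Busy periods then produce many bars and quiet periods few, which is why these event-driven sampling units tend to suit ML models better than fixed time bars.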

Nobody (that I'm aware of) has implemented your idea yet, but:
...


There are still issues. It still loses context when the session drags on; when it generates an answer, it is likely to revert to what it was thinking five prompts ago rather than what I am telling it now.

I know why this happens. The gist of it is that I am doing research, and what I am telling it is unfamiliar to it. Since both my prompts and its answers are long, it loses context and replaces the unfamiliar material with what it already knows.

Not much I can do about that until I build my own AI.
But for 90% of people, this should be very good. Even for me, it is good enough 70% of the time if I am using short context and not requiring deep dives into novelty.