Strategies & Market Trends : The Art of Investing


To: Johnny Canuck who wrote (10602), 11/19/2025 9:37:17 PM
From: Sun Tzu
Thanks. He is not wrong.
The gist of what is happening is really one thing, though he enumerates its many impacts: Google took a page from Microsoft's playbook and rapidly moved its AI from a complete flop to a leading position.

Microsoft was way behind Amazon in cloud, and there was no way they could catch up to AWS head-on. So they did two things: (1) They stopped competing directly with AWS and didn't go after the IaaS space, which AWS owned; instead they went after the SaaS and PaaS spaces. (2) They heavily leveraged their Office presence and enterprise relationships to win clients, whereas AWS (quite wrongly) refused to engage enterprises directly and wanted them to go through its partners. The problem was that cloud was new enough that those partners were nowhere near the size of the enterprises: a company with a five-digit headcount and eleven-digit revenue doesn't want to partner with a vendor that has 100 employees, no matter how good they are.

Google has done exactly the same thing. They know they cannot beat OpenAI in the chat space, and indeed in many of the other services OpenAI offers. But they can do things OpenAI cannot: (1) As Microsoft did, they can embed their AI into every product they have. This is a sort of "forced" expansion, but since it is free (for now), they are onboarding a massive number of people in a short time. (2) The ubiquity of their applications, and therefore the connectivity Gemini can achieve, cannot be matched by OpenAI. In other words, OpenAI has to ask, "May I connect to your Gmail and Calendar to help?" Google doesn't ask; it just embeds Gemini in Sheets, Docs, Drive, Gmail, Android, Search, etc.

I do play with Gemini. The quality has improved a lot, mostly because they have copied and learned a lot from OpenAI. But to the user it doesn't matter who was there first.

To me, and for my purposes, the most impressive part of Gemini is how big its context window is. Next is how well it meshes with Google Drive. But the functionality, even leaving the chat space aside, is well below OpenAI's. For example, I can ask ChatGPT to write a Python script and run it, but Gemini has no runtime environment built in. Gemini has other shortcomings of this kind, though they may not matter to many less technical users.

Have a good evening.

I am feeling much better, which means I will be spending less time on SI and more time on AI. I should be able to file for another patent soon.



To: Johnny Canuck who wrote (10602), 11/20/2025 9:50:30 PM
From: Sun Tzu
 
A comparison of popular AI models.
The most impressive thing here is that with a 128k context window, ChatGPT is competitive with Gemini, whose context window is 10x larger.

Context Window Sizes of Popular AI Models

Model / Platform                 Context Size            Notes
ChatGPT (GPT-4 Turbo)            128k tokens             Available in GPT-4 Turbo via the OpenAI API; used in ChatGPT Plus.
Claude 2.1 / 3 Opus              200k tokens             Both Claude 2.1 and Claude 3 Opus support a 200k context.
Gemini 1.5 Pro                   Up to 1M tokens         Tiered pricing; the full 1M context is available in developer and enterprise tiers.
Grok 4 (xAI)                     Up to 128k tokens       Matches GPT-4 Turbo; optimized for code and real-time data.
You.com (YouChat)                ~32k tokens             Built on open-source models such as Mixtral; context size varies by backend.
Perplexity AI (Claude/GPT)       Up to 128k–200k tokens  Depends on which underlying model is active (Claude or GPT-4 Turbo).
DeepSeek-V2                      Up to 32k tokens        Open-source Chinese model; context expanding in newer versions.
Meta LLaMA 3 (70B)               8k–32k tokens           Context size depends on fine-tuning and deployment; not yet competitive at scale.

Sources: Appscribed comparison, Leanware AI showdown, FlexOS context guide

What Does This Mean for You?
  • Claude 3 Opus and Gemini 1.5 Pro are currently the leaders in long-context reasoning, ideal for legal documents, codebases, or multi-turn simulations.
  • GPT-4 Turbo (ChatGPT) is highly efficient at 128k, especially for structured tasks and retrieval.
  • Grok is competitive in context size but optimized more for real-time and code-heavy tasks.
  • YouChat and DeepSeek are best for lightweight use, not deep context modeling.