Strategies & Market Trends : The Art of Investing

To: Johnny Canuck who wrote (10602)11/20/2025 9:50:30 PM
From: Sun Tzu  Read Replies (1) of 10704
 
A comparison of popular AI models.
The most impressive thing here is that, with a 128k context window, ChatGPT is competitive with Gemini, whose context window is 10x larger.

Context Window Sizes of Popular AI Models

Model / Platform            | Context Size       | Notes
ChatGPT (GPT-4 Turbo)       | 128k tokens        | Available in GPT-4 Turbo via OpenAI; used in ChatGPT Plus and the API.
Claude 2.1 / 3 Opus         | 200k - 1M tokens   | Claude 2.1 supports 200k; Claude 3 Opus supports up to 1 million tokens.
Gemini 1.5 Pro              | Up to 1M tokens    | Tiered pricing model; the full 1M context is available in enterprise and developer tiers.
Grok 4 (xAI)                | Up to 128k tokens  | Matches GPT-4 Turbo; optimized for code and real-time data.
You.com (YouChat)           | ~32k tokens        | Based on open-source models such as Mixtral or GPT-J; context size varies by backend.
Perplexity AI (Claude/GPT)  | 128k - 200k tokens | Depends on which model is active (Claude or GPT-4 Turbo).
DeepSeek-V2                 | Up to 32k tokens   | Open-source Chinese model; context expanding in newer versions.
Meta LLaMA 3 (70B)          | 8k - 32k tokens    | Context size depends on fine-tuning and deployment; not yet competitive at scale.
Sources: Appscribed comparison, Leanware AI showdown, FlexOS context guide
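
To make the table concrete: whether a document "fits" a model comes down to its token count versus the model's context window. Below is a minimal sketch using the common rule of thumb of roughly 4 characters per token (an approximation only; real tokenizers such as OpenAI's tiktoken will differ, and the window sizes are just the figures from the table above).

```python
# Rough context-window fit check.
# Assumes the ~4 characters/token heuristic, which is only an estimate;
# a real tokenizer gives exact counts. Window sizes taken from the table.

CONTEXT_WINDOWS = {
    "GPT-4 Turbo": 128_000,
    "Claude 3 Opus": 200_000,
    "Gemini 1.5 Pro": 1_000_000,
}

def estimate_tokens(text: str) -> int:
    """Estimate token count at roughly 4 characters per token."""
    return max(1, len(text) // 4)

def models_that_fit(text: str) -> list[str]:
    """Return the models whose context window can hold the whole text."""
    n = estimate_tokens(text)
    return [model for model, limit in CONTEXT_WINDOWS.items() if n <= limit]

# A document of ~600k characters is about 150k estimated tokens:
# too large for a 128k window, but fine for 200k or 1M.
doc = "x" * 600_000
print(models_that_fit(doc))  # → ['Claude 3 Opus', 'Gemini 1.5 Pro']
```

This also shows why the 10x gap matters mostly for very large inputs (whole codebases, long legal files); for everyday prompts, all of these windows are far more than enough.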

What Does This Mean for You?
  • Claude 3 Opus and Gemini 1.5 Pro are currently the leaders in long-context reasoning—ideal for legal documents, codebases, or multi-turn simulations.
  • GPT-4 Turbo (ChatGPT) is highly efficient at 128k, especially for structured tasks and retrieval.
  • Grok is competitive in context size but optimized more for real-time and code-heavy tasks.
  • YouChat and DeepSeek are best for lightweight use, not deep context modeling.