Strategies & Market Trends : Technical analysis for shorts & longs


To: Johnny Canuck who wrote (70627)2/9/2026 12:07:54 PM
From: Johnny Canuck
 
Conversation with Gemini



Summarize : m.youtube.com



In this video, Dylan Davis provides a comprehensive guide on how to prevent AI models like ChatGPT, Gemini, and Claude from "hallucinating" (guessing) when extracting information from documents.

The Problem: Helpful vs. Accurate

AI models are trained to be helpful assistants. If they cannot find information in a document, they often guess confidently rather than admitting they don't know, filling in gaps with their existing training data [01:17].

Step 1: Choose the Right Model

To minimize errors, always use high-level reasoning models rather than default versions [01:47]. Recommended models include:

  • ChatGPT: GPT-o1 (or the latest reasoning model) [02:13].

  • Claude: Opus 3.5/4.5 with extended reasoning [02:20].

  • Gemini: Gemini 1.5/2.0 Pro [02:25].

Step 2: The 3-Rule Grounding Prompt

Adding these three specific rules to your prompt forces the AI to stay anchored to your file [03:02]:

  1. Strict Grounding: "Base your answer only on the uploaded documents and nothing else" [03:13].

  2. Permission to Fail: "If information isn't found, say 'not found.' Do not guess" [03:30].

  3. Mandatory Citations: Require the AI to provide the document name, page/section, and a direct quote for every claim [04:07].
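The three rules above can be packaged into a reusable prompt template. A minimal Python sketch — the function name is my own, and the rule wording is paraphrased from the summary, so treat it as illustrative rather than the video's exact prompt:

```python
def build_grounding_prompt(question: str) -> str:
    """Prepend the three grounding rules to a question before sending
    it to any chat model. Wording paraphrased from the video summary."""
    rules = (
        "Follow these rules strictly:\n"
        "1. Base your answer only on the uploaded documents and nothing else.\n"
        "2. If information isn't found, say 'not found.' Do not guess.\n"
        "3. For every claim, cite the document name, the page/section, "
        "and a direct quote.\n"
    )
    return f"{rules}\nQuestion: {question}"

print(build_grounding_prompt("What was the company's Q3 revenue?"))
```

The same string can be pasted into ChatGPT, Gemini, or Claude along with the uploaded file, or used as a system prompt when calling an API.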

Bonus Tips:

  • The Gray Area: Ask the AI to mark information as "unverified" if it is only "pretty sure," helping you prioritize what to double-check manually [05:20].

  • High Stakes: For legal or financial work, tell the AI to only respond if it is 100% confident [06:07].

Step 3: Verification Methods

Davis outlines three ways to check the AI's work [06:36]:

  • Self-Check: In the same chat, ask the AI to "rescan the document" to find supporting quotes for its own previous claims [06:56].

  • Cross-Model Check: Take the first AI's analysis and the source document, then feed them into a different AI model to flag unsupported claims [07:35].

  • NotebookLM: Upload your document and the AI's analysis to Google's NotebookLM. It is specifically designed for grounded search and provides clickable source links for easy validation [08:25].
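The cross-model check above amounts to handing a second model the source document plus the first model's analysis, with an instruction to flag unsupported claims. A hedged sketch of that verification prompt — the function name and wording are my own, not from the video; the resulting string can be fed to any second chat model:

```python
def build_cross_check_prompt(source_doc: str, analysis: str) -> str:
    """Build a prompt asking a *different* model to flag any claim in
    `analysis` that is not directly supported by `source_doc`.
    Illustrative wording only."""
    return (
        "You are a fact-checker. Compare the ANALYSIS below against the "
        "SOURCE DOCUMENT. List every claim in the analysis that is not "
        "directly supported by the source, quoting each such claim. "
        "If all claims are supported, say so.\n\n"
        f"SOURCE DOCUMENT:\n{source_doc}\n\n"
        f"ANALYSIS:\n{analysis}"
    )

print(build_cross_check_prompt("Revenue was $10M in Q3.",
                               "The company earned $12M in Q3."))
```

Running the document and analysis through a model family different from the one that produced the analysis reduces the chance that both make the same confident guess.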

Video Link: The 3-Rule Prompt That Stops ChatGPT, Gemini, and Claude From Guessing






