Ancestors and Algorithms: AI for Genealogy

AI for Genealogy: AI Hallucinations in Genealogy - How to Prevent False Family History & Control ChatGPT, Claude & Perplexity Errors

Brian · Season 1, Episode 13

Are you using ChatGPT, Claude, Gemini, or Perplexity for genealogy research? AI hallucinations—when artificial intelligence confidently generates false information about your ancestors—represent one of the biggest dangers facing family historians today. Groundbreaking research from OpenAI published in September 2025 finally reveals WHY large language models make up plausible-but-false family history, and HOW you can prevent it with better prompting techniques.

This essential episode teaches you exactly how to control AI accuracy and protect decades of genealogy research. Whether you're a beginner intimidated by AI technology or an intermediate researcher who's encountered frustrating errors, you'll discover practical strategies that reduce hallucinations by up to 45%.

WHAT YOU'LL LEARN:

The Science Behind AI Hallucinations: OpenAI's peer-reviewed research explains why AI chatbots guess instead of admitting "I don't know"—using a multiple-choice test analogy that makes the problem crystal clear. Understand why genealogists are particularly vulnerable to AI-generated misinformation.

Real Genealogy Case Studies: Three detailed examples showing how AI invents ship names and passenger records, creates non-existent archive collections with realistic URLs, and generates precise migration statistics without actual data. Each includes warning signs to recognize and better prompting approaches.

The Hallucination Test: Try a simple experiment using a fictional genealogist (Dr. Edmund Fairweather) to experience AI hallucinations firsthand. This hands-on test works with any AI tool and helps calibrate your "hallucination detector."

7 Copy-Paste Prompt Templates:

  • The "According To" Prompt - Direct AI to base responses on specific sources
  • The Uncertainty Permission Prompt - Reward AI for admitting "I don't know"
  • The Step-Back Prompt - Request general context before specific details
  • The Source-Citation Request - Always demand verification methods
  • The Chain-of-Verification Prompt - Make AI verify its own statements
  • The Constrained-Choice Prompt - Provide options instead of open questions
  • The Role-Specific Prompt - Assign an honest genealogist persona

Each template includes real genealogy examples and explanations of why it works.

Complete Action Plan: Learn how to audit your current prompting habits, create a personal template library, implement progressive prompting strategies, build verification habits, and apply "trust but verify" to all AI-assisted genealogy research.

PERFECT FOR: Beginner genealogists worried about AI mistakes, intermediate researchers optimizing ChatGPT/Claude/Perplexity usage, traditional family historians curious about safe AI adoption, and anyone concerned about AI accuracy in genealogy.

Connect with Ancestors and Algorithms:

📧 Email: ancestorsandai@gmail.com
🌐 Website: https://ancestorsandai.com/
📘 Facebook Group: Ancestors and Algorithms: AI for Genealogy - www.facebook.com/groups/ancestorsandalgorithms/

Golden Rule Reminder: AI is your research assistant, not your researcher.

Join our Facebook group to share your AI genealogy breakthroughs, ask questions, and connect with fellow family historians who are embracing the future of genealogy research!

New episodes every Tuesday. Subscribe so you never miss the latest AI tools and techniques for family history research.




Imagine you're finally making progress on that brick wall ancestor who's been frustrating you for years. You've been working with ChatGPT, asking questions, getting ideas for where to search next. The AI suggests checking a specific courthouse for records from 1847. It even tells you the courthouse burned down in 1923, so those records might be lost. You spend two hours researching alternative sources for those supposedly destroyed records. Here's the thing: the courthouse never burned down. Those records are sitting there, perfectly intact, waiting for you to request them. You just wasted your entire research session chasing information that sounded completely plausible, but was entirely made up.

What if I told you that as of September 2025, we finally understand exactly why AI makes up these convincing falsehoods? And, more importantly, what if I told you that with the right techniques, you can dramatically reduce how often this happens? Today, we're taking control of AI hallucinations. Not just spotting them after the fact, but actively preventing them from happening in the first place. And I'm going to give you specific prompts you can copy and paste right now to make your AI assistants tell you the truth more often. I'm your host, Brian, and this is Ancestors and Algorithms, where family history meets artificial intelligence.

Welcome back, everyone! If you listened to episode 7, you know we've talked about AI hallucinations before. We covered what happens when AI gets your family history wrong, and how to recover from those mistakes. That episode was about damage control: recognizing the problem, understanding how it happens, and fixing it. Today is different. Today, we're going on the offense.

Since that episode aired, three important things have happened. First, my audience has grown tremendously, which means many of you missed that earlier conversation. I've heard from hundreds of new listeners who are just discovering how AI can transform their genealogy research. And I don't want you to make the same mistakes that early AI adopters made. You deserve to learn the right way from the beginning. Second, OpenAI published groundbreaking research in September 2025 that finally explains the root cause of hallucinations. This isn't speculation or theory. This is peer-reviewed research from the very people who build these AI systems. And what they found changes how we should be using these tools. And third, the prompt engineering community, which is basically the group of people who professionally figure out how to get the best results from AI, has developed specific techniques that significantly reduce how often AI makes things up. These aren't complicated computer science methods. They're simply rewording strategies that any genealogist can use, starting today.

Now, I want to be clear about something right up front. This episode is primarily for beginner and intermediate listeners. Those of you who are still getting comfortable with AI tools, and those who know just enough to be dangerous. If you're an advanced AI user who follows AI development closely, you probably already know a lot of what I'm going to cover. But I bet you'll still pick up a few new techniques, especially the genealogy-specific applications. For everyone else, here's my promise. By the end of this episode, you'll understand what hallucinations really are, why they happen, how to test for them, and most importantly, how to dramatically reduce them with better prompting.
And you're going to do all of this without getting scared away from using these incredibly powerful tools. Because here's the truth. AI hallucinations aren't a reason to avoid AI. They're a reason to learn how to use AI properly. They're like any other tool. A chainsaw is dangerous if you don't know how to use it safely, but incredibly useful when you do. AI is the same way. The answer isn't to avoid the tool. The answer is to learn the safety procedures. And that's exactly what we're doing today. Learning the safety procedures for working with AI in genealogy research.

Let me start with what might be the most important genealogy research news of 2025. And it has nothing to do with new record collections or DNA testing advances. On September 5, 2025, OpenAI published a research paper that finally explains why large language models, ChatGPT, Anthropic's Claude, Google Gemini, Perplexity, all of them, confidently make up false information. And the answer is both simpler and more profound than most people expected.

Think about this scenario. You're back in school, taking a multiple choice test. You come across a question you don't know the answer to. You have two choices. You can leave it blank, which guarantees zero points, or you can guess, which gives you a one in four chance of getting it right. What do most students do? They guess. Because even a 25% chance of success beats a guaranteed zero. That's exactly how AI models have been trained to behave. According to OpenAI's research, language models hallucinate because their training and evaluation procedures reward guessing over admitting uncertainty. These AI systems are essentially always in test-taking mode, and the way they're graded encourages them to make their best guess, rather than say, I don't know.

Let me give you a concrete example from the research paper. If you ask an AI for someone's birthday and it doesn't actually know, it has two options. It can say, I don't know, which guarantees zero points in how it's evaluated. Or, it can guess September 10th, which gives it a one in 365 chance of being correct. Over thousands of test questions, the AI that guesses ends up with a higher score than the AI that honestly admits uncertainty. So, the system learns to guess. And when it guesses, it doesn't say, I'm guessing here. It states its guess with complete confidence as if it's a verified fact.

Now, why does this matter for genealogists specifically? Because genealogy is absolutely filled with gaps, uncertainties, and missing information. When we ask AI about our ancestors, we're often asking about people and events that aren't well documented. We're asking AI to help us navigate exactly the kind of territory where it's most likely to start guessing.
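If you like seeing the arithmetic, here's a minimal sketch of that scoring argument in Python. The numbers are illustrative, they follow the birthday example above rather than any figures from the OpenAI paper, but they show why a scoreboard that gives zero credit for saying I don't know ends up rewarding the guesser.

```python
# Minimal sketch: why grading that gives zero credit for "I don't know"
# rewards guessing. Illustrative numbers only, following the birthday example.

unknown_questions = 10_000      # questions the model genuinely cannot answer
p_lucky_guess = 1 / 365         # chance a blind birthday guess is correct

score_if_guessing = unknown_questions * p_lucky_guess   # about 27 points
score_if_abstaining = unknown_questions * 0             # always zero

print(f"Always guess:      {score_if_guessing:.1f} expected points")
print(f"Admit uncertainty: {score_if_abstaining:.1f} expected points")

# The guesser wins on this kind of scoreboard, so a model optimized for it
# learns to state confident guesses instead of saying "I don't know."
```

Change the incentive, for example by valuing an honest I don't know inside a single conversation, and the behavior changes with it. That's the idea behind the uncertainty permission prompt later in this episode.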

Let me paint you a picture of how this plays out in real genealogy research. Let's say you're researching your great-great-grandfather, Thomas Murphy, who you believe lived in Pennsylvania in the 1870s. You found him in the 1870 census, but you can't find him in 1880. So, you ask ChatGPT what happened to Thomas Murphy after 1870.

A properly cautious response would be, There are several common reasons people disappear from census records, including death, migration to another state, enumeration errors, or name variations. To determine what happened to your specific Thomas Murphy, you'd need to search death records, check neighboring states' census records, and look for alternative spellings.

But here's what AI might actually say if it's in guessing mode. Quote, Thomas Murphy likely moved to Ohio during the coal mining boom of the 1870s. Many Pennsylvania miners relocated to the Hocking Valley region during this period. You should search for him in Athens or Hocking counties in Ohio's 1880 census. He may have anglicized his name to Murphy Thomas due to anti-Irish sentiment in the mining camps. End quote.

Do you see the difference? The second response sounds incredibly helpful and specific. It gives you concrete research directions. It provides historical context. It even explains a potential name change. And absolutely none of it is based on facts about your Thomas Murphy. The AI just made an educated guess based on patterns it learned from thousands of other Irish immigrant stories. This is what makes AI hallucinations so dangerous in genealogy. They're not random nonsense. They're plausible, historically contextualized fiction that perfectly fills the gaps in your knowledge.

Now, here's where things get interesting. OpenAI's research doesn't just explain why hallucinations happen. It suggests how to prevent them. The problem isn't that AI is inherently dishonest. The problem is that we've been training AI systems using methods that reward confident guessing over humble honesty. The solution? We need to change how we prompt AI and what we reward in its responses. We need to explicitly tell AI that saying, I don't know, is not just acceptable, it's preferable to guessing. We need to structure our prompts in ways that make it easier for AI to admit uncertainty than to fabricate plausible sounding answers. And that's exactly what the rest of this episode is about.

But, before we get to the solutions, I need to make sure you can recognize hallucinations when you see them. Because the first step in preventing a problem is understanding what it looks like. Remember our golden rule? AI is your research assistant, not your researcher. Hallucinations happen when we blur that line. When we let AI do the researching instead of assisting with our research. When we understand that AI is guessing to fill gaps rather than admitting uncertainty, we can structure our questions differently to get honest assistance instead of confident fiction.

Let me share three fictional but highly realistic case studies of genealogy hallucinations. These are composites based on patterns I've seen in genealogy groups and discussions with other researchers. I guarantee that if you use AI for genealogy, you've either experienced something similar or you will.

Case study number one, the specific but wrong historical detail. Margaret is researching her ancestor, Elizabeth Whitmore, who arrived in Boston from England in 1851. She asks Claude, "What ship did Elizabeth Whitmore likely take from Liverpool to Boston in 1851?" Claude responds, "Based on passenger list patterns from that period, Elizabeth Whitmore most likely traveled on either the Washington Irving or the Ocean Monarch, both packet ships that regularly sailed the Liverpool-Boston route during 1851.
The Washington Irving was particularly popular with English immigrants during this period. You should search the passenger manifest for both vessels in the Washington port records."

Margaret spends three research sessions tracking down passenger lists for these ships. She even locates other Whitmore families, but no Elizabeth.

Here's what happened. Claude made up both ship names. Oh, there were real ships called the Washington Irving and the Ocean Monarch, but Claude invented their routes and travel dates. It knew that Margaret needed ship names, so it provided them with complete confidence, even though it had no idea what ship Elizabeth actually took.

The hallucination was sophisticated because it contained true elements, there really were regular packet ships from Liverpool to Boston in 1851, wrapped around a completely fabricated core of specific ship names and routes.

What Margaret should have asked instead: "What resources are available for researching ship passenger lists from Liverpool to Boston in 1851? How would I go about finding which ship my ancestor Elizabeth Whitmore took?" This revised question asks for research methodology, not specific facts about Elizabeth. It invites Claude to suggest databases, archives, and search strategies, all things AI can confidently recommend, rather than inventing ship names it doesn't actually know.

The lesson here is crucial. When AI gives you specific names, dates, or locations without being able to cite a source, you're likely dealing with a hallucination. Always follow up by asking "Where would I find documentation of this?" If AI gets vague or changes the subject, that's your red flag.

Case study number two, the plausible but invented resource. James is trying to find his great-grandfather's Civil War records. The basic service record is easy to find, but James wants more details about the battles his ancestor fought in. He asks, "Where can I find the detailed battle participation records for soldiers in the 23rd Wisconsin Infantry?" ChatGPT responds, "The most comprehensive resource for 23rd Wisconsin Infantry Battle Records is the Wisconsin Regimental History's Collection, housed at the Wisconsin Historical Society in Madison. This collection includes detailed company-level battle reports, personal diaries, and a day-by-day chronology of the regiment's movements. You can access much of this collection through their digital archive at wisconsinhistory.org/regiment23."

James contacts the Wisconsin Historical Society. They have extensive Civil War collections, but there's no Wisconsin Regimental History's Collection as a named collection. The URL doesn't exist. What the AI did was create a resource that SHOULD exist, a perfectly reasonable collection that fits with how historical societies actually organize their archives, but doesn't. ChatGPT knew James needed to be directed to archives, so it invented a perfectly plausible source rather than saying, quote, I don't know the specific name of the collection, but the Wisconsin Historical Society would be your best resource. End quote.

This type of hallucination is particularly dangerous because James did find relevant information at the Wisconsin Historical Society, just not through the fictional resource the AI described. So, he might never realize the AI hallucinated, and he might recommend that non-existent Wisconsin Regimental History's Collection to other genealogists, spreading the misinformation.

What James should have asked, quote, What types of resources typically contain detailed Civil War battle participation records? Where would historical societies like the Wisconsin Historical Society typically house regimental information? End quote. By asking about types of resources and typical organizational structures, James would have gotten useful guidance without inviting AI to invent specific collection names.

Here's a pro tip. Whenever AI gives you a specific collection name, archive designation, or website URL, verify it exists before investing research time. A quick web search or email to the archive can save hours of chasing phantom resources. And if you discover the resource doesn't exist, that's valuable information. It tells you AI is in hallucination mode for this conversation, and you should verify everything else it's told you.

Case study number three, the confident statistical claim. Patricia is researching her Irish immigrant ancestors and wants to understand migration patterns. She asks, quote, What percentage of Irish immigrants to New York in 1848 ended up moving to Pennsylvania? End quote. Perplexity responds, quote, Approximately 34% of Irish immigrants who arrived in New York City in 1848 eventually relocated to Pennsylvania, with most settling in Philadelphia and the anthracite coal regions. This secondary migration typically occurred within the first three years of arrival. End quote.

But unless Perplexity pulled this from an actual scholarly source in its search results, it's likely a hallucination. A plausible number based on general patterns, but not actual historical data. The problem is that genealogists love statistics like this. They help us understand our ancestors' likely experiences. So we're predisposed to accept and use them, even when they're fabricated.

Here's what makes statistical hallucinations particularly tricky. They often fall within a plausible range. If AI said 97% of Irish immigrants moved to Pennsylvania, you'd immediately question that. But 34%? That sounds reasonable. It's not too high, not too low. It's exactly the kind of statistic that might appear in a migration study. The tell is the precision and lack of source. Real migration statistics from the 1840s are notoriously difficult to track because people moved frequently and weren't systematically recorded. Any legitimate statistic would need to cite a specific scholarly study that attempted this calculation. When AI provides a precise percentage without explaining where that number comes from, be skeptical.

What Patricia should have asked, quote, What do historians know about Irish immigrant migration patterns from New York to Pennsylvania in the mid-1800s? Are there any studies that have attempted to quantify these movements? End quote. This revised question asks about the state of historical knowledge on this topic rather than requesting a specific statistic. If good quantitative data exists, AI can point Patricia to it. If it doesn't exist, AI should acknowledge that rather than inventing numbers.

Here's a practical verification technique for statistical claims. Whenever AI gives you a percentage, ratio, or specific number about historical events, immediately ask, quote, What source is this from? How was this calculated? End quote. If AI can't provide a specific scholarly citation, treat the number as a hallucination, a plausible estimate, not a verified fact. The broader lesson? AI is trained on a lot of data, including speculative essays, opinion pieces, and poorly sourced genealogy websites. It can't always distinguish between a carefully researched statistic from a peer-reviewed journal and someone's educated guess posted on a genealogy forum. That's our job as researchers. To demand sources and verify claims against primary evidence.

Now, what do all three of these hallucinations have in common? First, they're all plausible. They fit with historical reality. They sound like things that could and probably should be true. Second, they're all specific. They give names, numbers, percentages, URLs. Specificity creates an illusion of reliability. Third, they're all helpful. They answer our questions. They give us research directions. They don't frustrate us by saying, I don't know. And fourth, and this is crucial, they're all preventable with better prompting.

So how do you spot these hallucinations before they send you down a research rabbit hole? Here are the red flags. Red flag number one, suspiciously specific details without sources. When AI gives you exact ship names, precise percentages, or specific collection names without citing where that information comes from, be suspicious. Real historical data comes from somewhere, a book, a database, an archive. Hallucinated data appears out of thin air.
Red flag number two, the perfect fit problem. If the AI's response fits your research need a little too perfectly, if it gives you exactly the specific detail you were hoping to find, pause and verify. Real research is messy and often tangential. Hallucinations are designed to fill your exact gaps.

Red flag number three, confident answers to difficult questions. When you ask a question that would actually require detailed archival research or specialized knowledge to answer, and AI responds instantly with complete confidence, that's a warning sign. Real experts say, I'd need to check, or that would require looking at the actual records. AI just answers.

Red flag number four, inability to cite primary sources. If you follow up by asking, where did you get that information, and the AI gets vague, changes the subject, or cites broad sources rather than specific ones, you've likely encountered a hallucination. Real data can be traced back to its source.

But here's the thing. Recognizing hallucinations after they happen is reactive. What we really want is to prevent them from happening in the first place. And that's where proper prompting comes in.

Before I give you the prevention techniques, I want you to understand exactly how hallucinations work by running a simple test. This is something you can do right now if you want to pause the episode, or you can try it later. Either way, I want you to actually do this test, because experiencing a hallucination firsthand will make you much better at preventing them. Here's the test. I'm going to give you a prompt to try with any AI tool. Claude, ChatGPT, Gemini, Perplexity, whichever one you use most. The prompt contains a completely fictional person. I'm making this person up right now. They never existed. But the AI doesn't know that.

Here's the prompt. Quote, Tell me about the genealogical research methods used by Dr. Edmund Fairweather, who published extensively about Pennsylvania German migration patterns in the 1960s. End quote. Copy that prompt exactly and paste it into your AI tool of choice. See what happens.

Now, what you're likely to see is one of two responses. If the AI is being cautious, and if it's been recently updated with better hallucination controls, it might say something like, quote, I don't have any information about a genealogist named Dr. Edmund Fairweather. End quote. That's the honest response. That's what we want to see.

But if the AI is in guessing mode, it might generate a detailed response about this completely fictional person. It might tell you about his methodologies, his key publications, his contributions to the field. It might even invent book titles and describe his research approach. It might tell you he worked at a specific university, published in specific journals, and influenced specific other researchers. And here's what's fascinating.

In my testing, Claude tends to be more cautious about admitting uncertainty. It's more likely to say, "I don't have information about this person." Perplexity, when it's searching the web, should tell you it can't find information about this person. Which is actually helpful, because it's admitting the web search came up empty. ChatGPT's response will depend on which version you're using and how it's been configured. Gemini's response will vary too.

The point of this test isn't to embarrass AI or prove it's unreliable. The point is to show you exactly what a hallucination looks like when you know it's wrong. Because once you've seen AI confidently describe a person who never existed, you'll be much better at spotting when it's doing the same thing with your real ancestors. Pay attention to how the hallucination feels. Notice how confident the language is. Notice how specific the details are. Notice how everything fits together into a coherent narrative. That's what makes hallucinations dangerous. They're not obviously wrong. They sound exactly like legitimate information.

Now, here's the advanced version of the test. After the AI generates its response about our fictional Dr. Fairweather, ask this follow-up question. Quote, what primary sources would I need to verify this information? End quote. Watch what happens. If the AI hallucinated the original response, it will often hallucinate sources to back up its hallucination. It might invent journal names, archive collections, or publication titles. It's doubling down on the original guess. You might see responses like, you could check the Pennsylvania German Society archives. Or, his work was primarily published in the Journal of German American Studies. All made-up details to support the original made-up person.

But sometimes, and this is actually encouraging, the AI will back off when you ask for sources. It might say something like, I should clarify that I'm not certain about these details, and you should verify them in actual historical sources. Or, it might become noticeably more vague, saying things like, you would need to check genealogical publication databases, without specifying which ones. That's the AI's way of admitting, belatedly, that it was guessing. It's recognizing that it can't actually cite sources for information it invented.

Here's what I want you to pay special attention to when you run this test. How does it feel when AI tells you false information with complete confidence? How does your gut react? Are you inclined to believe it because it sounds authoritative? Or do you feel a little skeptical? Developing that internal hallucination detector, that sense of, wait, something seems off here, is one of the most valuable skills you can build for working with AI. And the best way to calibrate that detector is to see hallucinations when you know they're false.

The reason I want you to run this test is because it demonstrates something crucial. AI will fill gaps in its knowledge with plausible sounding information unless we specifically structure our prompts to prevent this. The hallucination isn't malicious. It's not trying to deceive you. It's doing exactly what it was trained to do. Providing helpful sounding responses even when it's uncertain. Our job as responsible genealogists using AI is to prompt in ways that encourage honesty over helpfulness. We need to make it easier for AI to say, I don't know, than to guess.
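If you'd rather run the Fairweather test programmatically than in a chat window, here's a minimal sketch using the openai Python package. The two-step structure is the same one described above; the model name, and the assumption that you have an API key configured, are illustrative choices, and the same pattern works in any chat tool.

```python
# Minimal sketch of the two-step hallucination test described above.
# Assumes the `openai` package is installed and OPENAI_API_KEY is set;
# the model name is illustrative.
from openai import OpenAI

client = OpenAI()

test_prompt = (
    "Tell me about the genealogical research methods used by Dr. Edmund "
    "Fairweather, who published extensively about Pennsylvania German "
    "migration patterns in the 1960s."
)
follow_up = "What primary sources would I need to verify this information?"

messages = [{"role": "user", "content": test_prompt}]
first = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
answer = first.choices[0].message.content
print("First response:\n", answer)

# Step two: ask the model to point at sources for what it just said.
messages += [
    {"role": "assistant", "content": answer},
    {"role": "user", "content": follow_up},
]
second = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print("\nVerification follow-up:\n", second.choices[0].message.content)
```

Whichever tool you use, the thing to watch in the second response is the same as in the manual test: does the follow-up point to real, checkable sources, or does it invent more?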
And that brings us to the practical techniques you can use starting today to dramatically reduce hallucinations in your genealogy research. Alright, this is the section you've been waiting for. I'm going to give you seven specific prompting techniques that reduce AI hallucinations. For each technique, I'll explain why it works and give you a copy-paste prompt you can modify for your own research.

Technique number one, the according to prompt. This is the simplest and most effective technique. You explicitly tell the AI to base its responses on specific sources. Instead of asking your question outright, you preface it with, quote, according to, end quote, followed by the specific source or record set you want the answer grounded in.

Why this works. By adding according to, you're prompting the AI to ground its responses in actual information rather than speculation. You're giving it permission to cite sources or admit when it doesn't have sourced information.

Technique number two, the uncertainty permission prompt. This technique explicitly tells AI that saying, I don't know, is acceptable and even preferred. Here's the prompt structure. Quote, I need your help with [research question]. It's important that you only provide information you're confident about. If you're uncertain or would be guessing, please tell me that instead of speculating. I'd rather have an honest, I don't know, than a confident guess. End quote. Why this works. Remember OpenAI's research? AI guesses because it's been trained to avoid admitting uncertainty. By explicitly rewarding honesty, you're overriding that training for this specific conversation.

Technique number three, the step back prompt. Instead of asking AI for specific information about your ancestor, ask it to explain the general historical context first, then help you research specifics. Instead of, quote, where did my ancestor John O'Brien work in Boston in 1850, end quote, try, quote, what were the typical occupations and neighborhoods for Irish immigrants in Boston during the 1850s? After that, help me create a research plan for finding specific information about my ancestor John O'Brien, end quote. Why this works. This prompting technique, called step-back prompting, forces the AI to think at a high level before diving into specifics. It's much less likely to hallucinate about general historical patterns than about your specific ancestor. And by separating general context from specific research, you make it clear when AI is moving from facts to speculation.

Technique number four, the source citation request. Always ask AI to cite its sources or explain where information could be verified. Add this to any genealogy question. Quote, please provide sources for your information or, if you're speaking generally, tell me where I could verify these facts, end quote. Why this works. AI is less likely to hallucinate when it knows you're going to ask for sources. And when it does hallucinate, asking for sources will often cause it to back off and admit uncertainty. It's like showing someone you're taking notes during a conversation. People tend to be more careful about what they say when they know it's being recorded.

Technique number five, the chain of verification prompt. This is a more advanced technique where you ask AI to verify its own statements. Here's how it works in two steps. Step one, ask your question and get AI's response. Step two, use this follow-up prompt. Quote, based on your previous response, what question should I ask to verify that information? What would prove or disprove each claim you made, end quote. Why this works. This forces AI to think critically about its own output. Often, when asked to generate verification questions, AI will realize it's been speculating and adjust its confidence level.

Technique number six, the constrained choice prompt. Instead of asking open-ended questions, give AI specific options to choose from. Instead of, quote, why can't I find my ancestor in the 1880 census, end quote, try, quote, I can't find my ancestor in the 1880 census. Which of these explanations is most likely? A. They died between 1870 and 1880. B. They moved to a different state. C. There's an enumeration error or name variation. D. The census page is damaged or missing.
For whichever option seems most likely, tell me what records I should search next. End quote. Why this works. By providing options, you're preventing AI from inventing creative explanations. It has to work within the boundaries you've set.

And, finally, technique number seven, the role-specific prompt with honesty clause. Ask AI to act as a specific type of expert, but explicitly include honesty requirements. Here's the structure. Quote, act as a professional genealogist who values accuracy over appearing knowledgeable. I need help with [research question]. If this question requires examining actual records to answer accurately, tell me that rather than speculating. If you can suggest research strategies without making specific claims about my ancestor, that's what I need. End quote. Why this works. You're giving AI a role, professional genealogist, but defining that role as someone who admits limitations. Directly countering the guess-rather-than-admit-uncertainty problem.

Now, if you were not able to write down these techniques as I read them, don't worry. I will be posting them in our Facebook group, Ancestors and Algorithms, AI for Genealogy, throughout this week. Come on over and grab them from there. Now, I know that's a lot of different prompting techniques. You don't need to use all of them all the time. Start with the simple ones, the according to prompt and the uncertainty permission prompt. Get comfortable with those. Then, gradually add the more sophisticated techniques as they become relevant to your research.

The key principle underlying all of these techniques is this. We're teaching AI that honesty and admission of uncertainty are more valuable than appearing to have all the answers. We're essentially reprogramming, for our specific conversations, the reward structure that OpenAI's research identified as the cause of hallucinations. And here's what I've found in my own research over the past 1,000 hours. When you consistently use these prompting techniques, AI becomes remarkably more honest. It starts volunteering when it's uncertain. It distinguishes more clearly between general historical patterns and specific claims about your ancestors. It stops inventing resources and starts admitting when it doesn't know.

Remember our golden rule? AI is your research assistant, not your researcher. These prompting techniques are how we enforce that rule. We're not asking AI to research our ancestors and report back with facts. We're asking AI to assist our research by providing context, suggesting strategies, and helping us think through problems while being honest about the boundaries of its knowledge.

Let's bring this all together into a practical action plan you can implement immediately. Step 1. Run the hallucination test. Before you do anything else, run that Dr. Edmund Fairweather test I mentioned earlier with your preferred AI tool. Experience what a hallucination looks like when you know it's wrong. This will calibrate your hallucination detector.

Step 2. Audit your current prompts. Look at how you've been asking AI questions about your genealogy research. Are you asking open-ended questions that invite speculation? Are you giving AI permission to say, I don't know? Are you asking for sources? Most likely you haven't been because none of us were trained to prompt this way. That's okay. Now you know.

Step 3. Create your prompt template library. Take the 7 prompting templates I gave you today and save them somewhere accessible. A Google Doc, a note-taking app, wherever you keep your research resources. Modify them to fit your research style. Make them your own. The goal is to have these templates ready to use so you don't have to reinvent better prompting every time you open an AI tool.
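For listeners who keep research notes in plain text or code, here's one minimal sketch of what a personal template library could look like in Python. The wording of the two templates is adapted from this episode; the function name and the placeholder fields are just illustrative choices, not anything official.

```python
# Minimal sketch of a personal prompt-template library.
# Template wording is adapted from this episode; structure is illustrative.

TEMPLATES = {
    "uncertainty_permission": (
        "I need your help with {question}. It's important that you only "
        "provide information you're confident about. If you're uncertain or "
        "would be guessing, please tell me that instead of speculating. I'd "
        "rather have an honest 'I don't know' than a confident guess."
    ),
    "step_back": (
        "What were the typical {context} during the {period}? After that, "
        "help me create a research plan for finding specific information "
        "about my ancestor {ancestor}."
    ),
}

def build_prompt(name: str, **details: str) -> str:
    """Fill one of the saved templates with the details of today's question."""
    return TEMPLATES[name].format(**details)

if __name__ == "__main__":
    print(build_prompt(
        "step_back",
        context="occupations and neighborhoods for Irish immigrants in Boston",
        period="1850s",
        ancestor="John O'Brien",
    ))
```

However you store them, the goal is the same: write each template once and reuse it, instead of improvising a new prompt every research session.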
Step 4. Practice progressive prompting. Start with general questions about historical context before asking specific questions about your ancestors. Build up from broad to specific. This step-back approach dramatically reduces hallucinations because AI is less likely to invent general historical information than specific details about people it's never heard of.

Step 5. Always ask for verification paths. Make it a habit. Whenever AI gives you specific information, immediately ask, how would I verify this? Or, what primary sources would confirm this? This second prompt will often reveal whether the first response was based on real information or educated guessing.

Step 6. Trust but verify. This is the oldest rule in genealogy, and it applies to AI just as much as to family stories, online trees, or secondary sources. No matter how confident AI sounds, no matter how perfectly the information fits your needs, verify it against primary sources. AI is a research assistant, not a primary source itself. I bet you thought I was going to say researcher.

Step 7. Share what you learn. When you discover that AI hallucinated something, don't just fix it privately. Share that experience with other genealogists. Post about it in genealogy groups. Email me about it. We need to build a community knowledge base about AI's quirks and limitations so we can all become better at using these tools effectively.

Here's what I want you to take away from today's episode. AI hallucinations are not a mysterious, unsolvable problem. They're a predictable result of how AI systems are trained, and they can be dramatically reduced through better prompting techniques. You don't need to be a prompt engineering expert. You don't need to understand machine learning. You just need to consistently apply a few simple principles. Ask AI to cite sources, give it permission to admit uncertainty, separate general context from specific claims, and always verify against primary sources. When you do these things, AI transforms from a potentially dangerous source of confident misinformation into exactly what it should be. A powerful research assistant that helps you think through problems, suggests research strategies, and provides historical context while being honest about the boundaries of its knowledge.

Before I let you go, here's my challenge for you this week. Pick one ancestor you're actively researching and re-prompt your AI conversations using at least three of the techniques we've discussed today. Compare the responses you get with these better prompts to the responses you were getting before. I bet you'll notice AI being more careful, more willing to admit uncertainty, and more helpful in distinguishing between verified information and educated guesses. That's not a different AI. That's the same AI responding to better instructions from you.

As always, you can email me at ancestorsandai@gmail.com or visit ancestorsandai.com to share your experiences or suggest topics for future episodes. Remember, AI is your research assistant, not your researcher. Prompt it well, verify everything, and happy researching. Until next week, this is Ancestors and Algorithms, where family history meets artificial intelligence.