fact-checker

Fact-checking LLM outputs with self-ask

306 Stars · +0 Stars/month · 0 Releases (6m)


Overview

A proof-of-concept tool that fact-checks LLM outputs using a self-interrogation approach built on prompt chaining. The system has an LLM generate an initial answer to a question, then examine the assumptions underlying that answer, systematically verify each assumption, and finally generate a corrected response incorporating what it found. This creates a four-step verification loop: initial response → assumption identification → assumption verification → corrected answer.

The tool demonstrates how large language models can be prompted to catch and correct their own factual errors through structured self-questioning. While simple in implementation, it showcases an important technique for improving AI reliability by making models explicitly examine their own reasoning. The approach is particularly effective at catching obvious factual errors where the model holds conflicting knowledge, as in the bundled example where it correctly notes that mammals don't lay eggs after initially claiming that elephants lay the biggest eggs.

As a research demonstration with 306 GitHub stars, it provides a clear foundation for understanding self-verification techniques in AI systems.
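The four-step loop can be sketched roughly as follows. This is an illustrative reconstruction, not the repository's actual code: `ask` is a placeholder for whatever chat-completion call is used (e.g. an OpenAI or Anthropic client), and the prompt wording is assumed.

```python
"""Sketch of a self-ask fact-checking chain (illustrative, not the repo's code)."""
from typing import Callable


def fact_check(question: str, ask: Callable[[str], str]) -> dict:
    # Step 1: get an initial answer.
    initial = ask(f"Answer the question concisely: {question}")

    # Step 2: surface the factual assumptions behind that answer.
    assumptions = ask(
        "List, one per line, the factual assumptions underlying this "
        f"answer to '{question}':\n{initial}"
    ).splitlines()

    # Step 3: verify each assumption with a separate prompt.
    verifications = [
        ask(f"Is the following assumption true? Answer and explain briefly: {a}")
        for a in assumptions
        if a.strip()
    ]

    # Step 4: regenerate the answer in light of the verification results.
    corrected = ask(
        f"Question: {question}\nOriginal answer: {initial}\n"
        "Assumption checks:\n" + "\n".join(verifications) +
        "\nGive a corrected final answer."
    )
    return {
        "initial": initial,
        "assumptions": assumptions,
        "verifications": verifications,
        "corrected": corrected,
    }
```

Because each step is a plain prompt, the chain works with any LLM backend; the model only ever sees its own prior output, which is both the technique's appeal and (as noted below) its main limitation.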

Deep Analysis

Key Differentiator

vs single-pass LLM responses: iterative assumption-surfacing and self-verification that systematically reveals reasoning flaws by forcing the model to examine its own assumptions

Capabilities

  • Iterative fact verification through prompt chaining
  • Assumption surfacing from initial LLM answers
  • Sequential assumption validation and correction
  • Self-interrogation of reasoning steps
  • Jupyter notebook interface for interactive exploration

🔗 Integrations

LLM API (Claude/OpenAI implied)

Best For

  • Validating assumptions in LLM responses
  • Educational demonstrations of prompt chaining techniques
  • Identifying logical flaws in AI-generated answers

Not Ideal For

  • Real-time fact-checking against current information
  • Replacing authoritative fact-checking organizations
  • Legal or compliance verification

Languages

Python

Deployment

  • CLI script execution
  • Jupyter notebook

Known Limitations

  • Relies entirely on LLM reasoning quality
  • No independent verification against external sources
  • Vulnerable to confidently false assumptions
  • Simple demonstration, not production tool

Pros

  • + Simple and elegant demonstration of LLM self-verification through structured prompt chaining
  • + Effectively catches factual errors by forcing explicit examination of underlying assumptions
  • + Lightweight implementation that can be easily understood and modified for research purposes

Cons

  • - Limited to proof-of-concept status rather than production-ready fact-checking solution
  • - Relies on the same LLM for both initial answers and verification, creating potential circular reasoning
  • - May not catch subtle factual errors or complex reasoning flaws that require external knowledge sources

Use Cases

  • Educational tool for teaching AI safety and self-verification concepts to students and researchers
  • Research foundation for developing more sophisticated LLM fact-checking and self-correction systems
  • Demonstration platform for understanding how prompt chaining can improve AI reasoning reliability

Getting Started

1. Clone the repository and ensure Python 3 is installed on your system.
2. Run the fact-checker with your question: `python3 fact_checker.py 'your question here'` (remember to wrap the question in quotes).
3. Alternatively, open and run the provided `fact_checker.ipynb` Jupyter notebook for an interactive experience.
