Documentation Index
Fetch the complete documentation index at: https://docs.thig.ai/llms.txt
Use this file to discover all available pages before exploring further.
How AI Gets Smart
Every time you send a message in thig.ai, the AI doesn’t just read your message; it assembles a rich context from multiple sources to give you the most relevant response. This page explains exactly what happens.

The Context Pipeline
When you send a chat message, four sources of context are assembled before the AI sees your message:

Layer 1: User Profile
Your professional context from Settings > Profile:
- Role and experience level (PM, Developer, Founder, etc.)
- Company name, size, and industry
- Preferred document style (Formal, Conversational, Technical, Concise)
- Detail level preference
- Custom context notes (free-text instructions to the AI)
- Preferred PRD sections
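The profile fields above might be folded into the system prompt roughly like this. A minimal sketch; the field names and output format are illustrative assumptions, not thig.ai’s actual schema:

```python
# Sketch: turning profile settings into a context block for the AI.
# All field names here are illustrative assumptions, not thig.ai's schema.
def profile_context(profile: dict) -> str:
    lines = ["## User Profile"]
    for label, key in [
        ("Role", "role"),
        ("Company", "company"),
        ("Industry", "industry"),
        ("Document style", "doc_style"),
        ("Detail level", "detail_level"),
    ]:
        if profile.get(key):
            lines.append(f"- {label}: {profile[key]}")
    if profile.get("custom_notes"):
        lines.append(f"- Notes: {profile['custom_notes']}")
    return "\n".join(lines)
```

Missing fields are simply skipped, so a sparse profile still produces a valid (if short) context block.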
Layer 2: AI Memory
Your personal memories, ranked by relevance to the current conversation:
- Preferences — “Prefers bullet-point PRDs over narrative”
- Decisions — “Team chose React Native for mobile”
- Context — “B2B SaaS in healthcare, 50 employees”
- Feedback — “Technical specs were too detailed last time”
Only memories marked approved (not pending) are used. The system caches memory context in Redis for performance.

Why it matters: The AI remembers what worked and what didn’t from your past conversations, so you don’t have to repeat yourself.

Layer 3: Knowledge Base
The smart KB system selects the most relevant documents within a token budget:

Step 3a — Phase Detection
The system detects what stage of PRD creation you’re in (discovery, requirements, technical, review) based on your conversation keywords. Zero AI cost — purely keyword-based.

Step 3b — Budget Allocation
Token budget is split across three KB sources based on the detected phase:
- Project KB (highest priority for project-specific context)
- Shared KB (org-wide reference docs)
- Cross-project search (patterns from sibling projects — Professional+ plans)
Step 3c — Document Scoring
Within each source, documents are scored and ranked by:
- Keyword overlap with your current message
- Document type match to the current phase
- Recency (newer files rank higher)
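Steps 3a and 3b might be sketched as follows. The keyword lists and budget ratios are illustrative assumptions; thig.ai’s actual values are not documented here:

```python
# Sketch: keyword-based phase detection and per-phase token budget split.
# Keyword sets and budget fractions are illustrative assumptions.
PHASE_KEYWORDS = {
    "discovery": {"problem", "users", "goals", "why"},
    "requirements": {"feature", "requirement", "scope", "must"},
    "technical": {"api", "architecture", "database", "stack"},
    "review": {"review", "feedback", "revise", "final"},
}
BUDGET_SPLIT = {  # (project_kb, shared_kb, cross_project) fractions
    "discovery": (0.4, 0.3, 0.3),
    "requirements": (0.5, 0.3, 0.2),
    "technical": (0.5, 0.4, 0.1),
    "review": (0.6, 0.2, 0.2),
}

def detect_phase(message: str) -> str:
    # Pick the phase whose keyword set overlaps the message most.
    words = set(message.lower().split())
    return max(PHASE_KEYWORDS, key=lambda p: len(PHASE_KEYWORDS[p] & words))

def allocate(budget: int, phase: str) -> dict:
    # Split the total token budget across the three KB sources.
    p, s, x = BUDGET_SPLIT[phase]
    return {"project": int(budget * p), "shared": int(budget * s),
            "cross": int(budget * x)}
```

Because detection is pure set intersection, it adds no AI cost — consistent with the “zero AI cost” claim above.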
Layer 4: Conversation History
Your recent messages in the current conversation, trimmed to fit the remaining context window. The token optimizer preserves the most recent and most important exchanges.

Why it matters: The AI maintains continuity within a conversation, referencing what you discussed earlier.
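The trimming step could be sketched like this. Token counting is approximated by word count here (an assumption; real tokenizers differ):

```python
# Sketch: keep the most recent messages that fit the remaining budget.
# Token cost is approximated by whitespace word count (assumption).
def trim_history(messages: list[str], budget: int) -> list[str]:
    kept, used = [], 0
    for msg in reversed(messages):  # walk newest-first
        cost = len(msg.split())
        if used + cost > budget:
            break
        kept.append(msg)
        used += cost
    return list(reversed(kept))  # restore chronological order
```

Walking newest-first guarantees the most recent exchanges survive when the budget runs out.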
What Happens After Each Conversation
Three things happen automatically (fire-and-forget, with no delay to your experience):

1. Memory Extraction
The AI analyzes the conversation and extracts memorable items:
- New preferences you expressed
- Decisions you made
- Company/team context you mentioned
- Feedback on generated content
2. Knowledge Base Auto-Save
Key information from the conversation is saved to the project’s knowledge base:
- Conversation exports
- Stakeholder views
- Technical specifications generated by AI tools
3. Embedding Processing
New KB entries are queued for embedding generation (OpenAI text-embedding-3-small). Text is chunked into paragraphs with overlap, then converted to vectors. This enables semantic search for cross-project knowledge retrieval.
There may be a short delay (seconds to minutes) before newly saved entries appear in cross-project semantic search results, as embedding processing happens asynchronously.
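Paragraph chunking with overlap might look like the sketch below. The chunk size and one-paragraph overlap are illustrative assumptions, not thig.ai’s actual parameters:

```python
# Sketch: split text into paragraph chunks with a one-paragraph overlap
# before embedding. Chunk sizing is an illustrative assumption.
def chunk_paragraphs(text: str, max_chars: int = 800) -> list[str]:
    paras = [p.strip() for p in text.split("\n\n") if p.strip()]
    chunks, current = [], []
    for p in paras:
        if current and len("\n\n".join(current + [p])) > max_chars:
            chunks.append("\n\n".join(current))
            current = [current[-1]]  # overlap: carry the last paragraph forward
        current.append(p)
    if current:
        chunks.append("\n\n".join(current))
    return chunks
```

The overlap means a sentence near a chunk boundary still appears with its neighbors in at least one chunk, which helps semantic search recall.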
The Three Knowledge Sources Explained
Project Knowledge Base
Scope: One specific project

Files stored here are only used when chatting within that project. Includes:
- Documents you upload (PDF, DOCX, TXT)
- Chat exports and conversation summaries
- AI-generated stakeholder views and technical specs
Shared Knowledge Base
Scope: Entire organization

Reference documents organized in folders that benefit all projects:
- Tech stack documentation
- Design guidelines and brand voice
- Process documents and templates
- Competitive analysis
Manage it at /admin/shared-kb — create folders, upload files, or create text entries. Admin/Owner only.
Cross-Project Search
Scope: All projects in your organization (excluding the current project)

Uses semantic similarity (vector embeddings) to find relevant patterns, decisions, and context from other projects. This means insights from your mobile app PRD can inform your web dashboard PRD.

Cross-project search requires the Professional plan or higher. Free and Starter plans only use project-specific and shared KB context.
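At its core, this is nearest-neighbor search over embedding vectors with the current project filtered out. A minimal sketch with toy vectors (the entry shape is an assumption):

```python
import math

# Sketch: semantic search over cross-project KB entries using cosine
# similarity on precomputed embedding vectors (toy data, not real embeddings).
def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def cross_project_search(query_vec, entries, current_project, top_k=3):
    # Exclude the current project, then rank remaining entries by similarity.
    candidates = [e for e in entries if e["project"] != current_project]
    return sorted(candidates,
                  key=lambda e: cosine(query_vec, e["vec"]),
                  reverse=True)[:top_k]
```

In production this filtering and ranking would run inside a vector store rather than in Python, but the semantics are the same.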
Memory Management
Viewing and Editing Memories
Go to Settings > Knowledge Base (/admin/settings/memories) to:
- See stats: total memories, preferences, decisions, usage count
- Search and filter by type (Preference, Decision, Context, Feedback)
- Edit memory content or adjust importance (1-10 scale)
- Delete outdated memories
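The relevance ranking used in Layer 2 might be approximated by a keyword-overlap heuristic like the one below. A naive sketch; thig.ai’s actual ranking and Redis caching are not shown:

```python
# Naive sketch: rank approved memories by keyword overlap with the
# current message. Illustrative only; not thig.ai's actual algorithm.
def rank_memories(memories: list[dict], message: str, top_k: int = 5) -> list[dict]:
    words = set(message.lower().split())
    approved = [m for m in memories if m.get("status") == "approved"]
    return sorted(
        approved,
        key=lambda m: len(words & set(m["content"].lower().split())),
        reverse=True,
    )[:top_k]
```

Note that pending memories are filtered out before ranking, matching the approval-status rule described elsewhere on this page.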
Memory Conflict Resolution
When a new memory contradicts an existing one (e.g., “prefers detailed PRDs” vs. “prefers concise PRDs”), the system:
- Detects the conflict automatically
- Shows both memories side by side
- Lets you choose which one to keep
- Archives the other
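The detect-and-resolve flow above could be sketched as follows. The shared-keyword heuristic for detection is an assumption, not thig.ai’s actual method:

```python
# Sketch of the conflict flow: find memories on an overlapping topic,
# then apply the user's pick and archive the loser. The shared-keyword
# detection heuristic is an illustrative assumption.
def conflicts(new: dict, existing: list[dict], threshold: int = 3) -> list[dict]:
    new_words = set(new["content"].lower().split())
    return [m for m in existing
            if len(new_words & set(m["content"].lower().split())) >= threshold]

def resolve(kept: dict, archived: dict) -> None:
    # Apply the user's choice: keep one memory active, archive the other.
    kept["status"] = "approved"
    archived["status"] = "archived"
```

A real implementation would likely compare embeddings rather than raw keywords, but the keep/archive outcome is the same.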
Memory Decay
Memories that haven’t been referenced in a long time have their relevance scores automatically reduced by a daily cron job. This ensures the AI prioritizes recent, actively-used context.

Approval Status
Newly extracted memories can have an approval status:
- Approved — Active and used in AI context (default for auto-extracted)
- Pending — Awaiting your review before being used
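The daily decay job described under Memory Decay might look like this sketch. The 30-day idle cutoff and 0.9 decay factor are illustrative assumptions:

```python
from datetime import datetime, timedelta

# Sketch: nightly job reducing relevance of long-unused memories.
# The idle cutoff and decay factor are illustrative assumptions.
def decay_memories(memories: list[dict], now: datetime,
                   idle_days: int = 30, factor: float = 0.9) -> None:
    for m in memories:
        if now - m["last_used"] > timedelta(days=idle_days):
            m["relevance"] *= factor  # recently-used memories are untouched
```

Run once a day, this gives unused memories an exponential fall-off while actively referenced ones keep their full score.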
AI Provider & Model Selection
The AI model used for each request follows a priority chain. Additionally, different tasks use different model tiers automatically:

| Task | Tier | Why |
|---|---|---|
| Suggestions, coaching | Fast | Quick, cheap, good enough |
| Chat, conversations | Balanced | Quality + speed balance |
| PRD generation, tech specs | Powerful | Maximum quality |
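The task-to-tier mapping in the table could be expressed as a simple lookup. The dictionary keys and the fallback to the balanced tier are illustrative assumptions:

```python
# Sketch: map task types to model tiers per the table above.
# Task keys and the "balanced" fallback are illustrative assumptions.
TASK_TIERS = {
    "suggestions": "fast",
    "coaching": "fast",
    "chat": "balanced",
    "prd_generation": "powerful",
    "tech_spec": "powerful",
}

def tier_for(task: str) -> str:
    return TASK_TIERS.get(task, "balanced")  # default to the middle tier
```

Keeping this as data rather than branching logic makes it easy to retune tiers without touching routing code.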