Documentation Index

Fetch the complete documentation index at: https://docs.thig.ai/llms.txt

Use this file to discover all available pages before exploring further.

How AI Gets Smart

Every time you send a message in thig.ai, the AI doesn’t just read your message — it assembles a rich context from multiple sources to give you the most relevant response. This page explains exactly what happens.

The Context Pipeline

When you send a chat message, four sources of context are assembled before the AI sees your message:
Layer 1: User Profile

Your professional context from Settings > Profile:
  • Role and experience level (PM, Developer, Founder, etc.)
  • Company name, size, and industry
  • Preferred document style (Formal, Conversational, Technical, Concise)
  • Detail level preference
  • Custom context notes (free-text instructions to the AI)
  • Preferred PRD sections
Why it matters: A senior PM at an enterprise company gets different PRD suggestions than a first-time founder at a startup.
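As a rough sketch of how Layer 1 might work, the profile fields above can be rendered into a context block the AI reads before your message. The field names (`role`, `doc_style`, etc.) below are illustrative, not the actual thig.ai schema:

```python
# Hypothetical rendering of Settings > Profile fields into AI context.
# Field names are illustrative, not the real thig.ai schema.

def build_profile_context(profile: dict) -> str:
    """Render profile fields as a plain-text context block."""
    lines = []
    if profile.get("role"):
        lines.append(f"User role: {profile['role']} "
                     f"({profile.get('experience', 'unknown')} experience)")
    if profile.get("company"):
        lines.append(f"Company: {profile['company']}, "
                     f"{profile.get('company_size', '?')} employees, "
                     f"{profile.get('industry', '?')}")
    if profile.get("doc_style"):
        lines.append(f"Preferred document style: {profile['doc_style']}")
    if profile.get("custom_notes"):
        lines.append(f"Custom instructions: {profile['custom_notes']}")
    return "\n".join(lines)

context = build_profile_context({
    "role": "Senior PM",
    "experience": "8 years",
    "company": "Acme Health",
    "company_size": 500,
    "industry": "Healthcare",
    "doc_style": "Concise",
})
```

Missing fields are simply skipped, so a sparse profile still produces a valid (smaller) context block.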
Layer 2: AI Memory

Your personal memories, ranked by relevance to the current conversation:
  • Preferences — “Prefers bullet-point PRDs over narrative”
  • Decisions — “Team chose React Native for mobile”
  • Context — “B2B SaaS in healthcare, 50 employees”
  • Feedback — “Technical specs were too detailed last time”
Memories with higher importance scores are included first. Only memories marked as approved (not pending) are used. The system caches memory context in Redis for performance.

Why it matters: The AI remembers what worked and what didn’t from your past conversations, so you don’t have to repeat yourself.
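The selection rule described above (approved only, highest importance first) can be sketched in a few lines. The field names are illustrative, not the real memory schema:

```python
# Hypothetical Layer 2 memory selection: approved memories only,
# ranked by importance, capped at a limit. Field names are illustrative.

def select_memories(memories: list[dict], limit: int = 5) -> list[dict]:
    approved = [m for m in memories if m["status"] == "approved"]
    ranked = sorted(approved, key=lambda m: m["importance"], reverse=True)
    return ranked[:limit]

memories = [
    {"content": "Prefers bullet-point PRDs", "importance": 8, "status": "approved"},
    {"content": "Team chose React Native", "importance": 9, "status": "approved"},
    {"content": "Unreviewed note", "importance": 10, "status": "pending"},
]
top = select_memories(memories, limit=2)
```

Note that the pending memory is excluded even though it has the highest importance score, matching the approved-only rule above.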
Layer 3: Knowledge Base

The smart KB system selects the most relevant documents within a token budget:

Step 3a — Phase Detection

The system detects what stage of PRD creation you’re in (discovery, requirements, technical, review) based on your conversation keywords. Zero AI cost — purely keyword-based.

Step 3b — Budget Allocation

The token budget is split across three KB sources based on the detected phase:
  • Project KB (highest priority for project-specific context)
  • Shared KB (org-wide reference docs)
  • Cross-project search (patterns from sibling projects — Professional+ plans)
Step 3c — Relevance Ranking

Within each source, files are ranked by:
  • Keyword overlap with your current message
  • Document type match to the current phase
  • Recency (newer files rank higher)
Step 3d — Selection

Top-ranked files are selected until the token budget is filled. This ensures the AI gets the most relevant context without exceeding the model’s context window.

Why it matters: When you’re discussing technical architecture, the AI pulls in your tech stack docs. When you’re defining user personas, it pulls in user research files.
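Steps 3a and 3d can be sketched as keyword matching plus a greedy budget fill. The keyword sets, scores, and file metadata below are illustrative, not the production configuration:

```python
# Hypothetical sketch of Step 3a (keyword phase detection) and
# Step 3d (greedy selection under a token budget). Keywords and
# token counts are illustrative.

PHASE_KEYWORDS = {
    "discovery": {"problem", "user", "persona", "research"},
    "requirements": {"feature", "requirement", "scope", "priority"},
    "technical": {"architecture", "api", "database", "stack"},
    "review": {"review", "feedback", "approve", "final"},
}

def detect_phase(message: str) -> str:
    """Pick the phase whose keyword set overlaps the message most."""
    words = set(message.lower().split())
    scores = {phase: len(words & kw) for phase, kw in PHASE_KEYWORDS.items()}
    return max(scores, key=scores.get)

def select_files(ranked_files: list[dict], token_budget: int) -> list[dict]:
    """Greedily take top-ranked files that still fit the budget."""
    selected, used = [], 0
    for f in ranked_files:  # assumed already ranked by relevance
        if used + f["tokens"] <= token_budget:
            selected.append(f)
            used += f["tokens"]
    return selected

phase = detect_phase("What database and api architecture should we use")
selected = select_files(
    [{"name": "tech-stack.md", "tokens": 500},
     {"name": "personas.md", "tokens": 700},
     {"name": "roadmap.md", "tokens": 400}],
    token_budget=1000,
)
```

Because the detection is pure keyword overlap, it adds no AI cost, which matches the "Zero AI cost" claim above.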
Layer 4: Conversation History

Your recent messages in the current conversation, trimmed to fit the remaining context window. The token optimizer preserves the most recent and most important exchanges.

Why it matters: The AI maintains continuity within a conversation, referencing what you discussed earlier.
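A minimal sketch of the trimming step: walk backwards from the newest message, keep what fits the remaining budget, then restore chronological order. Token counts here are illustrative (a real optimizer would use a tokenizer):

```python
# Hypothetical Layer 4 history trimming: keep the newest messages that
# fit the remaining token budget. Token counts are illustrative.

def trim_history(messages: list[dict], budget: int) -> list[dict]:
    kept, used = [], 0
    for msg in reversed(messages):          # newest first
        if used + msg["tokens"] > budget:
            break
        kept.append(msg)
        used += msg["tokens"]
    return list(reversed(kept))             # restore chronological order

history = [
    {"text": "old question", "tokens": 50},
    {"text": "old answer", "tokens": 120},
    {"text": "recent question", "tokens": 40},
    {"text": "recent answer", "tokens": 60},
]
trimmed = trim_history(history, budget=110)
```

With a 110-token budget, only the two most recent messages survive; the older exchange is dropped first.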

What Happens After Each Conversation

Three things happen automatically (fire-and-forget, no delay to your experience):

1. Memory Extraction

The AI analyzes the conversation and extracts memorable items:
  • New preferences you expressed
  • Decisions you made
  • Company/team context you mentioned
  • Feedback on generated content
Each extracted memory gets a type, category, importance score, and keywords. If a new memory contradicts an existing one, the conflict detection system flags both for your resolution.
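One way the conflict check might work is to flag any existing memory that shares a type and category with the new one but differs in content. The matching rule and field names below are a simplification, not the actual detector:

```python
# Hypothetical conflict detection: same type + category, different
# content => flag for user resolution. A simplification of the real rule.

def find_conflicts(new_memory: dict, existing: list[dict]) -> list[dict]:
    return [
        m for m in existing
        if m["type"] == new_memory["type"]
        and m["category"] == new_memory["category"]
        and m["content"] != new_memory["content"]
    ]

existing = [{"type": "preference", "category": "prd_format",
             "content": "Prefers detailed PRDs"}]
new = {"type": "preference", "category": "prd_format",
       "content": "Prefers concise PRDs"}
conflicts = find_conflicts(new, existing)
```

Here the two PRD-format preferences collide, so both would be surfaced side by side for you to resolve (see Memory Conflict Resolution below).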

2. Knowledge Base Auto-Save

Key information from the conversation is saved to the project’s knowledge base:
  • Conversation exports
  • Stakeholder views
  • Technical specifications generated by AI tools
Entries are deduplicated by name within each folder — the same information won’t be saved twice.
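The name-within-folder deduplication rule can be sketched as a simple guarded insert. The dictionary layout is illustrative, not the real storage model:

```python
# Hypothetical KB auto-save dedup: skip the save when an entry with the
# same name already exists in the target folder. Structure is illustrative.

def save_entry(kb: dict[str, dict[str, str]], folder: str,
               name: str, content: str) -> bool:
    """Return True if saved, False if a duplicate name blocked the save."""
    entries = kb.setdefault(folder, {})
    if name in entries:
        return False
    entries[name] = content
    return True

kb: dict[str, dict[str, str]] = {}
first = save_entry(kb, "exports", "kickoff-summary", "summary text")
second = save_entry(kb, "exports", "kickoff-summary", "summary text")
```

The second save is refused because the name already exists in that folder, so the same information is never stored twice.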

3. Embedding Processing

New KB entries are queued for embedding generation (OpenAI text-embedding-3-small). Text is chunked into paragraphs with overlap, then converted to vectors. This enables semantic search for cross-project knowledge retrieval.
There may be a short delay (seconds to minutes) before newly saved entries appear in cross-project semantic search results, as embedding processing happens asynchronously.
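The chunking step described above (paragraphs with overlap) can be sketched as a sliding window over paragraphs. The chunk size and overlap below are illustrative, not the production settings:

```python
# Hypothetical embedding prep: split text into paragraph chunks with a
# one-paragraph overlap so context carries across chunk boundaries.
# Chunk and overlap sizes are illustrative.

def chunk_paragraphs(text: str, paras_per_chunk: int = 3,
                     overlap: int = 1) -> list[str]:
    paras = [p.strip() for p in text.split("\n\n") if p.strip()]
    step = paras_per_chunk - overlap
    chunks = []
    for i in range(0, len(paras), step):
        chunks.append("\n\n".join(paras[i:i + paras_per_chunk]))
        if i + paras_per_chunk >= len(paras):
            break
    return chunks

doc = "\n\n".join(f"Paragraph {n}" for n in range(1, 6))  # 5 paragraphs
chunks = chunk_paragraphs(doc)
```

Each resulting chunk would then be sent to the embedding model; the overlap means a sentence near a boundary is searchable from either side.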

The Three Knowledge Sources Explained

Project Knowledge Base

Scope: One specific project

Files stored here are only used when chatting within that project. Includes:
  • Documents you upload (PDF, DOCX, TXT)
  • Chat exports and conversation summaries
  • AI-generated stakeholder views and technical specs
Where to manage: Upload files during chat or from the project detail page.

Shared Knowledge Base

Scope: Entire organization

Reference documents organized in folders that benefit all projects:
  • Tech stack documentation
  • Design guidelines and brand voice
  • Process documents and templates
  • Competitive analysis
Where to manage: /admin/shared-kb — create folders, upload files, or create text entries. Admin/Owner only.

Cross-Project Search

Scope: All projects in your organization (excluding the current project)

Uses semantic similarity (vector embeddings) to find relevant patterns, decisions, and context from other projects. This means insights from your mobile app PRD can inform your web dashboard PRD.
Cross-project search requires Professional plan or higher. Free and Starter plans only use project-specific and shared KB context.
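Semantic similarity between embeddings is typically cosine similarity. The toy 3-dimensional vectors below are illustrative; the real system compares text-embedding-3-small vectors:

```python
# Hypothetical cross-project retrieval: rank other projects' chunks by
# cosine similarity to the query embedding. Toy 3-d vectors for clarity.
import math

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

query = [0.9, 0.1, 0.0]                     # embedding of your message
chunks = {
    "mobile-prd: auth flow decision": [0.8, 0.2, 0.1],
    "web-prd: color palette": [0.0, 0.1, 0.9],
}
best = max(chunks, key=lambda name: cosine(query, chunks[name]))
```

Because the comparison is geometric rather than keyword-based, a query about "login flows" can surface an "auth flow decision" chunk even with no shared words.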

Memory Management

Viewing and Editing Memories

Go to Settings > Knowledge Base (/admin/settings/memories) to:
  • See stats: total memories, preferences, decisions, usage count
  • Search and filter by type (Preference, Decision, Context, Feedback)
  • Edit memory content or adjust importance (1-10 scale)
  • Delete outdated memories

Memory Conflict Resolution

When a new memory contradicts an existing one (e.g., “prefers detailed PRDs” vs. “prefers concise PRDs”), the system:
  1. Detects the conflict automatically
  2. Shows both memories side by side
  3. Lets you choose which one to keep
  4. Archives the other

Memory Decay

Memories that haven’t been referenced in a long time have their relevance scores automatically reduced by a daily cron job. This ensures the AI prioritizes recent, actively used context.
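A minimal sketch of what such a decay job might do: multiply the relevance of long-idle memories by a decay factor each run. The 30-day threshold and 0.95 factor are illustrative, not the real parameters:

```python
# Hypothetical daily decay job: reduce relevance of memories that have
# been idle past a threshold. Threshold and factor are illustrative.

def apply_decay(memories: list[dict], days_idle_threshold: int = 30,
                factor: float = 0.95) -> None:
    for m in memories:
        if m["days_since_use"] > days_idle_threshold:
            m["relevance"] = round(m["relevance"] * factor, 3)

memories = [
    {"content": "active memory", "days_since_use": 2, "relevance": 8.0},
    {"content": "stale memory", "days_since_use": 90, "relevance": 8.0},
]
apply_decay(memories)
```

Run daily, this compounds: a memory untouched for months fades steadily, while anything the AI actually uses keeps its full score.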

Approval Status

Newly extracted memories can have an approval status:
  • Approved — Active and used in AI context (default for auto-extracted)
  • Pending — Awaiting your review before being used

AI Provider & Model Selection

The AI model used for each request follows a priority chain. Additionally, different tasks use different model tiers automatically:
| Task | Tier | Why |
| --- | --- | --- |
| Suggestions, coaching | Fast | Quick, cheap, good enough |
| Chat, conversations | Balanced | Quality + speed balance |
| PRD generation, tech specs | Powerful | Maximum quality |
Organization admins can override tier assignments per task at Settings > AI Configuration.