MedRecords AI Documentation

Step-by-step guides designed for busy small firms. Get from install to first case in 5 minutes — whether you set it up yourself or hand it to your IT provider.

Getting Started

System Requirements

Component | Minimum | Recommended
Operating System | Windows 10 (64-bit) | Windows 10 or later (64-bit)
RAM | 8 GB | 16 GB (for local AI)
Disk Space | 2 GB for installation | Plus space for medical records
Python | 3.10 or later | Installed automatically if missing
GPU | Not required | Optional NVIDIA GPU accelerates local AI models
Internet | Required for AWS Bedrock | Optional for Ollama local processing

Quick Start Guide

Get MedRecords AI running in under five minutes:

  1. Download the demo from aiproductivity.dev
  2. Extract the ZIP to any folder (e.g., C:\MedRecordsAI)
  3. Launch by double-clicking MedRecordsAI.bat
  4. Follow the setup wizard in your browser
  5. Log in with your credentials and start processing records
Tip: The setup wizard will walk you through API key configuration, credential setup, and email preferences on first launch.

Your First Case

Once you are logged in, follow these steps to process your first set of medical records:

  1. Click the "Files" tab in the dashboard
  2. Drag and drop your medical records (PDF, images, text files)
  3. Records are automatically organized and queued for processing
  4. Click "Summarize" to process — choose Fast, Auto, or Thorough mode
  5. Review the generated summary, timeline, and detected issues
  6. Use AI Chat to ask questions about the records

Installation

Windows Installation

Follow this step-by-step walkthrough to install MedRecords AI on your Windows machine:

  1. Download the latest release ZIP from aiproductivity.dev
  2. Extract the archive to your preferred directory (e.g., C:\MedRecordsAI)
  3. Run MedRecordsAI.bat — this launches the application and opens your default browser
  4. The Setup Wizard appears on first launch:
    • Accept the End User License Agreement (EULA)
    • Configure AWS Bedrock credentials or Ollama for local AI
    • Set your login credentials (username and password)
    • Configure email settings (optional, for automated record retrieval)
  5. After setup completes, MedRecords AI opens the dashboard and is ready for use
Note: If Python 3.10+ is not detected, the installer will automatically download and install it for you. No manual steps required.

License Activation

After purchasing MedRecords AI, activate your license to unlock your tier:

  • You receive a license key via email
  • Navigate to Settings → License in the dashboard
  • Enter your license key and click Activate
  • Your Pro license unlocks immediately
  • License is tied to one machine — contact support for transfers

AWS Bedrock Setup

MedRecords AI uses AWS Bedrock for HIPAA-eligible cloud-based AI processing:

  1. Sign up for an AWS account at aws.amazon.com
  2. Enable Bedrock model access in the AWS Console for Claude models
  3. Create IAM credentials (Access Key ID + Secret Access Key) with Bedrock permissions
  4. Enter the credentials during the setup wizard or later in Settings → AI Backend
  5. Expect costs of approximately $1–2 per case, depending on record size
HIPAA Coverage: AWS Bedrock is covered by the AWS Business Associate Agreement (BAA), making it suitable for processing medical data. MedRecords AI uses Claude via Bedrock's Converse API for optimal accuracy and HIPAA compliance.
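To make the Converse API flow concrete, here is a minimal sketch of what such a request looks like with boto3. This is illustrative only — the prompt, region, and helper function are assumptions, not MedRecords AI internals; actually sending the request requires AWS credentials and Bedrock model access.

```python
# Sketch: shaping a request for the Bedrock Converse API with boto3.
# The prompt and helper below are illustrative, not the product's code.

def build_converse_request(model_id: str, prompt: str, max_tokens: int = 1024) -> dict:
    """Build the keyword arguments for bedrock-runtime's converse() call."""
    return {
        "modelId": model_id,
        "messages": [{"role": "user", "content": [{"text": prompt}]}],
        "inferenceConfig": {"maxTokens": max_tokens, "temperature": 0.0},
    }

request = build_converse_request(
    "us.anthropic.claude-sonnet-4-6-20250514-v1:0",
    "Summarize the attached treatment notes.",
)
print(request["modelId"])

# To actually send it (requires network access, AWS credentials, and model access):
# import boto3
# client = boto3.client("bedrock-runtime", region_name="us-east-1")
# response = client.converse(**request)
# print(response["output"]["message"]["content"][0]["text"])
```

The IAM credentials created in step 3 are what boto3 picks up from your environment or AWS config.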

Ollama Local AI Setup

For fully air-gapped processing with no internet dependency:

  1. Download Ollama from ollama.com
  2. Install Ollama and launch it once
  3. MedRecords AI automatically detects Ollama and pulls the required models (qwen2.5:7b and glm4:9b) on first run
Zero-Configuration: You no longer need to manually pull models via the command line. The built-in Dependency Manager handles model updates and health checks automatically.
Ultra Mode (Full Air-Gap): For firms requiring complete network isolation, configure Ollama as your AI backend and disable all outbound connections in Settings. MedRecords AI operates with zero internet dependency — no telemetry, no update checks, no cloud calls. Ideal for high-security environments.

AWS Bedrock Onboarding Wizard

The built-in AWS Wizard (v2.5.0+) provides a guided setup experience for connecting to AWS Bedrock:

  1. Navigate to Settings → AWS Wizard in the dashboard
  2. Enter your AWS credentials (ABSK Bearer Token or Access Key ID + Secret Key pair)
  3. The wizard validates connectivity and verifies model access automatically
  4. Credentials are securely stored in your system keyring — never written to plain-text files

You can also run the wizard via CLI: medcli aws-wizard --bearer-token YOUR_KEY

Features

Durable Processing & Task Queue

MedRecords AI uses an enterprise-grade task queue system to ensure your records are processed reliably, even if the application is closed or your computer restarts.

  • Persistence: Tasks are stored in a local SQLite database. If processing is interrupted, it resumes automatically upon restart.
  • Background Worker: A dedicated worker process handles the "heavy lifting" of OCR and AI analysis, keeping the UI fast and responsive.
  • Automatic Retries: If a temporary network or API error occurs, the system automatically retries the task before alerting you.
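The persistence-plus-retry pattern above can be sketched in a few lines with the standard library. This is a minimal illustration of the concept — the table schema, status values, and retry count here are assumptions, not the application's actual internals.

```python
# Sketch of a durable SQLite-backed task queue with automatic retries.
# Schema and status names are illustrative assumptions.
import sqlite3

MAX_RETRIES = 3

def init_queue(conn: sqlite3.Connection) -> None:
    conn.execute("""CREATE TABLE IF NOT EXISTS tasks (
        id INTEGER PRIMARY KEY,
        payload TEXT NOT NULL,
        status TEXT NOT NULL DEFAULT 'pending',   -- pending | done | failed
        attempts INTEGER NOT NULL DEFAULT 0)""")

def enqueue(conn: sqlite3.Connection, payload: str) -> None:
    conn.execute("INSERT INTO tasks (payload) VALUES (?)", (payload,))

def run_pending(conn: sqlite3.Connection, worker) -> None:
    """Process pending tasks; on error, retry up to MAX_RETRIES times."""
    rows = conn.execute("SELECT id, payload FROM tasks WHERE status = 'pending'").fetchall()
    for task_id, payload in rows:
        try:
            worker(payload)
            conn.execute("UPDATE tasks SET status = 'done' WHERE id = ?", (task_id,))
        except Exception:
            conn.execute(
                "UPDATE tasks SET attempts = attempts + 1, "
                "status = CASE WHEN attempts + 1 >= ? THEN 'failed' ELSE 'pending' END "
                "WHERE id = ?", (MAX_RETRIES, task_id))

conn = sqlite3.connect(":memory:")   # a real install persists to a file on disk
init_queue(conn)
enqueue(conn, "ocr:records_0001.pdf")
run_pending(conn, worker=lambda p: None)   # a no-op worker succeeds immediately
status = conn.execute("SELECT status FROM tasks").fetchone()[0]
print(status)  # -> done
```

Because tasks live in the database rather than in memory, a restart simply re-runs `run_pending` and picks up where processing left off.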

Ironclad Privacy: Vault Encryption

Security is the core of MedRecords AI. All HIPAA-sensitive data is protected using industry-standard AES-256 encryption at rest.

  • Secure Storage: Every summary, audit log, and processed metadata file is encrypted before being written to your disk.
  • Unique Local Keys: Encryption keys are derived uniquely for your specific installation and machine ID.
  • Transparent Access: Files are decrypted in-memory only when viewed by an authorized user, ensuring no PHI is ever left exposed on the file system.
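The "unique local keys" idea — deriving a 256-bit key from installation-specific material — can be sketched with the standard library's PBKDF2. The salt, machine ID, and iteration count below are illustrative assumptions, not the product's actual key-derivation scheme.

```python
# Sketch: deriving a per-installation AES-256 key from a machine ID.
# All inputs below are illustrative, not the product's real scheme.
import hashlib

def derive_vault_key(machine_id: str, salt: bytes, iterations: int = 600_000) -> bytes:
    """Derive a 32-byte (256-bit) key suitable for AES-256."""
    return hashlib.pbkdf2_hmac("sha256", machine_id.encode(), salt, iterations, dklen=32)

key = derive_vault_key("DESKTOP-EXAMPLE-1234", salt=b"per-install-random-salt")
print(len(key))  # -> 32
```

Tying the key to the machine ID means that copying the encrypted files to another computer yields ciphertext that cannot be decrypted there.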

AI Chat Over Your Records

Ask questions about any case in plain English and get structured answers grounded in your actual case data.

  • Context-Aware: The chat draws on your summaries, billing data, treatment timelines, and analysis to answer questions about specific cases.
  • Natural Language: Ask "What are the treatment gaps in the Thompson case?" or "What's the total billed for case 2024-MVA?" and get direct answers.
  • Case References: Answers reference the source case and relevant findings so you can verify the underlying data.

Automatic Updates

Stay up to date with the latest security patches and features using the built-in Self-Updater.

  • One-Click Check: Use the Admin dashboard to check for new versions against our official manifest.
  • Safe Patching: Updates are downloaded and verified before being applied to your installation.

AI Medical Summarization

MedRecords AI offers three processing modes to match your needs:

  • Fast (1–3 minutes): Quick overview, key findings, and basic timeline. Ideal for initial case screening and single documents.
  • Auto (2–10 minutes): Balanced analysis with treatment gap detection and alerts. The default for most cases. Processes each document individually for maximum citation accuracy. Automatically upgrades to Thorough mode for large cases.
  • Thorough (15–45 minutes): Deep analysis with parallel processing, cross-document contradiction detection, and comprehensive synthesis. Handles cases of 500+ pages across 20+ files by processing in intelligent chunks. If a chunk fails, remaining chunks still complete — you can retry only the failed portions.
Large Cases: Cases with more than 10 files or 5 MB of content are automatically routed to Thorough mode, which splits the documents into manageable chunks for reliable parallel processing. A real-time progress bar tracks each stage. If the pipeline encounters an error, it fails gracefully with a clear message and a Retry button. See the Processing Modes Guide for detailed scenarios on choosing the right mode.
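The chunk-and-retry behavior described above can be sketched as follows. This is a simplified illustration of the idea — the function names, chunk size handling, and failure tracking are assumptions, not the real pipeline API.

```python
# Sketch of Thorough mode's chunking: split pages into ~20-page chunks,
# process each independently, and track failures so only those are retried.
# Names are illustrative, not the product's actual pipeline.

CHUNK_PAGES = 20

def chunk_pages(pages: list, size: int = CHUNK_PAGES) -> list:
    return [pages[i:i + size] for i in range(0, len(pages), size)]

def process_case(pages: list, analyze) -> tuple:
    results, failed = {}, []
    for idx, chunk in enumerate(chunk_pages(pages)):
        try:
            results[idx] = analyze(chunk)
        except Exception:
            failed.append(idx)   # remaining chunks still complete
    return results, failed       # a retry can target only `failed`

pages = list(range(1, 101))                        # a 100-page case
results, failed = process_case(pages, analyze=len)
print(len(results), failed)  # -> 5 []
```

Because each chunk is independent, one failure never discards the work already done on the others.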

Timeline & Chronology

Automatically generates a chronological timeline from all medical records:

  • Each entry is linked to the source page in the original document
  • Filterable by date range, provider, and event type
  • Export to PDF or Excel for inclusion in case files

Treatment Gap Detection

Identifies gaps and inconsistencies in the patient's treatment history:

  • Identifies gaps between recommended follow-up and actual visits
  • Highlights missed appointments and delayed treatments
  • Flags inconsistencies between provider recommendations and patient actions

Smoking Gun Detection PRO

Advanced analysis that surfaces critical findings across the entire record set:

  • Detects critical findings that may have been overlooked
  • Identifies contradictions between records from different providers
  • Flags pre-existing conditions and prior injuries
  • Highlights documentation inconsistencies that may impact the case

Source Page Linking

Every finding in the summary is traceable back to the original document:

  • Every finding linked back to the exact source page
  • Click any citation to view the original document
  • Page numbers match the original medical records

AI Chat

Ask questions about the medical records in natural language:

  • "What medications was the patient prescribed after the surgery?"
  • "Were there any gaps in physical therapy?"
  • "Summarize all radiology findings."

The AI is context-aware — it has read all processed records for the case and can answer questions across the entire document set.

Demand Package Generation PRO

Generate formatted demand packages ready for opposing counsel:

  • Includes medical chronology, treatment summary, and damages analysis
  • Customizable templates per state and case type
  • Export to Word or PDF

Case Valuation PRO

AI-powered case value estimation to support settlement decisions:

  • Based on jurisdiction, injury type, and treatment duration
  • Comparable verdict and settlement data
  • Range estimate (low / mid / high) with confidence bands across five severity tiers
  • Multiplier-based calculations calibrated to case type and state
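To show what a multiplier-based range estimate looks like in principle: economic damages ("specials") are multiplied by a severity-tier multiplier to produce a low/mid/high range. The tiers and multiplier values below are made-up examples for illustration, not the product's calibrated figures.

```python
# Illustrative multiplier-method sketch. The five-tier multipliers below
# are hypothetical examples, not MedRecords AI's calibrated values.

SEVERITY_MULTIPLIERS = {
    1: (1.0, 1.5), 2: (1.5, 2.5), 3: (2.5, 3.5), 4: (3.5, 4.5), 5: (4.5, 5.5),
}

def estimate_range(specials: float, tier: int) -> tuple:
    """Return a (low, mid, high) general-damages estimate for a severity tier."""
    lo_mult, hi_mult = SEVERITY_MULTIPLIERS[tier]
    low, high = specials * lo_mult, specials * hi_mult
    return low, (low + high) / 2, high

low, mid, high = estimate_range(specials=40_000, tier=3)
print(low, mid, high)  # -> 100000.0 120000.0 140000.0
```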

Negligence Detection PRO

Analyzes processed cases for potential standard-of-care deviations:

  • Detects timing rule violations (e.g., ER imaging delays, follow-up gaps)
  • Flags medication contraindications and documentation inconsistencies
  • Each finding includes severity level, citation, description, and recommendation
  • Accept or reject individual findings with one click — feedback is stored for continuous improvement

Auto Task Generation PRO

Generates a prioritized action list from case findings so nothing falls through the cracks:

  • Tasks categorized by type: depositions, records requests, expert review, follow-up, and more
  • Five priority levels from Critical to Routine with assignee suggestions (Attorney vs. Paralegal)
  • Priority breakdown dashboard showing Critical / High / Medium counts at a glance
  • Export the full task list to CSV for import into your practice management system

Injury Visualization PRO

Generates an SVG body diagram highlighting every injury found in the medical records:

  • Maps diagnoses to 14 anatomical regions (head, cervical/thoracic/lumbar spine, arms, legs, abdomen, etc.)
  • Color-coded overlays show injury location and distribution
  • Download the diagram as an SVG file for inclusion in demand packages or trial exhibits

Billing Extraction

Automatically extracts and organizes billing data from medical records:

  • Identifies CPT codes, charges, payments, and adjustments
  • Generates a summary of total medical costs and future medical estimates
  • Provider-level billing breakdown
  • Export to Excel for review and inclusion in demand packages

Deposition Summarizer

Upload deposition transcripts for AI-powered analysis:

  • Extracts key testimony, contradictions, and admissions from transcripts
  • Identifies statements that support or undermine the case
  • Cross-references deposition testimony with medical record findings

Platform Integrations

Push summaries and case data directly into your existing practice management system:

  • Filevine: Direct API integration — push summaries into case records with API key and Organization ID
  • CASEpeer: Case data sync and summary push via API key
  • Litify: Salesforce-based integration using OAuth 2.0 authentication
  • Webhook: Generic HTTP POST integration compatible with Zapier, Make, n8n, or any custom endpoint

Configure integrations in Settings → Integrations. Use the built-in Test Connection button to verify connectivity before pushing data.
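For the generic webhook option, the integration boils down to an HTTP POST with a JSON body that any endpoint can consume. The payload fields and URL below are assumptions for illustration — consult the actual webhook schema before building against it.

```python
# Sketch of a generic webhook delivery: a JSON POST any endpoint
# (Zapier, Make, n8n, custom) can receive. Payload fields are illustrative.
import json
import urllib.request

def build_webhook_request(url: str, case_ref: str, summary: str) -> urllib.request.Request:
    payload = {"event": "summary.completed", "case": case_ref, "summary": summary}
    return urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_webhook_request("https://example.com/hooks/medrecords",
                            "2024-MVA", "Treatment summary text...")
print(req.get_method(), json.loads(req.data)["case"])  # -> POST 2024-MVA

# To send (requires network access): urllib.request.urlopen(req)
```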

Contact Management

Manage contacts for medical records retrieval and case tracking:

  • Add contacts individually or batch import from an Excel spreadsheet
  • Search and filter contacts with pagination
  • Download a pre-formatted Excel template for bulk import
  • Integrates with the Email Retrieval Bot for automated records requests with escalation tiers

Virtual Paralegal Agent NEW in v2.6

The Virtual Paralegal is an AI-powered agent that executes real CLI commands on your behalf through a natural language chat interface. It can manage your entire workflow autonomously — from checking pipeline status to sending follow-up emails and generating demand packages.

  • Natural Language Commands: Type what you want in plain English — "follow up on all pending records for case 2024-Smith" — and the agent translates it into the correct CLI commands
  • Approval for Risky Actions: Read-only commands execute instantly. Write operations (sending emails, creating campaigns, modifying settings) always require your explicit approval before executing
  • Ambient Context: The agent automatically gathers your current system state (active cases, running pipelines, dashboard stats) at the start of each session so it can make informed decisions
  • Self-Correction: Built-in evaluation detects unnecessary clarification questions and premature responses, keeping the agent focused on action
  • Turn Budget: Each conversation has a configurable turn limit (default 25) to prevent runaway execution. The agent is aware of remaining turns and prioritizes efficiently
Access: Open the Virtual Paralegal from the Virtual Paralegal tab in the dashboard, or via CLI with medcli sentinel chat.

New in v2.9

  • medcli sentinel: Natural language command center. Type any case question or instruction in plain English. Example: medcli sentinel "what's the status of the Johnson case?"
  • medcli intelligence search: Query the Case Intelligence Engine for settlement precedents by injury type, jurisdiction, or treatment profile. Example: medcli intelligence search --injury "L5-S1 radiculopathy" --jurisdiction "CA"
  • medcli verify: Split-screen verification — view the AI summary alongside original source pages to verify any finding with one click. Example: medcli verify --run-id <id>

Watch & Push Folders NEW in v2.6

Automate document ingestion and report delivery by configuring watch folders and push folders directly through the Virtual Paralegal agent or CLI.

  • Watch Folders: Any folder you designate as a "watch folder" is automatically monitored. New PDF or image files dropped into the folder are ingested into the pipeline and processed without any manual upload
  • Push Folders: Completed summary reports are automatically delivered to your designated push folder(s) — ideal for syncing with network drives, cloud storage, or case management system intake folders
  • Agent Setup: Paste a folder path into the Virtual Paralegal chat and it will ask whether you want it as a watch folder or push folder, then configure it with your approval
  • CLI Management: Use medcli folders list-watch, medcli folders add-watch --path "X", medcli folders list-push, medcli folders add-push --path "X"
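The watch-folder concept — notice files that appeared since the last scan and ingest only those — can be sketched with a simple polling loop. The real product may use filesystem events rather than polling; this version is illustrative only.

```python
# Sketch of watch-folder ingestion: scan a directory and pick up files
# not seen on the previous pass. Polling here is illustrative only.
import tempfile
from pathlib import Path

WATCH_EXTENSIONS = {".pdf", ".png", ".jpg", ".jpeg", ".tiff", ".tif"}

def scan_new_files(folder: Path, seen: set) -> list:
    """Return unseen PDF/image files and mark them as seen."""
    new = [p for p in sorted(folder.iterdir())
           if p.suffix.lower() in WATCH_EXTENSIONS and p not in seen]
    seen.update(new)
    return new

watch = Path(tempfile.mkdtemp())
(watch / "records_0001.pdf").write_bytes(b"%PDF-1.4 stub")
(watch / "notes.txt").write_text("ignored: not a watched extension")

seen = set()
first_pass = scan_new_files(watch, seen)    # picks up the new PDF
second_pass = scan_new_files(watch, seen)   # nothing new this time
print([p.name for p in first_pass], second_pass)  # -> ['records_0001.pdf'] []
```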

Records Collection Tracker

A dedicated records collection management center with three sub-tabs for tracking medical record retrieval across all your cases.

  • Contacts Tab: Manage all medical providers, custodians, and facilities with sortable tables, filters, batch import/export, and one-click follow-ups
  • Cases Tab: Groups contacts by case reference with visual progress bars showing collection status (requested, received, pending)
  • Activity Tab: Chronological timeline of all email correspondence for full audit visibility
  • CLI Access: medcli tracker cases, medcli tracker follow-up, medcli tracker stats, medcli tracker detail

System Diagnostics

Built-in diagnostic tools to quickly identify and resolve common issues without contacting support.

  • System Doctor: Runs 10 automated health checks (database integrity, port availability, API connectivity, disk space, process health, and more). Use medcli system doctor to diagnose or medcli system doctor --fix to auto-repair
  • Nuclear Reset: Emergency reset that kills zombie processes, clears stale locks, and resets application state. Use medcli system nuclear-reset --confirm when all else fails
Tip: The Virtual Paralegal agent can run diagnostics for you. Just type "run system doctor" in the Virtual Paralegal chat.

Supported File Formats

MedRecords AI processes a wide range of medical document formats:

Category | Formats
PDF Documents | .pdf (native text and scanned with OCR)
Images | .png, .jpg, .jpeg, .tiff, .tif
Medical Imaging | .dcm, .dicom (DICOM format)
Office Documents | .doc, .docx, .xlsx, .xls
Plain Text & Data | .txt, .csv, .json, .xml, .log, .html, .htm

Maximum upload size: 100 MB per file.
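A pre-flight check against the format list and size limit above might look like this. The function is illustrative — it is not part of the product API, it just mirrors the documented rules.

```python
# Sketch: validate a file against the accepted formats and the 100 MB
# upload limit before uploading. Illustrative helper, not a product API.
from pathlib import Path

ACCEPTED = {".pdf", ".png", ".jpg", ".jpeg", ".tiff", ".tif", ".dcm", ".dicom",
            ".doc", ".docx", ".xlsx", ".xls",
            ".txt", ".csv", ".json", ".xml", ".log", ".html", ".htm"}
MAX_UPLOAD_MB = 100

def can_upload(path: Path, size_bytes: int) -> bool:
    return path.suffix.lower() in ACCEPTED and size_bytes <= MAX_UPLOAD_MB * 1024 * 1024

print(can_upload(Path("er_visit.PDF"), 5_000_000))    # -> True
print(can_upload(Path("mri_scan.dcm"), 200_000_000))  # -> False (over 100 MB)
```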

Configuration

config.yaml Reference

The config.yaml file in your installation directory controls core application settings:

# AI Backend
llm_backend: bedrock         # bedrock | ollama

# AWS Bedrock Settings (HIPAA BAA-eligible)
bedrock:
  model: us.anthropic.claude-sonnet-4-6-20250514-v1:0
  max_tokens: 32768          # max output tokens per call
  timeout: 300               # seconds per API call
  combined_timeout: 600      # seconds for multi-file cases
  combined_max_tokens: 65536 # max output for combined mode
  retries: 3                 # retry attempts on failure

# Pipeline Safety
pipeline:
  max_duration_fast: 1800    # 30 min watchdog limit (fast/combined)
  max_duration_thorough: 5400 # 90 min watchdog limit (thorough)

# Ollama Local AI Settings
ollama:
  endpoint: http://localhost:11434
  model: qwen2.5:7b
  timeout: 900               # 15 min per call (local is slower)
  retries: 3

# Upload Limits
web_ui:
  max_upload_mb: 100         # max file size in megabytes
Adaptive Timeouts: MedRecords AI automatically scales API timeouts based on document size. Large cases receive extended timeouts (up to 30 minutes per stage) so processing completes without interruption. A built-in watchdog monitors all running pipelines and safely terminates any that exceed the configured limits.
Tip: Changes to config.yaml take effect after restarting MedRecords AI.
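The adaptive-timeout behavior can be pictured as a simple scaling function. The base value echoes `bedrock.timeout` from the config above, but the scaling formula and the 5 MB threshold are assumptions for illustration, not the actual implementation.

```python
# Illustrative sketch of adaptive timeout scaling: grow the per-stage
# timeout with input size, capped at 30 minutes. The formula is an
# assumption; only the base and cap values come from the docs above.

BASE_TIMEOUT_S = 300      # bedrock.timeout in config.yaml
CAP_S = 30 * 60           # documented upper bound per stage

def adaptive_timeout(total_mb: float, base: int = BASE_TIMEOUT_S) -> int:
    """Scale the timeout linearly with input size, capped at 30 minutes."""
    scaled = int(base * max(1.0, total_mb / 5))   # 5 MB ~ large-case threshold
    return min(scaled, CAP_S)

print(adaptive_timeout(2))    # small case -> 300
print(adaptive_timeout(50))   # large case -> 1800 (capped)
```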

AI Backend Selection

MedRecords AI supports two AI backends that you can switch between at any time:

  • AWS Bedrock: Fastest and most accurate. Requires internet connection and AWS credentials. HIPAA BAA-eligible. Best for firms that prioritize speed and output quality.
  • Ollama (Local AI): Fully local and air-gapped. Free after initial setup. Slightly slower than Claude but offers maximum data security.

Switch between backends in Settings → AI Backend or by editing the llm_backend value in config.yaml.

Choosing the Right Processing Mode

MedRecords AI offers three processing modes. Each is optimized for different case sizes and accuracy requirements. The system defaults to Auto, which selects the best mode for you — but understanding the trade-offs helps you make the right call for time-sensitive work.

Mode | Best For | Speed | Accuracy
Fast | Single documents under 50 pages | 1–3 min | Good
Auto | Most cases (1–5 files); automatically upgrades to Thorough for large inputs | 2–10 min | High
Thorough | Large cases (10+ files, 200+ pages), voluminous records, complex multi-provider histories | 15–45 min | Highest

How Each Mode Works

Fast & Auto process each document individually through a four-stage AI pipeline: ingestion, privacy screening, extraction, and summarization. Every finding is traced to a specific source page. When multiple files are uploaded for the same case, each file is analyzed in isolation and then merged into a unified case summary, evolving narrative, and case brief. This per-document approach maximizes citation accuracy because the AI focuses on one record at a time.

Thorough mode activates automatically for large cases and adds parallel chunk processing. Documents are split into manageable segments (approximately 20 pages each), and multiple chunks are processed simultaneously. After all chunks complete, a dedicated merge stage consolidates findings, resolves cross-document contradictions, and produces a unified case summary. For a 500-page case, Thorough mode runs roughly 40% faster than processing files individually while maintaining the same accuracy standard.
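The parallel-then-merge structure described above can be sketched with a thread pool: chunks are analyzed concurrently, then a dedicated merge step consolidates the per-chunk findings. All names here are illustrative, not the real pipeline.

```python
# Sketch of Thorough mode's parallel stage: analyze chunks concurrently,
# then merge per-chunk findings into one summary. Names are illustrative.
from concurrent.futures import ThreadPoolExecutor

def analyze_chunk(chunk: list) -> dict:
    # Stand-in for the per-chunk AI analysis stage
    return {"pages": len(chunk), "findings": [f"finding-p{chunk[0]}"]}

def thorough(pages: list, chunk_size: int = 20) -> dict:
    chunks = [pages[i:i + chunk_size] for i in range(0, len(pages), chunk_size)]
    with ThreadPoolExecutor(max_workers=4) as pool:
        results = list(pool.map(analyze_chunk, chunks))
    # Dedicated merge stage: consolidate findings into a unified summary
    return {
        "pages": sum(r["pages"] for r in results),
        "findings": [f for r in results for f in r["findings"]],
    }

summary = thorough(list(range(1, 101)))            # a 100-page case
print(summary["pages"], len(summary["findings"]))  # -> 100 5
```

The speedup quoted for 500-page cases comes from exactly this shape: the per-chunk work overlaps, while the merge runs once at the end.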

Scenario Guide

Use these real-world scenarios to decide which mode fits your workflow:

Scenario 1: Quick Case Screening

You receive a potential new case and need to decide within the hour whether to accept it.

  • Recommended: Fast mode
  • Upload the initial ER visit or intake records (typically 5–20 pages)
  • Get key findings, diagnoses, and a preliminary timeline in under 3 minutes
  • Review the Quick Overview in the Case Brief for an instant snapshot
  • If you accept the case, re-process with the full record set on Auto or Thorough later

Scenario 2: Standard PI Case (3–5 Providers)

A typical motor vehicle accident or slip-and-fall with records from the ER, orthopedist, physical therapy, and a pain management clinic.

  • Recommended: Auto mode (default)
  • Upload all files at once — the system processes each individually and merges results
  • Expect 3–8 minutes depending on total page count
  • Each provider's records are analyzed in isolation, preserving citation accuracy
  • The evolving narrative and case brief auto-generate as soon as processing completes

Scenario 3: Complex Multi-Provider Case (10+ Files, 200–500 Pages)

A workplace injury with records from multiple hospitals, specialists, occupational therapy, an IME, and billing summaries.

  • Recommended: Auto mode (will auto-escalate to Thorough)
  • Upload all files at once — Auto detects the volume and switches to Thorough automatically
  • Parallel processing handles multiple document chunks simultaneously
  • A real-time progress bar tracks each stage so you can step away and check back
  • Expect 15–30 minutes for the full analysis

Scenario 4: Voluminous Records (500+ Pages, 20+ Files)

A catastrophic injury or medical malpractice case with years of treatment history across dozens of providers.

  • Recommended: Thorough mode (select explicitly)
  • Documents are chunked into ~20-page segments and processed in parallel
  • Cross-document contradiction detection catches conflicting findings between providers
  • A dedicated merge stage ensures nothing is missed in the final synthesis
  • Expect 30–45 minutes — significantly faster than manual review of the same volume
  • If any chunk fails, retry processes only the failed portion (no lost work)

Scenario 5: Urgent Demand Deadline

You have a demand letter deadline tomorrow and just received the final batch of records.

  • Recommended: Fast mode for the new records, then regenerate the Case Brief
  • Fast-process the new documents to extract key findings immediately (1–3 min each)
  • The case summary and brief automatically incorporate the new data
  • Run Case Valuation and Demand Package tools from the Analysis tab
  • Export the Executive Summary PDF for your demand package

Scenario 6: Pre-Deposition Preparation

You need to prepare for a deposition and want the deepest possible analysis of the medical records.

  • Recommended: Thorough mode
  • Thorough mode performs the most comprehensive cross-referencing and contradiction detection
  • Use the Deposition Summarizer tool after processing for targeted question preparation
  • Review the ForensicGuard score to understand citation confidence before relying on specific findings
  • The Smoking Gun detector surfaces critical findings you might otherwise miss
Pro Tip: You can always start with Fast or Auto for an initial review, then re-process with Thorough later when you need deeper analysis. Re-processing replaces the previous summary with an updated version — your case data and original documents are never affected.

Processing Defaults

Configure how MedRecords AI processes cases by default:

  • The default processing mode (Fast, Auto, or Thorough) can be set in config.yaml
  • Per-case overrides are available in the UI when clicking "Summarize"
  • Batch processing uses the default mode unless overridden

Troubleshooting

Common Issues

Port 8080 already in use

Another application is using port 8080. Either change the port in config.yaml or close the conflicting application. On Windows, you can identify the process with:

netstat -aon | findstr :8080

Python not found

The installer automatically downloads and installs Python if it's missing. If auto-install fails, you can manually install Python 3.10 or later from python.org. Make sure "Add Python to PATH" is checked. You may need to close and reopen the installer window after manual installation.

OCR not working

Tesseract OCR is included in the MedRecords AI setup. If OCR fails, verify that Tesseract is installed and accessible in your system PATH.

Slow processing

If processing is slow with Ollama, consider upgrading to AWS Bedrock for faster results, or try a smaller/faster Ollama model. Ensure your system meets the recommended 16 GB RAM for local AI processing.

Pipeline timed out on large case

Very large cases (500+ pages, 10+ files) may exceed default timeout limits. MedRecords AI automatically scales timeouts for large inputs, but if you still see timeouts:

  • Click Retry — the pipeline resumes from the last completed stage (no lost work)
  • Try processing fewer files at once, then merge the results
  • Increase combined_timeout in config.yaml (default: 600 seconds)
  • Use Thorough mode, which chunks documents automatically for reliable processing

Pipeline stuck at "Processing"

A built-in watchdog automatically detects pipelines stuck for longer than 30 minutes (fast mode) or 90 minutes (thorough mode) and marks them as failed with a Retry button. If a pipeline appears stuck, wait a moment for the watchdog to intervene, or restart the application.

Log Files

MedRecords AI maintains detailed logs for debugging. All log files are located in the logs/ directory:

Log File | Description
logs/medical-ui.log | Application logs (web UI and API)
logs/medical_app_errors.log | Error logs and stack traces
logs/summarization-watcher.log | AI processing pipeline logs

Logs are automatically rotated to prevent excessive disk usage.

Reporting Bugs

If you encounter a bug, please send the following information to our support team:

  • Email: dan.direnfeld@aiproductivity.dev
  • Include: the error message, relevant log excerpts, and steps to reproduce the issue
  • Priority Support subscribers receive guaranteed response times
Tip: Before reporting, check the Common Issues section above — your problem may already have a known solution.