Foundry Local exposes an OpenAI-compatible API on localhost:PORT. Any chat UI that talks to OpenAI also talks to Foundry Local — no code, no CLI, just a browser tab. This guide covers the two best options.
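Because the API surface is OpenAI-compatible, any OpenAI-style client works. The sketch below uses only the Python standard library; the port (5272) and model name (phi-4) are assumptions — always check `foundry service status` for the real port.

```python
import json
import urllib.request

def chat_request(port: int, model: str, prompt: str) -> urllib.request.Request:
    """Build an OpenAI-style chat-completions request for Foundry Local."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"http://localhost:{port}/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )

if __name__ == "__main__":
    # Port and model are placeholders; verify with `foundry service status`
    req = chat_request(5272, "phi-4", "Hello!")
    try:
        with urllib.request.urlopen(req, timeout=5) as resp:
            print(json.load(resp)["choices"][0]["message"]["content"])
    except OSError:
        print("Foundry Local is not reachable on port 5272")
```

No API key is needed: the service runs locally, which is why the chat UIs below can be configured with Auth set to None.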
Open WebUI: the most popular open-source AI chat frontend. Clean ChatGPT-style interface with conversation history, model switching, image generation, and a plugin system. Officially recommended by Microsoft for use with Foundry Local.
AnythingLLM: an all-in-one AI desktop app with built-in RAG (chat with your documents), workspaces, and native Foundry Local integration that automatically starts the service and manages models for you. An official partner of Microsoft Foundry.
A powerful, self-hosted ChatGPT-style interface. Microsoft's official documentation points to Open WebUI as the recommended chat frontend for Foundry Local.
Open WebUI connects to a running Foundry Local service. You must start a model before launching Open WebUI, otherwise no models will appear in the dropdown.
# Start a model — phi-4 is the recommended model
foundry model run phi-4
# In a SECOND terminal — get the service port
foundry service status
✓ Service is running
Endpoint: http://localhost:5272/v1
Models loaded: phi-4
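Because the port is assigned dynamically, a small helper that scrapes it from the status output can save copy-pasting. This is a sketch; it assumes the `Endpoint: http://localhost:PORT/v1` format shown above.

```python
import re

def parse_port(status_output: str):
    """Extract the port number from `foundry service status` output,
    or return None if no endpoint line is present."""
    m = re.search(r"http://localhost:(\d+)", status_output)
    return int(m.group(1)) if m else None

# Example against the sample output above
sample = "✓ Service is running\nEndpoint: http://localhost:5272/v1\nModels loaded: phi-4"
print(parse_port(sample))  # → 5272
```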
Run foundry service status to get the current port before connecting any UI.

Open WebUI is installed via the BrainDriveAI Conda Installer, a self-contained package that installs Miniconda and Open WebUI together. This is the method referenced in the official Microsoft Community Hub guide.
Download OpenWebUIInstaller.exe from the BrainDriveAI releases page and copy it to C:\Temp\.
# Download from:
https://github.com/BrainDriveAI/OpenWebUI_CondaInstaller/releases
# Copy the installer to C:\Temp\ (PowerShell; creates the folder if needed)
New-Item -ItemType Directory -Force C:\Temp | Out-Null
Copy-Item "$env:USERPROFILE\Downloads\OpenWebUIInstaller.exe" C:\Temp\
Open PowerShell as Administrator and run the following commands in order. They install Miniconda system-wide, accept the conda terms of service, then launch the Open WebUI installer.
# 1. Install Miniconda system-wide
winget install -e --id Anaconda.Miniconda3 --scope machine
# 2. Add Miniconda to the current session PATH
$env:Path = 'C:\ProgramData\miniconda3;' + $env:Path
$env:Path = 'C:\ProgramData\miniconda3\Scripts;' + $env:Path
$env:Path = 'C:\ProgramData\miniconda3\Library\bin;' + $env:Path
# 3. Accept conda Terms of Service (required before first use)
conda.exe tos accept --override-channels --channel https://repo.anaconda.com/pkgs/main
conda.exe tos accept --override-channels --channel https://repo.anaconda.com/pkgs/r
conda.exe tos accept --override-channels --channel https://repo.anaconda.com/pkgs/msys2
# 4. Launch the Open WebUI installer
C:\Temp\OpenWebUIInstaller.exe
Open a browser to http://localhost:8080. The first time you open Open WebUI, it asks you to create an admin account — this is a local account, nothing is sent anywhere.
After logging in, click your avatar (top right) → Admin Settings → Connections.
Before you can add a custom endpoint, you need to enable the Direct Connections toggle. This is a one-time admin setting.
Navigate: Avatar → Admin Settings → Connections → enable "Direct Connections" toggle → Save.
Now navigate to your personal Settings (not Admin Settings). Go to Connections → click + next to "Manage Direct Connections".
Enter http://localhost:PORT/v1, replacing PORT with the number from foundry service status. Set Auth to None. Click Save.

Close Settings. At the top of the page, your Foundry Local models appear in the model dropdown. Select one and start typing. Every request runs entirely on your local hardware.
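If the dropdown stays empty, you can hit the same URL Open WebUI uses: the /v1/models route is part of the OpenAI-compatible surface. The port in the sketch below is a placeholder for whatever foundry service status reports.

```python
import json
import urllib.request

def model_ids(response: dict) -> list:
    """Pull model IDs out of an OpenAI-style /v1/models response."""
    return [m["id"] for m in response.get("data", [])]

def list_models(port: int) -> list:
    """Fetch the model list from Foundry Local's OpenAI-compatible API."""
    url = f"http://localhost:{port}/v1/models"
    with urllib.request.urlopen(url, timeout=5) as resp:
        return model_ids(json.load(resp))
```

If list_models(PORT) returns an empty list, no model is loaded yet; run foundry model run phi-4 first and refresh the page.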
The all-in-one AI desktop app with built-in RAG, workspaces, and a native Foundry Local integration. AnythingLLM is an official Microsoft Foundry partner and the recommended choice if you need to chat with your documents.
AnythingLLM will detect Foundry Local automatically if it's installed. If you've already followed earlier chapters, skip to Step 2.
# Windows
winget install Microsoft.FoundryLocal
# macOS
brew tap microsoft/foundrylocal && brew install foundrylocal
# Verify
foundry --version
AnythingLLM Desktop is a native app for Windows, macOS, and Linux. Download the installer for your platform from the official website.
# Download from:
https://anythingllm.com/download
# Run AnythingLLMDesktop-Setup.exe
# Accept the installer defaults
# AnythingLLM launches automatically
# Download AnythingLLMDesktop.dmg from:
https://anythingllm.com/download
# Open the .dmg and drag AnythingLLM to /Applications
# Launch from Launchpad or Spotlight
# Download the AppImage from:
https://anythingllm.com/download
chmod +x AnythingLLMDesktop.AppImage
./AnythingLLMDesktop.AppImage
AnythingLLM has a dedicated Foundry Local option in its LLM provider list. It will automatically detect and start the service for you.
Navigate to Settings (⚙) → LLM Preference → select "Microsoft Foundry Local" → choose your model → Save.
If the model dropdown is empty, run foundry model download phi-4-mini, then reload AnythingLLM.

AnythingLLM organizes conversations into Workspaces: think of them as separate chat contexts, each with its own document library and conversation history. Create one and start chatting.
AnythingLLM's killer feature: drag PDF, Word, TXT, or web URLs into any workspace and the AI answers questions using your documents — all processed locally. No document ever leaves your machine.
Supported document types:
PDF, DOCX, TXT, MD, HTML, CSV, JSON, YouTube URLs
How to upload:
1. Open a Workspace
2. Click the Upload icon (📎) in the chat input
3. Drag and drop files or paste a URL
4. AnythingLLM embeds and indexes locally
5. Ask questions — answers cite your documents
Troubleshooting
No models in the Open WebUI dropdown? A model isn't running yet, or the port has changed.
foundry service status
# Note the current port
foundry model run phi-4-mini
# In Open WebUI Settings → Connections
# Update the URL with the current port
# Reload Open WebUI
Open WebUI can't connect? The Foundry Local service isn't running, or there's a port mismatch.
foundry service status
# If error: restart it
foundry service restart
# Then update the port in Open WebUI settings
# Docker users: use host.docker.internal:PORT
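The Docker note above matters because inside a container, localhost is the container itself, not your machine. A tiny helper makes the distinction explicit (a sketch; the port is whatever foundry service status reports):

```python
def base_url(port: int, from_docker: bool = False) -> str:
    """Base URL for Foundry Local. Containers can't reach the host
    via localhost, so they must use host.docker.internal instead."""
    host = "host.docker.internal" if from_docker else "localhost"
    return f"http://{host}:{port}/v1"

print(base_url(5272))        # → http://localhost:5272/v1
print(base_url(5272, True))  # → http://host.docker.internal:5272/v1
```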
Models missing from AnythingLLM's dropdown? Models must be downloaded via the CLI before they appear.
foundry model list
# Download the model you want
foundry model download phi-4-mini
# Restart AnythingLLM — it will now
# show the downloaded model in the dropdown
Can't add a Direct Connection in Open WebUI? The Admin toggle was not enabled; it controls access for all users.
# You must be logged in as Admin
# Profile avatar → Admin Settings
# (not regular Settings)
# → Connections → enable Direct Connections toggle
# → Save
# Now go to Profile → Settings → Connections
# to add the endpoint
Responses are slow? You're running on CPU, or a large model on limited hardware.
# Use phi-4-mini, the smallest and fastest model
foundry model run phi-4-mini
# Check hardware routing
foundry service status
# Shows: NPU / GPU / CPU being used
# Copilot+ PC NPU = fastest option
The UI stopped working after a restart? Foundry Local dynamically assigns a port. Always check it.
# Every time you restart Foundry Local:
foundry service status
# Copy the new port and update:
# Open WebUI → Settings → Connections
# AnythingLLM → Settings → LLM Preference
# (AnythingLLM with native integration
# handles this automatically)