Three fully functional AI applications that run entirely on your hardware — no cloud, no API keys, no subscriptions. Just download, install Foundry Local, and launch.
Available Apps
A privacy-first event photo processor that automatically blurs all faces in your photos — except yours. Upload your selfie once, drop your event photos, and get processed images with date/time/location overlays. Your photos never leave your machine.
A distraction-free writing assistant that uses your locally running AI model to improve, expand, shorten, or formalize any selected text. Select text in the editor, pick a mode, hit Run — suggestions stream back in under 50ms. Zero cloud, zero telemetry.
A polished local image generation studio powered by Automatic1111 and Stable Diffusion. Switch between Juggernaut Reborn (photorealism) and DreamShaper 8 (artistic), build prompts with style chips, and control every parameter — all images generated on your hardware at zero cost per image.
Foundry Local is Microsoft's runtime for running AI models on your own hardware. It installs a background service that exposes an OpenAI-compatible API on your machine.
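Because the API is OpenAI-compatible, any OpenAI-style client can talk to the service once it is installed and a model is loaded. A minimal sketch of such a request — the port 5272 and the /v1/chat/completions path are assumptions based on the defaults shown later in this guide; confirm your port with foundry service status:

```shell
# Build a chat request for the local endpoint (port and path assumed;
# verify the actual port with `foundry service status`).
PORT=5272
BODY='{"model":"phi-4-mini","messages":[{"role":"user","content":"Hello"}]}'
echo "POST http://localhost:${PORT}/v1/chat/completions"
echo "$BODY"
# With the service running, send it for real:
# curl -s "http://localhost:${PORT}/v1/chat/completions" \
#   -H "Content-Type: application/json" -d "$BODY"
```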
# Open PowerShell or Terminal as normal user (no admin needed)
winget install Microsoft.FoundryLocal
# Open Terminal
brew tap microsoft/foundry
brew install foundry-local
foundry --version should print the version number.

POE Photo Processor uses Foundry Local for AI caption generation. The Phi-4-Mini NPU model is recommended — small, fast, and runs on the dedicated NPU in Copilot+ PCs. If you don't have an NPU, it falls back to CPU automatically.
# Download and start the recommended model
foundry model run phi-4-mini
# Or list all available models first
foundry model list
✓ Server ready on localhost:5272 — the AI endpoint is now live

The POE Photo Processor is a single self-contained HTML file. No installation, no server needed — just open it in your browser.
Double-click the downloaded HTML file — it opens directly in your default browser. No localhost server required for the app itself.
# Option A — double-click the file in Explorer
# Option B — drag & drop onto your browser window
# Option C — right-click → Open with → Chrome / Edge
# Or open from terminal
start poe_photo_processor_v2_22.html
# Option A — double-click the file in Finder
# Option B — drag & drop onto your browser
# Option C — right-click → Open With → Chrome / Safari
# Or open from terminal
open poe_photo_processor_v2_22.html
Three steps to process your event photos:
Step 1 — Upload your selfie in the left sidebar
→ Your face will be recognized and KEPT sharp in all photos
Step 2 — Drop your event photos into the main area
→ JPG, PNG, HEIC all supported · EXIF data read automatically
Step 3 — Click ⚡ Process All
→ All faces blurred except yours · date/time/GPS overlaid
→ Click ⬇ Download All to save as ZIP
Make sure the model is running (foundry model run phi-4-mini) before clicking Process All. The sidebar shows the connection status.

LocalPen requires Foundry Local to be running — all AI text processing calls your local model. Nothing is ever sent to the cloud.
# Open PowerShell or Terminal
winget install Microsoft.FoundryLocal
brew tap microsoft/foundry
brew install foundry-local
foundry --version should print the installed version.

LocalPen works best with Phi-4-Mini NPU for fast, responsive writing suggestions. On a Copilot+ PC this runs on the dedicated NPU — near-instant responses. On any other device it uses the GPU or CPU.
# Start the recommended model for writing tasks
foundry model run phi-4-mini
✓ Downloading phi-4-mini (INT4, 2.5GB)...
✓ Optimizing for your hardware...
✓ Server ready on localhost:5272
The service port may differ from the default 5272.

# Check which port Foundry Local is using
foundry service status
# Note the port number shown — use it in the app settings
LocalPen is a single HTML file. Download it and open in any modern browser. No installation, no server, no build step.
⬇ Download local-ai-writer.html

Double-click the HTML file to open it. Then configure the connection settings to match your running Foundry Local instance.
start local-ai-writer.html
# Or double-click in File Explorer
open local-ai-writer.html
# Or double-click in Finder
In the app, click ⚙ Settings (top right) and configure:
Port: 5272 # or 60084 if using NPU endpoint
Model: phi-4-mini
(copy exact alias from foundry model list)
Max tokens: 512 # increase for longer text expansions
Start writing in the editor. Select any text you want AI to work on, then choose a mode from the sidebar and click Run.
1. Type or paste your text in the editor
2. Select the text you want AI to improve
(click & drag, or Ctrl+A for all)
3. Choose a mode in the right sidebar:
✦ Improve — clarity and flow
→ Expand — add detail and context
← Shorten — remove filler, be concise
↑ Formalize — professional / business tone
✎ Custom — describe what you want
4. The suggestion streams into the panel below
→ Apply replaces your selection in the editor
↺ Retry generates a new suggestion
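Each mode amounts to a different instruction sent to the local model along with your selected text. A hypothetical sketch of that mapping — the app's real prompt wording is not shown in this guide, so these strings are illustrative only:

```shell
# Hypothetical mode-to-instruction mapping (illustrative; the app's
# actual prompts may differ).
mode="shorten"
case "$mode" in
  improve)   instruction="Improve the clarity and flow of this text." ;;
  expand)    instruction="Expand this text with more detail and context." ;;
  shorten)   instruction="Shorten this text and remove filler." ;;
  formalize) instruction="Rewrite this text in a professional, business tone." ;;
esac
echo "$instruction"
```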
Automatic1111 needs Python 3.10.6 and Git. Install them before running the startup script.
# Download Python 3.10.6 (check "Add Python to PATH")
https://www.python.org/ftp/python/3.10.6/python-3.10.6-amd64.exe
# Download Git
https://git-scm.com/download/win
# Verify in a new PowerShell window:
python --version && git --version
brew install python@3.10 git
Download all three files into the same folder (e.g. C:\Atelier\). The script clones Automatic1111 into a subfolder automatically on first run.
Your folder should look like:
Atelier/
├── atelier.html
├── START_A1111.bat ← Windows
└── start_a1111.sh ← Mac / Linux
Download both models and copy the .safetensors files to stable-diffusion-webui/models/Stable-diffusion/.
Run the startup script. First run takes 5–15 min — it clones Automatic1111, installs Python deps, and starts with API + CORS enabled. Subsequent runs take ~30 sec.
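The first run described above can be sketched as the following dry-run — the repo URL and the --api / --cors-allow-origins flags are standard Automatic1111 launch options, but the script's exact contents are assumptions:

```shell
# Dry-run sketch of what the startup script does on first launch
# (prints the commands instead of executing them; flags assumed).
CLONE="git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git"
LAUNCH="./webui.sh --api --cors-allow-origins=*"
echo "$CLONE"
echo "$LAUNCH"
```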
cd C:\Atelier
.\START_A1111.bat
# SmartScreen warning → More info → Run anyway
chmod +x ~/atelier/start_a1111.sh
bash ~/atelier/start_a1111.sh
Running on local URL: http://127.0.0.1:7860 — keep the terminal open

Double-click atelier.html or open it from terminal. It connects automatically to 127.0.0.1:7860.
1. Select model — Juggernaut Reborn or DreamShaper 8
2. Type or click prompt chips to build your prompt
3. Adjust steps, CFG, resolution as needed
4. Press ⚡ Generate (or Ctrl+Enter)
→ Images appear in the gallery · ↓ Save all as ZIP
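Under the hood the app drives Automatic1111's HTTP API. A minimal sketch of the kind of request a Generate click produces — /sdapi/v1/txt2img is A1111's standard text-to-image route, but the exact parameters the app sends are assumptions:

```shell
# Build a txt2img request for the A1111 API (parameter values illustrative).
URL="http://127.0.0.1:7860/sdapi/v1/txt2img"
BODY='{"prompt":"portrait photo, studio lighting","steps":25,"cfg_scale":7,"width":512,"height":512}'
echo "POST $URL"
echo "$BODY"
# With the server running:
# curl -s "$URL" -H "Content-Type: application/json" -d "$BODY"
```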
Troubleshooting
Most issues are caused by the model not running or a port mismatch. Here's how to fix them.
Foundry Local is not running or the port is wrong.
# Start Foundry Local
foundry model run phi-4-mini
# Check the port in app Settings
# Default: 5272 or 60084 (NPU)
Open the file via a local server instead of double-clicking.
# Python (any OS)
python -m http.server 8080
# Then open: http://localhost:8080/app.html
The face detection model requires internet for first load (downloads ~5MB WebAssembly). After first load, it caches in your browser.
# Check browser console for errors
# Try: Chrome or Edge (best WebAssembly support)
# Ensure you're not in Private/Incognito mode
Large model loaded, or running on CPU. Switch to Phi-4-Mini for fast responses.
# Use the smallest fast model
foundry model run phi-4-mini
# Copilot+ PC NPU = fastest
# GPU = fast · CPU = slower but works
The model alias in app settings doesn't match what's loaded.
# List loaded models
foundry model list
# Copy the exact alias shown
# Paste into app Settings → Model
winget requires Windows 10 (1809+) or Windows 11. Update Windows or install from the Microsoft Store.
# Check Windows version
winver
# Or install App Installer from Microsoft Store
# Search: "App Installer" in Microsoft Store