---
title: "The AI-First Stack"
description: "Own your AI. Run LLMs, image generation, and code assistants locally with zero API keys, zero usage limits, and zero data leaving your machine."
---
# 🤖 The AI-First Stack

**Own your AI.** Run powerful AI locally. No API keys, no usage limits, no data leaving your machine.

| What | Tool | Replaces |
|---|---|---|
| LLM Inference | [Llama](/deploy/llama) | ChatGPT ($20/mo) |
| Coding Model | [DeepSeek](/deploy/deepseek) | GitHub Copilot ($10/mo) |
| Image Generation | [Stable Diffusion](/deploy/stable-diffusion) | Midjourney ($10/mo) |
| IDE Assistant | [Continue.dev](/deploy/continue-dev) | Copilot extension ($10/mo) |
| Code Autocomplete | [Tabby](/deploy/tabby) | Tabnine ($12/mo) |

**Total saved: ~$62/mo**

## Hardware Requirements

Running AI locally requires GPU horsepower. Here's what you need:

| Model Type | Minimum VRAM | Recommended GPU |
|---|---|---|
| Small LLMs (7B params) | 6 GB | RTX 3060, RTX 4060 |
| Large LLMs (70B params) | 48 GB | 2× RTX 3090, A6000 |
| Image Generation (SDXL) | 8 GB | RTX 3070+ |
| Code Models (DeepSeek) | 8 GB | RTX 4060+ |
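
As a rough rule of thumb, the VRAM figures in the table above follow from parameter count and quantization level: weights take `params × bits ÷ 8` bytes, plus headroom for the KV cache and activations. A minimal sketch — the 20% overhead factor is a ballpark assumption, not a measured value:

```python
def estimate_vram_gb(params_billion: float, bits_per_weight: int = 4,
                     overhead: float = 1.2) -> float:
    """Rough VRAM estimate: quantized weights plus ~20% for KV cache
    and activations. The overhead factor is an assumption, not a spec."""
    weight_gb = params_billion * bits_per_weight / 8  # 1B params at 8 bits = 1 GB
    return round(weight_gb * overhead, 1)

# A 7B model at the common 4-bit quantization needs roughly 4.2 GB,
# which is why a 6 GB card clears the bar; a 70B model at 4-bit lands
# around 42 GB, hence the 48 GB row.
print(estimate_vram_gb(7))    # small LLM, 4-bit
print(estimate_vram_gb(70))   # large LLM, 4-bit
```

At full fp16 precision the same 7B model would want roughly 16–17 GB, which is why quantized formats dominate local inference.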

> 🔥 **Pro Tip:** Start with Ollama + Llama 3. It runs well on an 8 GB GPU and gives you a local ChatGPT replacement in under 5 minutes.
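
The pro tip above boils down to a few commands (model names and download sizes may drift over time):

```shell
# Install Ollama on Linux/macOS (see https://ollama.com for other platforms)
curl -fsSL https://ollama.com/install.sh | sh

# Pull and chat with Llama 3 — the first run downloads the model
# (roughly 4.7 GB for the 4-bit 8B variant)
ollama run llama3

# Or keep it running as a local REST API (defaults to http://localhost:11434)
ollama serve
```

Anything that speaks to the Ollama API — Continue.dev and Tabby included — can then point at `localhost:11434` instead of a paid cloud endpoint.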

## Deploy Guides

→ [Deploy Ollama (LLM Runner)](/deploy/ollama)
→ [Deploy Stable Diffusion](/deploy/stable-diffusion)
→ [Deploy Tabby (Code AI)](/deploy/tabby)
→ [Deploy Continue.dev](/deploy/continue-dev)