
Edge AI

Also known as: On-Device AI, Local AI
Running AI models directly on end-user devices (phones, laptops, IoT sensors, cars) rather than in the cloud. Edge AI means your data never leaves your device, latency is near zero (no network round-trip), and the model works offline. Apple Intelligence, Google's on-device Gemini Nano, and local LLM runners like llama.cpp and Ollama are all Edge AI.

Why It Matters

Edge AI is where privacy, latency, and cost intersect. Cloud AI means sending your data to someone else's server, waiting on the network, and paying per token. Edge AI means instant, private, free-after-download inference. The trade-off is model size: edge devices have limited memory, so on-device models are smaller and less capable than cloud models. But for many tasks, a fast 3B model on your phone beats a slow 400B model in a data center.

Deep Dive

The key constraint for edge AI is memory. A phone might have 6–12 GB of RAM shared between the OS, apps, and the model. A laptop might have 8–32 GB. This limits model size: a 3B parameter model at 4-bit quantization needs about 1.5 GB, feasible on a phone. A 7B model needs about 4 GB, feasible on a decent laptop. Anything larger requires aggressive quantization or offloading to disk (slow).
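
As a rough back-of-the-envelope: weight memory is parameter count times bytes per parameter, plus overhead for the KV cache and runtime buffers. A minimal sketch of that arithmetic (the 20% overhead factor is an illustrative assumption, not a measured constant):

```python
def estimate_model_memory_gb(params_billions: float,
                             bits_per_param: float,
                             overhead: float = 0.20) -> float:
    """Rough memory footprint for model weights at a given quantization.

    The overhead term approximates the KV cache and runtime buffers;
    the 20% default is an illustrative assumption, not a measured value.
    """
    weight_bytes = params_billions * 1e9 * (bits_per_param / 8)
    return weight_bytes * (1 + overhead) / 1e9

# A 3B model at 4-bit: ~1.8 GB, feasible on a phone with 6+ GB of RAM.
print(f"3B @ 4-bit: {estimate_model_memory_gb(3, 4):.1f} GB")
# A 7B model at 4-bit: ~4.2 GB, feasible on a decent laptop.
print(f"7B @ 4-bit: {estimate_model_memory_gb(7, 4):.1f} GB")
```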

The Apple Silicon Effect

Apple's M-series chips (M1–M4) with unified memory architecture made edge AI practical for laptops. Unlike discrete GPU setups where model weights must fit in VRAM, Apple Silicon shares memory between CPU and GPU, so a MacBook with 32 GB unified memory can run a 24B model at 4-bit quantization smoothly. This, combined with llama.cpp's Metal optimization, created the local LLM movement.
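
For illustration, here is a minimal sketch of local inference with the llama-cpp-python bindings, which use llama.cpp's Metal backend on Apple Silicon. The model path is a hypothetical placeholder; any quantized GGUF file works.

```python
from llama_cpp import Llama

# n_gpu_layers=-1 offloads all layers to the GPU. On Apple Silicon this
# means Metal, with weights living in unified memory shared with the CPU.
llm = Llama(
    model_path="./mistral-7b-instruct-q4_k_m.gguf",  # hypothetical path
    n_ctx=4096,       # context window
    n_gpu_layers=-1,  # offload everything to Metal
)

# Inference runs entirely on-device: no network, no per-token billing.
out = llm("Explain unified memory in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```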

Beyond Text

Edge AI isn't limited to language models. On-device speech recognition (Whisper), image classification, real-time translation, and predictive text all run locally. The trend is toward NPUs (Neural Processing Units) — dedicated AI accelerator chips built into phones and laptops that handle AI workloads more efficiently than general-purpose CPU/GPU. Apple's Neural Engine, Qualcomm's Hexagon, and Intel's NPU are all examples.
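
As one concrete example, OpenAI's open-source Whisper checkpoints can transcribe speech entirely on-device via the openai-whisper Python package (the audio filename below is a placeholder):

```python
import whisper  # pip install openai-whisper

# "tiny" is ~39M parameters, small enough for phones and modest laptops;
# larger checkpoints ("base", "small", ...) trade memory for accuracy.
model = whisper.load_model("tiny")

# Transcription happens locally: the audio never leaves the machine.
result = model.transcribe("meeting.wav")  # placeholder filename
print(result["text"])
```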
