
Edge AI

The practice of running AI models directly on edge devices (phones, IoT sensors, embedded systems) rather than in the cloud. Edge AI reduces latency, improves privacy because data stays on the device, and keeps working offline. Model compression and quantization make inference feasible on resource-constrained hardware.
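As a rough illustration of why quantization helps, here is a minimal sketch of symmetric int8 quantization of a weight tensor, cutting storage from 32-bit floats to 8-bit integers. This is a simplified, hypothetical example; production toolchains apply per-tensor or per-channel scales derived from calibration data.

```python
def quantize_int8(weights):
    """Map float weights to int8 using one symmetric scale factor."""
    scale = max(abs(w) for w in weights) / 127 or 1.0  # avoid scale of 0
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 values."""
    return [v * scale for v in q]

weights = [0.5, -1.27, 0.02, 1.0]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
```

Each weight is stored as a single byte plus one shared scale, a 4x size reduction that also speeds up inference on hardware with integer arithmetic units.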

Related terms

Inference
Quantization