Google’s AI Edge Gallery Brings Smarter, Offline AI to Mobile Devices
Google has launched a new experimental app called AI Edge Gallery, available on GitHub. The app is designed for Android and allows users to run powerful generative AI and large language models directly on their devices — completely offline. An iOS version is also in the works.
What Is AI Edge Gallery?
Unlike cloud-based AI tools such as Gemini or ChatGPT, which require an internet connection and send your data to remote servers, AI Edge Gallery runs entirely on your phone. This means all processing happens locally, offering better privacy and the ability to use AI without being online.
It’s still early days — the app isn’t even listed on the Google Play Store yet — but it shows Google’s push toward more private, on-device AI experiences.
Once installed, the app opens to a home screen featuring three main options: “Ask Image,” “Prompt Lab,” and “AI Chat.” Tapping any of these takes you to a new screen where you can choose an AI model to process your request.
- Ask Image lets you upload an image and type a question about it for the AI to analyze.
- Prompt Lab offers tools like Free Form, Tone Rewriter, Text Summarizer, and Code Snippet Generator.
- AI Chat allows for a conversational back-and-forth with the selected AI model.
The app supports four AI models out of the box (though you need to download each one individually):
- Gemma-3n-E2B-it-int4
- Gemma-3n-E4B-it-int4
- Gemma3-1B-IT q4
- QWEN2.5-1.5B-Instruct q8
You can also add custom models by tapping the “+” button in the bottom-left corner of the home screen and importing a local model file.
Depending on the model and your device’s hardware, the app will use either the GPU or CPU to process your requests. Using the GPU generally results in faster response times.
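For a sense of how that GPU-or-CPU choice typically works at the framework level, here is a minimal Kotlin sketch using the LiteRT (TensorFlow Lite) interpreter API. It is a generic illustration rather than the Gallery's actual code; the `buildInterpreter` helper, the thread count, and the model file are placeholders.

```kotlin
import org.tensorflow.lite.Interpreter
import org.tensorflow.lite.gpu.CompatibilityList
import org.tensorflow.lite.gpu.GpuDelegate
import java.io.File

// Prefer the GPU delegate when the device supports it; otherwise fall back
// to multi-threaded CPU execution. Thread count is an example value.
fun buildInterpreter(modelFile: File): Interpreter {
    val options = Interpreter.Options()
    val gpuCompat = CompatibilityList()
    if (gpuCompat.isDelegateSupportedOnThisDevice) {
        options.addDelegate(GpuDelegate(gpuCompat.bestOptionsForThisDevice))
    } else {
        options.setNumThreads(4) // CPU fallback
    }
    return Interpreter(modelFile, options)
}
```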
Who Is AI Edge Gallery For?
This app is ideal for tech enthusiasts and developers who want to run cutting-edge AI models locally on their devices with more privacy. Compared to other offline AI tools I’ve tried on Android, this one feels the most refined — like a true Google product.
Developers may especially appreciate it for testing .task models built on LiteRT, Google's optimized framework for running AI efficiently on-device. While the ability to use AI offline is a big plus, keep in mind that processing complex queries locally can be demanding on your phone's hardware, which may drain the battery faster.
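To get a feel for what testing such a model outside the Gallery can look like, here is a minimal sketch using the MediaPipe LLM Inference API on Android. The model path, token limit, and `runPrompt` helper are placeholders, and the available options vary by library version.

```kotlin
import android.content.Context
import com.google.mediapipe.tasks.genai.llminference.LlmInference

// Placeholder path: push your own .task bundle to the device before running.
private const val MODEL_PATH = "/data/local/tmp/llm/model.task"

fun runPrompt(context: Context, prompt: String): String {
    val options = LlmInference.LlmInferenceOptions.builder()
        .setModelPath(MODEL_PATH)
        .setMaxTokens(512) // assumed limit for this sketch
        .build()
    // Everything below runs on-device; no network access is required.
    val llm = LlmInference.createFromOptions(context, options)
    val response = llm.generateResponse(prompt)
    llm.close()
    return response
}
```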
How Does It Work?
AI Edge Gallery uses LiteRT (Lite Runtime), a specialized engine from Google that optimizes AI performance on mobile devices. This allows advanced AI features to run smoothly without relying on cloud servers.
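As a rough, stripped-down illustration of that idea (not the Gallery's internals), the sketch below loads a LiteRT model file from local storage and runs it with no network calls. The tensor shapes and the `runLocally` helper are assumptions for a simple single-input, single-output model.

```kotlin
import org.tensorflow.lite.Interpreter
import java.io.File

// Illustrative only: assumes a model with one float-array input and one
// output of 1000 values. Real models define their own tensor shapes.
fun runLocally(modelFile: File, input: FloatArray): FloatArray {
    val interpreter = Interpreter(modelFile)
    val output = Array(1) { FloatArray(1000) }
    interpreter.run(arrayOf(input), output) // executes entirely on-device
    interpreter.close()
    return output[0]
}
```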
Things to Keep in Mind
- Performance: Running large AI models on your phone will naturally be slower than using powerful cloud servers.
- Storage Space: The supported language models range from 560 MB to 4.4 GB, so make sure you have enough storage available.
- Availability: As of now, the app is only available for Android as an APK download via GitHub — not from the Play Store. An iOS version is reportedly in development but not yet released.
Looking Ahead
It will be interesting to see how Google evolves its offline AI strategy. In the future, we might even see built-in support for local AI models in device settings, similar to how Gemini works today.
For now, AI Edge Gallery offers a glimpse into that future. You can learn more and download it directly from GitHub.