EmbeddingGemma (Google, 308M): On-Device Embeddings for 100+ Languages in 200MB, RAG-Ready

EmbeddingGemma is a 308M-parameter encoder built on Gemma 3: 768D embeddings truncatable to 128D via MRL, a 2048-token context, 100+ languages, and the #1 spot on MTEB among sub-500M models. This post covers on-device RAG deployment for Android/iOS/macOS, Python and JavaScript quick starts, and latency benchmarks.

ai · Updated: January 8, 2026

EmbeddingGemma: Google DeepMind's 768D Embedding Model in Under 2GB RAM - 100+ Languages, Fully Offline

EmbeddingGemma packs 308M parameters into a Gemma 3-based encoder that runs on phones and laptops: under 200MB quantized, a 2048-token context, 100+ languages, a top MTEB rank in its size class, and Matryoshka truncation from 768D down to 128D for on-device RAG.

EmbeddingGemma Technical Specs

| Spec | Details |
|---|---|
| Params | 308M |
| Memory | <200MB (INT8) |
| Embedding Dim | 768 (MRL: 512/256/128) |
| Context | 2048 tokens |
| Languages | 100+ |
| Architecture | Gemma 3, bi-directional |
| Training Data | 320B filtered tokens |
| License | Open weights |

Matryoshka Representation Learning (MRL)

768D → 512D: 1.2% perf drop
768D → 256D: 3.5% perf drop
768D → 128D: 8.2% perf drop
Storage: 768D=2.3MB → 128D=400KB
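MRL truncation is simple enough to sketch in a few lines: keep the first k dimensions of the 768D vector and re-normalize, and the result is a valid smaller embedding. The random vector below is only a stand-in for a real EmbeddingGemma output.

```python
import numpy as np

def truncate_mrl(embedding: np.ndarray, dim: int) -> np.ndarray:
    """Keep the first `dim` dimensions of an MRL embedding and re-normalize."""
    truncated = embedding[:dim]
    return truncated / np.linalg.norm(truncated)

# Stand-in for a real 768D embedding (unit-normalized random vector)
full = np.random.default_rng(0).standard_normal(768)
full /= np.linalg.norm(full)

small = truncate_mrl(full, 128)  # 768D -> 128D
print(small.shape)               # (128,)
```

Because MRL packs the most important information into the leading dimensions during training, this truncation costs only the small accuracy drops listed above instead of degrading uniformly.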

MTEB Leaderboard: Sub-500M Category

1. EmbeddingGemma 308M: 64.12
2. E5-small-v2 33M: 62.53
3. BGE-small 33M: 61.22
4. Snowflake 300M: 60.85

On-Device Deployment Targets

📱 Android (TensorFlow Lite)
🍎 iOS (CoreML)
💻 macOS (Metal)
🖥️ Windows (DirectML)
🤖 Edge TPU / NPU

Quick Start Code Examples

Python (HuggingFace):

from sentence_transformers import SentenceTransformer, util

# Model id as given in this post; verify the exact id on the Hugging Face Hub
model = SentenceTransformer('google/embedding-gemma')
embeds = model.encode(["สวัสดี", "Hello"])
similarity = util.cos_sim(embeds[0], embeds[1])  # cosine similarity

JavaScript (ONNX):

import * as ort from 'onnxruntime-web';

// Create a session from the exported model (onnxruntime-web API)
const session = await ort.InferenceSession.create('embedding-gemma.onnx');
// `feeds` maps input names to ort.Tensor objects produced by your tokenizer
const embeddings = await session.run(feeds);

RAG Pipeline with EmbeddingGemma

1. Chunk docs → 512 token segments
2. Embed with 256D (MRL)
3. FAISS index (on-device)
4. Query → Top-K retrieve
5. Gemma 2B generate answer

Latency: 45ms/query on Snapdragon 8 Gen 3
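The five steps above can be sketched end to end. This toy version swaps FAISS for a brute-force NumPy dot-product search and uses a deterministic stand-in for EmbeddingGemma, so only the chunking/index/retrieve logic is real; the generation step is left as a comment.

```python
import numpy as np

DIM = 256  # MRL-truncated dimension from step 2

def embed(texts: list[str]) -> np.ndarray:
    """Stand-in for EmbeddingGemma: deterministic unit vectors per text."""
    vecs = []
    for t in texts:
        rng = np.random.default_rng(abs(hash(t)) % (2**32))
        v = rng.standard_normal(DIM)
        vecs.append(v / np.linalg.norm(v))
    return np.stack(vecs)

# Steps 1-3: chunk docs and build the index (a plain matrix instead of FAISS)
chunks = ["Gemma is a model family.", "FAISS indexes vectors.", "Bangkok is in Thailand."]
index = embed(chunks)

# Step 4: query -> top-k retrieval by cosine similarity (vectors are unit-norm)
def retrieve(query: str, k: int = 2) -> list[str]:
    scores = index @ embed([query])[0]
    top = np.argsort(-scores)[:k]
    return [chunks[i] for i in top]

print(retrieve("vector index"))
# Step 5: pass the retrieved chunks plus the query to a generator
# such as Gemma 2B (omitted here)
```

For a real on-device index, FAISS's `IndexFlatIP` over unit-normalized vectors computes the same cosine scores; the brute-force matrix product above is just its dependency-free equivalent.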

Use Cases On-Device Embedding

🔍 Semantic search (notes, docs)
💬 Chatbot RAG (privacy-first)
🎵 Music recommendation
📧 Email clustering
📚 Book passage retrieval

Multi-Language Performance

🇹🇭 Thai: 92% MTEB multilingual
🇯🇵 Japanese: 89%
🇰🇷 Korean: 91%
🇻🇳 Vietnamese: 87%
🇮🇩 Indonesian: 88%

Model Quantization Options

| Precision | Size | Speedup | Perf Drop |
|---|---|---|---|
| FP16 | 600MB | 1x | 0% |
| INT8 | 200MB | 1.8x | 0.5% |
| INT4 | 120MB | 3.2x | 1.2% |
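The INT8 row corresponds to the classic 4x size reduction over FP32 (2x over FP16). A minimal sketch of symmetric per-tensor quantization, not Google's actual scheme, shows where that saving comes from:

```python
import numpy as np

def quantize_int8(x: np.ndarray):
    """Symmetric per-tensor INT8 quantization: x ≈ q * scale."""
    scale = float(np.abs(x).max()) / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

w = np.random.default_rng(1).standard_normal(768).astype(np.float32)
q, scale = quantize_int8(w)
err = float(np.abs(dequantize(q, scale) - w).max())
print(q.nbytes, w.nbytes)  # 768 vs 3072 bytes: 4x smaller than FP32
```

Each weight becomes one byte plus a shared scale, and the worst-case rounding error stays below one quantization step, which is why the measured quality drop in the table is so small.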

Edge Hardware Compatibility

✅ Snapdragon 8 Gen 3 (12ms/inference)
✅ Apple A18 (8ms)
✅ MediaTek Dimensity 9400 (15ms)
✅ Intel Lunar Lake NPU (10ms)
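Per-inference figures like those above are usually medians over warmed-up runs rather than single cold-start timings. A minimal harness (timing a placeholder workload here, not the real model) might look like:

```python
import time
import statistics

def bench(fn, warmup: int = 3, runs: int = 20) -> float:
    """Return the median latency of fn() in milliseconds."""
    for _ in range(warmup):  # warm caches / JIT before measuring
        fn()
    samples = []
    for _ in range(runs):
        t0 = time.perf_counter()
        fn()
        samples.append((time.perf_counter() - t0) * 1000.0)
    return statistics.median(samples)

# Placeholder workload standing in for model.encode(...)
latency_ms = bench(lambda: sum(i * i for i in range(10_000)))
print(f"{latency_ms:.2f} ms/inference")
```

Swapping the lambda for `lambda: model.encode([text])` gives a comparable per-query number on your own hardware.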

DRITESTUDIO COMPANY LIMITED - Cloud, VPS, Hosting and Colocation provider in Thailand
