Embedding models
Embedding models for AI search and analysis
Models in the collection
Sorted by popularity (run_count)
Return CLIP features for the clip-vit-large-patch14 model
multilingual-e5-large: A multi-language text embedding model
Generate CLIP (clip-vit-large-patch14) text & image embeddings
A model for text, audio, and image embeddings in one space
This is a language model that can be used to obtain document embeddings suitable for downstream tasks like semantic search and clustering.
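As a minimal sketch of the downstream use mentioned above (toy 4-dimensional vectors standing in for real model output, with hypothetical document names), semantic search over document embeddings reduces to ranking by cosine similarity:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy embeddings standing in for real model output.
docs = {
    "doc_cats": [0.9, 0.1, 0.0, 0.1],
    "doc_cars": [0.1, 0.9, 0.2, 0.0],
}
query = [0.8, 0.2, 0.1, 0.1]  # hypothetical embedding of a query about cats

# Rank documents by similarity to the query, most similar first.
ranked = sorted(docs, key=lambda d: cosine(query, docs[d]), reverse=True)
print(ranked[0])  # prints "doc_cats"
```

Clustering works the same way: the distances between embedding vectors feed any standard clustering algorithm.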
General Text Embeddings (GTE) model.
Embed text with Qwen2-7b-Instruct
Jina-CLIP v2: 0.9B multimodal embedding model with 89-language multilingual support, 512x512 image resolution, and Matryoshka representations
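Matryoshka representations allow an embedding to be truncated to a shorter prefix and re-normalized, trading some accuracy for storage and speed. A minimal sketch of that truncation step (toy vector and helper name, not actual Jina-CLIP output):

```python
import math

def truncate_embedding(vec, dim):
    """Keep the first `dim` components and L2-normalize the result."""
    head = vec[:dim]
    norm = math.sqrt(sum(x * x for x in head))
    return [x / norm for x in head]

# Toy 6-d embedding standing in for a real Matryoshka-trained vector.
full = [0.5, 0.5, 0.5, 0.5, 0.0, 0.0]
small = truncate_embedding(full, 4)
print(small)  # prints [0.5, 0.5, 0.5, 0.5]
```

The re-normalization matters because cosine similarity assumes unit-length vectors; a bare prefix of a unit vector is generally shorter than unit length.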
snowflake-arctic-embed is a suite of text embedding models optimized for high-quality retrieval performance
BAAI's bge-en-large-v1.5 for embedding text sequences
Llama2 13B with embedding output
nomic-embed-text-v1 is an 8192-context-length text encoder that surpasses OpenAI's text-embedding-ada-002 and text-embedding-3-small on short- and long-context tasks
Query embedding generator for BAAI's bge-large-en v1.5 embedding model
Granite-Embedding-278M-Multilingual is a 278M parameter model from the Granite Embeddings suite that can be used to generate high quality text embeddings
E5-mistral-7b-instruct language embedding model
An 8k context text embedding model served FAST with ONNX on GPU. Check the examples tab to see different ways to run it.