Liquid Nanos Collection
Library of task-specific models: https://www.liquid.ai/blog/introducing-liquid-nanos-frontier-grade-performance-on-everyday-devices
LFM2-ColBERT-350M is a late interaction retriever with excellent multilingual performance. It allows you to store documents in one language (for example, a product description in English) and retrieve them in many languages with high accuracy.
Find more information about LFM2-ColBERT-350M in our blog post.
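In a late interaction model, the query and the document each keep one embedding per token, and relevance is computed with MaxSim: every query token is matched against its best document token, and the per-token maxima are summed. A minimal sketch with random tensors (the shapes and dimension here are illustrative, not taken from the model):
import torch
import torch.nn.functional as F

# One L2-normalized embedding per token (random values, for illustration only)
q = F.normalize(torch.randn(8, 128), dim=-1)   # 8 query token embeddings
d = F.normalize(torch.randn(40, 128), dim=-1)  # 40 document token embeddings

# MaxSim: best document-token match per query token, summed over the query
score = (q @ d.T).max(dim=1).values.sum().item()
print(score)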
Try our demo: https://huggingface.co/spaces/LiquidAI/LFM2-ColBERT
Example usage with llama.cpp:
1. Start llama-server with the embeddings endpoint enabled (by default it listens on port 8080):
llama-server -hf LiquidAI/LFM2-ColBERT-350M-GGUF --embeddings
2. Make requests to embed queries and documents, and compute MaxSim similarity scores:
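Before running the full script, you can sanity-check the server with the same endpoint and payload the script below uses (the per-token shape assumes the model is served without pooling, which is what a late interaction retriever needs):
import numpy as np
import requests

resp = requests.post("http://localhost:8080/embedding", json={"content": "What is panda?"})
emb = np.array(resp.json()[0]["embedding"])
print(emb.shape)  # expected: (num_tokens, embedding_dim), one vector per token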
❯ uv run colbert-rerank.py
Score: 29.69 | Q: What is panda? | D: hi
Score: 29.83 | Q: What is panda? | D: it is a bear
Score: 30.47 | Q: What is panda? | D: The giant panda (Ailuropoda melanoleuca), sometimes called a panda bear or simply panda, is a bear species endemic to China.
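The full panda definition receives the highest MaxSim score, ranking it above the two less relevant documents.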
# /// script
# requires-python = ">=3.10"
# dependencies = [
# "transformers",
# "huggingface-hub",
# "numpy",
# "requests",
# "torch",
# ]
# ///
# colbert-rerank.py
import json

import numpy as np
import requests
import torch
import torch.nn.functional as F
from huggingface_hub import hf_hub_download
from transformers import AutoTokenizer
model_id = "LiquidAI/LFM2-ColBERT-350M"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# The Sentence Transformers config carries the ColBERT settings: prefixes,
# sequence lengths, and a skiplist of words (typically punctuation) whose
# tokens are excluded from document scoring.
config = json.load(open(hf_hub_download(model_id, "config_sentence_transformers.json")))
skiplist = {
    t
    for w in config["skiplist_words"]
    for t in tokenizer.encode(w, add_special_tokens=False)
}
def maxsim(q, d):
    # Late interaction scoring: best document-token match per query token, summed
    return (q @ d.T).max(dim=1).values.sum().item()
def preprocess(text, is_query):
    prefix = config["query_prefix"] if is_query else config["document_prefix"]
    toks = tokenizer.encode(prefix + text)
    max_len = config["query_length"] if is_query else config["document_length"]
    if is_query:
        # Query augmentation: pad queries to a fixed length with pad tokens
        toks += [tokenizer.pad_token_id] * (max_len - len(toks))
    else:
        # Documents are truncated; skiplist tokens are masked out below
        toks = toks[:max_len]
    mask = None if is_query else [t not in skiplist for t in toks]
    return toks, mask
def embed(content, mask=None):
    # Fetch per-token embeddings from the local llama-server instance
    emb = np.array(
        requests.post(
            "http://localhost:8080/embedding",
            json={"content": content},
        ).json()[0]["embedding"]
    )
    if mask:
        emb = emb[mask]  # drop skiplist tokens from document embeddings
    emb = torch.from_numpy(emb)
    emb = F.normalize(emb, p=2, dim=-1)  # L2 normalize each token embedding
    return emb.unsqueeze(0)
docs = [
"hi",
"it is a bear",
"The giant panda (Ailuropoda melanoleuca), sometimes called a panda bear or simply panda, is a bear species endemic to China.",
]
query = "What is panda?"
q = embed(*preprocess(query, True))
d = [embed(*preprocess(doc, False)) for doc in docs]
# Score every (query, document) pair with MaxSim and print the results
s = [(query, doc, maxsim(q.squeeze(), di.squeeze())) for doc, di in zip(docs, d)]
for q_text, d_text, score in s:
    print(f"Score: {score:.2f} | Q: {q_text} | D: {d_text}")
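The # /// script block at the top is inline script metadata, so uv resolves the listed dependencies automatically; with llama-server still running, uv run colbert-rerank.py is all that is needed.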
Find more details in the original model card: https://huggingface.co/LiquidAI/LFM2-ColBERT-350M
Available quantizations: 4-bit, 5-bit, 6-bit, 8-bit, 16-bit
Base model: LiquidAI/LFM2-ColBERT-350M