How Embeddings Extend Your AI Model’s Reach - .NET | Microsoft Learn
Embeddings are the way LLMs capture semantic meaning. They are numeric representations of non-numeric data that an LLM can use to determine relationships between concepts.
You can use embeddings to help an AI model understand the meaning of inputs so that it can perform comparisons and transformations, such as summarizing text or creating images from text descriptions.
LLMs can use embeddings immediately, and you can store embeddings in vector databases to provide semantic memory for LLMs as needed.
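To make that concrete, here's a minimal sketch using the sentence-transformers library (the model name and example sentences are my own picks, not from the article): texts with similar meanings get vectors that are close together.

```python
# Sketch: embeddings turn text into numeric vectors, and vector
# similarity approximates semantic similarity.
from sentence_transformers import SentenceTransformer, util

# "all-MiniLM-L6-v2" is an assumed model choice, not from the article.
model = SentenceTransformer("all-MiniLM-L6-v2")

sentences = [
    "How do I reset my password?",
    "I forgot my login credentials.",
    "What's on the cafeteria menu today?",
]
embeddings = model.encode(sentences)  # one numeric vector per sentence

# Cosine similarity close to 1.0 means "semantically similar".
print(util.cos_sim(embeddings[0], embeddings[1]))  # high: both about logging in
print(util.cos_sim(embeddings[0], embeddings[2]))  # low: unrelated topics
```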
I'm getting the idea of embeddings.
Now I think I understand the whole picture. As a web developer, I can't realistically train a model myself, so it makes sense to use a model trained by a third party.
However, sometimes I want to add knowledge to the trained model without training one myself. For example, a company-internal Q&A chatbot needs company information that isn't public.
To meet that need, I can generate embeddings for the private information beforehand, the same way embeddings are produced during LLM training, and store them somewhere. It seems I don't have to use the same model as my generation part; there are more efficient models dedicated to creating embeddings.
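A sketch of that offline step (the document texts and model choice are placeholder assumptions; a real setup would store the vectors in a vector database rather than a local file):

```python
import numpy as np
from sentence_transformers import SentenceTransformer

# Dedicated embedding model -- smaller and cheaper than the generation LLM.
embedder = SentenceTransformer("all-MiniLM-L6-v2")

# Hypothetical internal documents (not public, so not in the LLM's training data).
documents = [
    "Expense reports must be submitted by the 5th of each month.",
    "The VPN endpoint for the Osaka office is vpn-osaka.example.internal.",
    "New employees get 20 days of paid leave per year.",
]

# Done once, beforehand; in production these vectors would go into a vector DB.
doc_vectors = embedder.encode(documents, normalize_embeddings=True)
np.save("doc_vectors.npy", doc_vectors)
```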
Then, at query time, I retrieve the relevant information and augment generation by adding the retrieved context to the input query.
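And the query-time half, continuing from the indexing sketch above (`ask_llm` is a hypothetical stand-in for whatever generation model gets called):

```python
import numpy as np

# Continues from the indexing sketch: reuses embedder, documents, doc_vectors.
def retrieve(query: str, k: int = 1) -> list[str]:
    """Embed the query and return the k most similar documents."""
    q = embedder.encode([query], normalize_embeddings=True)[0]
    scores = doc_vectors @ q  # dot product == cosine similarity for normalized vectors
    top = np.argsort(scores)[::-1][:k]
    return [documents[i] for i in top]

query = "How many vacation days do new hires get?"
context = "\n".join(retrieve(query))

# Augment generation: prepend the retrieved context to the input query.
prompt = f"Answer using this company information:\n{context}\n\nQuestion: {query}"
# ask_llm(prompt)  # hypothetical call to the generation model
```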
It looks like I can find an embedding model on the Hugging Face MTEB leaderboard: huggingface.co/spaces/mteb/leaderboard.
The name “Hugging Face” is funny.