tl;dr: “While semantic search is trendy, good old lexical search is still the backbone. Semantic techniques can improve results, but they work best when added to a solid text-based search foundation. In this post, we’ll explore how to use Postgres to create a robust search engine.”
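
For context, a minimal sketch of what that lexical foundation can look like with Postgres’s built-in full-text search (`tsvector`/`tsquery`). The `articles` table and its columns are made up for illustration, and the generated column requires Postgres 12 or newer:

```sql
-- Hypothetical articles table; names are placeholders for illustration.
CREATE TABLE articles (
    id     bigserial PRIMARY KEY,
    title  text NOT NULL,
    body   text NOT NULL,
    -- Generated tsvector column keeps the lexical index in sync with the text,
    -- with title matches weighted higher than body matches.
    search tsvector GENERATED ALWAYS AS (
        setweight(to_tsvector('english', title), 'A') ||
        setweight(to_tsvector('english', body),  'B')
    ) STORED
);

-- GIN index makes tsquery lookups fast.
CREATE INDEX articles_search_idx ON articles USING GIN (search);

-- Rank matches for a free-form user query.
SELECT id, title, ts_rank(search, query) AS rank
FROM articles, websearch_to_tsquery('english', 'postgres full text search') AS query
WHERE search @@ query
ORDER BY rank DESC
LIMIT 10;
```

This is only the lexical layer the post treats as the backbone; semantic signals can then be blended on top of these ranked results.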
tl;dr: “When working with LLMs, you usually want to store embeddings, vector-space representations of text values. During the last few years, we’ve seen a lot of new databases pop up, making it easier to generate, store, and query embeddings: Pinecone, Weaviate, Chroma, Qdrant. The list goes on. But having a separate database just to store a different type of data has always seemed off to me. Do I really need it?”
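
As a reference point, a minimal sketch of keeping embeddings in Postgres itself, assuming the pgvector extension is installed. The `documents` table, its columns, and the tiny 3-dimensional vectors are placeholders for illustration only:

```sql
-- Assumes the pgvector extension; table and column names are hypothetical.
CREATE EXTENSION IF NOT EXISTS vector;

CREATE TABLE documents (
    id        bigserial PRIMARY KEY,
    content   text NOT NULL,
    embedding vector(3)  -- real embedding models use far more dimensions, e.g. 1536
);

-- Approximate nearest-neighbour index using cosine distance.
CREATE INDEX documents_embedding_idx
    ON documents USING hnsw (embedding vector_cosine_ops);

-- Find the documents whose embeddings are closest to a query embedding.
SELECT id, content
FROM documents
ORDER BY embedding <=> '[0.01, -0.02, 0.03]'  -- query embedding produced by your model
LIMIT 5;
```

With this setup, the embeddings live next to the rest of the relational data, which is the point the author is questioning the need for a separate vector database over.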