FAISS as a Vector Store for Semantic Search with BERT

Hello There,

I am working on an application that performs semantic search over data from CSV files, images, and docx documents. The idea is to parse and clean the data first, then create embeddings using a sentence-transformer model (for now I have chosen BERT), save those embeddings in a vector store such as FAISS/Elasticsearch/Weaviate/Vespa (currently I am using FAISS), run a search operation over the embeddings, and return the n most related results.
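Here is a minimal sketch of the pipeline I have in mind, assuming the sentence-transformers and faiss-cpu packages; the model name, CSV path, and column name are placeholders, not my final choices:

```python
import faiss
import numpy as np
import pandas as pd
from sentence_transformers import SentenceTransformer

# 1. Parse and clean the data (here: one text column from a CSV).
df = pd.read_csv("data.csv")             # placeholder path
texts = df["text"].fillna("").tolist()   # placeholder column name

# 2. Create embeddings with a BERT-based sentence-transformer model.
model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed model choice
embeddings = model.encode(texts, convert_to_numpy=True).astype("float32")

# 3. Store the embeddings in a FAISS index.
dim = embeddings.shape[1]
index = faiss.IndexFlatL2(dim)  # exact search; fine for small datasets
index.add(embeddings)

# 4. Search: return the n most related results for a query.
query = model.encode(["example search query"],
                     convert_to_numpy=True).astype("float32")
distances, ids = index.search(query, 5)
for rank, (i, d) in enumerate(zip(ids[0], distances[0]), start=1):
    print(rank, d, texts[i])
```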

At this point I am unsure about the following:

Is FAISS the right choice in terms of memory, speed, and accuracy? How do I estimate these things? For example, if I need to work with 1 billion records from a CSV file, how do I calculate the memory footprint and the expected accuracy? I have read it is around 70 GB; is that good or bad? (My rough back-of-envelope is sketched just after these questions.)
Can I use FAISS in production?
Can FAISS be hosted on AWS/Google Cloud, or only on premises?
What other alternatives should I look into?
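For the memory question, here is my rough back-of-envelope for the 1-billion-record case. The 768 dimensions and the 64 bytes/vector for a compressed index are my own assumptions (768 is what a BERT-base model produces; 64 bytes is in the range FAISS's billion-scale examples use), so please correct me if this is off:

```python
# Rough memory estimate for 1 billion vectors (assumed: 768-dim float32).
n_vectors = 1_000_000_000
dim = 768

# Exact (flat) index: every vector stored as raw float32.
flat_bytes = n_vectors * dim * 4
print(f"IndexFlatL2: ~{flat_bytes / 1e9:.0f} GB")  # ~3072 GB, i.e. ~3 TB

# Compressed index (e.g. IVF + product quantization) at ~64 bytes/vector.
pq_bytes = n_vectors * 64
print(f"IVF,PQ64: ~{pq_bytes / 1e9:.0f} GB")       # ~64 GB, plus overhead
```

If that arithmetic is right, the ~70 GB figure I read would only apply to a compressed index; an exact flat index at this scale would need terabytes of RAM.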

Thank you

I did a little more research on the topic and found the following page:
Faiss vs. Milvus vs. Qdrant vs. Weaviate Comparison (sourceforge.net)

It looks like Weaviate is the better option. Any suggestions?

Thanks