Hello all, I am using OpenAI with LangChain and a CSV loader to feed a small test dataset (around 15 KB) of 100 German customer reviews, including star ratings, into GPT-3.5. On another occasion I used LangChain's PDF loader on an 80-page document, and even after quite a few requests I had only accumulated a fraction of a cent. With my CSV dataset, however, this changed drastically: after just 10-20 requests within one afternoon, my usage spiked to almost 4 €. The PDF definitely contained much more text than my small CSV dataset, but the PDF was in English. Could that be the cause?
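For reference, here is the back-of-the-envelope math I did to sanity-check the bill. This is a rough sketch with assumed numbers: ~4 characters per token is an English rule of thumb (German text typically tokenizes into noticeably more tokens per character, since BPE tokenizers are trained mostly on English), and $0.002 per 1K tokens is the gpt-3.5-turbo price I am assuming. It models the worst case where the chain sends the entire CSV with every request:

```python
# Rough cost estimate if the full dataset text is sent on every request.
# Assumptions (not from OpenAI docs): ~4 chars/token (English rule of
# thumb; German is usually worse) and $0.002 per 1K tokens for
# gpt-3.5-turbo.

def estimate_cost(num_chars: int, requests: int,
                  chars_per_token: float = 4.0,
                  usd_per_1k_tokens: float = 0.002) -> float:
    """Approximate USD cost when the whole text is resent each request."""
    tokens_per_request = num_chars / chars_per_token
    total_tokens = tokens_per_request * requests
    return total_tokens / 1000 * usd_per_1k_tokens

# 15 KB CSV ~ 15,000 characters, sent in full on 20 requests:
print(round(estimate_cost(15_000, 20), 4))  # → 0.15
```

Even under that pessimistic assumption I only get about $0.15 for 20 requests, far below the almost 4 € I actually saw, which makes me suspect the chain is making many model calls per question (e.g. one call per CSV row) rather than the German text alone being the problem.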
I would be very glad to be able to query data like this dataset, or a long document, in natural language using ChatGPT, but costs like these are quite prohibitive.
Any idea on how to solve this or why this is so expensive?
Best n4n0b1t3