Course - Langchain for LLM Application Development (Short Course).
About - The Short Course Platform.
Background
The API key is visible in the course notebooks.
But I have learned that it is not the real API key.
If so, how can it be used to access the real API?
Apparently there is a proxy server that swaps this key for the real API key.
I want to know how this proxy server is created; I want to understand the process.
Why does it matter
I want to implement a client-side Langchain-based application.
On the client side I cannot use the real API key, because it would inevitably be visible to users.
The lab notebook is similar to a client-side app in that the API key is visible.
So I want to know how the implementation is done for the lab.
Questions
Can anyone please tell me:
How is the lab set up with a fake API key so that everything works without errors? Why does it not complain that the key is fake?
How is a proxy server created? Can you suggest an article with a step-by-step process?
How do we swap the fake API key for the real one at the proxy server?
Any suggestions on how we can implement a client-side Langchain app without revealing the API keys?
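To make the third question concrete, here is my rough mental model of the "swap" step, using only the standard library. All names and key values here are placeholders I made up, not anything from the actual course infrastructure:

```python
# Sketch of the header-swap step a proxy might perform before forwarding
# a request to api.openai.com. REAL_KEY would live only on the proxy
# server, never on the client. The key values are illustrative.
REAL_KEY = "sk-real-key-held-server-side"

def swap_api_key(incoming_headers: dict) -> dict:
    """Replace the client's placeholder key with the real one."""
    outgoing = dict(incoming_headers)  # copy; don't mutate the input
    outgoing["Authorization"] = f"Bearer {REAL_KEY}"
    return outgoing

# Example: the client sent a fake key; the proxy forwards the real one.
client_headers = {
    "Authorization": "Bearer sk-fake-key-from-notebook",
    "Content-Type": "application/json",
}
forwarded = swap_api_key(client_headers)
print(forwarded["Authorization"])  # Bearer sk-real-key-held-server-side
```

Is this roughly what the lab's proxy does, with the swapped request then forwarded upstream?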
The course notebooks use a configuration file named “.env”. The following code reads it:
import os
import openai
from dotenv import load_dotenv, find_dotenv
_ = load_dotenv(find_dotenv())  # read the local .env file
openai.api_key = os.environ['OPENAI_API_KEY']
You could do the same in any notebook you create. Add a file named .env in the same directory as your notebook, and add a line with your API key inside this file.
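For example, the .env file could contain a single line like this (the value shown is a placeholder, not a real key):

```
OPENAI_API_KEY=sk-your-key-goes-here
```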
It would be best to do all OpenAI calls on the server side. You create a custom API that your client calls; this custom API holds the API key and uses it to call the OpenAI API. That way, your client side never needs to know the OpenAI API key.
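As a minimal illustration of that idea (the endpoint URL, model name, and function name are my own choices, not anything from the course), the server-side API could build the authenticated OpenAI request like this, with the key read from the server's environment:

```python
# Sketch: server-side code that turns a client question into an
# authenticated OpenAI request. The key never leaves the server.
import json
import os
import urllib.request

OPENAI_URL = "https://api.openai.com/v1/chat/completions"

def build_upstream_request(question: str) -> urllib.request.Request:
    """Build the authenticated request the server sends to OpenAI."""
    payload = {
        "model": "gpt-3.5-turbo",  # illustrative model choice
        "messages": [{"role": "user", "content": question}],
    }
    return urllib.request.Request(
        OPENAI_URL,
        data=json.dumps(payload).encode(),
        headers={
            # Key is read server-side; clients never see it.
            "Authorization": f"Bearer {os.environ.get('OPENAI_API_KEY', '')}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_upstream_request("What is Langchain?")
print(req.full_url)
```

Sending the request (e.g. with `urllib.request.urlopen`) and returning the response body to the client completes the round trip.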
I think you are talking about creating LLM apps without using Langchain. I am aware of that method and use it. However, it is very difficult to create LLM apps without Langchain.
Do you know a way of doing the same thing using Langchain? That is, using Langchain on the client side together with a custom server?
If we are doing everything from scratch, we are free to do anything; we are not required to put the API key on the client.
However, if we are creating the app using Langchain, we are required to put the API key on the client side.
So my question is:
How can we put a fake API key on the client side where the Langchain app is running, and then swap the fake key for the true one on some proxy or custom server?
@elirod - Thank you for your response. If this compromises security in any way, I will not pursue the question. Please let me know if I should stop posting in this thread.
Don’t worry, my intention was to illustrate why this is a question that can be difficult to answer.
I think it’s a common mistake for members to assume that the deeplearning.ai community encompasses all the teams involved across its courses and platforms.
It is possible to use Langchain on the client side or on the server side. This is a design decision that you can make.
If you do it on the server side, you will probably create an API that receives a question, uses Langchain to solve it with OpenAI or another model, and returns the solution/result.
If you do it on the client side, you will use Langchain to make your app call OpenAI, or any other LLM or resource you want, directly.
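For example, a bare-bones version of the server-side option could look like this, using only the standard library. The endpoint and names are illustrative, and the Langchain chain itself is stubbed out, since the exact chain depends on your app:

```python
# Sketch of the server-side option: a tiny HTTP endpoint that receives a
# question, would run it through a Langchain chain, and returns the answer.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def run_chain(question: str) -> str:
    # Stub: in a real app this would invoke your Langchain chain.
    # The OpenAI key is only ever read here, on the server.
    return f"(answer to: {question})"

class SolveHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the JSON body {"question": "..."} sent by the client.
        body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
        question = json.loads(body)["question"]
        answer = json.dumps({"answer": run_chain(question)}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(answer)

# To serve: HTTPServer(("", 8000), SolveHandler).serve_forever()
print(run_chain("What is Langchain?"))
```

The client then only knows the URL of this endpoint, never the OpenAI key.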