How did the DLAI team implement the proxy server that swaps in the real API key?

Course - Langchain for LLM Application Development (Short Course).
About - The Short Course Platform.

Background
The API key is visible in the course notebooks.
But I have learned that it is not the real API key.

But then how can it be used to access the real API?
I learned that there is a proxy server that swaps this API key for the real one.

I want to know how this proxy server is created; I want to understand the process.

Why it matters
I want to implement a client-side LangChain-based application.
On the client side, I cannot use the real API key, because it would be visible to the users.
The lab notebook is similar to a client-side app in that its API key is visible.
So I want to know how the implementation is done for the lab.

Questions
Can anyone please tell me:

  1. How is the lab set up with a fake API key so that everything works without errors? Why does it not complain that the key is invalid?
  2. How is a proxy server created? Can you suggest an article with a step-by-step process?
  3. How is the fake API key swapped for the real one at the proxy server?
  4. Any suggestions on how we can implement a client-side LangChain app without revealing the API key?

The course notebooks use a configuration file named “.env”. Here is what this code does:

import os
import openai
from dotenv import load_dotenv, find_dotenv

_ = load_dotenv(find_dotenv())  # read the local .env file
openai.api_key = os.environ['OPENAI_API_KEY']

You could do the same in any notebook you create. Add a file named .env in the same directory as your notebook, and add a line with your API key inside this file.

OPENAI_API_KEY="<your_openai_key>"

Yes @leonardo.pabon, but then you could just do

print(os.environ['OPENAI_API_KEY'])

in the course notebook and get the key it is using. If you try it, you will find that it is not a valid key.


Hi @snehil001

Welcome to the community.

This is a space to share and discuss course topics, so no one here knows how these implementations are made. Sorry.

I think that even if someone here had access to these configurations and were allowed to share them on the forum, doing so might cause security issues.

Best regards


Hi,

It would be best to make all calls to OpenAI on the server side. You need to create a custom API that your client side will use. This custom API holds the API key and uses it to call the OpenAI API. This way, your client side never needs to know the OpenAI API key.
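To make the key-swapping idea concrete, here is a rough stdlib-only sketch (not the course's actual implementation — that isn't public). The proxy accepts the client's request, replaces a placeholder Authorization header with the real key kept only in the server's environment, and forwards the request upstream. The placeholder value, port, and header handling are all assumptions:

```python
import os
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

PLACEHOLDER_KEY = "sk-not-a-real-key"  # hypothetical fake key shipped to clients
UPSTREAM = "https://api.openai.com"    # real API the proxy forwards to

def swap_auth_header(headers, real_key):
    """Return a copy of the incoming headers with the placeholder
    API key replaced by the real key held only on this server."""
    swapped = dict(headers)
    if swapped.get("Authorization") == f"Bearer {PLACEHOLDER_KEY}":
        swapped["Authorization"] = f"Bearer {real_key}"
    return swapped

class KeySwapProxy(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)
        headers = swap_auth_header(dict(self.headers),
                                   os.environ["OPENAI_API_KEY"])
        # Drop headers urllib should set itself for the upstream request.
        for name in ("Host", "Content-Length", "Connection"):
            headers.pop(name, None)
        req = urllib.request.Request(UPSTREAM + self.path, data=body,
                                     headers=headers, method="POST")
        with urllib.request.urlopen(req) as resp:  # relay upstream response
            payload = resp.read()
            self.send_response(resp.status)
            self.send_header("Content-Type",
                             resp.headers.get("Content-Type",
                                              "application/json"))
            self.end_headers()
            self.wfile.write(payload)

def main():
    # The real key lives only in this process's environment.
    HTTPServer(("localhost", 8080), KeySwapProxy).serve_forever()
```

A production proxy would also need TLS, rate limiting, and a way to authenticate its own clients; otherwise anyone who finds the proxy can spend your quota.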


Thank you @leonardo.pabon , @choutos , @elirod for your responses.

@choutos - Correct!

@leonardo.pabon - Nice suggestion.

I think you are talking about creating LLM apps without using LangChain. I am aware of this method and use it. However, it is very difficult to create LLM apps without LangChain.
Do you know a way of doing the same thing with LangChain? That is, using LangChain on the client side together with a custom server?

If we are building everything from scratch, we are free to do anything; we are not required to put the API key on the client.
However, if we are creating the app with LangChain, we are required to put the API key on the client side.

So my question is:
How can we put a fake API key on the client side, where the LangChain app is running, and then swap the fake key for the real one on a proxy or other custom server?

@elirod - Thank you for your response. If this question harms security in any way, I will withdraw it. Please let me know if I should stop posting in this thread.


Don’t worry, my intention was to illustrate why this is a question that can be difficult to answer.

I think it is a common mistake for members to assume that the deeplearning.ai community includes all the teams involved across its courses and platforms.

It is possible to use LangChain on the client side or on the server side. This is a design decision that you can make.

If you do it on the server side, you will probably create an API that receives a question, uses LangChain to answer it with OpenAI or another model, and returns the result.
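As a sketch of that server-side shape (not a prescribed design), the endpoint can be a small JSON API wrapped around any callable. On a real server that callable would be a LangChain chain or ChatOpenAI call holding the API key; here it is injected, so the handler itself is framework-free and testable with a stub:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def make_handler(llm):
    """Build a request handler around any callable `llm`.

    On a real server `llm` would be a LangChain chain that holds
    the API key; the client only ever sees questions and answers.
    """
    class AskHandler(BaseHTTPRequestHandler):
        def do_POST(self):
            length = int(self.headers.get("Content-Length", 0))
            payload = json.loads(self.rfile.read(length))
            answer = llm(payload["question"])  # key stays server-side
            body = json.dumps({"answer": answer}).encode("utf-8")
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

        def log_message(self, fmt, *args):
            pass  # keep the sketch quiet; remove to see request logs

    return AskHandler

# To serve for real (hypothetical port):
# HTTPServer(("localhost", 8000), make_handler(my_chain)).serve_forever()
```

The same wrapper works unchanged whether `llm` is a lambda in a test or a full LangChain chain in production.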

If you do it on the client side, you will use LangChain to have your app call OpenAI, or any other LLM or resource you want, directly.
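If you take the client-side route but still want to hide the real key, as asked above, one approach is to ship a worthless placeholder key and point the OpenAI base URL at your own key-swapping proxy. The `openai` library (and LangChain's OpenAI wrappers, which build on it) read the `OPENAI_API_BASE` environment variable, so no LangChain code needs to change; the proxy URL and placeholder value here are hypothetical:

```python
import os

# Hypothetical placeholder: safe to expose, useless without the proxy.
os.environ["OPENAI_API_KEY"] = "sk-not-a-real-key"

# Route every request through your own key-swapping proxy instead of
# api.openai.com; the proxy substitutes the real key before forwarding.
os.environ["OPENAI_API_BASE"] = "http://localhost:8080/v1"

# With these variables set, the usual LangChain code works unchanged:
# from langchain.chat_models import ChatOpenAI
# llm = ChatOpenAI()   # requests now go to the proxy, not api.openai.com
```

The proxy still needs its own way to decide which clients it will serve, or the placeholder key becomes an open door to your quota.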