Prompting through the GUI vs. the API When Using OpenAI ChatGPT

When using ChatGPT in the web UI, the model remembers what you did and can answer subsequent questions about the original prompt. However, is this also true when using the API?

If you watched the lectures from the short course on ChatGPT, the instructor mentions that an interaction with the model via the OpenAI API is stateless: the model retains nothing between calls, so each request must carry the full conversation context.

I’ve moved this thread to the appropriate forum (for the ChatGPT course).

Following up on @balaji.ambresh's precise answer, you can programmatically keep a "running" context to simulate the Web UI: append each user message and model reply to a message list and send the whole list with every request. Just keep in mind that this running context, plus the new prompt, plus the expected output, all have to fit within the model's context-window limit.
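A minimal sketch of that idea in Python. The `send` callable stands in for the real API call (in practice it would wrap something like the OpenAI client's chat completion endpoint); here it is stubbed out so the example is self-contained, and `MAX_MESSAGES` is a crude stand-in for a real token-budget check:

```python
MAX_MESSAGES = 20  # assumption: a simple proxy for the model's context limit

def chat_turn(history, user_msg, send):
    """Append the user message, call the (stateless) model with the full
    history, store the reply, and return it. `history` is mutated in place."""
    history.append({"role": "user", "content": user_msg})
    # Trim the oldest non-system messages if the running context grows too big.
    while len(history) > MAX_MESSAGES:
        del history[1]
    # Stateless call: the entire context is sent on every request.
    reply = send(history)
    history.append({"role": "assistant", "content": reply})
    return reply

# Demo with a stub "model" that just echoes the last user message.
def fake_send(messages):
    return "You said: " + messages[-1]["content"]

history = [{"role": "system", "content": "You are a helpful assistant."}]
chat_turn(history, "My name is Ada.", fake_send)
print(chat_turn(history, "What did I say earlier?", fake_send))
```

Because `history` accumulates every turn, the stub on the second call still "sees" the first message, which is exactly what the Web UI does for you behind the scenes.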

Thank you for the clarification.