Hi,

I’m working on the C1M3 Graph assignment and trying to get the model to remember my previous prompt. I’m using the suggested role, describing my problem, and providing the example. When I try to refine my request, though, the model seems to ignore or forget my previous comments. Is there a way to get ChatGPT to reference my earlier comments when I ask it to modify or clarify an answer?

# C1M3 - General prompt question - not remembering my previous question

Hi @tienle ,

Can you please share how you are doing the prompt so that we can help you?

Regards,

Samuel

Hi Samuel,

Thanks for responding. I’m using the ChatGPT 4.0 environment provided in the lab link.

Here is what I’m trying to do. I’m tackling the first part of the exercise.

I get a response, but when I try to clarify that the return values should be “dist” and “path”, it doesn’t return them or seem to understand my request.

As a software developer, Python mentor, and security expert, help me write a shortest path function. The function should be called shortest_path(self, start, end) and must return a tuple pair named “dist” and “path”. The shortest path function should use the Graph class provided in the code below. Please include comments, error handling, and guidance.

```
class Graph:
    def __init__(self, directed=False):
        """
        Initialize the Graph.

        Parameters:
        - directed (bool): Specifies whether the graph is directed. Default is False (undirected).
        Attributes:
        - graph (dict): A dictionary to store vertices and their adjacent vertices (with weights).
        - directed (bool): Indicates whether the graph is directed.
        """
        self.graph = {}
        self.directed = directed

    def add_vertex(self, vertex):
        """
        Add a vertex to the graph.
        Parameters:
        - vertex: The vertex to add. It must be hashable.
        Ensures that each vertex is represented in the graph dictionary as a key with an empty dictionary as its value.
        """
        if not isinstance(vertex, (int, str, tuple)):
            raise ValueError("Vertex must be a hashable type.")
        if vertex not in self.graph:
            self.graph[vertex] = {}

    def add_edge(self, src, dest, weight):
        """
        Add a weighted edge from src to dest. If the graph is undirected, also add from dest to src.
        Parameters:
        - src: The source vertex.
        - dest: The destination vertex.
        - weight: The weight of the edge.
        Prevents adding duplicate edges and ensures both vertices exist.
        """
        if src not in self.graph or dest not in self.graph:
            raise KeyError("Both vertices must exist in the graph.")
        if dest not in self.graph[src]:  # Check to prevent duplicate edges
            self.graph[src][dest] = weight
        if not self.directed and src not in self.graph[dest]:
            self.graph[dest][src] = weight

    def remove_edge(self, src, dest):
        """
        Remove an edge from src to dest. If the graph is undirected, also remove from dest to src.
        Parameters:
        - src: The source vertex.
        - dest: The destination vertex.
        """
        if src in self.graph and dest in self.graph[src]:
            del self.graph[src][dest]
        if not self.directed:
            if dest in self.graph and src in self.graph[dest]:
                del self.graph[dest][src]

    def remove_vertex(self, vertex):
        """
        Remove a vertex and all edges connected to it.
        Parameters:
        - vertex: The vertex to be removed.
        """
        if vertex in self.graph:
            # Remove any edges from other vertices to this one
            for adj in list(self.graph):
                if vertex in self.graph[adj]:
                    del self.graph[adj][vertex]
            # Remove the vertex entry itself
            del self.graph[vertex]

    def get_adjacent_vertices(self, vertex):
        """
        Get a list of vertices adjacent to the specified vertex.
        Parameters:
        - vertex: The vertex whose neighbors are to be retrieved.
        Returns:
        - List of adjacent vertices. Returns an empty list if vertex is not found.
        """
        return list(self.graph.get(vertex, {}).keys())

    def _get_edge_weight(self, src, dest):
        """
        Get the weight of the edge from src to dest.
        Parameters:
        - src: The source vertex.
        - dest: The destination vertex.
        Returns:
        - The weight of the edge. If the edge does not exist, returns infinity.
        """
        return self.graph[src].get(dest, float('inf'))

    def __str__(self):
        """
        Provide a string representation of the graph's adjacency list for easy printing and debugging.
        Returns:
        - A string representation of the graph dictionary.
        """
        return str(self.graph)
```

See my response and screenshot. It returns a partial solution but just doesn’t seem to complete the function. I’ve tried modifying and regenerating the request, but it keeps giving the same incomplete answer.

Hi @tienle,

I see your point. First of all, there is no single solution or straightforward fix here, because we are dealing with generative AI. However, we can take a couple of steps to try to steer the model in the right direction. For example, we can use few-shot prompting, showing GPT 1 or 2 examples of how we want the output to look (Example 1: … Example 2: …). We can also try to focus the model’s attention on the steps it needs to take to reach the final solution, via a chain of thought. There are many ways to achieve this. For example, we can build the prompt step by step: Step 1: Build this function… Step 2: Ensure that the output is a dictionary… and so on.

One needs to be creative with the prompt building, and with experience, we can quickly steer the LLM in the right direction.
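To illustrate (this is just an example prompt of my own, not from the course materials), a step-by-step version of the request for this exercise might look like:

```
As a software developer and Python mentor, help me implement the method
shortest_path(self, start, end) on the Graph class I will paste below.

Step 1: Use Dijkstra's algorithm over self.graph, which maps each vertex
        to a dict of {neighbor: weight}.
Step 2: Make the function return exactly a tuple (dist, path), where dist
        is the total path weight and path is the ordered list of vertices.
Step 3: Add comments and error handling for vertices that do not exist.

Example of the expected output format:
>>> g.shortest_path('A', 'C')
(3, ['A', 'B', 'C'])
```

Repeating the Step structure and the example output in each new chat helps anchor the model on the required return format.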

I hope this helps.

Regards,

Samuel

You can give it the already implemented Graph class and the empty method, and ask it to implement that function, focusing on efficiency and low time complexity. Then you should be able to get a working function.

I think the issue I’ve been seeing is the same one described in the other forum thread, “LLM environment keeps locking up!”. The LLM becomes unresponsive after several prompts or after regenerating a response, and I’ve seen it error out with a pop-up that contains no message.

Hi there @tienle

It seems you are having the exact same issue I was having. I would have to start multiple conversations and still end up where I started somehow since the LLM would not remember my request.

Someone in another thread suggested using the ChatGPT website, which I have added to my strategy. It works like it does on your phone: once you’ve reached the response limit, you have to wait about 3 hours. The wait is long, but I figure it’s more helpful than starting from scratch with a new conversation.

Hope this was helpful.

Thanks folks for the suggestions.

Here is what I did, for those who get stuck on this issue.

- In the lab-provided ChatGPT, I ended up creating a “New Chat” per exercise or function. The LLM seems to become unresponsive after 3 or 4 prompts. I wish there was a disclaimer on the lab or a note somewhere to let folks know this is a common problem; not a great user experience.
- Try to pack as much information as possible into the first prompt, and re-use it whenever you create a “New Chat”. This gives you at least 2 or 3 more exchanges to clarify your questions. Once you get a satisfactory response and finish the exercise, copy over what you need from your primary prompt and just update the function or issue you are trying to solve in the next “New Chat”. I ended up creating 4 new chats.

Hope this helps.
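For anyone who wants a reference point, here is a rough sketch of the kind of shortest_path function this exercise is after. This is my own Dijkstra-based version using heapq, written as a standalone function over the same adjacency-dict structure the Graph class keeps in self.graph; it is not the official lab solution.

```python
import heapq

def shortest_path(graph, start, end):
    """Return (dist, path) for the lowest-weight route from start to end.

    graph maps each vertex to a dict of {neighbor: weight}, the same
    structure the lab's Graph class stores in self.graph.
    Returns (float('inf'), []) when no path exists.
    """
    if start not in graph or end not in graph:
        raise KeyError("Both vertices must exist in the graph.")
    dist = {v: float('inf') for v in graph}
    prev = {}                       # predecessor of each settled vertex
    dist[start] = 0
    heap = [(0, start)]             # priority queue of (distance, vertex)
    visited = set()
    while heap:
        d, u = heapq.heappop(heap)
        if u in visited:
            continue                # stale queue entry; skip it
        visited.add(u)
        if u == end:
            break                   # end vertex settled; done
        for v, w in graph[u].items():
            nd = d + w
            if nd < dist[v]:
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    if dist[end] == float('inf'):
        return float('inf'), []
    # Walk predecessors back from end to start to rebuild the path.
    path = [end]
    while path[-1] != start:
        path.append(prev[path[-1]])
    path.reverse()
    return dist[end], path

# Small undirected example: A-B (weight 1), B-C (weight 2), A-C (weight 5).
g = {
    'A': {'B': 1, 'C': 5},
    'B': {'A': 1, 'C': 2},
    'C': {'A': 5, 'B': 2},
}
print(shortest_path(g, 'A', 'C'))  # (3, ['A', 'B', 'C'])
```

To drop this into the class, make it a method and replace the graph parameter with self.graph.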