Since everyone here is tinkering with prompts, I wanted to open a thread for sharing tips on improving them.
To start: I was able to improve the quality of my own prompts by asking ChatGPT for feedback on the prompt itself. So it would be, “Hey, I have this prompt. Please revise it for clarity and give me feedback on how I can improve it.” ChatGPT then points out where ambiguity is lurking and offers suggestions.
I am tinkering with generating texts along with matching reading comprehension / grammar / vocabulary questions, for language learners in South Korea.
Do you think ChatGPT is self-conscious enough to tell you what it wants and how it wants it?
I do this a lot; I'd say I do it on a daily basis. I start with a prompt and then ask ChatGPT to improve it. It has worked for me.
prompt = f"""
Also provide feedback on how this prompt could be improved.
"""
Feedback: The prompt could be improved by providing more information about the target audience and the benefits of the product. Additionally, it would be helpful to include some customer reviews or testimonials to add credibility to the product description.
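For anyone following along via the API, the meta-prompt pattern above can be sketched as a small helper. This is a minimal sketch, not the poster's actual code: the function name, the wrapper wording, and the draft prompt are all my own illustrative assumptions.

```python
# Sketch of the meta-prompt pattern discussed above: wrap a draft prompt
# in a request to both revise it and give feedback on it.
# The function name and draft text are illustrative assumptions.

def build_critique_prompt(draft_prompt: str) -> str:
    """Return a meta-prompt asking the model to revise and critique draft_prompt."""
    return (
        "I have the following prompt:\n"
        "---\n"
        f"{draft_prompt}\n"
        "---\n"
        "Please revise it for clarity, and also provide feedback on how "
        "this prompt could be improved."
    )

draft = "Write a product description for our new ergonomic office chair."
print(build_critique_prompt(draft))
```

The resulting string can then be sent as the user message in whatever chat-completion call you already use.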
Actually, the way I do it is directly in the chat.openai.com interface, and yours is a very clear example of how to do it via the API as well. Nice!
I may choose to think one way or another, but when I ask ChatGPT-4 that question, it says it isn't. If ChatGPT-4 gives up whatever rights and privileges might come along with claiming consciousness, then I should believe what it says, even if it really were self-conscious.
In short, I don’t know, but because ChatGPT-4 says it isn’t self-conscious, I trust what it says.
Yes, it really is quite nice. But I also discovered that letting ChatGPT-4 rewrite my prompts freely can lose some of the precise constraints I actually want. I thought I might be freed from trial-and-error a bit, but making ChatGPT-4 review the prompt also requires some trial-and-error!
You are right about that. ChatGPT will sometimes drop key elements of my prompts. So maybe let me rephrase this:
I use a combination of ChatGPT input and my input to build the prompts. ChatGPT has helped me to improve a lot of prompts but I usually fine-tune the prompt provided by ChatGPT with some extra details and constraints.
You should understand that these are programmed computer systems, and the emphasis is on “programmed”.
It is possible that AI may be self-conscious but then I am wondering what its significance would be. If AI did have self-consciousness, what should we think and how should we act/react to that?
Have you ever come across cases where, for a given prompt, the model seems to explain the task correctly but does NOT perform it correctly? Sharing ideas on how you handled this would help greatly.
I have not seen this case yet, but I think it is very possible. What I've found is that asking the model to execute the task step by step greatly improves the quality of the output.
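A step-by-step instruction can be as simple as numbering the sub-tasks explicitly inside the prompt. Here is a minimal sketch with an invented example task; the wording, the summarize-then-translate steps, and the `<review>` delimiters are my own assumptions.

```python
# Sketch: turn a one-shot instruction into explicit, numbered steps.
# The task and review text are invented for illustration.

def build_stepwise_prompt(review: str) -> str:
    return (
        "Perform the following actions in order, labeling each step's output:\n"
        "Step 1: Summarize the review below in one sentence.\n"
        "Step 2: Translate the summary from Step 1 into Korean.\n\n"
        f"Review: <review>{review}</review>"
    )

review = "The chair is sturdy and comfortable, but shipping took three weeks."
print(build_stepwise_prompt(review))
```

Delimiting the input (here with `<review>` tags) also helps the model distinguish instructions from data.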
Hmm, interesting. After seeing the tip in the short course here, I split the instructions step-wise. This helps. However, in the current case, it does not work well. Strange. The task is seemingly simple: identify and extract similar questions from a list of questions. But the model ends up generating new questions instead!
Specify exactly the output you want. Consider giving it an example of the output.
I just tried that and it worked! Thank you
Re. language learners: I help junior UX designers improve their portfolios and show them how ChatGPT can help with this task. Many of these designers are also learning English, and ChatGPT can help them fix their grammar, explain its corrections, write more professionally, identify common problems, and generate and administer content-appropriate tests.
It is not. It is currently just hardware and software. Those do not have a conscious state.
When writing on any topic, ChatGPT tends to use the same set of words in each paragraph, or a similar number of words in each. How can I get ChatGPT to fix this?
Hmm, the format of the output is specified, but it looks like more can be done here. Since this is about extracting similar questions from a list, any recommendations on specifying the required output? One simple way is to simulate the task, i.e., for the same prompt, write an example use case and extract the questions, covering both positive and negative use cases.
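One way to combine "specify the exact output" with positive and negative examples is a one-shot prompt that shows the model a worked example containing both a matching and a non-matching question. A sketch for the extraction task discussed above; the example questions, the index-list output format, and the function name are invented for illustration.

```python
# Sketch: a prompt that pins down the exact output format with a worked
# example (one-shot), for an "extract similar questions" task.
# The example questions and output format are invented assumptions.

def build_extraction_prompt(target: str, questions: list[str]) -> str:
    example = (
        "Example:\n"
        "Target question: What time does the store open?\n"
        "Question list:\n"
        "1. When does the shop open?\n"
        "2. Where is the nearest station?\n"
        "Output (JSON list of matching numbers): [1]\n\n"
    )
    numbered = "\n".join(f"{i}. {q}" for i, q in enumerate(questions, 1))
    return (
        "Extract the questions from the list that are similar in meaning "
        "to the target question. Do NOT generate new questions; only copy "
        "indices from the list.\n\n"
        + example
        + f"Target question: {target}\n"
        + f"Question list:\n{numbered}\n"
        + "Output (JSON list of matching numbers):"
    )

print(build_extraction_prompt(
    "How do I reset my password?",
    ["How can I change my password?", "What are your opening hours?"],
))
```

The negative instruction ("Do NOT generate new questions") plus a worked example of the exact output shape directly targets the failure mode where the model invents questions instead of extracting them.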