Prompting Considered Harmful
As systems graduate from labs to the open world, moving beyond prompting is central to ensuring that AI is useful, usable, and safe for end users as well as experts.
As a computer scientist with one foot in artificial intelligence (AI) research and the other in human-computer interaction (HCI) research, I have become increasingly concerned that prompting has transitioned from what was essentially a test and debugging interface for machine-learning (ML) engineers into the de facto interaction paradigm for end users of large language models (LLMs) and their multimodal generative AI counterparts. It is my professional opinion that prompting is a poor user interface for generative AI systems, and one that should be phased out as quickly as possible.
My concerns about prompting are twofold. First, prompt-based interfaces are confusing and suboptimal for end users (and ought not to be conflated with true natural-language interaction). Second, prompt-based interfaces are also risky for AI experts: we risk building a body of applications and research atop the shaky foundation of prompt engineering. I discuss each of these issues in turn below.