Andrew Ng says giving AI 'lazy' prompts is sometimes OK. Here's why.
Briefly

Andrew Ng advocates 'lazy prompting' as an effective strategy for working with large language models (LLMs): providing minimal context and letting the model infer the user's intent. He cites developers debugging code, who often paste lengthy error messages into an LLM with no further instructions and still get useful results. Ng describes this as an 'advanced' technique, best used when the LLM already has enough context and reasoning ability to fill in the gaps, and notes its limits in scenarios that require detailed prompts or extensive context.
Lazy prompting means giving an LLM minimal context; in some scenarios it is efficient because the model can infer intent without many specifics (a brief sketch follows these points).
Ng stated, 'When debugging code, many developers copy-paste error messages—sometimes pages of them—into an LLM without further instructions.'
Lazy prompting is an advanced technique that works best when the LLM has enough pre-existing context and the ability to infer intent.
Ng emphasized, 'It's not useful if the LLM needs a lot of context to provide a detailed response or if it's unable to detect hidden bugs.'
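To make the idea concrete, here is a minimal sketch of a lazy prompt in the debugging scenario Ng describes: a raw traceback is pasted as the entire message, with no explanation of the code or of what kind of help is wanted. The OpenAI Python client, the model name, and the example traceback are illustrative assumptions, not details from the article; any chat-style LLM API would work the same way.

```python
# Minimal sketch of a "lazy prompt": the raw traceback is the entire message,
# with no description of the program and no request for a specific kind of help.
# Assumes the OpenAI Python client (illustrative choice, not from the article).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A traceback copied verbatim from a failed run -- nothing else is added.
lazy_prompt = """Traceback (most recent call last):
  File "report.py", line 42, in <module>
    total = sum(row["amount"] for row in rows)
  File "report.py", line 42, in <genexpr>
    total = sum(row["amount"] for row in rows)
TypeError: unsupported operand type(s) for +: 'int' and 'str'
"""

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": lazy_prompt}],
)

# The model is left to infer the intent: diagnose the error and suggest a fix.
print(response.choices[0].message.content)
```

The point of the sketch is what is absent: no framing, no question, no background on the code. Whether that is enough depends, as Ng notes, on how much context the model needs to give a useful answer.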
Read at Business Insider