# Why Prompt Engineering Still Matters
With every new model release, people ask whether prompting still matters: surely modern AI is smart enough to understand plain English? The answer is yes, it still matters, and in some ways more than ever.
## The Fundamentals
**Be specific about format.** "Write a summary" gives you something. "Write a 3-bullet executive summary of the following, each bullet under 20 words, in plain English" gives you exactly what you need.
**Provide context and role.** "You are a senior UK employment lawyer" immediately improves the quality and accuracy of legal questions. Role context shapes the model's frame of reference.
**Use examples.** Few-shot prompting — showing the model 2–3 examples of the output you want before asking — consistently outperforms zero-shot for structured tasks.
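The few-shot pattern above can be sketched as a small prompt builder. The task (sentiment labelling), the example texts, and the `Text:`/`Label:` layout are all illustrative choices, not from the article:

```python
# Sketch of assembling a few-shot prompt for a sentiment-labelling task.
# The examples and the Text:/Label: layout are hypothetical illustrations.

EXAMPLES = [
    ("The delivery was two days late and nobody answered my emails.", "negative"),
    ("Setup took five minutes and it worked first time.", "positive"),
    ("The package arrived on Tuesday.", "neutral"),
]

def build_few_shot_prompt(task: str, examples: list[tuple[str, str]], query: str) -> str:
    """Prefix the query with 2-3 worked examples so the model can infer the pattern."""
    lines = [task, ""]
    for text, label in examples:
        lines.append(f"Text: {text}")
        lines.append(f"Label: {label}")
        lines.append("")
    lines.append(f"Text: {query}")
    lines.append("Label:")  # end mid-pattern so the model completes the label
    return "\n".join(lines)

prompt = build_few_shot_prompt(
    "Classify the sentiment of each text as positive, negative, or neutral.",
    EXAMPLES,
    "The interface is fine but the export feature keeps crashing.",
)
print(prompt)
```

Ending the prompt mid-pattern (`Label:` with nothing after it) is what nudges the model to continue in the demonstrated format rather than answer conversationally.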
## Chain-of-Thought Prompting
Adding "Think step by step" to your prompt is not a gimmick. It genuinely improves performance on reasoning tasks by forcing the model to externalise its reasoning process. For complex multi-step problems, this is essential.
Example: "Think step by step: If a company has 150 employees and reduces headcount by 12%, how many remain?"
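The arithmetic the model should externalise in that example is a single reduction step, shown here as plain Python:

```python
# The worked example from the text: 150 employees, 12% headcount reduction.
employees = 150
reduction = 0.12

remaining = employees * (1 - reduction)  # 150 * 0.88 = 132.0
print(int(remaining))  # 132
```

A correct chain-of-thought answer should surface exactly these intermediate values (the 88% retention rate, then 132) rather than jumping straight to the final number.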
## Structured Output
Modern LLMs are excellent at outputting JSON, XML, or Markdown if you ask precisely. Use schema definitions to get consistent structure:
```
Return your analysis as JSON with these exact fields:
{
  "summary": "string (max 100 words)",
  "sentiment": "positive | negative | neutral",
  "key_actions": ["string", "string", "string"]
}
```
## What Doesn't Work

**Begging and emphasis.** Adding "please be very accurate" or "this is extremely important" rarely changes output quality; a concrete constraint ("cite the clause number for every claim") does.

**Contradictory constraints.** Asking for "a thorough yet extremely brief" answer forces the model to guess which instruction wins. Pick one and say so.
## The Meta-Skill
The best prompt engineers aren't just good at writing prompts — they're good at understanding what the model is doing and why, so they can diagnose failures and iterate quickly. Our AI Prompt Engineer certification covers all of this systematically.