Prompts tuned for one LLM (e.g., GPT) may underperform on others; rewriting them aligns the instructions with the target model's own defaults and conventions.
A minimal comparison loop; `model.generate` and `evaluate` stand in for your target-model client and task metric, and the prompt variables are placeholders:

```python
# Compare the original and rewritten prompts on the same task.
prompts = {"original": original_prompt, "rewritten": rewritten_prompt}
for name, prompt in prompts.items():
    response = model.generate(prompt)   # call the target model
    score = evaluate(response)          # task-specific metric or judge
    print(f"{name}: {score:.3f}")
```
4. **Iterate**: Add few-shot examples if the rewritten prompt still underperforms, as sketched below.
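A minimal sketch of that iteration step, assuming a hypothetical `with_few_shot` helper and illustrative demonstration pairs (none of these names come from a specific library):

```python
# Prepend a few demonstrations to the rewritten prompt (illustrative data).
few_shot_examples = [
    ("Summarize: The meeting ran long.", "The meeting exceeded its scheduled time."),
    ("Summarize: Sales rose 5% in Q2.", "Q2 sales grew by five percent."),
]

def with_few_shot(prompt: str, examples: list[tuple[str, str]]) -> str:
    # Format each (input, output) pair as a demonstration block.
    demos = "\n\n".join(f"Input: {x}\nOutput: {y}" for x, y in examples)
    return f"{demos}\n\n{prompt}"

print(with_few_shot("Analyze logically: summarize the report.", few_shot_examples))
```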
## Example
A prompt tuned for GPT, such as "Think step-by-step...", might be rewritten for Claude as "Analyze logically: [task]".
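One lightweight way to express such rewrites is a per-model template table; the template strings and model keys below are illustrative assumptions, not a fixed mapping:

```python
# Map a target model family to an illustrative prompt template (assumed values).
TEMPLATES = {
    "gpt": "Think step-by-step to solve the following task.\n\n{task}",
    "claude": "Analyze logically: {task}",
}

def build_prompt(target_model: str, task: str) -> str:
    # Fall back to the raw task if the model family has no template.
    template = TEMPLATES.get(target_model, "{task}")
    return template.format(task=task)

print(build_prompt("claude", "Summarize the quarterly report in three bullet points."))
```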
## Evaluation
- Metrics: automatic overlap scores such as ROUGE for reference-based tasks (e.g., summarization), plus human preference for open-ended outputs; see the scoring sketch after this list.
- Trade-offs: brevity vs. guidance; shorter rewritten prompts are cheaper, but may give the target model too little steering.
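A minimal scoring sketch, assuming the `rouge_score` package and a reference summary on hand (the strings here are illustrative):

```python
from rouge_score import rouge_scorer

# Score one model response against a reference summary.
reference = "Q2 sales grew five percent, driven by the new product line."
response = "Sales rose 5% in Q2 thanks to the new product line."

scorer = rouge_scorer.RougeScorer(["rouge1", "rougeL"], use_stemmer=True)
scores = scorer.score(reference, response)
print(scores["rougeL"].fmeasure)  # F-measure of longest-common-subsequence overlap
```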
## Conclusion
Prompt rewriting improves portability across models, but always validate rewritten prompts on the target model.