Leverage Claude's exceptional reasoning, long context, and nuanced writing for deep work.
Claude is purpose-built for sustained, nuanced intellectual work. Its 200K token context window means you can feed it entire codebases, books, or research papers in a single prompt. Claude tends toward careful, hedged reasoning — which is a strength for analysis but can produce overly cautious outputs when you need decisive recommendations. Knowing how to push through that caution is key.
Each prompt is annotated with the reasoning behind its structure.
I'm going to paste a 40-page investor report. After reading it, answer: 1) What is the company's core thesis for growth? 2) What risks do they acknowledge vs what risks are conspicuously absent? 3) Are any of the financial projections internally inconsistent? Be specific — cite page numbers and quote relevant passages. [paste document]
Claude's long context makes it uniquely suited to full-document analysis. Asking for absent risks and internal inconsistencies prompts genuinely critical reading rather than a recitation of the document's claims, and requiring page citations and direct quotes keeps the answers grounded in the text.
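If you run this prompt programmatically, it helps to assemble the document and the questions in a fixed order. A minimal sketch in Python; the function name and the `<document>` tag are illustrative choices (wrapping pasted material in XML-style tags is a commonly recommended pattern for Claude), not part of the original prompt:

```python
def build_analysis_prompt(document: str) -> str:
    """Assemble the full-document analysis prompt.

    Placing the document before the questions, wrapped in XML-style
    tags, helps Claude separate source material from instructions.
    """
    questions = (
        "1) What is the company's core thesis for growth?\n"
        "2) What risks do they acknowledge vs what risks are "
        "conspicuously absent?\n"
        "3) Are any of the financial projections internally "
        "inconsistent?\n"
        "Be specific -- cite page numbers and quote relevant passages."
    )
    return (
        f"<document>\n{document}\n</document>\n\n"
        f"After reading the document above, answer:\n{questions}"
    )
```

The same wrapper works for any long paste: swap the question block, keep the document-first ordering.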
I need a direct, confident recommendation — not a hedged overview of considerations. Based on the two marketing strategies I described, which one should I pursue and why? Give me your actual recommendation in the first sentence, then support it.
Claude's training inclines it to hedge. Explicitly requesting a direct recommendation in the first sentence forces the structure you want; this is one of the most practically useful Claude-specific techniques.
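Via the API, you can push even harder by ending the message list with a partial assistant turn (a "prefill"), a documented Claude technique: the model must continue from that text, so the recommendation literally comes first. A sketch under that assumption; the helper name and the exact prefill wording are illustrative:

```python
def build_recommendation_request(strategies: str) -> dict:
    """Build a Messages API payload that resists hedging."""
    user_prompt = (
        "I need a direct, confident recommendation -- not a hedged "
        "overview of considerations. Based on the two marketing "
        "strategies below, which one should I pursue and why? Give me "
        "your actual recommendation in the first sentence, then "
        "support it.\n\n" + strategies
    )
    return {
        "messages": [
            {"role": "user", "content": user_prompt},
            # Prefilled assistant turn: the reply continues from here,
            # so the first words of the response are a recommendation.
            {"role": "assistant", "content": "My recommendation:"},
        ]
    }
```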
Rewrite the following paragraph 4 times: Version A: formal academic tone, passive voice, third person. Version B: conversational blog post, second person, contractions allowed. Version C: bullet points, each under 12 words, action verbs only. Version D: as a tweet thread, max 5 tweets at 280 chars each. [paste paragraph]
Claude is exceptionally good at holding many simultaneous constraints, where models such as GPT-4o often drop a few. Structured multi-output prompts like this are where Claude outperforms most models.
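Keeping the version specs in one place makes the constraints easy to tweak and verify. A minimal sketch; the spec dictionary and helper name are illustrative scaffolding around the prompt above:

```python
VERSION_SPECS = {
    "A": "formal academic tone, passive voice, third person",
    "B": "conversational blog post, second person, contractions allowed",
    "C": "bullet points, each under 12 words, action verbs only",
    "D": "a tweet thread, max 5 tweets at 280 chars each",
}

def build_rewrite_prompt(paragraph: str, specs: dict = VERSION_SPECS) -> str:
    """Emit one labelled constraint line per version, so each
    constraint is easy for the model to track and for you to check."""
    lines = [f"Rewrite the following paragraph {len(specs)} times:"]
    for label, spec in specs.items():
        lines.append(f"Version {label}: {spec}.")
    lines.append("")
    lines.append(paragraph)
    return "\n".join(lines)
```

Adding or removing a version is a one-line change to the dictionary; the prompt renumbers itself.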
Review this Python function for: 1) correctness (does it do what the docstring claims?), 2) edge cases it doesn't handle, 3) performance with 1M input records, 4) security issues if inputs come from untrusted users. For each issue, rate severity: Critical / High / Medium / Low. [paste code]
Claude's careful reasoning style is a strength for code review. Severity ratings force prioritization. Specifying the scale of input (1M records) and threat model (untrusted users) gives Claude the context needed to make relevant performance and security judgments.
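A fixed rating scale also makes the response machine-readable. A small sketch of post-processing the review, assuming each finding sits on its own line and carries exactly one of the requested labels (which is what pinning the scale in the prompt buys you); the helper name is illustrative:

```python
import re

SEVERITIES = ("Critical", "High", "Medium", "Low")

def tally_findings(review: str) -> dict:
    """Count severity-tagged findings in a review response."""
    counts = {sev: 0 for sev in SEVERITIES}
    for line in review.splitlines():
        for sev in SEVERITIES:
            # Word-boundary match so "High" does not match "Higher".
            if re.search(rf"\b{sev}\b", line):
                counts[sev] += 1
                break
    return counts
```

Tallies like this let you gate a CI step on, say, zero Critical findings.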
I believe remote work increases developer productivity. Steelman the opposite position using the best available evidence. Then identify the 2-3 conditions under which each side is most correct. Finally, tell me what evidence would change your assessment.
Asking Claude to steelman — not just argue against — produces genuinely strong counterarguments. The conditional framing ('under which conditions') and the epistemic question ('what would change your assessment') generate the kind of nuanced analysis Claude does better than most models.
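The pattern generalizes to any position you hold, so it is worth templating. A minimal sketch; the function name is an illustrative choice, and the claim should be phrased so it reads naturally after "I believe":

```python
def build_steelman_prompt(claim: str) -> str:
    """Wrap any held position in the steelman / conditions /
    evidence-that-would-change-your-mind structure."""
    return (
        f"I believe {claim}. "
        "Steelman the opposite position using the best available "
        "evidence. Then identify the 2-3 conditions under which each "
        "side is most correct. Finally, tell me what evidence would "
        "change your assessment."
    )
```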
Browse our full library of Claude prompts.