How to Write Your First AI Prompt (Without the Jargon)
You don't need a prompt engineering course. You just need to understand three things: context, specificity, and iteration. Here's how to get useful results from AI on your first day.
TL;DR
Good prompts aren't about magic formulas. Give the AI context about who you are and what you need, be specific about what you want, and treat your first prompt as a starting point, not a final request. Most bad results come from prompts that are too short and too vague.
The first time I used ChatGPT, I typed "write a blog post about Python" and got something so generic I thought the tool was broken. It wasn't. I just had no idea how to talk to it.
Three years and thousands of prompts later, here's what I wish I'd known on day one.
The Only Three Things That Matter
Forget templates. Forget "act as a world-class copywriter with 20 years of experience." The stuff that actually makes a difference comes down to three things.
1. Tell It Who You Are
Every prompt is missing context the AI can't see. Your job title. Your skill level. Why you're asking. What you're going to do with the answer.
A short setup sentence changes everything:
I'm a marketing manager who doesn't know how to code. Explain APIs to me using analogies, not code examples.
Now the AI knows to skip the technical details and focus on concepts. One sentence. Huge difference.
If you're going to use AI regularly, set up Custom Instructions (in ChatGPT: Settings → Personalization). Tell it your role, preferred tone, and what to avoid. This saves you from repeating context in every prompt.
2. Be Annoyingly Specific
Vague prompts get vague answers. This is the single most common mistake.
Bad:
Write me a cover letter.
Better:
I'm applying for a project manager role at a 50-person SaaS startup. I have 4 years of experience in logistics, not tech. Write a cover letter that addresses the career change honestly and emphasizes transferable skills: managing timelines, coordinating teams, and dealing with unclear requirements.
The second prompt works because it answers the questions the AI would otherwise guess at: what job, what company, what background, what concern, what to emphasize.
You don't need special vocabulary. You need to include the details that are already in your head but not in your prompt.
3. Iterate Like You're Editing, Not Like You're Searching
Google is one query, one answer. AI is a conversation.
My typical workflow:
- Write a decent prompt → get an okay response
- "This paragraph is too formal, make it conversational" → better
- "The second example doesn't make sense, replace it with something about email marketing" → good
- "Cut the introduction, it's too long" → done
Each step takes five seconds. The final result is usually 3-4x better than the first response. The people who think AI produces mediocre output usually stopped after step 1.
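If you ever move this workflow from the chat window into a script, the same back-and-forth maps onto the message list most chat APIs accept. Below is a minimal sketch using the common role/content message shape (OpenAI-style); the exact field names are an assumption if your provider differs, and the assistant contents are placeholders:

```python
# Sketch of the "iterate like you're editing" workflow as a chat-API
# message list. Each refinement is just one more short user turn; the
# model sees the full editing history because every turn stays in the list.
messages = [
    {"role": "user", "content": "Write a 200-word intro about email deliverability for small-business owners."},
    {"role": "assistant", "content": "<first draft comes back here>"},
    {"role": "user", "content": "This is too formal. Make it conversational."},
    {"role": "assistant", "content": "<revised draft>"},
    {"role": "user", "content": "Cut the first paragraph, it's too long."},
]

# Count the user turns: one initial prompt plus two five-second refinements.
user_turns = [m for m in messages if m["role"] == "user"]
print(len(user_turns))
```

The point is that refinements are cheap: each one is a single short message appended to the conversation, not a rewrite of the original prompt.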
A Real Prompt, Step By Step
Here's a prompt I actually used last week:
I'm writing a tutorial about React hooks for junior developers who've only used class components. Explain useState with a before-and-after example. The "before" should use a class component, the "after" should use a function component with hooks. Keep it under 300 words. Don't explain what React is, they already know.
Let me break down what each part does:
- "I'm writing a tutorial about React hooks for junior developers...", context: who I am, who the audience is
- "Explain useState with a before-and-after example", the specific task
- "The 'before' should use a class component...", format constraint
- "Keep it under 300 words", length constraint
- "Don't explain what React is", what to avoid
Every piece of this prompt reduces the chance the AI will go in a direction I don't want.
What Not to Do
Don't ask yes/no questions. "Is React good?" → pointless. "What are three situations where React is a bad choice, and why?" → useful.
Don't accept the first answer. The first response is the AI's best guess at what you want. Your job is to refine that guess.
Don't trust numbers or facts without checking. AI confidently invents statistics, dates, and citations. If something sounds specific, verify it.
Don't write one giant prompt for a complex task. Break it down. "Write a marketing plan" will get you generic garbage. "Brainstorm 5 angles for marketing a developer tool" → "For angle #2, write a one-page outline" → way better.
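That "break it down" advice also works as a tiny pipeline if you're scripting: each step's output feeds the next, narrower prompt. The `ask()` function below is a hypothetical stand-in for whatever chat call you actually use, not a real API:

```python
# Hypothetical stand-in for a real chat-API call; swap in your provider's SDK.
# It just echoes the prompt so the chaining pattern is runnable on its own.
def ask(prompt: str) -> str:
    return f"[model response to: {prompt[:40]}...]"

# Step 1: narrow the space before asking for any deliverable.
angles = ask("Brainstorm 5 angles for marketing a developer tool.")

# Step 2: pick one result and go a level deeper, feeding the previous
# output back in, instead of asking for the whole plan in one giant prompt.
outline = ask("For the second angle in the list below, write a one-page outline:\n" + angles)
```

Two small prompts with a hand-off between them beat one "write a marketing plan" prompt, because each step gives you a checkpoint to steer at.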
When You're Ready for More
Once you've got the basics, two things will level up your results quickly:
Find a good starter prompt for your main use case. Writing emails? Spend 10 minutes crafting one solid base prompt with your preferred tone, typical audience, and common constraints. Save it. Reuse it. Tweak per situation.
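A low-tech way to keep that starter prompt reusable is a template with slots for the details that change per situation. A minimal sketch, where the slot names (audience, goal, constraint) are purely illustrative, not any standard:

```python
# A saved base prompt with slots for the per-email details.
# Slot names below are illustrative choices, not a convention.
BASE_PROMPT = (
    "I'm a marketing manager writing to {audience}. "
    "Tone: friendly but direct, no buzzwords. "
    "Goal: {goal}. Keep it under 150 words. {constraint}"
)

def build_prompt(audience: str, goal: str, constraint: str = "") -> str:
    # The fixed parts (role, tone, length) ride along for free every time.
    return BASE_PROMPT.format(
        audience=audience, goal=goal, constraint=constraint
    ).strip()

prompt = build_prompt(
    audience="a customer who churned last month",
    goal="invite them to a 15-minute feedback call",
    constraint="Don't offer a discount.",
)
```

Ten minutes writing the base prompt once, five seconds filling the slots each time after that.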
Learn what the AI is bad at. Every model has blind spots. ChatGPT overexplains. Claude is overly cautious about certain topics. Gemini sounds corporate. Knowing these tics lets you compensate in your prompts instead of fighting the output.
FAQ
Do I need to learn prompt engineering?
No. The term "prompt engineering" makes it sound like a technical discipline. It's not. It's clear communication. If you can write a good email brief to a colleague, you can write a good prompt.
Why do some people get better results than others with the same AI?
Two reasons. One: they're more specific in their prompts. Two: they iterate more. That's honestly most of the gap.
How long should my prompt be?
Long enough to include the context the AI needs, short enough that you're not rambling. Most good prompts are 3-5 sentences plus any examples or constraints. If your prompt could fit in a tweet, it's probably too vague.
Can I use the same prompt style for ChatGPT, Claude, and Gemini?
Mostly. Claude handles longer, more detailed prompts slightly better. Gemini takes prompts more literally, so be extra specific about what you don't want. ChatGPT is the most forgiving of vague prompts. But the same principles apply across all three.