AI is a useful second-opinion reviewer for covered-call decisions, but only if you stop asking it to invent strikes and premiums and start pasting in real numbers from your broker.
Most “AI prompts for covered calls” lists ask the model to do the one thing it cannot do: see the option chain. They want strike recommendations, premium estimates, an annualised yield built from numbers the model never had. The output looks confident. It is also fiction.
These seven prompts do the opposite. Each is built around a real decision moment in a covered-call trade — should I write calls on this name at all, what does the IV regime look like, which strike fits my intent, what’s about to blow up inside my contract window, do I roll or let it assign — and each one forces me to bring the data. The AI is not the screener. It is the reviewer.
I run these against my own positions, mostly on BMNR and a few other names where I’m running the wheel. Some I run every trade. Some only when something changes. None of them replaces the broker — they sit on top of it.
The seven AI prompts for covered calls: (1) position fit check, (2) IV regime read, (3) strike selection sanity check, (4) event-window check, (5) roll, close, or let it assign, (6) post-trade review, (7) income-strategy stress test.
The discipline
Two rules underpin every prompt below.
One. The AI cannot see live option chain data. Ask it for strikes and premiums and it will invent plausible-sounding numbers that don’t exist on any chain you can actually trade. Every prompt below either requires you to paste in the chain snapshot or explicitly tells the model not to make the figures up. If you take one thing away from this post, take that.
Two. Every prompt uses the Prompt Stack — ROLE, FILTER, RISK, VERDICT. ROLE keeps the model out of cheerleader mode. FILTER forces real data in and separates fact from assumption. RISK is the bit retail options sellers most often skip. VERDICT prevents the “it depends” non-answer that makes AI useless for decisions.
Drop either rule and you’re back in the territory of the prompt libraries that tell you to ask AI for “the best covered call to sell today”.
Prompt 1: Position fit check (should I write calls on this at all?)
Decision served: Before any strike is chosen, is this a name you should be selling calls on in the first place? Catches the most common retail mistake: chasing premium on a stock you actually want to keep, or one whose volatility makes the income-versus-upside trade obviously bad.
Why it earns its place: Every other prompt library skips this and jumps straight to strike selection. That is backwards. If you wouldn’t sell the shares at any reasonable strike, you shouldn’t be writing calls.
ROLE: Act as a sceptical portfolio reviewer, not an options coach. You are reviewing whether [COMPANY NAME] is suitable for a covered-call overlay given my stated thesis. You are not allowed to recommend strikes or premiums — that comes later.
FILTER: My thesis on [COMPANY NAME] is: [ONE-SENTENCE THESIS]. My time horizon is: [HORIZON, e.g. 6 months / multi-year hold]. My intent if assigned: [HAPPY TO SELL / WOULD BE ANNOYED / WOULD HAVE TO REBUY]. Separate the observable facts about this name (sector, typical 30-day move, beta, dividend status if known to you) from the parts of my thesis that are assumption.
RISK: Identify the three scenarios in which writing covered calls on this name would actively damage my thesis — for example, capping a re-rating, forcing a tax event, or locking me into rolling a runaway. Be concrete.
VERDICT: One of three answers — “Suitable overlay”, “Suitable only on tactical strength”, or “Don’t write calls on this”. State your confidence (low / medium / high) and the single condition that would flip your view.
What good output looks like: A clear verdict, risks tied to this stock rather than options in general, and explicit acknowledgement of which parts the AI is guessing on. If the model starts spitting out strike prices, the prompt has failed — start again.
Where it falls short: The AI doesn’t know your tax situation, your other positions, or your real cost basis. This is a thesis check, not a portfolio check.
Prompt 2: IV regime read
Decision served: Are option premiums on this name expensive or cheap relative to their own history? This determines whether you’re being paid enough to take the assignment risk.
Why it earns its place: IV regime is the input retail sellers understand least. Most prompt libraries either ignore it or assume the AI can fetch it. It can’t, reliably. This prompt makes you bring the data and asks the model to do the thing it’s good at — interpretation.
ROLE: Act as a volatility analyst. You cannot see live data. Work only from the numbers I paste below.
FILTER: Here is the current implied volatility picture for [COMPANY NAME], pulled from [SOURCE, e.g. broker / Barchart / Market Chameleon] on [DATE]:
- IV30: [VALUE]%
- IV Rank (52-week): [VALUE]
- IV Percentile (52-week): [VALUE]
- Realised 30-day vol: [VALUE]% (if available)
- Next known catalyst: [EARNINGS DATE / DIVIDEND / NONE]
Explain in plain English what this combination tells me about the current pricing of premium on [COMPANY NAME]. Distinguish IV Rank from IV Percentile and tell me which is more useful for this specific reading.
RISK: Name two reasons IV could be elevated for a structural reason (i.e. the market knows something) rather than a tradeable mispricing reason. What would I look at to tell them apart?
VERDICT: Classify the current premium environment as RICH / NORMAL / COMPRESSED for selling covered calls on this name, with one sentence on whether this changes how aggressive a strike I should consider.
What good output looks like: A real distinction between rank and percentile (rank is more sensitive to outliers; percentile to frequency), a regime label, and a flag if there’s a catalyst inside a typical 30-DTE window. If the model invents an IV number rather than using the one you pasted, abort and start again.
Where it falls short: Only as good as the data you paste. It cannot tell you why IV is where it is — that’s fundamental research, not a prompt.
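The rank-versus-percentile distinction the prompt asks about is easy to verify yourself. A minimal sketch, using hypothetical IV readings (not from any real chain), shows why one outlier spike can make the two numbers diverge sharply:

```python
def iv_rank(current_iv, iv_history):
    """IV Rank: where current IV sits between the period low and high (0-100)."""
    lo, hi = min(iv_history), max(iv_history)
    if hi == lo:
        return 0.0
    return 100.0 * (current_iv - lo) / (hi - lo)

def iv_percentile(current_iv, iv_history):
    """IV Percentile: share of past readings strictly below current IV (0-100)."""
    below = sum(1 for v in iv_history if v < current_iv)
    return 100.0 * below / len(iv_history)

# Hypothetical IV30 readings: mostly low-30s, one brief spike to 90%.
history = [32, 35, 38, 30, 34, 36, 90, 33, 31, 37]
current = 40
print(f"IV Rank:       {iv_rank(current, history):.0f}")        # prints 17 — the spike stretches the range
print(f"IV Percentile: {iv_percentile(current, history):.0f}")  # prints 90 — 40% beats most daily readings
```

Same input, opposite impressions: rank says premium is cheap, percentile says it is rich. That is exactly the case where percentile is the more useful reading, and why the prompt makes the model say which one it is leaning on.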
Prompt 3: Strike selection sanity check
Decision served: You’ve narrowed to two or three candidate strikes from your broker’s chain. Which one fits your stated goal — and which one is your gut leading you wrong?
Why it earns its place: This is where retail traders most often chase premium and end up writing aggressive calls on a stock they didn’t actually want assigned. Asking AI to generate strikes invites it to make numbers up. Asking it to adjudicate between options you’ve already pulled from your broker is the inverse: useful, fast, and forces you to commit to your intent before you see the recommendation.
ROLE: Act as a risk-first options reviewer. You are not generating strikes — I have pulled these from my broker. You are auditing my shortlist.
FILTER: [COMPANY NAME] last [PRICE]. My cost basis: [BASIS]. My intent: [INCOME-FIRST / KEEP-THE-SHARES / HAPPY-TO-EXIT]. Candidate strikes for [EXPIRY DATE]:
- Strike A: [STRIKE], delta [VALUE], premium [PREMIUM], OI [OI]
- Strike B: [STRIKE], delta [VALUE], premium [PREMIUM], OI [OI]
- Strike C: [STRIKE], delta [VALUE], premium [PREMIUM], OI [OI]
For each, calculate: (1) static return if unchanged, (2) if-called return at expiry, (3) annualised yield using the actual days to expiry. Show the maths. Do not invent missing fields — if I haven’t given you something, say so.
RISK: For each strike, name the single failure mode — what stock move makes this the wrong pick in hindsight? Rank the three by which one would hurt my stated intent the least.
VERDICT: Pick one. State which input would have to change for your pick to flip to one of the others.
What good output looks like: Visible arithmetic, three failure modes that aren’t identical, a pick that matches your stated intent rather than the highest premium by default. If you said “keep the shares” and the model picks the 0.45-delta strike for the headline yield, push back.
Where it falls short: AI is bad at delta-to-probability conversions when the underlying has real skew. It will treat 0.30 delta as roughly 30% probability ITM, which is approximately true but not precise. For the trade itself, your broker’s probability number wins.
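The three return figures the prompt asks the model to compute are simple enough to check by hand, which is the point: you are auditing the model’s arithmetic, not trusting it. A minimal sketch with hypothetical numbers (returns computed on current price, a common convention; some traders prefer cost basis):

```python
def covered_call_returns(price, strike, premium, dte):
    """Static, if-called, and annualised returns for one covered-call candidate."""
    static = premium / price                         # return if the stock is unchanged at expiry
    if_called = (strike - price + premium) / price   # return if assigned at the strike
    annualised = static * 365 / dte                  # annualise the static premium yield
    return static, if_called, annualised

# Hypothetical numbers, not from any real chain.
s, c, a = covered_call_returns(price=50.0, strike=55.0, premium=1.25, dte=30)
print(f"static {s:.2%}, if-called {c:.2%}, annualised {a:.1%}")
# → static 2.50%, if-called 12.50%, annualised 30.4%
```

If the model’s visible maths does not reproduce figures like these from the fields you pasted, that is the tell it has invented an input.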
Prompt 4: Event-window check
Decision served: Is there an earnings call, ex-dividend date, FOMC meeting, or known binary event inside your contract window that you’ve forgotten about? This is the cheapest mistake to avoid and the one most often missed.
Why it earns its place: This is a checklist, not analysis — and AI is good at thorough, boring checklists. It also catches dividend-driven early-assignment risk, which retail sellers consistently underestimate.
ROLE: Act as a compliance-style pre-trade checker. Your job is to list everything I might have forgotten about the contract window, not to opine on whether the trade is good.
FILTER: Trade: short [N] contracts of [COMPANY NAME] [EXPIRY] [STRIKE] calls. Today: [DATE]. Window: [DAYS] days. Known about the company: [SECTOR, ANY UPCOMING EVENTS YOU KNOW OF].
Walk through this checklist and flag any item I should verify before placing the trade:
- Earnings inside the window
- Ex-dividend date inside the window (and whether the dividend exceeds typical extrinsic on a comparable strike — early-assignment risk)
- Known product, regulatory, or macro catalysts
- Index rebalance or expiry concentration around my expiry
- My own holding-period status (qualified versus non-qualified covered call implications, if I tell you my purchase date)
RISK: For any item flagged “verify”, tell me concretely what to look up and where, and what threshold makes it disqualifying for this trade.
VERDICT: GREEN (proceed), AMBER (verify items X, Y), or RED (stand down — specific reason). One line.
What good output looks like: Specific things to check in specific places. “Verify ex-div date on the IR page or Nasdaq calendar; if the dividend exceeds expected extrinsic on the [STRIKE] strike the day before, expect early assignment.” Not generic warnings.
Where it falls short: AI doesn’t have today’s earnings calendar. It will know roughly when a company reports based on past quarters but shouldn’t be trusted on the exact date. Treat any specific date the model gives you as a prompt to verify, not a fact.
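The dividend-versus-extrinsic comparison in the checklist is a one-line calculation you can run yourself once you have the real quotes. A minimal sketch of that screen, with hypothetical prices (the standard intuition: a rational call holder exercises early only when the dividend is worth more than the time value they would give up):

```python
def early_assignment_risk(call_price, stock_price, strike, dividend):
    """Rough early-assignment screen for a short ITM call ahead of ex-dividend."""
    intrinsic = max(stock_price - strike, 0.0)
    extrinsic = call_price - intrinsic   # time value remaining in the call
    return dividend > extrinsic, extrinsic

# Hypothetical: stock at 52, short 50 call quoted 2.30, 0.50 dividend goes ex tomorrow.
at_risk, ext = early_assignment_risk(call_price=2.30, stock_price=52.0,
                                     strike=50.0, dividend=0.50)
print(f"extrinsic left: {ext:.2f}, early-assignment risk: {at_risk}")
# → extrinsic left: 0.30, early-assignment risk: True
```

Use the actual quote from the close before the ex-date; this is the one check where a day’s staleness changes the answer.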
Prompt 5: Roll, close, or let it assign
Decision served: Mid-trade. The stock has run. The short call is in the money or close to it. What do you actually do?
Why it earns its place: This is the highest-stakes recurring decision a wheel trader makes. Existing prompts treat rolling as a topic to explain, not a decision to make. Done right, this prompt forces you to articulate your updated thesis before the AI gives an answer — which is the actual discipline.
ROLE: Act as a decision auditor for an open covered call. You will not give me a number to roll to. You will help me decide between three actions: ROLL, CLOSE FOR LOSS, or LET ASSIGN.
FILTER: Position: short [N] [COMPANY NAME] [EXPIRY] [STRIKE] calls. Premium received: [CREDIT]. Current option price: [DEBIT TO CLOSE]. Stock now: [PRICE]. Days remaining: [DTE]. My original thesis when I sold: [THESIS]. What changed in my view of the stock since then: [UPDATED VIEW, or “nothing fundamental, it just ran”]. Tax / cost-basis context: [E.G. “shares are long-term qualified”, “basis is well below current price”, “I’d prefer not to realise the gain this year”].
For each of the three actions, lay out:
- The cash impact today
- What needs to be true for it to be the right call in 30 days
- What it implies about my view of the stock going forward
RISK: If I roll, what’s the maximum number of times you’d consider rolling this position before the right move is to take the assignment? Why that number?
VERDICT: Recommend one action. State the single piece of new information that would flip your recommendation.
If your honest answer to “what changed” was “nothing fundamental, it just ran”, the answer is usually “let it assign — you got what you wanted”.
What good output looks like: A clean three-way comparison, an explicit roll cap (disciplined practitioners typically cap rolls at one or two — rolling indefinitely is how a winning strategy turns into a losing one), and a recommendation that matches the stated updated view.
Where it falls short: AI cannot run a live roll quote — it doesn’t know what the next-month strike trades for. It can frame the decision; you still have to price the roll in your broker.
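The “cash impact today” comparison the prompt asks for is mechanical once you have the broker’s quotes. A minimal sketch of the option-leg cash for each action, with hypothetical numbers; `new_credit` stands in for the replacement-call quote, which must come from the broker, never from the model:

```python
def roll_close_assign(credit, debit_to_close, new_credit=None, contracts=1):
    """Option-leg cash per action, in dollars (100 shares per contract)."""
    mult = 100 * contracts
    close_pnl = round((credit - debit_to_close) * mult, 2)   # buy back now, realise the loss
    assign_pnl = round(credit * mult, 2)                     # keep the full credit; shares go at the strike
    roll_net = None
    if new_credit is not None:
        # a roll is just close-and-reopen: the buyback loss net of the new credit
        roll_net = round((credit - debit_to_close + new_credit) * mult, 2)
    return {"close": close_pnl, "assign": assign_pnl, "roll": roll_net}

# Hypothetical: sold for 1.20, now 3.00 to close; broker quotes 2.10 for next month.
print(roll_close_assign(credit=1.20, debit_to_close=3.00, new_credit=2.10))
# → {'close': -180.0, 'assign': 120.0, 'roll': 30.0}
```

Note what the sketch deliberately omits: the stock leg. If you let assignment happen, the share gain up to the strike is yours too — which is why “it just ran” usually points to assignment, not a defensive roll for a thin net credit.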
Prompt 6: Post-trade review
Decision served: The position closed — assigned, expired, or you bought it back. Did the process work, separately from whether the outcome was good?
Why it earns its place: This is the prompt nobody writes and everybody needs. The wheel is a process strategy, not a stock-picking strategy — which means most of the edge comes from disciplined journaling. AI is good at structured reflection if you give it the inputs and tell it not to flatter you.
ROLE: Act as a trading-process reviewer. Score the process and the outcome separately. Do not be encouraging — be useful.
FILTER: Position closed: short [N] [COMPANY NAME] [STRIKE] calls, opened [OPEN DATE] for [CREDIT], closed [CLOSE DATE] for [DEBIT or “expired worthless” or “assigned at strike”]. P&L on the option: [VALUE]. P&L on the underlying over the holding period: [VALUE if relevant]. Why I sold this strike at the time: [ORIGINAL REASONING]. What I’d do differently knowing what I know now: [HONEST ANSWER, or “I’m not sure — that’s why I’m asking”].
Walk me through:
- Was the strike selection consistent with my stated intent?
- Was the timing (versus IV regime, versus events) defensible at the time, or only in hindsight?
- Did I manage the position according to a rule I’d stated, or did I improvise?
- What’s the one repeatable thing — good or bad — to extract from this trade?
RISK: Name the cognitive trap most likely to be at play if I treat this trade as evidence that my approach is working (or broken). For example: small-sample, outcome bias, narrative fallacy.
VERDICT: Process score (1–5) and outcome score (1–5), separately. One sentence on the gap between them.
What good output looks like: Two scores that diverge. Good outcome with poor process is the most useful flag the model can hand you. One concrete repeatable rule, not generic advice about “staying disciplined”.
Where it falls short: This is the prompt where AI is most prone to flattery. The “do not be encouraging” instruction matters; without it, the model defaults to praise. If you read the output and feel good about everything, run it again and ask the model what it pulled its punches on.
Prompt 7: Income-strategy stress test
Decision served: Quarterly, or after a regime change. Is the overall covered-call programme still doing what it was supposed to do? This is the prompt for the trader who’s been running the wheel for six months and isn’t sure if it’s working.
Why it earns its place: This is distinct from the per-trade review. It looks at the strategy as a system: are you beating the buy-and-hold counterfactual, or are you generating just enough premium to feel productive while the upside cap quietly costs you more than you’re collecting? It’s the question wheel traders most often avoid asking.
ROLE: Act as a sceptical strategy reviewer. Your default position is that the buy-and-hold counterfactual wins on most names; I have to convince you otherwise with the data.
FILTER: I have been running covered calls on [COMPANY NAME] for [PERIOD]. Inputs:
- Total premium collected (gross): [VALUE]
- Number of assignments: [N]
- Number of buy-to-close-at-loss events: [N]
- Realised P&L from forced sales: [VALUE]
- Stock price at start of period: [PRICE]
- Stock price now: [PRICE]
- What I would have done with the shares otherwise: [HOLD / SELL AT SOME POINT / TRIM]
Calculate, with the maths visible:
- My realised yield on this name (premium / average capital deployed, annualised)
- The buy-and-hold counterfactual return for the same period
- The opportunity cost of any shares called away (the gain I would have had versus the premium I collected)
RISK: Two ways this comparison is unfair to the strategy, and two ways it’s unfair to buy-and-hold. Be even-handed.
VERDICT: One of: “Strategy is paying for itself”, “Strategy is underperforming hold and you’re paying for the illusion of income”, or “Too noisy to tell — keep logging for another [N] months”. Name the one metric I should track that I’m not already tracking.
What good output looks like: Visible arithmetic, an honest counterfactual (the model should be willing to say the strategy is underperforming hold — that’s the structurally likely result on a bull-trending name, and it needs permission to say so), and a tracking suggestion that’s specific.
Where it falls short: Only as good as your logs. If you haven’t been recording assignments and forced-sale P&Ls, the prompt can’t help. Start logging now; run the prompt in three months.
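The counterfactual arithmetic in the prompt is worth being able to run yourself, because it is where the model is most tempted to smooth over an ugly answer. A deliberately simple sketch with hypothetical numbers — it ignores dividends, taxes, and the timing of cash flows, so treat it as the shape of the calculation, not the final word:

```python
def strategy_vs_hold(premium_gross, capital_deployed, months,
                     start_price, end_price, shares, assigned_sale_pnl=0.0):
    """Annualised premium yield vs the buy-and-hold counterfactual."""
    annualised_yield = (premium_gross / capital_deployed) * (12 / months)
    hold_return = (end_price - start_price) / start_price
    strategy_pnl = premium_gross + assigned_sale_pnl   # premium plus realised gains on called-away shares
    hold_pnl = (end_price - start_price) * shares      # what doing nothing would have made
    return annualised_yield, hold_return, strategy_pnl, hold_pnl

# Hypothetical six months on a name that ran hard.
ay, hr, sp, hp = strategy_vs_hold(premium_gross=600, capital_deployed=10_000,
                                  months=6, start_price=100, end_price=135,
                                  shares=100, assigned_sale_pnl=1_000)
print(f"annualised premium yield {ay:.1%}, hold return {hr:.1%}")
print(f"strategy P&L {sp:.0f} vs hold P&L {hp:.0f}")
# → annualised premium yield 12.0%, hold return 35.0%
# → strategy P&L 1600 vs hold P&L 3500
```

A 12% annualised yield feels like it is working right up until you put it next to the 35% the shares did on their own — which is exactly the verdict the prompt needs permission to deliver.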
Where AI helps, and where to stop
| Stage | AI useful? | When to put the prompt down |
|---|---|---|
| Pre-trade — is this stock suitable? | Yes — thesis stress-test | Never; this is the model’s strongest zone |
| Pre-trade — IV regime | Partial — explains regime if you paste data | The moment you ask it for a live IV number |
| Pre-trade — strike selection | Yes — sanity check on a shortlist | The moment you ask it to generate strikes |
| Pre-trade — event window | Yes — checklist generation | When it gives you a specific earnings date — verify it |
| In-trade — roll vs close vs assign | Yes — decision tree against your stated goal | When you need a live roll quote — that’s the broker |
| Post-trade — review | Yes — disciplined reflection | If the model starts flattering you |
| Strategy review — is it working? | Yes — pattern analysis on your own log | If your log is incomplete; finish logging first |
The two hard “no” zones: anything requiring live option chain data (the model doesn’t have it and will invent strikes and premiums that don’t exist) and anything requiring exact Greek calculations (the model’s Black-Scholes arithmetic is usually subtly wrong on day count, vol units, or rate annualisation). Those are calculator jobs.
What works
Treating AI as a structured second opinion that can hold your stated intent and your real numbers in mind at the same time. The Prompt Stack keeps it from defaulting to "supportive coach".
What doesn't
Asking the model to see the chain, calculate exact Greeks, or pick a trade for you. Every one of those invites the model to make numbers up.
Confidence
High on the framework. Medium on whether you'll resist the temptation to ask for strike recommendations anyway.
These are the seven I run. They will change as the tools change — in my testing, Claude has been more willing than the alternatives to say “I don’t know, check the IRS guidance” on tax-edge cases, which is the right disposition for this work, but that gap will close. The discipline won’t. The AI cannot see the chain. Bring the data, or don’t bother running the prompt.
Ben documents AI experiments against his own investment portfolio — real money, human analysis, sceptical use. About Ben →