ChatGPT Usage Insights

Working Around Its Limits - Part 5

ChatGPT is powerful, but it's not perfect. Knowing where it stumbles can save you time, frustration, and a lot of backspacing. Here's how to work with its quirks instead of fighting them.

Specify what to ignore

Say "don't summarize" or "don't rephrase" when needed. The model defaults to helpfulness, which can sometimes mean rewriting or simplifying things you wanted to keep as-is. Clear directives matter.

I've asked it to fix typos in a personal message but preserve the tone. It reworded entire sentences. I re-prompted with "correct grammar only, no phrasing changes" and got what I wanted. Sometimes it makes the same mistake several times before the instruction sticks. Stay persistent.

When drafting terms for a product, I had to explicitly say "do not simplify legal phrases" to keep it compliant.

Use analogies carefully

They're helpful, but often inaccurate if not grounded. Analogies are one of ChatGPT's favorite tools. But unless you guide it with real context or constraints, they can mislead more than help.

Example: I asked for a metaphor to explain “zero-knowledge proofs.” It gave a cake recipe analogy that confused the concept even more. I had to specify "real-world analogy for an engineer with basic crypto understanding."

For a leadership talk, I asked it to avoid analogies entirely and just stick to first principles. It did much better.

Recognize hallucination risk

"Hallucination" is a term I only recently came to understand in this context. The risk is especially high with citations, math, and real-time facts. The model will confidently give you incorrect information if prompted the wrong way. If accuracy matters, double-check. Always.

I asked it who won a recent election. It gave a confident but wrong answer, seemingly because the event occurred after its training cutoff.

I asked for citations in a whitepaper. The references looked real, but half were fabricated. I now check every link.

I've asked it to review my resume against a job posting accessible via URL. Needless to say, sometimes it accesses an older version of the page, or can't reach it at all.

Explain how ChatGPT "remembers"

It doesn't learn or store new knowledge during a chat. It replays the full message history each time you send a message, which is why longer chats often get slower and less responsive. It's not memory, it's replay.

I noticed the model taking longer to respond during a multi-day strategy doc session. Splitting the session into shorter, focused chats sped things up.

After 80+ messages, ChatGPT started forgetting earlier constraints. Restarting the thread fixed it.
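
If you build against the API instead of the chat UI, you can see this replay directly: every request resends the entire conversation. Here's a minimal sketch, assuming the `openai` Python package, an API key in the environment, and a placeholder model name; none of this is something the ChatGPT app itself exposes.

```python
# Minimal sketch of why long chats slow down: every turn resends the
# whole history. Assumes the `openai` Python package, an OPENAI_API_KEY
# in the environment, and a placeholder model name.
from openai import OpenAI

client = OpenAI()

history = [
    {"role": "system", "content": "Correct grammar only, no phrasing changes."},
]

def ask(user_message: str) -> str:
    history.append({"role": "user", "content": user_message})
    # The full `history` list goes up with every call; nothing is
    # "remembered" server-side between chat completion requests.
    response = client.chat.completions.create(model="gpt-4o", messages=history)
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

print(ask("Fix typos: 'Thx for the updaet, see you tmrw.'"))
print(ask("Same again: 'Recieved, will reviw tonight.'"))
```

Each new turn makes the payload bigger, which lines up with the slowdown I was seeing in long sessions.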

Watch for chat decay

As the conversation gets longer, older messages can "fall off" or lose influence, especially in complex threads. Important instructions may fade, even if you stated them earlier.

Example: In a long edit session, I told it to always keep British spelling. After 50 messages, it reverted to American. I had to reassert the rule.

While drafting a technical spec, I noticed it ignored previous formatting conventions until I re-prompted.
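
I don't know exactly how ChatGPT trims its context, so treat the sketch below as an illustration of the failure mode rather than its actual behavior: a naive "keep the most recent messages" policy drops your rules first, while explicitly pinning them keeps them in play.

```python
# Illustration of chat decay (my assumption about trimming, not documented
# ChatGPT behavior): when the window fills up, the oldest messages go
# first, and your early rules go with them.
MAX_MESSAGES = 6  # stand-in for a real token budget

def trim_naively(history):
    """Keep only the most recent messages."""
    return history[-MAX_MESSAGES:]

def trim_with_pins(history):
    """Always keep pinned rules, then fill the rest with recent messages."""
    pinned = [m for m in history if m.get("pinned")]
    budget = max(MAX_MESSAGES - len(pinned), 1)
    recent = [m for m in history if not m.get("pinned")][-budget:]
    return pinned + recent

history = [{"role": "user", "content": "Always use British spelling.", "pinned": True}]
history += [{"role": "user", "content": f"Edit paragraph {i}"} for i in range(1, 11)]

print([m["content"] for m in trim_naively(history)])    # the spelling rule is gone
print([m["content"] for m in trim_with_pins(history)])  # the spelling rule survives
```

In the chat UI, the equivalent of "pinning" is simply restating the rule, like I had to with the British spelling.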

Know when to start fresh

If the chat feels sluggish, off-topic, or loses structure, it's time to open a new one. You can always restate what you need.

I was revising a script (not my typical use case, but a new creative endeavor) and noticed the model was looping or giving shallow responses. I copied key notes into a new chat and everything clicked.

How to link chats

You can copy-paste or reintroduce prior context. For continuity across sessions, you'll often need to restate key details manually. There's no cross-chat memory (as far as I'm aware).

I typically copy the last few key bullets from an older chat and paste them into a new one to generate follow-up ideas.
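
If you work through the API, the same habit can be automated. This is a sketch under my own assumptions (the `openai` package, a placeholder model name, invented chat contents), not a built-in ChatGPT feature: compress the old session, then seed the new one with that summary.

```python
# Sketch of manually "linking" two sessions: summarize the old one, then
# seed a fresh conversation with the summary. Package, model name, and
# chat contents are assumptions for illustration.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o"  # placeholder; use whichever model you actually have

old_chat = [
    {"role": "user", "content": "Help me outline a launch plan for the beta."},
    {"role": "assistant", "content": "1) Pick 20 pilot users. 2) Two-week feedback loop. 3) Pricing test in week 3."},
]

# Step 1: compress the old chat into carry-over notes.
summary = client.chat.completions.create(
    model=MODEL,
    messages=old_chat + [{"role": "user", "content": "Summarize the key decisions above in 3 bullets."}],
).choices[0].message.content

# Step 2: start a fresh conversation seeded with those notes.
new_chat = [
    {"role": "system", "content": "Context carried over from a previous session:\n" + summary},
    {"role": "user", "content": "Draft follow-up ideas for week 4."},
]
print(client.chat.completions.create(model=MODEL, messages=new_chat).choices[0].message.content)
```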

Don't assume it will follow your rules

Double and triple check against any constraints or formatting rules you've specified in your prompt. It may violate these (even repeatedly). You'll have to enforce your own guardrails.
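
One habit that helps: put the check in code rather than only in the prompt. The specific rules below are made up for illustration, but the pattern is the point: validate the output yourself and feed any violations back into the next prompt.

```python
# Minimal sketch of enforcing my own guardrails. The rules and thresholds
# are made-up examples; the point is that the check runs on my side.
import re

def check_output(text: str) -> list[str]:
    """Return the rules a model-produced draft violates."""
    violations = []
    if len(text.split()) > 150:
        violations.append("is over the 150-word limit")
    if re.search(r"\bcolor\b", text):  # example rule: British spelling only
        violations.append("uses American spelling ('color')")
    if "!" in text:
        violations.append("contains exclamation marks")
    return violations

draft = "The colour scheme is bold. The color palette really pops!"
problems = check_output(draft)
if problems:
    # Feed the violations straight back as the next prompt.
    print("Please revise: the draft " + "; ".join(problems) + ".")
```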

Mind the difference in how humans and AI communicate

The way humans informally phrase their needs often isn't specific or structured enough. ChatGPT needs clarity to execute well. Vagueness = noise.

Ask for a "cool headline" and you might get nonsense. Change the prompt to "4-word headline, clever but clear, for a cybersecurity product" and the results dramatically improved.

I asked it to "help improve this paragraph" and it added fluff. When I said "clarify this without changing tone," the output worked.

Expect missing features or clunky UX

Some features you might assume exist, like full file management, prompt history tracking, or image-layer editing, don't exist (yet). Workarounds like describing images as prompts or reloading chat context are often required.

I wanted to add a watermark to an image, but there's no layering tool. I had to regenerate the whole image with the logo baked in.

I assumed I could edit a DOCX output after export, but the formatting broke in Word. I now often finalize structure before export.

#ChatGPT #AIPrompts #WorkingWithAI #AIWorkflows #PromptEngineering #WritingWithAI #AIForWriters #AIProductivity #HumanInTheLoop #BuildWithAI
