Transcript
LJYh5LcldK8 • AI Prompting in 2026: How to Get Better Results From ChatGPT, GPT-5 & Gemini
/home/itcorpmy/itcorp.my.id/harry/yt_channel/out/BitBiasedAI/.shards/text-0001.zst#text/0310_LJYh5LcldK8.txt
Kind: captions
Language: en
You're probably spending way more time
fixing AI outputs than you should be.
Maybe you're rewriting prompts three,
four, or five times just to get
something usable.
Or worse, you're getting confident
sounding answers that are completely
wrong.
Well, I've spent the last year testing
every major AI model out there, GPT-5,
Claude, Gemini, and here's what
surprised me. The models aren't the
problem. Most people are just asking the
wrong way. And that gap between good
prompts and bad prompts, it's costing
you hours every single week. So, in this
video, I'm going to show you exactly how
AI prompting has evolved in 2026, and
the specific techniques that will help
you get better results in less time.
We're talking about practices that
reduce hallucinations, boost accuracy,
and unlock features most people don't
even know exist.
By the end of this, you'll understand
how to structure prompts that actually
work, and you'll save yourself from that
frustrating back and forth with AI.
First up, let's talk about what's
actually changed with these models,
because the difference between 2023 and
now is bigger than you think.
The evolution: what's actually different now?
Remember when ChatGPT first dropped? We
were all amazed that we could just talk
to it, type a question, get an answer.
Simple.
But here's where it gets interesting.
That was basically the training wheels
version of what we have now. The models
we're working with in 2026 are
fundamentally different beasts. Take
GPT-4o. The "o" stands for omni, by the way. It doesn't just read text anymore.
You can throw images at it, audio files,
even video, and it'll respond in kind.
Need to analyze a chart? Done. Want it
to listen to a recording and transcribe
it? Easy. The response time is insane,
too. We're talking about 232
milliseconds for audio responses. That's
basically human conversation speed. But
that's not even the craziest part. The
context window, essentially how much
information these models can hold in
their working memory, has exploded.
Early GPT-4 could handle about 8,000
tokens. Now we're looking at millions.
Claude can process up to 2 million
tokens in a single prompt. Google's
Gemini handles 10 million.
Do you know what that means practically?
You can literally feed it entire books,
multiple documents, whole research
papers, and it'll actually remember all
of it while answering your questions.
And then there's the reasoning
capabilities. You've probably noticed
that when you ask AI to think step by
step, it suddenly gets way smarter.
That's not a coincidence. Researchers
discovered that explicitly prompting
models to break down their thinking,
what they call chain of thought
prompting, dramatically improves
accuracy, especially on complex
problems.
In 2026, this isn't some advanced
technique anymore. It's standard
practice.
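As a quick illustration, that step-by-step cue can be appended programmatically before a prompt is ever sent to a model. This is a minimal sketch; the `with_cot` helper and the `COT_CUES` names are my own, not part of any API, and the cue wordings are common conventions rather than official syntax:

```python
# Common chain-of-thought cues; the wording is convention, not an official API.
COT_CUES = {
    "steps": "Let's think step by step.",
    "reasoning_first": "Explain your reasoning before giving the final answer.",
}

def with_cot(prompt: str, style: str = "steps") -> str:
    """Append a chain-of-thought cue to a prompt (hypothetical helper)."""
    return prompt.rstrip() + "\n\n" + COT_CUES[style]

print(with_cot("Is 41 the sum of two distinct odd numbers?"))
```

The point is that the cue is cheap to add mechanically, so there is little reason to leave it out of any complex request.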
What really changed the game though is
how these systems now integrate with
actual tools.
Earlier models had knowledge cutoffs.
They literally couldn't tell you about
anything that happened after their
training date. Now they can search the
web, pull from databases, and access real-time information.
You're not just talking to a static
knowledge base anymore. You're talking
to something that can actively go find
what it doesn't know.

The best practices: how to actually use this stuff.

All right, now that you understand what's possible, let's talk about what actually matters.
How do you get these models to do what
you want? First rule, be ridiculously
specific. I cannot stress this enough. The difference between "write a poem" and "write a short, inspiring poem about artificial intelligence in the style of Maya Angelou, focusing on themes of human potential" is the difference between generic garbage and something actually usable. Every detail you include narrows down the possibilities and gives the model clearer guardrails to work within.
Think about it like giving directions.
If someone asks you how to get to the
coffee shop and you just say go
downtown, that's useless. But if you say
take Main Street for three blocks, turn
left at the red building, it's the
second shop on your right. Now they can
actually get there. Same principle with
AI.
The more specific you are about what you
want, the less the model has to guess.
Here's a practical example. Instead of asking, "Tell me about climate change," try something like, "List three major renewable energy breakthroughs that happened between 2020 and 2026 and explain each in one sentence, focusing specifically on solar technology."
See the difference?
One prompt gets you a Wikipedia style
essay. The other gets you exactly what
you asked for. Nothing more, nothing
less.
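To make that specificity repeatable, you can assemble prompts from explicit parts instead of freehanding them each time. A minimal sketch; `build_prompt` and its parameter names are hypothetical, not from any library:

```python
def build_prompt(task: str, constraints: list[str], output_format: str) -> str:
    """Assemble a specific prompt: the task first, then explicit constraints."""
    lines = [f"Task: {task}"]
    lines += [f"- {c}" for c in constraints]
    lines.append(f"Output format: {output_format}")
    return "\n".join(lines)

prompt = build_prompt(
    task="List three major renewable energy breakthroughs from 2020-2026.",
    constraints=[
        "Focus specifically on solar technology.",
        "Explain each in one sentence.",
    ],
    output_format="numbered list",
)
print(prompt)
```

Structuring it this way also forces you to notice when a constraint is missing, which is usually where vague outputs come from.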
Second, put your instructions at the
beginning.
This might sound obvious, but you'd be
shocked how many people bury the actual
task in the middle of a long
explanation.
Start with what you want. Use clear labels, something like "Task:" or "Instructions:", followed by the specific request. Then add context if needed, not the other way around.

Third, use examples when words aren't enough.
Sometimes explaining what you want takes
longer than just showing it. This is
called few-shot prompting, and it's
incredibly powerful.
Let's say you want the AI to rewrite
customer service emails in a specific
tone.
Don't just describe the tone. Show it an
example of a before and after
transformation.
The model will pattern match and
replicate that style way more accurately
than if you just described it in words.
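The before-and-after structure can be built up programmatically too. A minimal sketch, assuming the customer-service-email scenario above; `EXAMPLES` and `few_shot_prompt` are hypothetical names, and the single pair here stands in for the two or three varied examples you would normally include:

```python
# One before/after pair; real use would include two or three varied examples.
EXAMPLES = [
    ("Your order is late. Deal with it.",
     "Thanks for your patience! Your order is running behind, and here is "
     "exactly what we are doing to fix it."),
]

def few_shot_prompt(examples, new_input):
    """Build a few-shot prompt: show the transformation, then leave one open."""
    parts = ["Rewrite customer service emails in the tone shown below."]
    for before, after in examples:
        parts.append(f"Before: {before}\nAfter: {after}")
    parts.append(f"Before: {new_input}\nAfter:")  # the model completes this
    return "\n\n".join(parts)

print(few_shot_prompt(EXAMPLES, "We can't refund that."))
```

Ending the prompt on a dangling "After:" is the trick: the model's natural next move is to complete the pattern in the demonstrated tone.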
Fourth, force it to show its work. For anything complex (math problems, logical reasoning, strategic planning), add something like "Let's think step by step" or "Explain your reasoning before giving the final answer."
This activates that chain of thought
capability we talked about earlier. The
model will literally write out its
thinking process, and you'd be amazed
how much this improves accuracy. In tests, adding just the letter "A" at the end of a math problem, as an answer cue, made models go from random guessing to showing complete, correct solutions.

Fifth, trim the fat.
Every unnecessary word in your prompt is
wasting space and potentially confusing
the model. Be direct. Cut out fluffy phrases like "I was wondering if you could possibly..." Just ask for what you want. Remember: in API usage, every
token costs money. In practice, every
extra token costs time. Keep it tight.
And here's something most people don't
know. You can frame the AI's role to
completely change how it responds.
Instead of just asking a question, try
starting with something like, "You are
an expert historian who specializes in
explaining complex events to beginners.
Now, explain the causes of World War I."
That role framing gives the model a lens
to view the task through and it'll
adjust its language, depth, and style
accordingly.
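That framing is easy to template so you apply it consistently. A minimal sketch; `with_role` is a hypothetical helper name, not any library's API:

```python
def with_role(role: str, request: str) -> str:
    """Prefix a role description so the model answers through that lens."""
    return f"You are {role}. {request}"

prompt = with_role(
    "an expert historian who specializes in explaining complex events "
    "to beginners",
    "Explain the causes of World War I.",
)
print(prompt)
```

Swapping the role string ("a skeptical peer reviewer," "a patient tutor") changes the depth and style of the answer without touching the question itself.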
One more thing, iteration is normal.
Your first prompt probably won't be
perfect, and that's fine. The pros don't
get it right on the first try either.
They start simple, check the output,
then refine. Maybe the response was too
long, so they add "in under 100 words." Maybe it was too technical, so they specify "explain like I'm a beginner." Prompting is a conversation, not a one-shot command.

Real examples: let's see this in action.

Theory is great, but let's look at what this actually looks like in practice.

Example one: specific queries. Bad prompt: "Tell me about renewable energy." Good prompt: "Summarize the three most significant solar energy innovations between 2020 and 2026 in under 75 words, focusing on efficiency improvements."
The first one gets you an essay. The
second gets you a targeted summary you
can actually use. One guide I tested
showed that when they made this exact
change, the output went from generic
rambling to a focused three-point answer
that hit every requirement.

Example two: chain-of-thought reasoning. Here's a real test. Ask the model: "Is 41 the sum of two distinct odd numbers?" Without guidance, it might just guess. But add this: "Check step by step if 41 is the sum of two distinct odd numbers. A:" Now watch what happens. The model lists out odd numbers (1, 5, 7, 13, 15), checks different combinations, and correctly concludes no, because all sums of two odd numbers are even. That "A" triggered the step-by-step thinking, and suddenly we got perfect accuracy instead of a coin flip.

Example three: coding prompts.
When you want code, you can actually trick the model into coding mode with strategic lead-in words. Instead of "write a Python function to convert miles to kilometers," try this: "Write a Python function that asks for miles and converts to kilometers. import"
That word "import" at the end signals "start writing Python now," and the model immediately shifts into code-generation mode.
It's a small hack that makes a huge
difference in output quality.
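For reference, the kind of function that prompt is fishing for might look like this. A sketch under the standard assumption that one mile is exactly 1.609344 kilometers; the function name is my own:

```python
MILES_TO_KM = 1.609344  # exact by the international definition of the mile

def miles_to_kilometers(miles: float) -> float:
    """Convert a distance in miles to kilometers."""
    return miles * MILES_TO_KM

# The transcript's prompt asked the function to ask for miles; in a script
# you would read that with input(), e.g. miles = float(input("Miles: ")).
print(miles_to_kilometers(10.0))
```

Whether the model's version matches this is a quick way to judge how well the lead-in trick actually worked.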
Example four: using the system role in ChatGPT or API calls. You can set a
system message that defines the AI's
behavior for the whole conversation.
Something like, "You are a friendly
customer service agent who solves
problems politely and never makes
excuses."
Then every user message gets filtered
through that lens. You can even combine
this with example conversations to
really dial in the exact tone and style
you want.
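In the OpenAI-style chat format, that setup is just a list of role-tagged messages. This sketches the data structure only; the actual API call and the example wording are omitted assumptions:

```python
# The system message sets behavior for the whole conversation; the optional
# user/assistant pair is a few-shot example that dials in tone before the
# real user message arrives.
messages = [
    {"role": "system",
     "content": "You are a friendly customer service agent who solves "
                "problems politely and never makes excuses."},
    {"role": "user", "content": "My package never arrived."},
    {"role": "assistant",
     "content": "I'm so sorry about that! Let me track it down for you "
                "right away."},
    {"role": "user", "content": "It's been two weeks. Where is my order?"},
]
print(len(messages))
```

Every later user message gets interpreted against that system message, which is why it is the right place for tone and policy rules rather than repeating them in each prompt.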
The pros and cons: what you need to know.

All right, let's be real about what's good and what's still broken. The good news: these 2026 models are legitimately impressive.
GPT-5 can generate complex, well-structured code from a single prompt. GPT-4o can analyze images and hold voice
conversations in real time. The context
windows mean you can work with entire
documents without losing track. And when
you nail your prompts, the efficiency
gains are massive. Less back and forth,
fewer hallucinations,
consistent results. For specific tasks,
you can even build custom GPTs that
package your perfect prompt with tools
and knowledge, making it reusable
without having to type the same
instructions every time. But here's the
catch. Even with all these improvements,
the models still hallucinate if your prompts are vague. OpenAI literally said reducing hallucinations was a major focus for GPT-5, which tells you it's still a problem.
These models also have knowledge cutoffs. GPT-4's was October 2023, so they won't know about anything after that unless you feed them external data.
Security is another issue. Prompt
injection attacks where malicious users
trick the AI into ignoring your
instructions are a real concern if
you're building anything public-facing. And there's the cost factor.
Every token in your prompt burns compute
resources.
Inefficient prompts aren't just slower,
they're more expensive.
Plus, prompt engineering can get
brittle.
Small wording changes might completely
shift the output, so you end up spending
time fine-tuning prompts when you really
just want results. The bottom line,
prompting in 2026 is powerful, but it
requires skill. You can't just throw
questions at AI and expect magic. You
need to understand how these systems
work and craft your prompts accordingly.
Wrap-up: what you should do next. So,
here's what it all comes down to.
Prompting is both an art and a science.
Now, the basic rule hasn't changed.
Clarity and precision get you better
results. But the tools are so much more
powerful than they were even a year ago.
Multimodal inputs, massive context
windows, integrated tool use. All of
this means your prompts can do things
that were impossible in 2023. The
techniques we covered, being specific,
using examples, forcing step-by-step
reasoning, framing the AI's role. These
aren't optional nice-to-haves.
They're the difference between spending
10 minutes fighting with AI and getting
exactly what you need in 30 seconds. And
that gap compounds. Better prompts mean
better outputs, which mean less editing,
which means more time for the work that
actually matters.
My advice, start experimenting.
Pick one task you do regularly with AI
and spend time crafting a really good
prompt for it. Test different phrasings.
Add examples. See what happens when you
force reasoning. Save the prompts that
work. Over time, you'll build up a
library of patterns you can remix for
new situations. And stay updated.
Tools like custom GPTs and AI agents are
evolving fast. What works today might be
outdated in 6 months, but if you
understand these core principles,
specificity, structure, iteration,
you'll adapt as the technology changes.
The people who master prompting in 2026
aren't going to be replaced by AI.
They're going to be the ones using AI to
do work that nobody else can match. So
get good at talking to these models.