Transcript
0IHE8VyLhwM • GPT-5 Prompts Have Changed: 5 New Techniques Sam Altman Wants You To Use
/home/itcorpmy/itcorp.my.id/harry/yt_channel/out/BitBiasedAI/.shards/text-0001.zst#text/0218_0IHE8VyLhwM.txt
Kind: captions
Language: en
I know how frustrating it feels when you spend 20 minutes crafting what you think is the perfect ChatGPT prompt, only to get a response that completely misses the mark. You hit regenerate, try rephrasing, maybe even start over from scratch. I've been there too many times, staring at my screen and wondering why a tool that's supposed to save me time is actually costing me hours.

But here's what I discovered after months of trial and error: we're not bad at using ChatGPT. We're just using outdated techniques. The prompting methods that worked last year are actually sabotaging your results.

Now, welcome back to bitbiased.ai, where we do the research so you don't have to. Join our community of AI enthusiasts with our free weekly newsletter; click the link in the description below to subscribe, and you'll get the key AI news, tools, and learning resources to stay ahead.

In this video, I'm sharing five powerful prompt engineering techniques that actually work in 2025. These methods will help you get better answers faster, stop wasting time on regenerations, and finally unlock ChatGPT's full potential without feeling like you need a computer science degree. We'll cover the conversational approach that treats ChatGPT like a collaborator, advanced role-based prompting, how to get perfectly structured outputs, and techniques for continuously improving your prompting skills. First up, let me show you why the way most people prompt is making everything harder than it needs to be.

The foundation shift.
Here's what most people don't realize about ChatGPT in 2025: the model has evolved dramatically. Those elaborate, hyper-detailed prompts everyone was teaching in 2023 are actually counterproductive now, because the newer models are trained to understand context and infer intent much better. Think about it like talking to a close friend versus giving instructions to a stranger. With your friend, you don't need to explain every tiny detail, because they already understand your context. That's where we are with modern ChatGPT. The sweet spot isn't about writing longer, more detailed prompts. It's about giving the right information in the right way.
The biggest mistake I see is treating every task the same. A creative writing request needs a completely different approach than data analysis or code debugging. Most guides treat prompting like one-size-fits-all, and that's leaving massive productivity on the table. The new foundation comes down to three principles. First, context layering: giving information in the right order and depth. Second, adaptive specificity: knowing when to be precise and when to let the model reason. Third, outcome-focused structuring: telling ChatGPT what you want to achieve, not exactly how to get there. These aren't just theories. I've tested hundreds of prompts to find what consistently produces better results, and wait until you see the difference in your daily workflow.

The conversational context method.

Let's dive into the first game-changing technique: the conversational context method. This is probably the most underutilized strategy out there, and it completely transformed how I work with ChatGPT.
Here's the core idea: instead of dumping everything into one massive prompt, you build understanding through progressive conversation. Think of it like teaching someone a concept. You start with the foundation, check understanding, then layer on more details. Let me show you with a real example. Say you need a marketing email for a product launch.

The old way: write one giant prompt with your product details, audience demographics, tone requirements, key features, call to action, length specs, everything at once. Sometimes it works, but usually you get something close but not quite right. Then you're stuck doing endless regenerations.

The conversational method flips this. Start simple: "I'm launching a productivity app for remote teams next month." That's it. Let ChatGPT respond.
Then layer in the next detail: "The unique feature is real-time collaboration that feels more natural than current tools." You're building understanding step by step. Next: "My audience is tech-savvy project managers at midsize companies who are frustrated with existing tools." Then: "I want an announcement email that's exciting but not hypey, professional but approachable." See what's happening? By the time you ask for the actual email, ChatGPT has absorbed all this context through natural dialogue. The output feels like it came from someone who truly gets your project, because you built that understanding collaboratively.
But here's where the magic really happens. After ChatGPT generates the email, instead of hitting regenerate, you refine through conversation: "This is great, but the opening feels too formal. Can we make it more conversational?" Each refinement builds on established context. You're sculpting the output through collaboration, just like working with a human assistant.
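If you work with the API rather than the chat window, the same layering applies: each detail becomes its own turn, and every turn travels with the final request. Here's a minimal sketch, assuming the role/content message format of the OpenAI chat-completions API; the assistant replies are stand-ins for real model output.

```python
# Conversational context method: build understanding turn by turn instead
# of one giant prompt. Assistant texts below are placeholders for what the
# model would actually say in a live session.

def add_turn(history, user_text, assistant_text):
    """Append one user/assistant exchange to the running conversation."""
    history.append({"role": "user", "content": user_text})
    history.append({"role": "assistant", "content": assistant_text})
    return history

history = []

# Layer context in one detail at a time, letting the model respond each turn.
add_turn(history,
         "I'm launching a productivity app for remote teams next month.",
         "(model acknowledges and asks clarifying questions)")
add_turn(history,
         "The unique feature is real-time collaboration that feels more "
         "natural than current tools.",
         "(model reflects the positioning back)")
add_turn(history,
         "My audience is tech-savvy project managers at midsize companies.",
         "(model summarizes the audience)")

# Only now ask for the deliverable; all earlier turns travel with it.
history.append({"role": "user",
                "content": "Write an announcement email that's exciting but "
                           "not hypey, professional but approachable."})

print(len(history))  # 7 messages: three context layers plus the final ask
```

The refinement step is just one more appended user message, so nothing established earlier is lost.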
And surprisingly, this is often faster than the single-prompt approach when you factor in all those regenerations and heavy edits you'd otherwise do. Plus, the quality is consistently higher, because the model genuinely understood your context rather than just parsing requirements.

Role-based prompting 2.0.

You've probably heard about basic role prompting: "Act as an expert marketer" or "You are a professional developer." That's the old version. The new iteration is significantly more powerful. The difference is depth. Old role prompting was surface level; ChatGPT would adjust its language slightly.
Role-based prompting 2.0 creates a complete professional identity with specific expertise, a philosophical approach, and contextual experience that fundamentally shapes how the model thinks about your problem. Here's the comparison.

Old way: "Act as a senior financial adviser and help me create a budget."

New way: "You're a financial adviser specializing in freelancers with irregular income. Your approach prioritizes psychological sustainability over aggressive optimization, because you've seen clients burn out from restrictive budgets. You favor flexible frameworks with built-in buffers. Given this philosophy, help me create a budget for my variable income."

See the difference? You're not assigning a job title. You're creating a coherent professional perspective with values, experience, and specific expertise.
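In API terms, the natural home for this rich persona is the system message, so it colors every later turn. A minimal sketch, again assuming the OpenAI chat-completions message format; the persona text is the example above.

```python
# Role-based prompting 2.0: the deep persona goes into the system message,
# not into the task itself, so it shapes the whole conversation.

PERSONA = (
    "You're a financial adviser specializing in freelancers with irregular "
    "income. Your approach prioritizes psychological sustainability over "
    "aggressive optimization, because you've seen clients burn out from "
    "restrictive budgets. You favor flexible frameworks with built-in buffers."
)

def role_prompt(persona, task):
    """Build a two-message conversation: rich persona first, then the task."""
    return [
        {"role": "system", "content": persona},
        {"role": "user", "content": task},
    ]

messages = role_prompt(
    PERSONA,
    "Given this philosophy, help me create a budget for my variable income.",
)
```

Follow-up questions appended to `messages` inherit the same perspective for free.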
The output changes completely, because the model operates from this rich professional identity. But here's where it gets really interesting: multi-perspective analysis. Ask ChatGPT to approach the same problem from multiple expert angles and identify where they conflict. For a business decision about adding a new product feature, you might prompt: "Analyze this from three perspectives: a growth-focused marketing director prioritizing user acquisition, a UX designer prioritizing simplicity, and a CFO prioritizing profitability. Explain each recommendation and the underlying priorities. Then identify the tension points between perspectives." This prevents single-lens thinking. You're getting multiple expert consultations in one conversation, and the most valuable insights emerge from the friction between different professional viewpoints. That's where the non-obvious solutions live.

You can also use constrained expertise prompting: "You're a startup adviser with deep B2B SaaS expertise but limited consumer app experience. You're working with incomplete data, so focus on first-principles thinking and testable hypotheses rather than market statistics." By deliberately limiting the expert role, you get more realistic, actionable advice that matches real-world scenarios where you don't have perfect information.
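The multi-perspective pattern above is easy to reuse if you generate it from a list of personas. A minimal sketch; the decision text is a hypothetical stand-in, and the three roles are the ones from the example.

```python
# Multi-perspective analysis: one prompt that asks for each expert's
# recommendation plus the tension points between them.

def multi_perspective_prompt(decision, perspectives):
    """Compose a single prompt covering several expert lenses."""
    lines = [f"Analyze this decision from {len(perspectives)} perspectives:",
             decision, ""]
    for i, (role, priority) in enumerate(perspectives, 1):
        lines.append(f"{i}. A {role} prioritizing {priority}.")
    lines += ["",
              "Explain each recommendation and its underlying priorities.",
              "Then identify the tension points between the perspectives."]
    return "\n".join(lines)

prompt = multi_perspective_prompt(
    "Should we add a new product feature this quarter?",  # hypothetical
    [("growth-focused marketing director", "user acquisition"),
     ("UX designer", "simplicity"),
     ("CFO", "profitability")],
)
```

Swapping the list of personas lets you reuse the same scaffold for any decision.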
Structured output prompting.

Now, let's talk about structured output prompting: getting responses in exactly the format you need, which saves hours of manual reformatting. Here's the problem. You ask ChatGPT for information and get paragraphs of text, but what you really need is a comparison table, a step-by-step workflow, or categorized pros and cons. When the format doesn't match your needs, you waste time restructuring everything manually. Structured output prompting solves this by defining your exact desired structure up front. But it's not just saying, "Give me a list." The advanced version defines hierarchies, relationships, and semantic categories that match how you actually think about the information.
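Declaring the categories and per-category fields up front can itself be scripted. A minimal sketch of such a framework builder; the category and field names mirror the remote-work example this section uses.

```python
# Structured output prompting: declare the categories and the per-category
# fields up front, so the response arrives pre-organized.

def framework_prompt(topic, categories, fields):
    """Request the same fields for every category of a topic."""
    lines = [f"Analyze {topic} using this framework.",
             f"For each category ({', '.join(categories)}), provide:"]
    lines += [f"- {field}" for field in fields]
    lines.append("Format the answer to make trade-offs immediately visible.")
    return "\n".join(lines)

prompt = framework_prompt(
    "remote work",
    ["productivity", "well-being", "collaboration", "cost"],
    ["one primary benefit, with evidence",
     "one challenge, with a mitigation strategy",
     "one unexpected consequence that's often overlooked"],
)
```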
For example: "Analyze remote work using this framework. For each category (productivity, well-being, collaboration, cost), provide one primary benefit with evidence, one challenge with a mitigation strategy, and one unexpected consequence that's often overlooked. Format it to make trade-offs immediately visible." You're not just requesting a format. You're defining a mental model that serves a specific analytical purpose.

Another powerful application is template-based prompting. Provide an actual template structure and ask ChatGPT to fill it: "Meeting date: [date]. Attendees: [list]. Key decisions: [numbered list]. Action items: [table with task, owner, deadline columns]. Discussion points: [bullets]. Next meeting: [details]. Fill this in for our quarterly planning meeting on the product road map."

You can also enforce quality standards through structure: "Create a product comparison where each product gets an overview (exactly two sentences), key features (three features with one-sentence explanations), pricing ($X/month with inclusions), best for (one specific use case), and one distinctive advantage no other product shares." Every product gets all five components with consistent depth. This ensures consistency: ChatGPT can't give uneven comparisons, because the structure itself enforces balance. If you frequently need competitive analyses or project proposals, develop standardized templates that consistently produce high-quality outputs. You're creating custom output formats optimized for your specific workflow.
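A standardized template like the meeting notes above can live in code and be wrapped in a fill request on demand. A minimal sketch; the template fields mirror the example, and the field labels are placeholders the model is asked to replace.

```python
# Template-based prompting: hand the model a literal template with
# placeholders and ask it to fill every field in place.

MEETING_TEMPLATE = """\
Meeting date: [date]
Attendees: [list]
Key decisions: [numbered list]
Action items: [table with task, owner, deadline columns]
Discussion points: [bullets]
Next meeting: [details]"""

def fill_template_prompt(template, topic):
    """Ask the model to populate every placeholder in the template."""
    return (f"Fill in this template for {topic}. "
            "Keep the field labels exactly as given:\n\n" + template)

prompt = fill_template_prompt(
    MEETING_TEMPLATE,
    "our quarterly planning meeting on the product road map",
)
```

Keeping the template in one place means every meeting summary you request comes back in an identical shape.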
Iterative refinement and constraint-based prompting.

Let's combine two powerful techniques: iterative refinement and constraint-based prompting.

First, iterative refinement. This separates beginners from advanced users. When ChatGPT gives you an output, resist accepting it as is or completely regenerating. Instead, analyze it like an editor. What's working? What's missing? What needs adjustment? Then give specific, surgical refinement prompts that improve targeted aspects while preserving the good parts. Say ChatGPT wrote a product description that's decent but not perfect: the technical details are great, but the emotional appeal is missing and the opening is too generic. Your refinement: "Keep all technical specifications exactly as they are; they're perfect. But rewrite the opening to start with a specific customer pain point, then enhance the middle section to include emotional benefits alongside the features. The ending can stay." You're directing attention to specific improvements while protecting what works. This builds on established context rather than starting over. Each refinement compounds on previous improvements.
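Programmatically, refinement means appending a surgical follow-up to the existing conversation rather than starting a fresh one. A minimal sketch, assuming the OpenAI chat-completions message format; the earlier turns are stand-ins for a real session.

```python
# Iterative refinement: instead of regenerating, append a targeted
# follow-up so all prior context (and the good parts) are preserved.

conversation = [
    {"role": "user", "content": "Write a product description for our app."},
    {"role": "assistant", "content": "(first draft of the description)"},
]

def refine(conversation, keep, change):
    """Add a refinement turn that protects what works and targets what doesn't."""
    conversation.append({
        "role": "user",
        "content": f"Keep {keep} exactly as written. {change}",
    })
    return conversation

refine(conversation,
       "all technical specifications",
       "Rewrite the opening to start with a specific customer pain point, "
       "and enhance the middle section with emotional benefits.")
```

Because the draft stays in the history, the model edits it in place instead of inventing a new one from scratch.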
Now, constraint-based prompting. This seems counterintuitive, but it consistently produces better results: deliberately add limitations to force creativity and focus. When ChatGPT has unlimited freedom, responses can be generic or unfocused. Constraints force prioritization and creative solutions within boundaries. Think haikus or elevator pitches: constraints force clarity.

Strategic length constraints: "Explain this concept in exactly three sentences: what it is, why it matters, how it applies."

Vocabulary constraints: "Explain blockchain without using the words digital, technology, currency, computer, network, or decentralized." This forces analogies and fundamental concepts instead of jargon, making complex ideas accessible.
Resource constraints: "Create a marketing strategy with a $500 budget, one part-time person, 3 months, and no paid ads. It must be fully implementable." You get actually executable strategies instead of idealistic plans that assume unlimited resources.

Forced prioritization: "Recommend only three actions, each requiring under one hour. No general advice. Every recommendation must be specific enough to start immediately." This eliminates vague suggestions and forces practical guidance.

The key is choosing strategic constraints that align with your goals. Random constraints don't help, but the right boundaries eliminate weak solutions and force better ones.
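Constraints are easiest to keep honest when they're listed explicitly rather than buried in prose. A minimal sketch of bolting hard limits onto a task; the budget, team, and timeline are the numbers from the example, and the task wording is a stand-in.

```python
# Constraint-based prompting: append hard limits to a task so the model
# has to prioritize instead of producing an idealized, unbounded plan.

def constrained_prompt(task, constraints):
    """Attach explicit constraints to a task, one per line."""
    return (task + "\n\nHard constraints (all must hold):\n" +
            "\n".join(f"- {c}" for c in constraints))

prompt = constrained_prompt(
    "Create a marketing strategy for our launch.",
    ["$500 total budget",
     "one part-time person",
     "3-month timeline",
     "no paid ads",
     "must be fully implementable as written"],
)
```

Listing constraints one per line also makes it trivial to check the response against each limit afterwards.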
Meta-prompting for continuous improvement.

Finally, let's talk about meta-prompting: using ChatGPT to improve your own prompting skills, creating a learning loop that compounds over time. Instead of just asking ChatGPT to complete tasks, ask it to analyze your prompts and suggest improvements. You're using AI as a prompting coach, which accelerates learning dramatically. The simplest application is prompt analysis. Before sending a prompt, ask: "Analyze this prompt I'm about to use: [your prompt]. What assumptions am I making? What information might be missing? What ambiguities could cause unclear outputs? How could this be restructured for better results?" ChatGPT provides a detailed analysis, pointing out gaps you missed. Then refine your prompt based on this feedback before using it.
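The prompt-analysis step can be wrapped as a helper that turns any draft prompt into a critique request. A minimal sketch; the draft prompt is a hypothetical example, and the questions are the ones listed above.

```python
# Meta-prompting: wrap the prompt you're about to send inside a critique
# request, so the model audits it before you actually use it.

ANALYSIS_QUESTIONS = [
    "What assumptions am I making?",
    "What information might be missing?",
    "What ambiguities could cause unclear outputs?",
    "How could this be restructured for better results?",
]

def analyze_prompt(draft):
    """Build a critique request around a draft prompt."""
    return ("Analyze this prompt I'm about to use:\n\n"
            f'"""{draft}"""\n\n' +
            "\n".join(ANALYSIS_QUESTIONS))

critique_request = analyze_prompt("Write a blog post about remote work.")
```

Send `critique_request` first, fold the feedback into the draft, then send the improved prompt.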
Each prompt becomes better than the last. More powerfully, ask ChatGPT to generate example prompts for your needs: "I frequently need to [specific task]. Generate three different prompt structures using different strategies. Explain the advantages and limitations of each." This exposes you to different patterns and builds your mental library. Over time, you internalize these patterns and apply them naturally.

Try prompt decomposition: "I need to create customer personas for a new product. What information should I provide? What structure should I request? What pitfalls should I avoid? Give me a complete framework." The framework becomes a reusable template.

Or ask ChatGPT to predict interpretation issues: "Pretend you're a novice user. If I send this prompt, [your prompt], what would confuse you? What assumptions would you make? How might you misinterpret it?" This helps identify where prompts might fail and makes your prompting more robust.

You can also do quality assurance backwards: "I got this output from a previous prompt: [output]. Based on this result, what was likely wrong with my original prompt? How should I have prompted for a better result?" Analyzing outputs backwards teaches you which choices lead to which results.

The ultimate technique is iterative prompt evolution. Maintain a document of your best prompts. Regularly analyze them for improvements, implement the changes, and track which versions perform better. You're essentially A/B testing your own prompting strategy, so your skills compound over time.

There you have it.
Five powerful prompt engineering techniques that'll transform how you use ChatGPT in 2025. We covered the foundation shift, showing why the old methods don't work anymore; the conversational context method for building understanding through dialogue; role-based prompting 2.0 for creating complete professional perspectives; structured output prompting for getting exactly the format you need; plus iterative refinement and constraint-based techniques for better results, and meta-prompting for continuous skill improvement.

Here's what I want you to do. Pick just one technique, the one most relevant to how you use ChatGPT daily, and commit to practicing it consistently for the next week. Don't try to master everything at once. Get comfortable with one method until it feels natural, then come back and add another. The real power emerges when you combine these techniques. Use conversational context with role-based prompting. Apply structured output while using constraints. Layer iterative refinement with meta-prompting analysis. You'll discover combinations that work perfectly for your specific needs.

Drop a comment and tell me which technique you're trying first and what you hope to achieve. If you discover a combination that works particularly well, share it; we all improve when we learn from each other. If this helped you level up your ChatGPT skills, hit that like button so others can find these techniques. Subscribe for more AI tools, productivity strategies, and ways to stay ahead in this rapidly evolving space.

Remember, prompt engineering isn't about memorizing formulas. It's about understanding principles and developing intuition. These techniques are frameworks, not rigid scripts. Adapt them to your style, experiment, and trust your instincts about what works for you. Thanks for watching. In the next video, we're diving into ChatGPT's custom instructions feature to automate your personal prompting style. You won't want to miss it.