Transcript
JvbwPjVjR8Q • I Tried EVERY Google AI Tool (These are my Favorites)
Kind: captions
Language: en
I just spent two weeks testing all 30 of
Google's AI tools because I kept seeing
people confused about which ones are
actually worth their time. What I found
is that most people spend time on tools
that don't really help them while
ignoring the ones that could actually
save them hours every week in real work.
And if you're trying to learn all of
them, you're doing it wrong. You only
need about five of these to completely
transform your entire workflow no matter
what you do. So, I'm ranking every
single one from the tools I never touch
to the ones I use daily. I've broken
everything down into four main
categories: developer tools, creative
and media tools, research and
productivity tools, and the experimental
stuff from Google Labs. An important
thing you need to understand before we
jump in is that none of the tools that
I'm about to reveal today are actually
bad. I'm ranking them based on how I and
most people use them and how they fit
into my workflow. Starting off, we have
Pomelli. This one scans your website and
automatically creates social media posts
that match your brand. It pulls in your
colors, your fonts, your messaging, and
generates content that actually looks
like it came from you. It's useful if
you hate designing social graphics, but
the posts aren't award-winning. They're
just good enough for most campaigns.
Moving on to NotebookLM. This tool has
blown up recently because of its podcast
feature, but that's not even the best
part. What makes NotebookLM special is
that it's fully grounded in your
sources. You upload documents, and it
only pulls information from what you
gave it. Like Gemini, you can upload
whatever you need. Then you can ask it
questions, generate summaries, or even
create study guides. And yes, it can
turn your research into a podcast with
two AI hosts discussing your sources.
"This is the brief on Ocean Sunfish."
The audio quality is legitimately
impressive, but the real power is in how
it organizes information. You can create
flashcards, quizzes, mind maps, and
briefing documents all from the same set
of sources. It's like having a research
assistant that actually understands your
project. I use Notebook LM every time
I'm working on a complex project that
involves multiple sources. It keeps
everything organized and makes it easy
to find exactly what I need. Next up are
Nano Banana Pro and Veo 3.1. Nano Banana
Pro is Google's image generation model,
and it's built specifically for creating
high-quality visuals. The quality here
is actually really impressive. It
handles details way better than most
other AI image generators, especially
when it comes to realism and accuracy.
It's become one of the top models for
anyone who needs professional-looking
images without the usual AI artifacts you
see in other tools. And then there's Veo 3.1.
This is Google's newest video generation
tool, and it's one of the best models
available right now. It creates
cinematic video with realistic motion,
professional lighting, and it even
includes audio. You can animate static
images, create product videos, or
generate entire scenes from text
prompts. The quality is insane. You can
use it for anything you can think of.
Now, I need to tell you something
important. I've been using both of these
models almost daily, but I don't
actually access them through Google's
platform. The reason for that is
incredibly simple. Just a couple of
months ago, Kling was the best platform
out there. Then, Veo dropped and it
became the go-to. Now, Sora 2 is making
waves. And I can promise you that in a
month, yet another platform is going to
be breaking the market. If you subscribe
directly to Google just for Veo 3.1,
you're basically locked in. And the
moment something better comes out,
you're stuck paying for a tool that's
already outdated. So instead, I use
Higgsfield, which is an all-in-one
platform that gives you direct access to
Nano Banana, Veo, Sora, Kling, and all
the other top-of-the-line models in one
place. I'll leave a link down in the
description if you want to check it out.
Now, we have Gemini Nano and the smaller
models like Project Astra and Gemma.
These are designed to run locally on
devices like Raspberry Pi or inside
mobile apps. If you're a developer
building lightweight AI features into
hardware, these are useful. But for the
average person creating content or doing
research, you'll never need to think
about these. There's also Google
Workspace integration. Gemini is now
built into Gmail, Docs, Sheets, and
Slides. It can draft emails, summarize
documents, and generate charts. That's
powerful if you live inside Google
Workspace. Then there is Google AI
Studio. This is Google's developer
playground where you can test different
models, build prototypes, and experiment
with advanced features. If you're
serious about building AI tools or
testing prompts at scale, this is
essential. But if you're just trying to
get work done, you can skip it and stick
with the main Gemini interface I'll talk
about in a bit. Next is Mixboard, which
is designed for creating mood boards.
You can use it to organize visual ideas
and inspiration, which is helpful if
you're working on creative projects or
trying to nail down a specific
aesthetic. Mixboard is essentially
Google's answer to a Pinterest board,
but with AI assistance. You can upload
images, add notes, and organize
everything visually. The AI can suggest
related images, identify color palettes,
and even help you find visual themes
across your collection. Now, let's talk
about Opal. This is Google's automation
tool, and it's basically their answer to
Zapier or Make. You can build workflow
apps using plain English. You describe
what you want, and it constructs the
logic using a node-based system. For
example, I built a tool where I enter a
topic and the AI generates 10 viral
YouTube video titles about it. That took
me about 2 minutes to set up. You can
use this for client onboarding, lead
screening, or anytime you need a
consistent way to collect information
and have AI handle it right away. The
key difference between Opal and other
automation tools is simplicity. It's not
trying to do everything, so it does a
few things really well, and it's
completely free. There's also Firebase
Studio, where you can actually build
complex software, and tools like
Antigravity, which is basically
Google's answer to Cursor. There's Jules
for delegating coding tasks and Stitch
for website designs. But honestly, if
you're serious about building complex
code right now, the best way to do that
is still with Cursor and Claude. Then
there's Disco, which creates interactive
tabs from your browser history. The idea
is interesting, but in practice, I found
it more confusing than helpful. It's
trying to turn your browsing into
something more organized, but it just
didn't click for me. Lens is another
one. Google's visual search tool that
lets you point your camera at something
and get information about it. It's
incredibly useful for getting
information fast without having to type
anything. And then there's Gemini
itself. This is Google's flagship AI
model, and it's built completely
differently from ChatGPT or Claude.
It's multimodal, which means it doesn't
just understand text, but it can process
any form of document you give it, even
entire code bases, all at once. You can
upload PDFs, spreadsheets, images,
videos, and audio files, and Gemini will
analyze all of them together. What makes
Gemini stand out is its context window.
It can hold up to 2 million tokens in
memory, which is roughly 1,500 pages of
information in a single conversation.
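To see where the page counts in this comparison come from, here's a quick back-of-the-envelope sketch in Python. The tokens-per-page ratio is an assumption derived from the "2 million tokens is roughly 1,500 pages" figure, not an official number; real token counts vary with language and formatting.

```python
# Rough tokens-to-pages conversion. The ratio below is back-derived
# from "2,000,000 tokens ~ 1,500 pages" (an assumption, not a spec).
TOKENS_PER_PAGE = 2_000_000 / 1_500  # ~1,333 tokens per printed page

# Context window sizes quoted in this comparison.
context_windows = {
    "Gemini": 2_000_000,
    "ChatGPT": 128_000,
    "Claude": 200_000,
}

for model, tokens in context_windows.items():
    pages = round(tokens / TOKENS_PER_PAGE)
    print(f"{model}: {tokens:,} tokens ~ {pages} pages")
# Gemini: 2,000,000 tokens ~ 1500 pages
# ChatGPT: 128,000 tokens ~ 96 pages
# Claude: 200,000 tokens ~ 150 pages
```

Under that single assumed ratio, the three quoted page counts (1,500, 96, and 150) all fall out consistently.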
That means you can upload massive
documents and it'll remember every
detail without losing track. To put this
in perspective, ChatGPT's largest
context window is 128,000 tokens, which
is about 96 pages. Claude's is 200,000
tokens, roughly 150 pages. Gemini's 2
million token context window is 10 times
larger than the competition. This isn't
just a numbers game. It fundamentally
changes what you can do with the AI. You
can upload an entire book, a year's
worth of meeting notes, a complete
codebase, or dozens of research papers,
and Gemini will understand all of it
simultaneously. However, the best part
about Gemini is deep research. This
isn't just a search engine. It's an
autonomous research agent. You give it a
research question and instead of pulling
up a few links, Gemini goes out, reads
dozens of sources, synthesizes all the
findings, and writes you a full
multi-page report with citations. I've
used deep research for everything from
market analysis to technical research to
content planning. The quality of the
reports is consistently high, and the
time savings are massive. Instead of
spending hours reading through articles
and taking notes, I can get a
comprehensive overview in minutes and
then spend my time on higher level
thinking and decision-making. Let me
show you what that looks like. I'll
type, "Research the five most discussed
AI breakthroughs from the last 90 days.
Focus on practical applications for
marketers. Include source links." Gemini
takes a few minutes, scans the web, and
comes back with a structured report with
detailed insights backed by multiple
sources. This is the kind of research
that would normally take me at least 2
hours, and Gemini just did it in 2
minutes. If you need to gather
information fast, this feature alone
makes Gemini worth using. There are also
some learning apps from Google Labs that
could be useful for education. The cool
thing about Google Labs is that tools
like NotebookLM and Opal started here
as experiments. So if you want to stay
ahead of the curve and see what's coming
next, this is worth checking regularly.
In this category, there's also Imagen
3, another image generation tool, though
not as good as Nano Banana, and Whisk
and Whisk Animate for rapid creative
exploration. Plus, Music AI Sandbox,
which is still in its early days, but if
you're a musician, you might want to
keep your eye on that. With Gemini's
Deep Research, NotebookLM, and access to
Nano Banana Pro and Veo 3.1 through
Higgsfield, you can automate research,
create professional content, and build
workflows that save you hours every
week. If you want to start using the
best AI models without paying for
multiple subscriptions, I'll leave a
link to Higgsfield in the description
below. Having access to these tools is
important, but the real difference comes
down to how you prompt them, which is
why I watched Google's 6-hour prompt
engineering course and broke it down
into 10 minutes. Click the video on the
screen to watch that, and I'll see you
there.