AI Pioneer Shows The Power of AI AGENTS - "The Future Is Agentic"
452,499
Published 2024-03-29
Be sure to check out Pinecone for all your Vector DB needs: www.pinecone.io/
Join My Newsletter for Regular AI Updates 👇🏼
www.matthewberman.com/
Need AI Consulting? ✅
forwardfuture.ai/
My Links 🔗
👉🏻 Subscribe: / @matthew_berman
👉🏻 Twitter: twitter.com/matthewberman
👉🏻 Discord: discord.gg/xxysSXBxFW
👉🏻 Patreon: patreon.com/MatthewBerman
Rent a GPU (MassedCompute) 🚀
bit.ly/matthew-berman-youtube
USE CODE "MatthewBerman" for 50% discount
Media/Sponsorship Inquiries 📈
bit.ly/44TC45V
Links:
HuggingGPT - • NEW HuggingGPT 🤗 - One Model to Rule ...
ChatDev - • How To Install ChatDev 👑 An Entire AI...
Andrew Ng's Talk - • What's next for AI agentic workflows ...
Chapters:
0:00 - Andrew Ng Intro
1:09 - Sequoia
1:59 - Agents Talk
Disclosure:
I'm an inv
All Comments (21)
-
As I come from neuroscience, I insist this must be the right track. The brain also uses "agents", which are more likely to be called "concepts" or "concept maps". These are specialized portions of the network doing simple jobs, such as recognizing a face, or recognizing the face of a specific person. Tiny cost per concept, huge power of intellect when working in concert and improving dynamically.
-
I really like how you feature your sources in your videos. This "open source" journalism has real merit, and it separates authentic journalism from fake news. Keep it up! Thanks for sharing all this interesting info on AI and agents.
-
LLM AI + "self-dialogue" via reflection = "Agent". Multiple "Agents" meet. A user asks them to solve a problem. The "Agents" all start collaborating with one another to generate a solution. So awesome!
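The "self-dialogue via reflection" loop this comment describes can be sketched in a few lines. This is a hypothetical illustration, not code from the video: `llm` stands in for any prompt-to-text model call, and the function name and prompt wording are assumptions.

```python
def reflective_agent(task: str, llm, rounds: int = 2) -> str:
    """Draft -> critique -> revise loop. `llm` is any callable that
    takes a prompt string and returns the model's text response."""
    # Initial draft answer.
    answer = llm(f"Solve this task:\n{task}")
    for _ in range(rounds):
        # The model critiques its own draft ("self-dialogue").
        critique = llm(
            f"Task: {task}\nDraft answer:\n{answer}\n"
            "List any errors or weaknesses in the draft."
        )
        # The model revises the draft using its own critique.
        answer = llm(
            f"Task: {task}\nDraft:\n{answer}\nCritique:\n{critique}\n"
            "Rewrite the answer, fixing the issues raised."
        )
    return answer
```

With `rounds=2` the model is called five times: one draft, then two critique/revise pairs — the "think twice" pattern other commenters mention.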
-
Exponentially self-improving agents. Love how the era of incremental improvements over a period of years is so over.
-
I really appreciate your rational and well-considered insights on these topics, particularly your focus on follow-on implications. I follow several AI News creators, and your voice stands out in that specific respect.
-
The old saying comes to mind: think twice, say once. Perfectly applicable to AI, where an LLM checks its own answer before outputting it. Another excellent video.
-
I like the idea of replacing a single 120b (for instance) with a cluster of intelligently chosen 7b fine-tuned models if for no other reason than the hardware limitations lift drastically. With a competently configured "swarm," you could run one or two 7b sized models in parallel, adversarially, or cooperatively, each one contributing to a singular task/workspace/etc. They could even be guided by a master/conductor AI tuned for orchestrating its swarm.
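The "conductor orchestrating a swarm" idea above can be sketched as a simple router. This is an illustrative assumption, not anything shown in the video: each worker stands in for a small fine-tuned model, the specialty keywords are made up, and a real conductor would likely be an LLM making the routing decision itself.

```python
def conduct(subtasks, workers):
    """Route each subtask to the worker whose specialty keyword
    appears in it, falling back to a general-purpose worker.
    `workers` maps a specialty keyword to a callable (a stand-in
    for one small fine-tuned model)."""
    results = []
    for sub in subtasks:
        worker = next(
            (w for key, w in workers.items() if key in sub.lower()),
            workers["general"],  # conductor's fallback choice
        )
        results.append(worker(sub))
    return results
```

Because each worker is just a callable, the same loop works whether the "7b models" run in parallel processes, behind APIs, or adversarially checking one another's output.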
-
Agents? You know this is how the matrix begins, right?
-
Your commentary "dumbing things down" for people like me was very helpful in understanding all this stuff. Good video!
-
Great point about combining Groq's inference speed with agents!
-
This is one of the best vids you've made. Good commentary along with the presentation!
-
Matthew, I've watched many of your videos, and I want to thank you for sharing so much knowledge and news. This latest one was exceptionally good. At times, I've been hesitant to use agents because they seemed too complex, and didn't work on my laptop when I tried. However, this video has convinced me that I've been wasting time by not diving deeper into it. Thanks again, and remember, you now have a friend in Madrid whenever you're around.
-
You upload at the least expected random times of the day, and I'm all for it.
-
Excellent video. Helped clear away a lot of fog and hype to reveal the amazing capabilities even relatively simple agentic workflows can provide.👍
-
Very good, Matthew! Thanks for sharing. I built a simple agent and I can see it improving a lot after a few interactions.
-
Glad I saw this, your additional explanations were incredibly helpful and woven into the main talk in a non-intrusive way. Subscribed.
-
I have been thinking about agents for months without knowing what I was thinking of, until I found videos like CrewAI and swarm-agent, and my mind is blown. I am all in on this and trying to learn as much as I can, because this is for sure the future. Thanks for all your uploads.
-
Thanks for sharing. As much as I love Andrew Ng, his voice always puts me to sleep. It's like a lullaby. Thanks for elaborating on these updates; it kept me engaged.
-
I'm glad we all seem to be on the same page, but I think it would help to use a different word when thinking about the implementation of "Agents". What was a breakthrough for me was replacing the word "Agent" with "Frame of mind", or something along those lines, when prompting an "Agent" for a task in a series of steps where the "Frame of mind" changes for each step until the task is complete. I'm not trying to say anything different from what has been said thus far, only to help us humans see that this is how we think about a task. As humans, we change our "Frame of mind" so fast we often don't realize we are doing it while working on a task. For an LLM, your "Frame of mind" is a new prompt on the same or a different LLM. Thanks Matthew Berman, you get all the credit for getting me into this LLM rabbit hole. I'm also working on an LLM project I hope to share soon. 😎🤯😅
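The "frame of mind per step" framing in this comment amounts to re-prompting the same model with a different role at each stage. A minimal sketch, with assumed role names and a stand-in `llm` callable (none of this is from the video):

```python
# Each tuple is one "frame of mind": a role plus its instruction.
# The roles and instructions here are illustrative assumptions.
FRAMES = [
    ("planner", "Break the task into concrete steps."),
    ("critic", "Point out flaws or gaps in the plan."),
    ("executor", "Carry out the revised plan and give the final result."),
]

def run_with_frames(task: str, llm) -> str:
    """Feed each frame's output into the next, changing the model's
    'frame of mind' at every step via a fresh prompt."""
    context = task
    for role, instruction in FRAMES:
        context = llm(f"You are the {role}. {instruction}\n\n{context}")
    return context
```

Each step could just as easily call a different model, matching the comment's point that a "frame of mind" is simply a new prompt on the same or a different LLM.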
-
All of your videos are very informative and I like that you keep the coding bugs in rather than skipping ahead, and you demonstrate solving those issues as you go. I’ve been experimenting with ollama, LM studio, and CrewAI, with some really cool results. I’ve come to realize I’m going to need a much more expensive PC. 😂