Hello everyone and welcome back to the MattVidPro AI YouTube channel. If you’ve been watching this channel for a while and you aren’t subscribed, please consider it. It helps the channel out a lot. Oh, and if you want to get more involved with the community, we have a phenomenal Discord server where everyone seems to be getting access to DALL·E 3 lately. So if you want to see some cool DALL·E 3 images, I highly suggest you check that server out. Oh, also follow me on Twitter. Shameless shilling aside, today we actually have some really fun AI news. The main big news comes from Meta AI. You see, they just had Meta Connect, the big virtual event where they announce all their new stuff and basically talk about what they’re doing. The main purpose of that event was to show off their new piece of hardware, the Meta Quest 3, but they also spoke a lot, and I mean a lot, about AI, more than most other companies have lately. So it seems like Meta is all in on AI, and they’re actually going to be integrating it with their current hardware and software. So without further ado, let’s dive in.
So here we are, right on the Meta AI website where they talk about all their new generative AI stuff. The direction they’ve taken in the AI space is pretty surprising. And don’t worry guys, I’m going to provide as much context as I possibly can on the rest of the AI market in comparison to what Meta is doing here, because I think it’s very important that we do so.
Introducing new AI experiences from Meta. Oh my God, so exciting! A new class of generative AI features that expand and strengthen the ways people connect with each other. And they’ve got this tagline that says ‘explore, create, do more’. We now have new creative tools that let you create and share custom stickers or update the visual style of your photos with a simple text prompt. Here’s the big one: chat with 28 different AIs and get unique perspectives on topics like travel, games, and food. These new experiences will help you have fun, connect, and learn something new. We’ll talk a little bit more about that, but this right here, chatting with 28 different AIs, well, this is pretty similar to some websites that have already existed for quite some time now. And in fact, in some ways, those websites might actually be better.
First thing here is a personal assistant. This is an Alexa, a ChatGPT, a Siri, but they’re just calling it Meta AI, and I think that’s a much better name than Bard. By the way, every time I mention that I don’t like the name Bard for an AI chatbot, I get someone named Bard in the comments saying, ‘You know, I’m really offended by that.’ And look, if your name is Bard, I think that’s a fantastic name, but you’re not an AI chatbot and you’re not a product. If Meta were to call this John AI, I think that would be pretty weird. Anyways, this is now in beta. Meta AI is an assistant that you can chat with one-on-one or message inside of group chats, which is really freaking cool actually. Imagine if you could just invite ChatGPT into your Apple group messages or even your Snapchat group messages. It has the ability to make recommendations, make you laugh when you need a good joke, settle a debate in a group chat, or generally be there to answer questions or teach you something new. So yeah, it’s basically ChatGPT. It’s built on Meta’s Llama 2, the large language model that is open source and anyone can use. It’s a fantastic open-source model, but it doesn’t really compete at that ChatGPT level of quality in terms of responses. Still, it’s not bad. As you can see, they’ve got a little example where it’s doing some very basic creative work. Hey, they’re still ahead of Apple in this game, because Apple hasn’t upgraded Siri with any form of LLM yet. But maybe they’re taking their sweet time to let things bake in the oven, so to speak. Anyways, moving on.
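And since Llama 2 is openly released, you can actually play with the same base model yourself. Here’s a minimal sketch using the Hugging Face transformers library, assuming you’ve been granted access to the gated meta-llama/Llama-2-7b-chat-hf weights and have a GPU handy. To be clear, this is just the raw open model, not Meta AI’s tuned assistant with all the extra tooling on top.

```python
# Minimal sketch: chatting with the open Llama 2 chat model via transformers.
# Assumes access to the gated meta-llama/Llama-2-7b-chat-hf repo on Hugging Face
# (huggingface-cli login) and the `accelerate` package for device_map="auto".
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-chat-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Llama 2 chat models expect the [INST] ... [/INST] instruction format.
prompt = "[INST] Recommend three fun activities for a group trip to Tokyo. [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Again, the actual Meta AI assistant layers a lot more on top of this (fine-tuning, safety work, Bing search), so don’t expect identical answers from the raw open weights.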
This is a pretty huge announcement. They haven’t spoken much about their AI art models yet, but they’ve been working on them. Dream it, create it, generate images you can’t capture with a camera. Just describe an image for Meta AI to create, like ‘imagine a fairy cat in a rainbow forest’, and watch your idea come to life. All right, right off the bat, guys, you’re all going to recognize the ‘/imagine’ command from Midjourney. Now, it’s not like Midjourney owns this or anything, but Meta, what are you thinking? Why would we have to type ‘/imagine’ every time we want to create something when we’re chatting with the bot? Shouldn’t the bot just be able to pick up on the context that we want to create something? Even Bing AI can do that. We can tell Bing AI, ‘Hey, I want to make an image of this’, and it’ll be like, ‘Okay, I’ll try to do that’. And I assume when ChatGPT with DALL·E 3 integration comes out in the next few weeks, it’ll be very similar. ChatGPT already has this functionality with plugins, where it can just recognize when it needs to use a plugin, and there are hundreds of plugins out there. So why can’t you guys just make it so I don’t have to use this ‘/imagine’ command? Maybe they thought this was a marketing tactic because it’s so popular with Midjourney. Guys, I don’t think Midjourney customers are really the same target market here. I think you’re really targeting your current Meta customers, right? Because this is a chatbot that’s going to be available in Facebook Messenger, Instagram, and WhatsApp. Who knows? Maybe I’m wrong, but yeah, you can see from this little demo image, this thing actually ain’t half bad. Like, it looks pretty good. A fairy cat in a rainbow forest. It all looks pretty fun. It’s probably somewhere around the SDXL/Midjourney range. It is a home-brewed model, though, so if they were to, say, release this thing open source, it would be a pretty big shock. However, they haven’t made any announcements. They didn’t show a single example of this thing generating text, though, so I don’t think this model is as capable as DALL·E 3. Probably not even close. It might even be less capable than SDXL, because SDXL can actually manage short words and simple phrases of text sometimes.
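To make that complaint concrete, here’s a toy sketch (Python, and nothing to do with Meta’s actual code) of how an assistant could spot image-generation intent from plain language instead of requiring a slash command. The generate_image and chat_reply helpers are hypothetical stand-ins for real model calls.

```python
# Toy sketch: routing a chat message to image generation based on intent,
# instead of requiring an explicit "/imagine" command.
import re

IMAGE_INTENT = re.compile(
    r"\b(draw|paint|generate|make|create|show)\b.*\b(image|picture|photo|art|sticker)\b",
    re.IGNORECASE,
)

def handle_message(text: str) -> str:
    """Send the message to the image model if it looks like an image request."""
    if IMAGE_INTENT.search(text):
        return generate_image(text)   # hypothetical image-model call
    return chat_reply(text)           # hypothetical LLM call

def generate_image(prompt: str) -> str:
    return f"[image generated for: {prompt}]"

def chat_reply(prompt: str) -> str:
    return f"[chat reply to: {prompt}]"

print(handle_message("Can you make an image of a fairy cat in a rainbow forest?"))
```

The keyword check here is obviously crude, which is exactly why the nicer approach is what ChatGPT does with plugins and tool calling: let the model itself decide when the image tool is needed. Either way, the user shouldn’t have to memorize a command.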
Next up here, you can invite Meta AI into your group chats to get recommendations for a group trip, spark ideas for dinner party recipes, or just have a bit of fun with photo-realistic images, so everyone in the chat can generate. By the way, this is all free. Those group chats just seem so interesting to me, but I don’t have real group chats in any of these apps, so I don’t really know if I’m going to be using this day-to-day, just because I’m not a huge Meta customer. This isn’t necessarily anything new, but there isn’t a lot of group chat AI integration out there just yet.
Finally, this thing can search the web with Bing. So yeah, this can actually use the same search that I believe is integrated inside of ChatGPT and, of course, Bing AI search. So this is basically a partnership with Microsoft, two companies competing in the AI space a little bit but also working together at the same time. It’s a little weird, but yeah, you can ask, ‘@Meta AI, what is the average year-round temperature in Buenos Aires?’ and it’ll give you a result that comes from Bing. So it does have access to current, real-world information.
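If you’re curious what ‘search the web with Bing’ roughly looks like under the hood, here’s a rough sketch of the retrieval half using the public Bing Web Search v7 REST API. The endpoint and header follow Microsoft’s published docs, but treat the wiring as an illustration of the general pattern, not Meta’s actual integration; the assistant would then feed the returned snippets into the LLM’s prompt so the answer is grounded in live data.

```python
# Rough sketch: fetching web snippets from the Bing Web Search v7 API,
# which an assistant could paste into an LLM prompt for grounded answers.
# Requires a Bing Search resource key from Azure (placeholder below).
import requests

BING_ENDPOINT = "https://api.bing.microsoft.com/v7.0/search"
API_KEY = "YOUR_AZURE_BING_KEY"  # hypothetical placeholder, not a real key

def search_snippets(query: str, count: int = 3) -> list[str]:
    resp = requests.get(
        BING_ENDPOINT,
        headers={"Ocp-Apim-Subscription-Key": API_KEY},
        params={"q": query, "count": count},
        timeout=10,
    )
    resp.raise_for_status()
    pages = resp.json().get("webPages", {}).get("value", [])
    return [page["snippet"] for page in pages]

print(search_snippets("average year-round temperature in Buenos Aires"))
```

The second half of the pipeline, stuffing those snippets into the model’s context and asking it to answer from them, is what keeps the chatbot from making up the Buenos Aires weather.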
So, to wrap things up: Meta is rolling out a whole batch of new generative AI experiences. There’s the Meta AI personal assistant, the 28 different character AIs you can chat with, the new image generation model, and more, all integrated with their existing hardware and software. The pitch is that these experiences will help people have fun, connect, and learn something new, and Meta is partnering with Microsoft’s Bing for the web search side. Overall, Meta is making a pretty significant push in the AI space, and they clearly want to put these tools directly in front of their existing users.