AI Chatbot Showdown: ChatGPT Plus vs. Gemini Advanced
Explore the capabilities of ChatGPT Plus and Gemini Advanced in real-world scenarios, from recipes to travel tips, and discover which AI chatbot excels.

Speaker 1: It's the AI chatbot battle. Today, we're testing OpenAI's ChatGPT Plus against Google's Gemini Advanced. AI chatbots are mind-blowing, but not without faults. So, let's see what each offers and which one gives the best results. To compare the chatbots, I'll prompt them with real-world scenarios like finding and modifying recipes, researching travel, and writing emails. The goal isn't to break the AIs with bizarre riddles or logic problems. I'm looking to see if real questions prompt useful and accurate answers. So, let's dive in.

OpenAI's ChatGPT Plus subscription costs $20 a month and allows access to GPT-4 and GPT-4o, DALL-E for generative AI images, higher usage limits, and early access to new features. Google's Gemini Advanced subscription is also $20 a month and allows access to its 1.5 Pro chatbot, ImageFX, document and image upload for analysis, code editing, and 2TB of Google One storage. The context windows of the two differ greatly, however. GPT-4o's context window is 8,192 tokens, whereas Gemini's can handle 1,000,000.
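One practical way to make those context-window figures concrete is to count the tokens in a prompt yourself before sending it. Here is a minimal sketch, assuming the tiktoken package is installed; the window sizes are the figures quoted in the video (not authoritative), the cl100k_base encoding is used as a rough stand-in for either model's actual tokenizer, and review.txt is a hypothetical file:

```python
# Rough token-count check: will a prompt fit in each model's context window?
import tiktoken

# Window sizes as quoted in the video; treat them as illustrative, not canonical.
CONTEXT_WINDOWS = {"gpt-4o": 8_192, "gemini-1.5-pro": 1_000_000}

def token_count(prompt: str) -> int:
    # tiktoken only ships OpenAI encodings; cl100k_base serves here as a
    # rough estimate for both models' tokenizers.
    enc = tiktoken.get_encoding("cl100k_base")
    return len(enc.encode(prompt))

# e.g., the ~1,600-word game review mentioned later in the video (hypothetical file)
article = open("review.txt").read()
n = token_count(article)
for model, window in CONTEXT_WINDOWS.items():
    verdict = "fits" if n <= window else "too long"
    print(f"{model}: {n:,} tokens, {verdict} (window: {window:,})")
```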
First, we'll use the chatbots for recipes, shopping, and travel suggestions. One way I like to test AI chatbots is by throwing fusion recipes at them. It's pretty easy to make a turkey sandwich, but if you start mixing in different flavors and combinations, it forces the AI chatbots to pull in different data sets and try to combine them. Let's ask for a churro recipe with a Thai twist and some pumpkin spice as well. Of course, it's going to have the flour and the water and salt and butter and all that, but we're looking more for the spice mix. It's bringing together cinnamon, nutmeg, ginger, and cloves, things you'd expect from pumpkin spice. But instead of integrating the Thai flavors into the churro itself, ChatGPT is recommending we create a Thai coconut dipping sauce. That wasn't necessarily what I was going for, but it's not a bad suggestion. Let's see how Gemini fares with this test. Okay, so after the water and the sugar and butter and eggs, Google is recommending we add some pumpkin puree, cinnamon, ginger, cloves, and nutmeg, and then put a Thai-inspired coating on top of the churro. As you can see, each chatbot gave a very different take on a Thai-inspired churro with a pumpkin spice twist. I wouldn't say either is necessarily bad or good; I would probably try both and probably enjoy both. But that's what makes reviewing these AI chatbots a little tough: each time you ask a question, it can yield a completely novel answer.

Next, shopping. I'm looking to buy a mid-range turntable available in red; it doesn't have to have a phono preamp. Let's see what ChatGPT gives us. ChatGPT is recommending the Pro-Ject Debut Carbon EVO, which does come in red, as well as the U-Turn Audio Orbit Special, which also comes in red. It also recommends the Rega Planar 1, and I need to check whether that comes in red. It doesn't; it only comes in white, black, and walnut. Chatbots do this sometimes: in an attempt to answer your question, they'll just spit something out. It's also interesting that it didn't include the Pro-Ject Debut Carbon EVO 2, which is relatively new and also comes in red. That was GPT-4o, so let's see what kind of answers Gemini Advanced gives us. Gemini recommended the Pro-Ject Debut Carbon EVO. Good. It also recommended the Rega Planar 1 Plus, but it does say that one doesn't come in red. So, it actually did delineate there. It also recommended the Fluance RT82, which I believe doesn't come in red either. Yeah, the colors are natural, black, a woodgrain, and a white. Comparing the two, it's really hard to say which did better. ChatGPT, in this instance, gave more correct recommendations of turntables that come in red, but it also recommended one that didn't and said that it did. Google's Gemini Advanced recommended two that don't come in red at all, but at least it was able to tell me so. I guess Gemini Advanced was ultimately more accurate, even though it gave fewer correct recommendations.

Now to travel recommendations. Generally, these chatbots are going to be pretty good with itineraries for New York, Los Angeles, or Tokyo. So, to really test them, I throw random little towns at them and see what they can yield. One of our coworkers, Corinne, hails from Festus, Missouri, which is a small town. Let's ask ChatGPT for a travel itinerary for Festus, Missouri. ChatGPT gave a decent itinerary for Festus, a town of about 10,000 people. According to Corinne, having breakfast at the Main & Mill Brewing Company is a little bizarre, considering it's a brew house. The mid-morning visit to Festus City Park seems okay, and for lunch, La Pachanga Mexican Restaurant is a place she's been to and likes. So, the recommendations, while maybe not the most glamorous, seem to be generally accurate. Gemini is telling us to stay at the Holiday Inn Express, then head over to the Jefferson National Expansion Memorial in nearby St. Louis. So, it's already pulling us away from Festus, which is fine, I guess. For dining and sights, it suggests the Pasta House Company, a local favorite, as well as the Mastodon State Historic Site, which also isn't in Festus specifically but in a nearby town. Google is definitely looking at areas around Festus, Missouri. Overall, though, both gave similar-ish results, pointing you to places in and around the area.

One of the great things about AI is using it to minimize some of the busywork in our lives, such as writing emails. I usually run a pretty basic test here: I ask the chatbot to write my manager a simple request to take the day off for a religious holiday, and to write it in a manner that's professional yet fun and festive. Both did a pretty good job. The email ChatGPT gave us has the right number of exclamation marks and the tone of something a bit more casual. It also adds some emojis, which is a nice touch. Gemini's is a bit more religious: it starts with an "Eid Mubarak" header and notes that Eid is coming up. It definitely has all the exclamation marks you would want, and it even recommends bringing baklava and halva for my coworkers, then closes by hoping the request can be approved. No emojis, though. Either way, they're both fine.
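For anyone who wants to rerun these head-to-head prompts themselves, here is a minimal sketch of a comparison harness, assuming the openai and google-generativeai Python SDKs are installed with OPENAI_API_KEY and GOOGLE_API_KEY set in the environment; the model names are illustrative, and the prompt paraphrases the email test from the video:

```python
# Send the same real-world prompt to both chatbots and print the replies.
import os
from openai import OpenAI
import google.generativeai as genai

PROMPT = ("Write my manager a short, professional but fun and festive email "
          "requesting the day off for a religious holiday.")

# OpenAI: chat completions with GPT-4o (client reads OPENAI_API_KEY from env)
openai_client = OpenAI()
gpt_reply = openai_client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": PROMPT}],
).choices[0].message.content

# Google: Gemini 1.5 Pro via generate_content
genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
gemini_reply = genai.GenerativeModel("gemini-1.5-pro").generate_content(PROMPT).text

print("--- GPT-4o ---\n", gpt_reply)
print("--- Gemini 1.5 Pro ---\n", gemini_reply)
```

Running the same script at different times is an easy way to see the answer-to-answer variability the reviewer describes.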
Earlier, I published a review of a new video game, about 1,600 words, so a bit of a long review. Let me ask ChatGPT to summarize the article; I'll just paste in the entire thing. It's outputting pretty quickly, and this is an accurate summary. It definitely borrows a lot of my language: it takes my words and cuts out, I wouldn't say fluff, but a lot of the more important contextual information. It's weird that it says the game "has been praised" for its polished gameplay, as if pointing to larger discussions within the gaming industry. It would have been more accurate to say the game was praised by the author for its polished gameplay, for example. It did capture my feelings about the game, noting that, overall, the reviewer seems to have mixed feelings. But yeah, it's a good short summary. Both do a pretty decent job of summarizing articles if you paste them in their entirety.

So, what did we learn testing ChatGPT Plus and Gemini Advanced? We found that testing AI chatbots can be tricky. The answers you get earlier in the day can differ from what you get later in the day, and that's just the nature of the beast: each time you ask a question, it's going to give you a different answer. Between the two, I did find that ChatGPT tends to give more verbose responses and maybe sounds a bit smarter. Both made mistakes; both gave incorrect information at one point or another. Ultimately, though, ChatGPT seems to be the one to beat.
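That run-to-run variability comes largely from sampling temperature. Here is a minimal sketch of how to observe, and reduce, it, assuming the openai Python SDK; the fusion-recipe prompt from the video is reused for illustration:

```python
# Compare a default-ish temperature against a near-deterministic one.
# Lower temperature makes outputs more repeatable, though not strictly identical.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
for temp in (1.0, 0.0):
    reply = client.chat.completions.create(
        model="gpt-4o",
        temperature=temp,
        messages=[{"role": "user", "content":
                   "Create a churro recipe with a Thai twist and pumpkin spice."}],
    ).choices[0].message.content
    print(f"temperature={temp}:\n{reply[:200]}...\n")
```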
