Explore ChatGPT's New React and HTML Rendering
Discover ChatGPT's ability to render HTML and React components, including demonstrations and comparisons with tools like V0 and Bolt.new.
ChatGPT's New HTML & React Rendering Abilities in 5 Minutes
Added on 01/29/2025

Speaker 1: ChatGPT now has the ability to render both HTML and React components directly within its web app as well as its desktop app. Here's a quick example from Edwin, one of the developers over at OpenAI, who made a Windows 95 simulator entirely within ChatGPT using the o1 model. That's a big piece of this announcement: you have access to the o1 model directly to render these canvases, whether it's HTML or React components. In this video, I'll go through a couple of demonstrations in both the web app and the desktop app, and then I'll briefly touch on my first impressions and how this compares to tools like v0, Bolt.new, and Lovable.

There are two ways to access canvas if you haven't used it before: you can access it from this button here, or alternatively you can type /canvas at any point within your query to indicate that you're going to be using canvas for whatever you're doing. Once you have it selected, I can say, "Generate me a React component that reads hello world." Once you submit that, if you're using the o1 model, it can take a little bit of time depending on your query. The important thing to consider is that if you're a Plus member, you can very quickly go through your o1 messages by using it like this. One thing I'd point out is that, if you want, you can swap to a different model mid-chat.

One of the differences I found right off the bat, compared to something like Artifacts, v0, Bolt.new, or Lovable, is that you actually have to choose to render it; it won't just render automatically. That's a subtle UX difference, but it's not that big of a deal to just press Preview and get it from there. The nice thing with the canvas view is that you can edit the code directly in here. There are also some other features: you can port it to a different language, fix bugs if there are any, and add comments with one click. They have some of these nice little features to layer it into whatever you might be building.

We have our code with some comment blocks. Now what I could say here is, "Add in a header and footer and have this read Developers Digest." The thing I've found from using these models for similar kinds of web apps before is that GPT-4o does quite well for a lot of different applications. If you're just building a simple web page, you'll be able to get pretty far. If you want something a little more complicated, the o1 model is obviously going to be a bit better for that.

But let's try a new chat here, and I'm going to try canvas again. I'll say, "Build a beautiful website for my SaaS," and let's give it the fun name of Agentify. I want it to have multiple pages and a professional look and feel. I'll go ahead and submit that, and we'll see what it looks like with GPT-4o. One big thing off the bat is that it renders these React components within one file. That's one of the big notable differences between this and tools like v0, Bolt.new, and Lovable: those are more text-to-app builders, whereas this is a text-to-component builder. It's similar to Anthropic's Claude Artifacts feature, where you can render these little components that you can then port into your application. That's my general first impression of something like this. Here's our simple SaaS website.
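For context, the prompts in the demo ("generate me a React component that reads hello world", then "add in a header and footer and have this read Developers Digest") ask canvas for a single-file React component. Below is a minimal sketch of what that kind of output tends to look like; the component name, copy, and inline styles are illustrative assumptions, not the exact code generated in the video.

```tsx
// Illustrative sketch only: a single-file React component of the kind
// canvas renders in its preview. Names, copy, and styling are assumptions.
import React from "react";

export default function App() {
  return (
    <div style={{ fontFamily: "sans-serif" }}>
      {/* Header added per the follow-up prompt */}
      <header style={{ padding: "1rem", borderBottom: "1px solid #ddd" }}>
        <h1>Developers Digest</h1>
      </header>

      {/* Original "hello world" content */}
      <main style={{ padding: "2rem" }}>
        <p>Hello world</p>
      </main>

      {/* Footer added per the follow-up prompt */}
      <footer style={{ padding: "1rem", borderTop: "1px solid #ddd" }}>
        <small>Developers Digest</small>
      </footer>
    </div>
  );
}
```

Because everything lives in one file with a single default export, a component like this can be previewed in canvas or pasted into an existing React project as-is.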
Obviously it's pretty straightforward, but now let's go to o1, and I'm going to say I want to make this way more involved: let's use Framer Motion and a number of beautiful component libraries to make this look like a professional SaaS website. It will take a moment for it to generate all of the different tokens; o1 is obviously a slower model, given that it thinks through things first and then also has to generate the tokens.

Here is our second iteration. Now we have our header and our footer. If I go to the next page here, we have something that looks a little better in terms of a pricing page. Obviously it still needs a bit of prompting to get it across the line. If we go to the about page, again, it's a pretty simple about page, and then we have a contact page. The other thing is you can also just copy the code, or alternatively you can open up the console. If there are any errors, or anything you're logging out to the console, you can see that in there as well. That's definitely a nice little touch.

I think where this could get interesting is if you tie it to something like Cursor: say you have a React component within your application, it would be interesting to see how well it would work at integrating with existing code bases. My suspicion is that if you're trying to do that with proprietary code, where you have to access different environment variables and things like that, it's probably not quite there. But for certain applications, like a static website, you could probably get quite a bit done.

Last, I'm going to ask it to generate a Minecraft clone in React. I'll send that through and then skip forward to what it generates. All right, here we go: we have the Minecraft clone with all of the different components, and I'll click Preview. If you open up the console here, you'll see that it's installing the packages. In the desktop app, I saw that it got stuck at installing packages, so I'm not sure what the issue might be there. But when I hopped over to the web app, this is what it looks like. We can zoom in, and it looks like I can potentially select blocks, since they highlight a little as I hover over them, and then I see "click me." It needs a bit of work, but this is a starting point. I'm curious what you'll be able to generate with a more impressive prompt, so let me know what you've had luck with in the description of the video. Otherwise, that's it for this video. If you found it useful, please like, comment, share, and subscribe. Until the next one.
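On the Framer Motion step in the demo above: components generated from that kind of prompt typically animate sections in on mount. Here is a minimal sketch of an animated hero section, assuming the framer-motion package; the component name, copy, and styling are illustrative assumptions, not the output shown in the video.

```tsx
// Minimal sketch of a Framer Motion hero section, in the spirit of the
// "professional SaaS website" prompt. Copy and styling are assumptions.
import React from "react";
import { motion } from "framer-motion";

export default function Hero() {
  return (
    <section style={{ padding: "4rem 2rem", textAlign: "center" }}>
      {/* Fade the headline in and slide it up on mount */}
      <motion.h1
        initial={{ opacity: 0, y: 20 }}
        animate={{ opacity: 1, y: 0 }}
        transition={{ duration: 0.6 }}
      >
        Agentify
      </motion.h1>

      {/* Bring the subheading in slightly after the headline */}
      <motion.p
        initial={{ opacity: 0 }}
        animate={{ opacity: 1 }}
        transition={{ delay: 0.3, duration: 0.6 }}
      >
        Automate your workflows with AI agents.
      </motion.p>
    </section>
  );
}
```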
