AI Copywriter Figma Plugin Full Course

Transcript

[00:00:00] Today, I'm going to show you how to build this AI copywriter Figma plugin with just prompts. So when you have a screen, let's say in this case it's an onboarding screen, you can select the text that you want to create variants for.

Then you enter an OpenAI API key, select what tone you want the copy variations in, enter the number of variations, and optionally give special instructions, like "tailor it for a high school student". Once you hit enter, it will go talk to ChatGPT, create different variations, and lay out all these copy variations for you to use.

And as always, it's free, and we will build this together right now.

I see this tool being incredibly useful for designers who are creating multiple variants of a copy and then have to compare them all together and see which one's better. Because right now you have to go to ChatGPT, write a separate prompt, generate something, and then it's hard to see and visualize how [00:01:00] these copy variants work with your designs.

So I thought this would be a cool addition. This plugin will also be published in the Figma store soon, and I will share a link to it on my LinkedIn.

I will also make the source code available in case you want to just play around with it. Okay, let's get started.

So for this project, we're going to use a boilerplate that's available for free on GitHub, called figma-plugin-react-template. It already comes with the configuration you need for your Figma plugin to work inside Figma, which makes things super easy, and it also has a basic React setup.

There are clear instructions on how to use it, but I'm going to show you exactly what to do. Copy the HTTPS clone URL, then go into your terminal and type git clone, paste the URL, and give a name to your project. I'm going to say ai-copywriter-figma and [00:02:00] just press enter. That will go and copy all the files from the main project repository.

So you can go into ai-copywriter-figma, and once you're inside, there's a small tweak that I want to make here. It's not necessary, but it's good to do because I have been so used to using the node package manager. I'm going to open it in Cursor, and once I have done that, the only thing I need to do is delete this yarn.lock file, because I'm going to use npm instead of yarn. Yarn is similar to npm; it's a package manager, and in fact faster, but I have been so used to npm that I sometimes type npm by mistake. And when you mix yarn and npm, it becomes chaos. So that's why I like to keep it all npm, just so I don't make a mistake, but you can use yarn if you want.

So we're going to open a new terminal, and we are going to say npm install, or npm i [00:03:00] for short, and that will go and install all the dependencies that you need.

So that is done. To run this in Figma, all you need to do is open Figma. I'm going to close this plugin for now and delete this. What I have here is just a screen from Pocket: there's a title, a subtitle, and an image. And I can go to Plugins.

I can go into Development. You see, this one is my previous one that I built, but you can go into "Import plugin from manifest" or "Manage plugins in development" and say "Import from manifest". Once you're here, go into your Documents, select the ai-copywriter-figma folder, and choose its manifest.json. Once you do that, it will automatically appear, but it's called "Figma plugin template", because if you go into the manifest.json file, the file that we copied, that's the name in there. So let's say you [00:04:00] call it AI Copywriter Figma.

I'm just going to put "Figma" in brackets because I already have a plugin called AI Copywriter, just to tell the difference. Once you do that, you can see it immediately reflects here. So this is the file that we are editing right now. So once you open this file, it shows an error.

That's because I am not running this locally.

Okay. So to make this work, I'm going to go back into my code; I need to first run the program and then import it in Figma for it to work, and that's why it's not working right now. So within my code editor, in my terminal, I'm going to say npm run build:watch. The reason I run this is because if you go to package.json, you can see there are some different scripts, and build:watch is the script that we need. It puts the plugin in development mode so we can keep prompting, and the watch part gives you a hot reload: when you make some changes here, you can immediately see them in [00:05:00] Figma.
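For reference, the scripts section of the template's package.json looks roughly like this. The exact bundler flags may differ between versions of the template, so treat this as a sketch rather than the literal file:

```json
{
  "scripts": {
    "build": "webpack --mode production",
    "build:watch": "webpack --mode development --watch"
  }
}
```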

So I'm going to hit enter. If you go into Figma, I'm going to close the console again and run the last plugin once again. And now you can see the AI Copywriter Figma plugin works. So what does it do right now? It's a plugin for creating some rectangles.

You just say create five rectangles, and when you hit create, it creates these rectangles. That's pretty much what the plugin does. So we will build on top of this plugin. I'm going to go back into Cursor, and within Cursor, if you open the src directory, there are some things happening here.

It will be good for you to know which files are doing what. If you open app, you have index.tsx, and then you have components, and there's another app.tsx here. This app.tsx is basically the front end, so what you see on the interface; if I run the last plugin again, this interface basically lives in app.tsx. There is another [00:06:00] folder called plugin, and inside it you have controller.ts. This controller.ts is basically the logic behind what happens. So it's the part that says "go and create these rectangles", while app.tsx is the part that shows what you can do.
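To make the split between the two files concrete, here's a sketch (not the template's exact code) of how the two halves of a Figma plugin talk to each other. Figma's UI iframe wraps outgoing messages in a `{ pluginMessage: ... }` envelope; the function and message names here are my own illustration.

```typescript
// Shape of a message exchanged between the UI (app.tsx) and the plugin code
// (controller.ts). The `count` field is just for the rectangles example.
type PluginMessage = { type: string; count?: number };

// UI side (app.tsx): wrap a payload so Figma routes it to the plugin code.
function toPluginEnvelope(msg: PluginMessage): { pluginMessage: PluginMessage } {
  return { pluginMessage: msg };
}

// In app.tsx you would send it with:
//   parent.postMessage(toPluginEnvelope({ type: 'create-rectangles', count: 5 }), '*');
// and in controller.ts you would receive it with:
//   figma.ui.onmessage = (msg) => {
//     if (msg.type === 'create-rectangles') { /* create msg.count rectangles */ }
//   };

const envelope = toPluginEnvelope({ type: 'create-rectangles', count: 5 });
console.log(envelope.pluginMessage.type); // create-rectangles
```

Keeping the envelope in one helper makes it obvious that all UI-to-plugin traffic flows through a single message-passing channel, which is why Cursor needs both files tagged when a feature spans the UI and the logic.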

So we'll create the UI so that we can capture different fields from the user. We will capture the tone of voice, and we'll capture the number of variations that we need to create.

And we'd also get some special instructions. To do that, we can go back into Cursor and make sure we have app.tsx open, because that's where the front end lives, and we will hit Command+I.

Once you open the Composer (in previous videos I put it at the bottom; now I'm going to keep it on the right), I found this really simple way to keep toggling between Chat and Composer. If you have certain questions, Chat helps you break things down.

I'll show you later how to do it. But once you open the composer, I am going to say, add more fields to the UI.

Okay, so now I'm in the [00:07:00] Composer. I'm going to tell you another interesting trick I learned. Instead of actually typing this in the Composer, you can take advantage of the AI's autocomplete suggestions, which are really good. To do that, I'm going to go into src and create a new folder called prompts.

These prompts are basically what is going to run over here, but I'm writing them in a file because the AI autocomplete works there. So within prompts, I'm going to create a file called 1-modify-ui.md, named after the objective that I am going to work on right now. Within here, if I type "add more", see, it automatically prefills things.

When I just hit tab, it prefills the whole thing. It's much faster, and sometimes you might be struggling to articulate what you want to say, and the AI does it for you and gets you even closer to what you wanted. It can't get any easier than this. So I'm going to say: add more fields to the UI to allow the user to [00:08:00] input more information. Here are the fields that should be added. First, tone of voice, which should be a dropdown. There you go, it even gave me all the options. See, this is very cool. And then number of variations; I'm going to put this here.

Gosh, my English. Then I'm going to say this needs to be a number input. And then special instructions, which is a text input field. That's it. Once I've written down what I want to create, I go back into app.tsx, go here, and just tag that file. And I just need to say "1-modify-ui", and that's it.

Once I hit enter, it goes and does everything that's needed.

Okay, so it added these fields: tone of voice, number of variations, special instructions. I'm going to accept them all. And when I go back into Figma, those UI elements have already been added. So now I can select formal, a number of variations, special instructions, and so on.

To style the UI, let's create another prompt within this prompts folder. So you can call it [00:09:00] 2-style-the-ui.md.

.md stands for Markdown, so these are just Markdown files; they don't do anything in the project directory. They help you keep track of how you've been using prompts, which is a very good advantage, and you can also note down the learnings you took from prompting and the errors you faced. There was a lot that I took notes on when I was building this.

One thing to know is that sometimes Cursor just creates its own CSS files when you ask it to do some styling. Sometimes it fails to notice that there is already a CSS file in your project directory. This is one of the limitations of Cursor, but we can try to mitigate it by telling it where the CSS file is located.

It's located at src/app/styles/ui.css. So if you can see, this is the CSS file, and the CSS is basically where the styling of this window happens. I've shown Cursor where the CSS file is, and I'm going to first make this window a little bigger.

It's super small and there's too much [00:10:00] information going on. So I'm going to tag this controller.ts file and say: make the plugin UI 600 pixels wide and 400 pixels high. The reason I'm tagging controller.ts is because I know the main functionality lives there, and app.tsx is basically the front end stuff that goes inside it.

I actually figured out that resizing the plugin window is something that needs to happen from the plugin code side, not from the UI itself. Once I've written this, I'm going to say: make it dark mode, because I want it to be dark mode; add some padding between the labels; and add some margin above and below the container div.

The container is basically the whole container. Then I'm going to say: remove the logo [00:11:00] entirely, I don't want this logo. And let's add one more thing: below the AI Copywriter title, create a gray info banner that says "Select a frame to begin", styled with a dotted line, no fill,

and a lighter gray color, center aligned. So I'm just visually describing how I want the interface to be, which is a very interesting way to design. I'm going to go back into app.tsx, because that's where most of the visual front end lives, and within Cursor I can open a new Composer, tag the second styling prompt that I wrote, and just say yes; it should understand everything that's written in there.

One problem I anticipated is that it still doesn't paste the styles in the right place. Actually, I think it did. Let's see. So it has changed the styles in the right place. [00:12:00] Okay. It changed app.tsx and added some stuff here, it changed ui.css, and it changed controller.ts. I think everything is as I expected.

So I'm going to accept everything, go back into Figma. This is already pretty cool. This is what I wanted.

And of course I can go and style the input elements a bit more. I can give them a rounded corner radius and stuff like that, but I'm going to save that for another day.

And let me just make the window a little bigger right now. I'm going to go back to controller.ts, and I can see that the width here is 600 and the height is 400. So I'm going to make the height 600, for example, and maybe make the width a little smaller than the height, 400.

And I think this is pretty decent to start with. I'm going to keep it here. Okay.

So the next thing we're going to do is remove the existing functionality. Right now, when you enter six and say create, it still creates these rectangles, and we don't want that. So [00:13:00] we can do that by going into prompts and creating 3-remove-existing-functionality.md.

Within this, we can say: remove the count field and replace it with an input field called OpenAI API key. That's because we don't want to give out our own API key. We could do that if we were charging our users upfront, and maybe that will be an update in the plugin that I launch.

For now, we're going to keep it simple and let the user input their own API key and use their own OpenAI access with this plugin. Then we're going to say: rename the button to Generate. And no, we don't want to remove the cancel button. So we can go back here and let's see what happened.

And let me refresh. Oh, we haven't run this yet. So [00:14:00] we can go into app.tsx, open a new Composer, and tag the third file, 3-remove-existing-functionality, and that should take care of it.

It looks like it made the changes that I need. I'm going to accept what it did, go back, and run the plugin again. I see the button has changed, the input field has been renamed to OpenAI API key, and that's it. I think it worked.

Okay, now I see this yellow color because there is something written here that is not being used. If you see here, countRef is declared but never read. Oftentimes you can just remove the code that you don't need, for example even this thing.

What the AI does is create this code for one of the previous variations, then decide it's not needed anymore, but it forgets to clean it up. Yeah, I hope things get better over time. Okay, so you go into prompts [00:15:00] and create the fourth one, and for the fourth part, what we will do is the frame selection functionality.

So what we're going to do is: when you select a frame, we want to show which text fields you can choose. We can do that very easily by saying: when I select a frame, I want you to extract the text from the frame and display it in the UI as a checklist of checkboxes, and only show the text fields that contain more than two words. I'm writing this because I don't want it to pull out things like the time or the call to action and so on. You can also say three words or four words; you can adjust it.

Or you could even add another input field and say: only show text that has more than this many words. You could go and do all this granularity, but for this example, I'm just going to say three words. Again, go back to app.tsx, open a new [00:16:00] Composer, tag the fourth prompt, and just hit enter.
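The extraction rule the prompt describes can be sketched as plain logic. The Figma-specific calls (`frame.findAll`, `node.characters`) are shown in comments, since they only run inside Figma; the filter itself is a pure function with names of my own choosing.

```typescript
// Count words in a text layer's content.
function wordCount(text: string): number {
  return text.trim().split(/\s+/).filter(Boolean).length;
}

// Keep only texts with more than `minWords` words, so short labels like a
// clock time or a call-to-action button don't show up in the checklist.
function keepLongTexts(texts: string[], minWords: number): string[] {
  return texts.filter((t) => wordCount(t) > minWords);
}

// In controller.ts, the extracted texts might come from something like:
//   const textNodes = frame.findAll((n) => n.type === 'TEXT') as TextNode[];
//   const texts = textNodes.map((n) => n.characters);

console.log(keepLongTexts(['9:41', 'Get started', 'Your quiet corner of the internet'], 2));
// ['Your quiet corner of the internet']
```

Making the word threshold a parameter is exactly what the extra "minimum words" input field idea amounts to.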

I think it has made all the changes. I'm going to accept everything it did. And if I go select... nothing happens. Maybe there is a bit of a problem. Let me run the plugin again, and once I select this, I still don't see anything.

When I select the frame, the fields need to show up. Still doesn't show.

Once I accept the changes... there you go, it works. I think the issue was that I did not tag controller.ts, so it only made changes to app.tsx. Yes, this is a mistake I keep making, and you need to remember to include all the relevant files; it doesn't have access to your whole codebase by default. I don't know if that's good or bad. If it did have access to the whole codebase, it could be smarter, but sometimes it's better not to have access to the [00:17:00] entire thing, because you might want to make changes to just one or two files in the whole codebase, and sometimes the AI makes a mistake and assumes the change has to be made in a completely different file.

There are different use cases, but I think it makes sense now. Now that we know what the problem is, we can just fix it. I'm going to visually align this a little better, so I'm going to say: visually align the checkbox to the left and the text on the right. Let me also tag ui.css, because it may need to make some UI changes, but I know this has nothing to do with the functionality, so I don't need to tag controller.ts. And yeah, it goes and creates a class on the stylesheet, and I will accept the changes.

Okay, this looks pretty clean. Cleaner than my initial version, actually.

So we will now do the meat of the whole project. This was the hardest part, and it took a little longer for me to figure out, but I hope to make it easy for you, because I tried a lot of prompts to [00:18:00] get this up and running, and I finally figured out one that works; I tried it a few times and it worked properly. I hope it does right now.

So I am going to create 5-simple-openai.md. This is the part where we are going to go talk to OpenAI: we're going to use the OpenAI API and make it work within our plugin.

For this part, we're going to say: when I click on Generate Copy, use the OpenAI library and chat completions with the model GPT-4o mini to generate a response, and display it in the console. Here's the thing: I tried a lot of other ways to prompt to get ChatGPT working, but I had to be a little more explicit about which method it should use.

For example, by specifically saying "chat completions", it uses a specific syntax [00:19:00] for calling ChatGPT's API. Without mentioning it, it took a lot of time, because it used all those older ways to call the ChatGPT API, and all of them gave some error or other. So this prompt seems to be working fine for me.

Then you say the request should be made with the following parameters; we just want to make some selections pretty explicit. We want to say the model is gpt-4o-mini, the prompt is just "Hello world", and we will use the API key from the input field that we specified in the front end.
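The request the prompt asks for looks roughly like this. I've factored the request-building into a pure helper so the shape is easy to inspect; the endpoint, headers, and body fields follow OpenAI's chat completions API, but the helper name is my own sketch, not necessarily what Cursor generates.

```typescript
// Build the fetch() arguments for a chat completions call.
function buildChatRequest(apiKey: string, userPrompt: string) {
  const url = 'https://api.openai.com/v1/chat/completions';
  const init = {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${apiKey}`, // key comes from the plugin's input field
    },
    body: JSON.stringify({
      model: 'gpt-4o-mini',
      messages: [{ role: 'user', content: userPrompt }],
    }),
  };
  return { url, init };
}

// Usage in app.tsx (roughly):
//   const { url, init } = buildChatRequest(apiKeyFromInput, 'Hello world');
//   const res = await fetch(url, init);
//   const data = await res.json();
//   console.log(data.choices[0].message.content);
```

Separating "build the request" from "send the request" also makes it easy to console log exactly what is being sent, which becomes useful later when debugging the prompt.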

And that's it. So we're going to open a new Composer, open app.tsx, and make sure we also tag controller.ts, because these are where the main files live. And we will say [00:20:00] execute, and tag the file 5-simple-openai.

Here's the thing: there was another point that I learned earlier which I think I forgot to apply. I can go back into the chat and return to the state before I made this prompt by just clicking on this checkpoint, and it removes everything I did. I'm going to go back into my prompt and say "When I click on Generate Copy", the same prompt that I initially input; let this be. Now let me open a new Composer. The only difference is that instead of using Claude, I'm going to use GPT-4o.

The reason I'm doing this is because GPT-4o is from OpenAI, so it probably has the latest data about OpenAI, and it also has more training data, so it's kind of better in some ways. Okay, let's try this out; maybe this time it will work. We are going to go into app... okay, I'm going to go into controller.ts, because this is where the main logic should live, and we are [00:21:00] going to also tag app.tsx, and we will say: execute 5-simple-openai.

So let me accept it, go back into Figma, and try to run this plugin again. Okay, this time it seems to be working. The issue was that Claude is not as smart when it comes to integrating the OpenAI API. Who knew? Okay, so I will clear the console, but let's see if this works fine.

I can now select the frame, but I need an OpenAI API key. You can get one by going into the OpenAI Platform, into the Dashboard, then API keys, and here you can create a new secret key. I already have a couple of keys. When you create a new key, just name it "figma test plugin" or whatever, and then it gives you the key. Copy that key and save it in your notepad or something, and you can use it for testing for now.

So I'm going to take my key [00:22:00] and paste it here. Everything else I don't need to touch right now, because there's nothing happening. And I will say generate... and it threw an error. Sure.

Let me just copy this error; it's probably just a simple best practice for calling the API, that's fine. I will paste this error here and also tag app.tsx.

When I do that, it will go and fix the problem. See, this is a security issue; these are the things you wouldn't have to worry about. Just follow the errors and let the AI take care of the rest. And if you are extra careful, you can also ask the AI to do a full security audit, check each part of it, and make sure it's secure.

And you'd still reach a significant level of security that would have taken you years to figure out if you did it yourself. We save this, go back to Figma, clear the console, paste the API key again, and hit [00:23:00] generate and wait. "Incorrect API key provided." Okay, let me try it again: clear this, generate again. Oh, there you go. So it was my mistake. I can now hear back from the OpenAI API, and it says, "Hello, how can I assist you today?" Basic OpenAI works. We are able to talk to ChatGPT from within the interface.

This was the toughest part of the whole thing, and now it's done. Without figuring out the right prompt, it would have taken ages.

So now we have simple OpenAI working; let's go and modify the prompt. For this, we're going to create a new file inside the prompts folder, and we're just going to call it 6-modify-prompt.md. And within here, let's clear the Composer.

Now we can say: collect the following information from the user input. Which information do we want to collect? The selected text, tone, special [00:24:00] instructions, and number of variations. Let's see how it suggests collecting the information: it says extractedText, generatedText... and see, it says toneOfVoice, so we're going to say tone of voice. What else? It used the input numVariations; yeah, I think that's why it suggested numVariations.

So what we're going to do is define selectedText, which is what we're going to use in the prompt, and say that it is a combined string built from extractedText. We're just going to combine everything from the extracted text into one single line.

We are going to do that by saying: combine all of them into a single string, separated not by commas but by full stops. The reason I'm doing this is because I want to [00:25:00] take the extracted text, put it into one or two sentences, and then send it to the OpenAI API. This video is not going to be about prompt engineering, but I am going to say: now use the following prompt.
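The combining step above is tiny when written out. This is my own sketch of it, not necessarily what Cursor generated; the trailing-dot cleanup is an assumption to avoid doubled full stops when a selected text already ends with one.

```typescript
// Join the selected text layers into one string, separated by full stops.
function combineExtractedText(extractedTexts: string[]): string {
  return extractedTexts
    .map((t) => t.trim().replace(/\.+$/, '')) // strip any existing trailing dots
    .join('. ') + '.';
}

console.log(combineExtractedText(['Your quiet corner of the internet', 'Save what matters']));
// Your quiet corner of the internet. Save what matters.
```

Using full stops instead of commas matters later, because the prompt asks the model to match the output sentence-for-sentence against the input.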

And I will just quickly explain to you what prompt I'm using. I'm going to copy the prompt here. The prompt basically starts with "generate {numVariations} unique variants", where the number of variations comes from the numVariations input.

So if you enter five, it'll give you five unique variants of the input text, selectedText. And the selected text is this one. Then "consider the following instructions", and it takes the tone. And we don't want tone, we want toneOfVoice.

And special instructions; again, we pass in the special instructions. Then: "Please output the variants in JSON format. Each sentence in the variant should maintain a word count close to the corresponding sentence [00:26:00] in the input text." We want the output to look similar; we don't want huge paragraphs to be returned for each sentence.

For example, if the first sentence of the input has six words, the first sentence in each variant should also have around six words. Similarly, if the second sentence is 20 words, the second sentence should be around 20 words. The reason I'm being so explicit is because that's how the AI knows exactly how it should format the output.

The clearer you are and the more instructions you give, the better the response gets. And finally, you say: ensure the number of sentences in each variant matches the number of sentences in the input text. For example, if the input has two sentences, the output should have two sentences per variant. We want the response formatted in a specific way.

So if we send two lines of text, it should come back with JSON that says text1 is this and text2 is this, for each variant. If there are five variants, we want five of them, and for each of them, we want text1 and text2. Okay. This required a bit of playing around.
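Putting the pieces together, the final prompt could be assembled like this. The wording is paraphrased from what's described above, so the exact prompt Cursor generated may differ; the interface and function names are my own.

```typescript
// Everything the prompt needs, gathered from the plugin's input fields.
interface PromptInputs {
  numVariations: number;
  selectedText: string;        // combined, full-stop-separated text
  toneOfVoice: string;
  specialInstructions: string;
}

function buildCopyPrompt(p: PromptInputs): string {
  return [
    `Generate ${p.numVariations} unique variants of the following text: "${p.selectedText}".`,
    `Consider the following instructions: the tone of voice should be ${p.toneOfVoice}.`,
    `Special instructions: ${p.specialInstructions || 'none'}.`,
    `Please output the variants in JSON format.`,
    `Each sentence in a variant should maintain a word count close to the corresponding sentence in the input text,`,
    `and each variant should have the same number of sentences as the input text.`,
  ].join(' ');
}

const prompt = buildCopyPrompt({
  numVariations: 3,
  selectedText: 'Your quiet corner of the internet. Save what matters.',
  toneOfVoice: 'professional',
  specialInstructions: '',
});
console.log(prompt);
```

Keeping the prompt in one function also makes the later "console log the final prompt" step trivial: you log the return value right before sending it.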

In terms of how to prompt the right way, you can use ChatGPT to help [00:27:00] you write the prompt: say "I want to do this, tell me the right prompt", then go try that prompt in the OpenAI Playground, see how the responses are generated, and keep playing around until you find the right prompt.

I will make another video about how to prompt well; I spent quite a bit of time learning how to prompt, but that's for another video. Finally, we will ask it to console log the final prompt that it's going to use, so that we know exactly what it's doing on the back end.

Okay. Now that that's done, we are going to go back to app.tsx and also tag controller.ts, because that's where the main program lives, and we are going to say: execute 6-modify-prompt.

Okay, it's done some stuff, but it's removed OpenAI now. Interesting. So why has it removed it? Did it move it into controller.ts? Let's see.

Let's see what it did. It added the new message type generateText. When the message is received, it extracts the necessary information, [00:28:00] combines it into a single string, constructs the prompt, console logs it, and sends the generated prompt back to the UI as promptGenerated. This is not what I want, because it removed all the previous OpenAI configuration that we did.

So we're going to reject this. We are going to go back to the modify prompt and say: now collect all the inputs into a single string, and now use the following prompt when contacting the OpenAI API. Let's see if it understands this better. I am going to tag controller.ts and say: execute 6-modify-prompt. So yeah, what it did before was just remove the OpenAI thing, and I don't want that. Okay, it understands that there was a confusion in the previous one.

So it added this new message, the necessary information is extracted, and the extracted texts are combined. Okay, this is what we [00:29:00] want. It sends the prompt back to the UI, it removes the figma.closePlugin() call, and the rest of the file remains unchanged. So let's see what it did. When I look at the console log,

it does just generateText; nothing else changed. And it added the prompt: "generate {numVariations} unique variants", which is the prompt that we gave, and it clearly added all the special instructions and the word-count rule. The prompt is clearly and correctly captured, and it console logs the final prompt. Great.

And this is how it responds back to the UI. Okay, that's fine. Then I'm going to go into app.tsx; OpenAI still exists, and the rest of the code remains the same. I think this is what I want. I am going to accept all the changes for now, go into Figma, and see what it does.

So let me see. "How can I help you today?" Okay, that's fine, the console [00:30:00] looks fine. Let me select this, and I will select the text that I want, paste my OpenAI key, and keep this tone of voice, and it should already select all of this. What we expect is that it returns from OpenAI and prints what OpenAI sends to the console.

That's what we are hoping to see. So, there you go. Okay, there's some interesting stuff happening here in the console; let me see if I can make it bigger and explain what's happening.

Here's the final prompt. It's printing the final prompt before sending it to OpenAI, which I wanted to take a look at. Some of the stuff that I mentioned here, like wanting one variation and wanting it to be professional, all of that is getting captured.

As you can see: "generate one unique variant", that's right; "Your quiet corner of the internet", and it took both the sentences, that's great; "consider the following instructions: professional", and without any special [00:31:00] instructions; "please output the variants in JSON format". Okay, everything looks great.

I think this is the rest of the prompt; there's no problem here. And this is what ChatGPT gave me.

But then it just goes and prints "How can I assist you today?" So something's off.

Okay, so what's happening is that it's not sending the right prompt, because I'm again getting back "How can I assist you today?" So I'm going to go back into my modify prompt. This is probably an important thing for you to understand: how to go back and forth and work with prompting without actually having to write code.

Now, I'm just going to be a little more explicit here: instead of just "Hello world", use the new prompt when contacting the OpenAI API. Right now it still continues to send the old one; I can go into app.tsx and see that when it contacts the OpenAI API, the content it sends is just "Hello world".

So it's wrong; I don't want that. I wanted [00:32:00] it to use the new prompt. That was the problem. So let me go back to the previous checkpoint that I had; I'm going to go back here and remove all the changes that I had made, and I've updated my prompt to be a little more explicit.

So let me start a new Composer, and I will make sure I go to app.tsx and also tag controller.ts, and I will say: execute 6-modify-prompt.

So hopefully now it should understand that I don't want to send the same thing; I want to send the new prompt.

"We log the final prompt to the console instead of making the API call; we send the generated prompt back to the UI with promptGenerated." But what did it do?

So I'm going to reject it again, and I am going to say: then change the prompt used in the plugin in app.tsx to use the new prompt in the chat completion. [00:33:00] Well, I know this is a bit more complex than just using prompts to create simple things.

But yeah, I think once you understand a little bit of how to read code, things get really easy. Okay, so I'm going to try it again: tag app.tsx and controller.ts and just say execute to implement the changes.

Okay, so I think now it understands that the main changes have to be made in app.tsx, which is right. It logs the final prompt to the console before calling OpenAI, which is great, and pretty much what I wanted. It removed the simple "Hello world", replaced it with the final prompt, and it's sending that in the request as well.
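As a hedged sketch of what that change amounts to (the helper name and model string are my own illustration, not the code Cursor generated), the request body now carries the generated prompt instead of a hardcoded string:

```typescript
// Illustrative sketch: the body sent to OpenAI's Chat Completions endpoint
// should carry the generated prompt, not a hardcoded "Hello world".
interface ChatRequestBody {
  model: string;
  messages: { role: "system" | "user"; content: string }[];
}

function buildChatRequest(finalPrompt: string): ChatRequestBody {
  return {
    model: "gpt-4o", // the model choice here is an assumption
    messages: [{ role: "user", content: finalPrompt }],
  };
}

// The actual request would then look roughly like:
// fetch("https://api.openai.com/v1/chat/completions", {
//   method: "POST",
//   headers: {
//     "Content-Type": "application/json",
//     Authorization: `Bearer ${apiKey}`,
//   },
//   body: JSON.stringify(buildChatRequest(finalPrompt)),
// });
```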

I think that is what I want. Okay, let me accept it, go back to Figma, refresh, and clear the console. [00:34:00] Let me run it again and paste the API key once again.

Let the tone be professional, ask for three variants (casting the value to a string), and hit Generate.

Okay, I got a response from OpenAI, and it says: "please take note of these directives. Your consideration of the following guidelines is essential." I don't know what this is about; maybe there's an issue with the prompt. "Create three unique variations", so it's getting the three.

"Consider the following instructions: professional." So the tone instruction is in there.

Okay, clearly it didn't send the selection, obviously, because we haven't even selected anything. So I'm going to select this first and generate again. And I think now it should come back with... there you go. Great. This is what I wanted it to give me: the tone was professional, and it created three unique variants.

And it sent in what I selected. So I need to disable the Generate button when nothing is selected; I can do [00:35:00] that. So basically, for "your tranquil space on the internet" it created a copy, and for text2, "Pocket organizes articles in a streamlined fashion". There you go: three different variants of the copy.

Great. So now comes the interesting part.

So I want to clean up some parts, and I'll make sure the Generate button is disabled when no text is selected. In app.tsx I can just say: when no text is selected from the frame, disable the Generate button. I'll also tag ui.css, because this has to do with the visual language of the plugin.

Once it does that, there's no way for us to make that mistake.
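As a rough sketch of the guard being asked for (the helper name and signature are mine, not the generated code), the button state can be derived from the key and the current selection:

```typescript
// Illustrative helper: Generate should only be enabled when an API key has
// been entered and at least one text node is selected in the frame.
function isGenerateEnabled(apiKey: string, selectedTextCount: number): boolean {
  return apiKey.trim().length > 0 && selectedTextCount > 0;
}
```

In the React UI, this boolean would drive the Generate button's `disabled` prop, with the selected-text count kept current via selection-change messages from the controller.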

Okay, I'm going to accept it. Now the Generate button is disabled, but once I select these two texts and enter an OpenAI API key, it's enabled again. I didn't even specify that condition, but the AI figured it out. [00:36:00] Let me clear the console and make sure it works again. I'll create three versions of this and make the tone friendly.

Okay. I had a funny version before, and that was kind of cool. This is great.

So now everything works perfectly.

The last part is to make copies and replace the text. This one was an interesting part for me; it took a bit of digging into the details to figure out.

So I'm going to create prompt number seven, create-copies.md. Within it, the first thing I'll do is just create the copies. I'm saying: create another action; don't change anything I did before, just add something on top of it to create copies of the selected frame.

Then, when I click on Generate, get the number of copies from the user input, [00:37:00] the number of variations, and place the copies next to each other. So if I ask for 12 variations, I want 12 copies. That's pretty straightforward. So I'm going to give this prompt, go back into app.tsx, and say "execute seven". Let me also tag controller.tsx just to make sure, since there's probably some code that needs to change there as well. And I'll say do it.

Okay, so you can see it did modify the controller. When the message comes in, the function checks that a frame is selected, creates the specified number of copies, and places each copy next to the original frame with spacing between them. After creating the copies, it selects them and scrolls the viewport to show them.

I didn't ask for that last part, but awesome.
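A hedged sketch of the placement arithmetic it described; the real generated code would use the Figma plugin API (`node.clone()`, then setting each clone's `x`), but the spacing logic alone looks roughly like this, with names of my own choosing:

```typescript
// Illustrative: compute the x coordinate for each copy so the copies sit to
// the right of the original frame, separated by a fixed gap.
function copyPositions(
  originX: number,
  frameWidth: number,
  gap: number,
  count: number
): number[] {
  const positions: number[] = [];
  for (let i = 1; i <= count; i++) {
    positions.push(originX + i * (frameWidth + gap));
  }
  return positions;
}
```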

I'm just going to accept it as it is, say yes, and see if it takes care of things on its own. Let me copy the key, paste it, set the number of variations to three and [00:38:00] the tone to formal, select the two texts, and generate. Great. It created three copies, placed them next to each other, and it also gave me the JSON.

For the last part, I'm going to go back into Cursor and create a new prompt, 8-text-replace.md. Now that we've created these copies, all we need to do is replace the text in those copies with the text we just got back. This was the actual tricky part.

I'm going to show you exactly how I did it, because this was very complex, and for handling complex things like this I found an easier way.

The trick I learned was to go into Chat, explain exactly what you want, and ask it to break the task down. For example: "I got this JSON response [00:39:00] from OpenAI. I've created three different copies, and within each copy I have the same text, the title and the subtitle that I selected."

"Now I want to take the first variant from the response and paste it into the first copy, the second variant into the second copy", and so on. Be very explicit, and also give an example: paste the actual response you got.

So that's exactly what I did, and I'm going to show you what I had and how I did it, because this took me the longest to figure out. This is an example of the response I'm getting from OpenAI right now, and here's an example of the input text, the selected text I gave.

So this is the input text, this is the combined sentence, and this is the response I get. I've clearly pasted the response: it created three copies, and this is what came back. So basically [00:40:00] I'm just pasting the response I got.

And now I'm saying: perfect, my input has two sentences, and in my code I'm creating copies. What I want to do is replace the text. So I gave an example: in the first copy, I want "your corner of the internet", being explicit that I want this exact part replaced with this exact part.

I'm being that explicit in the examples. And once it had all that, what ChatGPT gave me back (you can just do this in Chat) was a set of steps. First, parse the JSON object: you got a response,

and now you want to parse that response and turn it into a JSON object. Then you create the copies, and then you replace the text in the copies.

So I'm going to remove this for now.

So I know the three [00:41:00] steps. First step: parse the JSON object. I'm going to say: parse the JSON object from OpenAI, log it as a JavaScript object in the console, and clean the response string first, removing any ```json fence markers and any leading or trailing whitespace.

How did I get all of this? From the Chat. I'm not showing how the chat works right now because I just copied and pasted the whole thing. So I'm just stating what needs to be done, and let me accept.
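That cleanup step can be sketched as a small helper; the function name is mine, but the fence-stripping behavior is exactly what the prompt above asks for:

```typescript
// Illustrative: OpenAI sometimes wraps JSON answers in ```json ... ``` fences,
// so strip those markers and surrounding whitespace before parsing.
function parseOpenAIJson(raw: string): unknown {
  const cleaned = raw
    .replace(/```json/gi, "")
    .replace(/```/g, "")
    .trim();
  return JSON.parse(cleaned);
}
```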

Then I'll open a new Composer, tag controller.tsx and app.tsx, and say "execute it". I don't care about anything else yet; I just care about logging the JSON and making sure it's correctly parsed.

As you can see, it takes the generated text, [00:42:00] cleans it, and logs it. And it's not removing the copy generation; it will still continue to generate the copies. Okay, let's accept, go back into Figma, and clear the console again.

I'll paste the key, delete these three frames, select this frame, select these two texts, make the tone casual, ask for three variants, and hit Generate. See, now there's a problem: "Error parsing content: unexpected token". I think these are smaller errors.

They should be easier to fix. I paste the error into Composer, making sure to tag app.tsx and controller.tsx.

Okay, I'm going to accept it and try again. I know it's a bit annoying that you have to enter the key each time; if you took some time to simplify that for yourself, you'd save time, but we're not going to do [00:43:00] that today.

Okay, I know why this issue is happening; I've seen it before. It needs to load the font before it can change the text in a frame, so it's a font issue. The fix is easy: I just copy this whole error line, paste it into Composer, and tag app.tsx and controller.tsx.
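For reference, the underlying rule in the Figma plugin API is that `figma.loadFontAsync(node.fontName)` must resolve before you assign to a text node's `characters`. Here's a minimal sketch with an injected stand-in for the font loader, so it stays self-contained; the wrapper name is mine:

```typescript
// Illustrative: a text node's font must be loaded before its characters can
// be changed. The loadFont callback stands in for figma.loadFontAsync.
interface FontName { family: string; style: string; }

async function replaceNodeText(
  loadFont: (font: FontName) => Promise<void>,
  node: { fontName: FontName; characters: string },
  newText: string
): Promise<void> {
  await loadFont(node.fontName); // must finish before the assignment below
  node.characters = newText;
}
```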

Okay, accept. No issues, so let me try it again. This is the part you have to do a few times: copy and paste the key again, set the tone, ask for three variants, and select the frames. Hit Generate. Great, it generated three variants and it also replaced the text. But there's one issue with the parsed generated content.

I'm only getting one response back, not three. That's the problem.

So maybe this might [00:44:00] have to do with the way I prompt it. Let me go back into my prompt and try to make it a little clearer. I'm going to go into app.tsx, select the whole prompt, open a new Composer, and say: modify the prompt to accommodate the following.

"Generate", and so on; I think all of what we copied there is fine. Then, on a new line: for example, if the input has two sentences, the output should be, for each variation, something like "variation one: text one". I don't want to write the code myself, so I just say "for each variation", depending on the number of variations [00:45:00] specified in the input field. I think that should pretty much cover it.

And: change "extracted text" to "selected text" when combining the input text, as we only want to use the selected text. It updated the prompt string to match the requested format, modified the example output, and maintained the structure and word count. Okay, let's try it out; I think it should work fine now.
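The format being requested here can be pinned down explicitly. Here's a hypothetical illustration of the response shape and a prompt fragment describing it; the wording and names are mine, not the exact prompt:

```typescript
// Illustrative: one entry per variation, one field per selected sentence.
type VariationResponse = Record<string, { text1: string; text2: string }>;

// A fragment that could be appended to the prompt to pin down the format.
function formatInstruction(numVariations: number): string {
  return (
    `Return a JSON object with keys variation1 through variation${numVariations}. ` +
    `Each variation must contain text1 and text2, one per input sentence, ` +
    `keeping the structure and approximate word count of the originals.`
  );
}
```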

I'm going to copy the key, paste it here, set the tone, ask for three variations, delete the old frames and keep just this one, clear the console, and generate.

Okay, I can see that it created four variations: variation 1, variation 2, variation 3, variation 4. The problem is that it's not replacing the content correctly, because the format has changed. That's okay. We can copy this object, go to Composer, and say: now update the replace-text function [00:46:00] accordingly, as the object from OpenAI has changed.

This is what it looks like right now.

You need to replace the first copy with the contents of variation 1, the second with variation 2, and so on.
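That mapping can be sketched as a small helper (the names are my own, not the generated code): variation N fills copy N, in creation order:

```typescript
// Illustrative pairing: the i-th copy (0-based) gets the contents of
// variation i+1 from the OpenAI response object.
interface VariationTexts { text1: string; text2: string; }

function assignVariations(
  copyIds: string[],
  variations: Record<string, VariationTexts>
): Array<{ copyId: string; texts: VariationTexts | undefined }> {
  return copyIds.map((copyId, i) => ({
    copyId,
    texts: variations[`variation${i + 1}`],
  }));
}
```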

Okay, I'm going to accept, and hopefully now things will work well. Let me delete this, select the frame, and select the text.

Enter the key, set the tone, ask for three variants, and generate.

Great, everything works perfectly! So, that is the end of this video. I hope you learned something. It took me a while to figure this out, but once I understood exactly what was happening, I learned a lot of new things along the way, like how to use Chat to break down a complex task and how to make those smaller tasks happen

so that it's easier to debug small [00:47:00] things and incrementally add on top of them, rather than giving it one huge task. Another big lesson was how I spoke to the AI. It seems like if you're a better writer, you can have better conversations with AI.

The more explicit and detailed you get, giving as many examples as you can and pasting everything you can think of, the less it can go wrong; if you're that clear, it gets it right. And there were some interesting things about using GPT-4o instead of Claude.

Somehow it integrated with the OpenAI API better, but I think these things should get better over time. But yeah, this was a bit of trial and error. Okay, this is the end of the video, and thank you for watching.