Speaker 1: Hey there, I'm Robert, and in this video, I'm going to show you some exciting updates that I have made to my Kiki Alfred workflow for interacting with AI text models. If you are an Alfred user and you are interested in using the power of AI to boost your productivity, then you may want to stick around. I previously shared a video where I introduced this workflow and went through some of the setup, so I'll keep this brief and just show you the new stuff. First of all, Kiki now officially supports Anthropic models; by that I mean Haiku, Sonnet, and Opus. Currently, I feel like these models are so much better than ChatGPT for writing. They just feel so much more natural and a little less predictable; they just don't feel so AI. Since this workflow is all about text, I decided to incorporate this API. Second, there is a new option triggered by holding modifiers: if you hold Fn and Shift whenever you are about to start a chat or send some text through a preset, you are now presented with a list of models to choose from. These models are set by the user and can include OpenAI, OpenRouter, Anthropic, or local models. This is something really, really exciting, but I will tell you about it in a little bit. First I want to show you another part that I also incorporated in this update, which is Whisper integration. Whisper allows several things; maybe the most basic one is just to dictate and paste into the frontmost app, like this.
Speaker 2: This is just some random text that I am using to test this feature.
Speaker 1: But the really cool thing is that since Whisper supports many, many different languages, I can dictate in another language. In Spanish, let's do it: Esto es solamente una prueba para checar si funciona o no. (This is just a test to check whether it works or not.) I don't have to change settings, I don't have to do anything, because it will auto-recognize the language that I'm speaking. And since we're in Kiki, and Kiki connects with so many different models and presets, I can make a request directly by dictating through Whisper: Write me a short song that I can use to teach children about productivity. Just something very random, I know, but let's test this. And there we go: "Time to get to work, let's not delay, there's so much to do." Another thing I can do: I have some presets that I have made for myself, which I go over in the other video I have on Kiki, and you can also see them in the documentation. For example, I can say, you know what, I will dictate something in another language and I want it translated into English.
Speaker 2: Hola, buenos dias. ¿Qué hay para desayunar? Hello, good morning. What's for breakfast?
Speaker 1: Another thing that I can do is, for example: please tell him that I am very, very sorry, I cannot meet with him tomorrow. I am available next week, Monday to Wednesday before noon. I just need to know a day or two in advance so that I can reschedule. But again, please apologize. You know, I have a preset that I have specifically set up for replying to emails. There we go: "I am very sorry, but I will not be able to meet with you tomorrow as originally planned. My apologies for the late notice. I do have availability next week, Monday through Wednesday." It organizes everything. This is just a very quick way to format your emails: you dictate something in a very raw form, and this will help you give it format in the tone that you want, in the language that you want. It really is super useful, and it just needs a little bit of your creativity to come up with what you want it to do for you. Another thing that you can do with Whisper is, for example, select some text and dictate: please translate this into Chinese for me. I'm dictating my prompt. In this case, I am presented with Alfred's bar, and I can decide, by holding a modifier, whether I want the result in the frontmost app, in a dialog, or whether I want to see the list of models. I will just press Enter, and I will see the dialog with the text translated into Chinese. Something that I mentioned and that is really, really cool is that with this new update of Kiki, you can use local models. This is something new for me, but just like there are models like ChatGPT or the Anthropic models, there are also open-source models, and now you can download these models and use them without needing an internet connection.
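For context, the dictation demos above rely on OpenAI's Whisper transcription endpoint. A minimal stdlib-only sketch of that call might look like the following; the helper names and the boundary string are my own illustration, while the `/v1/audio/transcriptions` endpoint and the `whisper-1` model name come from OpenAI's API:

```python
import json
import urllib.request

# OpenAI's Whisper transcription endpoint
WHISPER_URL = "https://api.openai.com/v1/audio/transcriptions"

def build_multipart(audio_bytes: bytes, filename: str,
                    model: str = "whisper-1",
                    boundary: str = "kiki-demo-boundary") -> tuple[bytes, str]:
    """Build a multipart/form-data body by hand (stdlib only)."""
    parts = [
        (f'--{boundary}\r\n'
         f'Content-Disposition: form-data; name="model"\r\n\r\n{model}\r\n').encode(),
        (f'--{boundary}\r\n'
         f'Content-Disposition: form-data; name="file"; filename="{filename}"\r\n'
         f'Content-Type: application/octet-stream\r\n\r\n').encode()
        + audio_bytes + b"\r\n",
        f"--{boundary}--\r\n".encode(),
    ]
    return b"".join(parts), f"multipart/form-data; boundary={boundary}"

def transcribe(audio_path: str, api_key: str) -> str:
    """POST an audio file to Whisper and return the recognized text.
    Whisper auto-detects the spoken language, as in the dictation demo."""
    with open(audio_path, "rb") as f:
        body, content_type = build_multipart(f.read(), audio_path.rsplit("/", 1)[-1])
    req = urllib.request.Request(
        WHISPER_URL, data=body,
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": content_type},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["text"]
```

Note that no language parameter is passed at all; the auto-detection the video demonstrates is the endpoint's default behavior.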
So whether you care about privacy, or you just want to save money, or you just don't want to support big corporations like OpenAI, you have the option of having your own little ChatGPT kind of machine inside your computer, and that's really, really cool. There are several free applications that allow you to do this. I have currently been using LM Studio, and I have also tested Ollama, but I find LM Studio more user-friendly. The only thing that you need is a server with an OpenAI-compatible API. I will cover this in the documentation of the workflow, but with this new option, you can use Kiki in three ways. The first way to set up this workflow is the usual, totally online way, for which you would fill any of these, or all three, fields with your API tokens and leave this last field totally empty. The second way, if you have one of these applications that let you mount a server, is to use it both online and offline. In that case, you would fill any of these API token fields with your information, and you would also fill this one with the URL that is given to you by your tool. To indicate that you want a request to go to this API endpoint URL, you add the "custom_" prefix to your model name. The third way to use Kiki is totally offline, which is actually what you are seeing here on the screen, because I do not have any API tokens set up at all. In this case, all my requests will go automatically to this API endpoint URL, so this is not even necessary. Let me just open LM Studio. When you open it, you are already presented with some popular models to download, and I do not plan to go into detail here, because there is so much that could be said about all the options, menus, and settings, but to keep this simple, I already downloaded a few models.
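The "custom_" routing just described can be illustrated with a tiny sketch. This is my own illustration of the idea, not Kiki's actual source; the URLs are examples, with port 1234 being LM Studio's default:

```python
# Hosted endpoint (used when API tokens are configured)
OPENAI_URL = "https://api.openai.com/v1/chat/completions"
# LM Studio's default local server address
LOCAL_URL = "http://localhost:1234/v1/chat/completions"

def resolve_endpoint(model: str) -> tuple[str, str]:
    """Route "custom_"-prefixed model names to the local server,
    stripping the prefix; everything else goes to the hosted API."""
    prefix = "custom_"
    if model.startswith(prefix):
        return LOCAL_URL, model[len(prefix):]
    return OPENAI_URL, model
```

In the third, fully offline configuration, every request would go to the local URL regardless, which is why the prefix becomes unnecessary.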
Speaker 1: LM Studio gives you the option of running a server with multiple models down here, and in this case, you can load them all on your computer. For now, let's just do it with only one model. Now let's just click here to load Llama and start the server, and I can make a very quick request.
Speaker 2: Give me a short haiku on LLMs.
Speaker 1: I can see how the answer was streamed here and popped up here, and there you have it. You can now use any of your Kiki presets offline, even in an end-of-the-world kind of scenario. So that's a quick overview of the latest updates to Kiki. I am really excited about these new features, and I am still experimenting a lot with the local language models. It really is exciting to see so much technology now available to pretty much anyone, but even more exciting for me is being able to use it for everyday tasks, as this workflow allows me to do. I believe it truly is a game changer for productivity. As always, I am really interested to hear your feedback, and don't forget to sign up for my newsletter if you want to stay up to date on all the latest updates, not only about this workflow, but anything that I do or media I consume: apps, devices, hobbies, film recommendations, and more. I hope that you found this video helpful. Thank you for watching, and I'll see you in the next one.