[00:00:00] Speaker 1: My inbox used to stress me out more than it should have, not because of spam, but because everything looked important. I had rules. I had folders. I had labels. And somehow, I was still constantly worried that I was missing something that actually mattered. The real problem wasn't the email volume. It was the mental work of deciding: does this need me? Does this need me now? Or can this wait? The tools we use to manage email are all built on rules, and rules work until things change. Email isn't predictable. Context shifts. What's urgent one day isn't the next. I ended up managing rules instead of my inbox. Rules can automate steps, but they can't decide what actually matters.

So instead of adding more filters, I built something that can actually think. I didn't want another inbox trick or just another set of rules. I wanted something that could actually read an email and decide what to do next. So I built this as an AI agent using Make. I partnered with Make on this video because it makes building agents really straightforward, and most importantly, it gives me visibility into every decision the agent makes. Most tools out there don't offer that level of transparency. Instead of hard-coding the logic, this agent can make judgment calls: whether an email needs action, whether it's urgent, or whether it's worth interrupting me. And because this runs inside a real workflow, I can see exactly why each decision happened, which is really the only reason I'm comfortable using it for something as important as my inbox.

At a high level, this agent does three things. First, it watches my inbox and decides whether an email actually needs my attention. If it doesn't, it gets labeled and quietly stays out of the way. If an email does need action, the agent figures out whether it's urgent. Most messages aren't, so it just prepares a draft reply to save me time. And only when something truly can't wait does it interrupt me by sending a Slack notification. The key is that none of this is based on rigid rules. The agent is making judgment calls on its own, and I can see exactly why it made each one.

Setting something like this up in Make is actually pretty quick; it only takes a few minutes. Let's take a quick look. Let's start by creating a new scenario and adding a module that watches for new emails coming into my Gmail inbox. From there, let's add an AI agent. I can choose a model; in this case, I'm using OpenAI's GPT-5.2. Then I give the agent a short set of instructions that explain its role and how it should behave. If you're curious, I linked the exact instructions I used for this agent right down below in the description. Next, I pass in the inputs the agent needs to work with: who the email is from, the subject, and the email body. And that's it. At this point, you already have a working agent.

Once the agent's in place, I can give it knowledge and tools. Knowledge helps the agent answer questions from information it already has, and tools let it do things in the real world. For knowledge, I can give it access to a simple FAQ, say a Word document or a TXT file, and it can then draft accurate replies to common, well-defined questions. For tools, I add three. One applies labels to emails in Gmail, another creates draft replies, and the third sends me a Slack notification when something's important and actually needs my attention.
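[Editor's note: Make wires all of this up visually, but if it helps to see the shape of the setup, here is a rough Python sketch of the same idea: a system prompt, the three email fields as input, and the three tools exposed to the model as function-calling schemas. The instruction text, tool names, and parameters are placeholders invented for illustration; they are not the exact configuration from the video or Make's internal implementation.]

```python
from openai import OpenAI

client = OpenAI()

# Placeholder instructions; the actual prompt used in the video is linked in the description.
AGENT_INSTRUCTIONS = (
    "You triage incoming email. Decide whether a message needs action and "
    "whether it is urgent. Use the tools to label it, draft a reply when a "
    "response is expected, and send a Slack notification only when it truly cannot wait."
)

# Three tools, mirroring the Gmail label, draft-reply, and Slack modules described above.
TOOLS = [
    {
        "type": "function",
        "function": {
            "name": "apply_gmail_label",
            "description": "Apply a label to the email, e.g. 'No action' or 'Action required'.",
            "parameters": {
                "type": "object",
                "properties": {"label": {"type": "string"}},
                "required": ["label"],
            },
        },
    },
    {
        "type": "function",
        "function": {
            "name": "create_draft_reply",
            "description": "Create a Gmail draft reply to the email.",
            "parameters": {
                "type": "object",
                "properties": {"reply_body": {"type": "string"}},
                "required": ["reply_body"],
            },
        },
    },
    {
        "type": "function",
        "function": {
            "name": "send_slack_notification",
            "description": "Interrupt me with a Slack DM. Use only for genuinely urgent email.",
            "parameters": {
                "type": "object",
                "properties": {"message": {"type": "string"}},
                "required": ["message"],
            },
        },
    },
]


def triage_email(sender: str, subject: str, body: str):
    """Ask the model to decide what to do with one email and return its tool calls."""
    response = client.chat.completions.create(
        model="gpt-4o",  # stand-in; use whichever model you select for the agent
        messages=[
            {"role": "system", "content": AGENT_INSTRUCTIONS},
            {"role": "user", "content": f"From: {sender}\nSubject: {subject}\n\n{body}"},
        ],
        tools=TOOLS,
    )
    # The returned tool calls are the agent's decisions; nothing here is hard-coded.
    return response.choices[0].message.tool_calls or []
```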
When I add the Gmail labeling tool, I give it a name and a short description, and then I let the agent decide which label to apply or remove. That's the key here: I'm not hard-coding any rules; I'm letting the agent make the decision. I do the same thing for drafting replies and then for the Slack tool, so that when something truly matters, it can interrupt me with a direct message. And that's it. The agent and its tools are now in place. Everything else you're about to see comes from the decisions it makes.

Let's now run a few real emails through this agent so you can see the decisions it makes in practice. Let's start with this first email. This one doesn't require any action. I'll run the agent, and you can see it immediately labels this as no action. There's no draft, no Slack notification; nothing else happens. If I open the reasoning here, you can see why: it recognized this as informational and not something that needs a response. This is the kind of email that normally just adds noise, but here it quietly gets out of the way.

Now, here's a different type of message, one that does require action but isn't urgent. I'll run the agent again. This time it's labeled action required, and it's prepared a draft reply for me. Notice what it didn't do: it did not interrupt me. There's no Slack message because it's not urgent. If I look at the reasoning, I can see that the agent understood a response was expected and had enough context to help, but it decided this could wait.

And finally, here's an email that actually blocks progress. I'll run the agent, and you can see three things happen: it labels it action required, it creates a draft reply, and I also get a Slack notification. This is the only time Slack fires: when something truly can't wait. And again, if I open the reasoning, you can see exactly why the agent decided this was urgent and worth interrupting me. The important part is that none of this is magic. Every decision is explainable, and that's what makes this usable for real workflows.

This is the part that really matters to me. If I open the reasoning here, I can see exactly why the agent made this decision, what it looked at, what it prioritized, and why it chose this path instead of another one. That's a big deal, because most AI tools just give you an output and ask you to trust it. Here, I don't have to guess. If something looks wrong, I can see where the decision came from and then adjust it. That transparency is the only reason I'm comfortable letting an AI handle something as important as my inbox.

The reason I like this approach isn't just that it works for email. The same kind of judgment applies anywhere you're dealing with messy, unstructured inputs: support tickets, form submissions, sales leads, or internal requests. That's what makes AI agents powerful here. You're not just automating steps; you're automating decisions.

One thing to call out: AI agents aren't the right tool for everything. If a process is fixed and predictable, regular automation is still the best choice. It's faster, simpler, and more reliable. AI agents make sense when judgment is involved: when inputs are messy, context matters, or the rules just keep changing. That's the distinction I use. If it needs doing, just automate it. If it needs thinking, that's when you use an AI agent.
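[Editor's note: to make those three outcomes concrete, here is a minimal sketch of how the agent's tool calls from the earlier snippet could be dispatched to the label, draft, and Slack actions. The handler functions are hypothetical stand-ins for the Gmail and Slack modules that Make executes for you; they are not Make's actual API.]

```python
import json

# Stand-ins for the Gmail and Slack modules Make provides; in a real scenario
# Make runs these tools itself and logs the agent's reasoning for each call.
def apply_gmail_label(label: str):
    print(f"[gmail] label applied: {label}")

def create_draft_reply(reply_body: str):
    print(f"[gmail] draft created: {reply_body[:60]}...")

def send_slack_notification(message: str):
    print(f"[slack] DM sent: {message}")

ACTIONS = {
    "apply_gmail_label": apply_gmail_label,
    "create_draft_reply": create_draft_reply,
    "send_slack_notification": send_slack_notification,
}

def dispatch(tool_calls):
    """Run whatever the agent decided: label only, label + draft, or all three."""
    for call in tool_calls:
        handler = ACTIONS.get(call.function.name)
        if handler:
            handler(**json.loads(call.function.arguments))

# Example: triage one email with the sketch above, then act on its decisions.
# dispatch(triage_email("client@example.com", "Invoice question", "Quick question about..."))
```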
What I like about this approach is that it doesn't try to automate everything. It takes the mental load out of deciding what actually deserves my attention. My inbox is calmer, I'm interrupted less, and when something does come through, I know it actually matters. If you're dealing with inbox overload, or really any workflow where you're constantly triaging incoming requests, this same agent pattern applies. I built this using Make, and if you want to try it out yourself, I've included a link right down below in the description. The goal here isn't inbox zero; it's focusing on what really matters. Thanks for watching, and I'll see you in the next one.