[00:00:00] Speaker 1: Please welcome Jonathan Abrahamsen, Chief Product and Digital Officer at Deutsche Telekom, along with Stan Masueras from the Eleven Labs Go-To-Market team.
[00:00:17] Speaker 2: Hi, good morning everyone. My name is Stan Masueras and I'm delighted to see so many of you today. Mati started the day by talking about innovation, the things we shipped in 2025, and gave you a quick glimpse of what's ahead for Eleven Labs in terms of new technology we're planning to release. Marco talked about safety. But innovation and safety would be nothing without AI adoption, and that's why today I have the privilege to welcome Jonathan Abrahamsen, Chief Product and Digital Officer of Deutsche Telekom, the largest telco operator in Europe and one of the largest in the world. So Jonathan, thank you for your time today. I'm delighted to talk about some of the learnings you've acquired along the way. But first, maybe you could tell us about your current role and the scope of your position at Deutsche Telekom.
[00:01:03] Speaker 3: Sure, happy to do that, and thanks for the honor of being here with you. Yes, so I lead the product and technology function for Deutsche Telekom in Europe. You can think of that as 10 countries, from Poland and Germany in the north all the way through to Greece in the south. We build our digital platforms, our products, our ecosystems, our engagement platforms, e-commerce platforms, and also our broadband and television products for the group. Obviously in the last couple of years we've shifted our focus toward industrializing AI in an enterprise context at DT, a big focus for us and a huge opportunity. That's obviously a lot of the work that we do with you guys.
[00:01:46] Speaker 2: Thanks. So with the Eleven Labs team we've spent quite a lot of time; we almost became an extension of the Deutsche Telekom team. If you'd like to meet the Magenta team, they have a booth where you'll be able to experience some of the immersive experiences they've built on top of our agent platform. What has been very interesting from my vantage point is how thoughtful you were with AI adoption. When most large organizations were still in the starting blocks, looking at the reporting and analyzing everything, you actually started testing, and now you've finished 2025 ready to scale broadly in 2026. So I'll be curious to hear about some of the deliberate decisions you made last year to set up 2026 and keep your competitive advantage.
[00:02:36] Speaker 3: Absolutely. I mean, it's the perennial question for executives: what's your AI strategy? The thing about that, ultimately, is that in an enterprise context AI is a sort of proxy for human intelligence, which means it's applicable wherever human intelligence can be applied, which in a place like Deutsche Telekom is lots of different places. So we have to be focused and take our time to make sure we're clear on where and how we're investing. We see it in three broad buckets, three ways we think about running AI at DT. The first one is ultimately making a better telco. This is the work we do in customer service, obviously the work we do with you guys, which we'll talk more about today. It's the work we do in network operations, in marketing automation and the like. So growth, efficiency and automation is that first bucket. The second bucket is work for our employees. We have 130,000 employees across the Deutsche Telekom group, and I can tell you each one of them is as excited about using this technology in their work life as they are in their personal life. There's a huge demand and pull for this, and it's part of being a good employer that we can provide these tools to our employees. So there's a big task around making sure we have the right tools, that they're brought into an enterprise context and connected to the crazy amounts of disparate knowledge bases across the group, so that we can create much more meaningful jobs for our employees. So first bucket, a better telco; second bucket, more meaningful jobs for our employees; and the third bucket is actually something for our customers.
I think you mentioned it before: Deutsche Telekom has built a very successful business on the back of connectivity, providing and democratising access to connectivity across Europe and the US, and it's fair to say that connectivity has had a pretty profound impact on society. We believe, and I'm sure this audience believes, that artificial intelligence will have as big an impact on society as connectivity, if not bigger, and we would like to play some role on behalf of our customers, as Europe's biggest brand and Europe's biggest telecommunications company, in democratising access to this technology. Again, we do bits of work with you guys on this, and there's much more exciting stuff coming down the path, but we see our role across those three areas.
[00:05:08] Speaker 2: Perfect, and it has been really interesting to see some of the work done around democratising access to AI, with the partnership between Perplexity and Eleven Labs, creating really cool experiences in the My Magenta app. So yeah, you can all check it out at the booth after this session. And the first pillar is to leverage AI to deliver the best possible experience to your millions of customers. So now we're going to switch gears and talk about how you're using our technology to support your customer base. We're going to switch to a video now, please, so we can see the agents in action.
[00:05:49] Speaker 4: Deutsche Telekom, Europe's largest telecommunications company, is bringing Eleven Labs voice agents to customer support, available through the Magenta app or directly via the phone. Let me show you how that sounds. Oh hi there, you're so quick to pick up.
[00:06:16] Speaker 5: You prefer to continue in English? No problem. How can I help you today?
[00:06:22] Speaker 4: Yeah, let's continue in English. I'm calling about my Wi-Fi, I'm having connectivity issues.
[00:06:28] Speaker 5: I can look into that for you. To do so, can I get your permission to access your router to run some remote diagnostics?
[00:06:36] Speaker 4: Please do so. And also, can you speak a bit slower? It's quite loud in here.
[00:06:41] Speaker 5: Thank you. Connecting now. Hmm, okay, I have found the issue. I am rebooting your router. Okay, you should be back online now.
[00:06:53] Speaker 4: Amazing, thank you.
[00:06:55] Speaker 5: Oh, and one quick tip, Louise. If your Wi-Fi signal is weaker in certain rooms, a Wi-Fi repeater can help improve coverage. Would you like me to suggest some suitable options for you?
[00:07:08] Speaker 4: That's actually a great idea. Can you send them to me by email? I have to run.
[00:07:13] Speaker 5: Absolutely. I will send you a short summary of what we fixed in both German and English, along with information about suitable Wi-Fi repeaters to your registered email address.
[00:07:24] Speaker 4: Perfect, thanks. Speak to you soon.
[00:07:27] Speaker 5: Perfect. Have a great day, Louise.
[00:07:31] Speaker 2: A quick round of applause for Deutsche Telekom.
[00:07:35] Speaker 3: I think that interaction is a super interesting one. Obviously, with this audience, you can see beneath the surface of what's actually going on there. So obviously a very empathic interaction with a voice agent, but it's also the context switching, English to German and back, the tool calling, and the upsell at the end, which is obviously very important as well. The fact that we can stitch all of that together and make it seamless for a customer is fundamentally different from what we've been able to do until now.
[00:08:07] Speaker 2: Usually, people will just judge the quality of the voice. But what has been interesting is how thoughtful you were regarding specific workflows, super complex workflows, across all the different products you support, from broadband to TV to mobile to landline. And then also some of the work done on the audio production of the voice: keeping the official brand voice of Deutsche Telekom while moving it into the next century, making it emotional, emotive, empathetic, dynamic, depending on the use case.
[00:08:37] Speaker 3: I think we've still got some work to do there. It still sounds a little bit like a 1980s game show host; we'll improve that over time. But what you guys did is take something which was probably recorded in the 90s and somehow make it much more relevant. I think we have more work to do in this context, and working with you guys prompts us to listen to these things again and say, we need to improve on that.
[00:09:02] Speaker 2: Yeah, it was one of the discussions we had last week regarding the voice. We were like, okay, we might need to work with the branding team to think about how to take it to the next level. It worked well in the old IVR setup, and now how can we scale it? But that brings us to the question people usually ask: what's the difference between a good and a great AI agent in your world?
[00:09:27] Speaker 3: I think we've got a good one now; we're not yet great. Although I was listening this morning to the new release you guys did on V3 with the expressive mode, and I think that's another game changer, which we're incredibly excited about testing. We think about the path towards a great agent, and ultimately a great agent for me is one where a customer actually prefers to talk to the AI agent over a human agent, and we're not quite there yet. But we think about it in four different ways, our four Cs. The first one is correct: it's able to understand, recite knowledge accurately, and ask for clarification if something's not clear. The second one is calm: it understands context and can de-escalate if that's required. The third is capable: tool use is the gold standard for us, the ability to make decisions, not just answer questions, but actually take actions on behalf of the customer. And the fourth C is continuously improving: all our evals and observability are in place to make sure we have a constant feedback loop and improve the model and the application as a result. So lots of fun on that path, but still a lot more work to do.
[00:10:40] Speaker 2: Thanks, thanks. Spending time with your team, it looks like you're always at the tip of the spear of innovation, testing new technology, even putting Eleven Labs aside. But how did you come to the realization that AI voice agents were at a level suitable for Deutsche Telekom to deploy at scale?
[00:10:58] Speaker 3: Yeah, you ask that question, and usually people think it's like, oh, when we got to GPT-4o, that was the time. And it really wasn't like that. If we think about deploying this sort of technology in an enterprise context, in front of millions of customer interactions, the model is important, but the model has been good for some time. We're not yet at the stage, unfortunately, where we can just plug our call center, our IVR, into a large language model and the call center goes away. There's a lot of orchestration, a lot of plumbing, a lot of applications that we need to build and wrap around those models to make them effective. That's all your evals, all your observability. Ultimately, if we're going to push something in front of that many customers, with a brand and in an enterprise context like ours, we need observability, we need the ability to monitor and continuously improve. So when we had all of that plumbing in place, that's when we started. We started in digital, which is much easier, just in text, a couple of years ago, took a lot of learnings and optimized from there. The reality is that in DT only about 11% of our customer interactions happen through digital channels. The rest are in voice, and that's really where the huge value is for us now, and where our focus is: making sure we continue to push that sort of experience in front of all those customer interactions.
[00:12:20] Speaker 2: Thank you. We met for the first time almost a year ago, with Mati and the team you've seen in the video, when we came to Bonn to spend two days with you and your team. So I'll be curious to hear, since that day, how has it been to work with our forward-deployed engineering team, customer success, go-to-market, legal team, security team, et cetera?
[00:12:47] Speaker 3: You're putting me on the spot. No, listen, it's been fantastic working with you guys; I would say that on stage or off stage. The quality of the talent that you have, and are able to attract, is fantastic, and it's wonderful working with you. I'd call out three things here. I guess we have Palantir to thank for this forward-deployed engineering model, and it's working fantastically in this context, where we're bringing this kind of technology into an enterprise software environment. The first thing is shared goals and alignment around how we build, both teams working together to ship amazing products towards the same goal. The second thing is co-design of these platforms: deep domain expertise on our side about how our platforms and systems work, expertise on your side, and working together to co-design systems, which makes for a more robust, less brittle integration. And the third, probably the most important for me, is the knowledge transfer. Every extra hour my guys get to spend with your guys, knowledge is transferring, and that goes beyond the end of the project; that's something that lasts. So we're super happy with the partnership and with the ability to have your guys so deeply embedded with us.
[00:14:06] Speaker 2: Thank you. Mutual feeling; it's been a pleasure to work with your team. Now I'd love to talk about success metrics. It's great to test and deploy, but I'd be curious to hear, without sharing all your inside secrets, how are you planning to measure the success of the fleet of agents you're deploying?
[00:14:28] Speaker 3: Yeah, in the short term we look at it differently from how we think about success in the longer term. In the short term, it's the metrics you would expect, which I think we're all tracking at scale at the moment: containment rates with quality, avoiding escalations; the security metrics, so policy violations, hallucination rates and that sort of thing; and things like friction, customers refusing to speak to the bot, the "I want to speak to a human" sort of stuff, in the first weeks. Longer term, though, the intention is not just to answer the customer's question but actually to solve the problem. So it's much more about end-to-end workflow completion: are we making sure the customer's problem is solved, or that the change they're asking for is actually made by the bot and handled end to end? We look at things like customer effort score and NPS at that point as well, along with cost and platform capabilities. So we look at it in those two frames, but ultimately the end goal is customer completion, not just good conversations.
[00:15:35] Speaker 2: Thank you. Yeah, what we've seen happening is that initially some of our partners and customers were using our platform just for deflection: we get, say, a thousand calls per day, and instead of putting the customer in front of a human agent, we want our AI agent to deflect the conversation. From there it moved to: what is the time to resolution? What is the win rate? What is the quality of the answer? What is the impact on NPS and CSAT? And this is where we see companies like yours really skyrocketing, because there are specific use cases, which I'd love to touch on, where customers now prefer to speak with an AI agent, not only for less time-sensitive matters, but also because they can use it in a way that is very personal to them, where they can share openly without feeling judged. So I'd be curious to hear: what were some of the surprises you discovered along the way?
[00:16:29] Speaker 3: Surprises? I mean, lots; it's been quite a long journey. The biggest surprise for me is the expectations we place on our human agents to disambiguate and understand the complexity of the businesses we run today in an enterprise context. When we started this, it became clear very quickly that there was no single source of truth for how we should be interacting with customers. The knowledge base was split between training manuals, portals, weekly emails from the team leader, and even tribal knowledge. And that doesn't work when you want to give an instruction set to a stochastic technology like a large language model and you want deterministic outcomes. So it took us some time to get that organized. We're now past 15,000 SOPs, which we've had to create and put in Gherkin format so they can be machine readable. That was a big task, and something which, when you're just playing with the models and doing your POCs, you don't see. But when you want to take something to production, and you want an experience as good as what a human agent can deliver, that was a big one. Beyond that, there's always an API that's missing, there's always a data set that's not available. So it's really an iterative process as we roll this out and get to full scale.
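[Editor's note: to illustrate the point about machine-readable SOPs, a customer-service procedure rewritten in Gherkin's Given/When/Then structure might look like the sketch below. The feature, scenario, and step wording are hypothetical, not taken from Deutsche Telekom's actual SOPs.]

```gherkin
Feature: Router connectivity troubleshooting
  # Hypothetical SOP sketch; steps are illustrative only.

  Scenario: Customer reports a Wi-Fi outage
    Given the customer is authenticated
    And the customer has granted permission for remote diagnostics
    When the agent runs remote diagnostics on the customer's router
    And the diagnostics report a recoverable fault
    Then the agent reboots the router
    And the agent confirms the connection is restored
    And the agent sends a summary to the customer's registered email
```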
[00:17:54] Speaker 2: Thank you. Maybe I'd like to touch now on the future of the human versus AI agent experience. I'd be curious to hear what's top of mind for you and how you see things evolving, not only in 2026 but beyond, in the CX and customer support space.
[00:18:12] Speaker 3: I wouldn't say versus. Where we are today is very much a hybrid mode: we have AI agents handling the more mundane or repetitive L1-type interactions, and as things get more complicated or more heated, they get escalated to a human agent. Again, with the kind of technology you guys are showing now, and the kind of things we'll start building into our own Frag Magenta bots at Deutsche Telekom, I think that will quickly get addressed. Humans will be elevated at that point to roles more around quality control and journey design, because humans will still be required for that. But more and more, given the quality of the interactions you can have with very good voice models and the underlying reasoning capabilities of the large language models, I think we're on a pretty good path.
[00:19:09] Speaker 2: It's been interesting to see, especially for customer support agents. I don't know if any of you have been a support agent; I was, a few years ago, fixing printers and photocopiers over the phone. It's a very important job, and it's very hard when you have to deal with customers every single day, repetitive tasks, repetitive questions. That's why, as you just shared, which I find interesting, this role is being elevated. Support agents will focus on strategic questions, questions that require a sophisticated answer, but they will also become more hands-on in developing a fleet of agents, optimizing the workflows and the knowledge base, monitoring the quality of the answers, and creating almost an extension of themselves. So it's going to be interesting to see how things evolve this year as more and more companies like Deutsche Telekom bring AI agents to scale.
[00:20:04] Speaker 3: Absolutely.
[00:20:05] Speaker 2: So now I'd love to talk about what I shared in my introduction: that you've been a fast mover, maybe not as fast as you wish, but we see Deutsche Telekom as one of the fastest large-scale enterprises, touching millions of people, to experiment, deploy, and put AI technology into production at scale. What's the secret sauce? How do you operate internally? I think it would be a great learning for the audience.
[00:20:33] Speaker 3: I think, Stan, you're being very polite, and maybe we've got the wrong benchmark. No, thank you. The reality is a constant angst for me: you just read the press, or listen to a podcast, or keep an eye on your Google alerts, and every day there's another release, another announcement, another breakthrough, at the model layer, the infrastructure layer, all the way through to the application layer. So I've constantly got this angst of: are we moving fast enough? How do we move faster? How do we continue to ship? Obviously working with partners like you guys is a big part of that. In the context of a large enterprise like Deutsche Telekom, it's super important for leadership to invest heavily in reducing operational and coordination overhead. Unlike a startup, there are more people, more opinions, more objectives, more targets, and they don't always align. So focusing very clearly on the objective we're trying to solve for, and then getting broad alignment across the organisation, is super important. That's a big part of it: keeping alignment in the context of a big organisation. Beyond that, slicing deliveries: anything that takes longer than three months to go from ideation to being in front of a customer gets deleted or cut down. In the context of a large organisation, we don't have the patience or the ability to stay focused for much longer than that now. If it's anything longer than three months, we don't start. Those are probably the biggest learnings from my perspective.
[00:22:17] Speaker 2: Great. And maybe last couple of questions, but is there any specific advice you'd like to share with the audience, especially teams that might be a large organisation like Deutsche Telekom, but also smaller teams? What would be the one lesson that you'd like to share with them when it comes to deploying agentic AIs and especially deploying voice to your own customers?
[00:22:38] Speaker 3: Lessons learned. Now, you told me you were going to ask me this question, so I prepared on the flight on the way in; I wrote down some thoughts to make sure I'd hit them. The first one is around alignment. It sounds super obvious, but in a company the size of Deutsche Telekom, or any large corporate, you have hundreds of thousands of employees, multiple lines of business across 10 countries, B2B and consumer, mobile, fixed, television, broadband, and one product team and one engineering team to solve for all of that. It can be quite hard to keep such a broad organisation on the same page. We invest heavily in DT to make sure we have super crisp, outcome-focused objectives for the organisation. We have just five. And it took, I can tell you, a long time to get 130,000 people onto the same page around just five things that we focus on. It's super important because it keeps us focused on what matters, kills the pet projects, and makes sure we can ship. But it also changes the way we work. In my team at DT there's some required reading, a Teresa Torres book, I'm not sure if you've read it, called Continuous Discovery Habits. It talks a lot about product trios: atomised teams of three people, a product person, an engineer and a commercial person, sitting together with a core focus on a small number of topics, ultimately one topic to solve for. That's been the biggest investment and learning. Again, it sounds obvious in the context of a smaller company; in a big company, I can tell you, that's a big change. The second big learning for me is that we made our focus tool execution, not conversation quality.
The models and technology we're able to get from partners like you solve very much for the second thing, but it's very much on us to focus on and solve the first. In the early days it's very easy to get to a place where you're shipping nice conversations but not ultimately solving the customer's problem, and then you see customers calling back because you didn't solve it end to end. So that was a big one. From our perspective, our moat in this context is our learning loop. We've invested hundreds, if not thousands, of hours across the team in optimising these platforms, understanding our business better, writing those SOPs and transforming them into a format that makes sense. I can imagine that if we were starting again today, that would be a big task still ahead of us; AI hasn't solved that part for us yet, understanding our business and documenting all of it. So that time investment is super critical and super important. Those are the big learnings from my perspective.
[00:25:40] Speaker 2: Thank you. I'm going to be greedy, one last question. What's next for Deutsche Telekom and for your team when it comes to AI adoption and AI innovation?
[00:25:49] Speaker 3: You're asking for secrets. No, I think you'll hear more about this; we're in a shared room, but you guys can keep a secret, I trust you. You'll hear more about it at MWC in a few weeks. I mentioned at the start this idea of democratising access to artificial intelligence for our customers. We take that seriously, and I think we've got a role to play there, a unique opportunity. It's around how we interact with people, which is through voice. If there's a control point that Deutsche Telekom, or telecommunications companies in general, still have, it's the voice: there are 800 billion voice minutes per year on our networks in Germany. There's still a huge amount of interpersonal interaction. And the challenge we face today, more generally, is that this technology, artificial intelligence, is generally trapped in an app, in an app drawer somewhere, and isn't immediately available. People still have to think, maybe not in this room, but in general, "I need to open up my ChatGPT, I need to open up my Perplexity to do that." And access to it is controlled by two West Coast hyperscalers, and that's not right. So the idea of pulling this technology out of the app drawer and into our network, making it much more easily accessible, having it on tap like making a phone call, is something we're super excited to announce shortly and looking forward to doing with you guys. It's a big part of what we see ahead of us.
[00:27:35] Speaker 2: Amazing. Well, big round of applause to Jonathan. Thank you so much for your time. Thank you so much for sharing so much knowledge. It's been a pleasure to work with you. Thanks, everybody.