Optimise to Innovate
Not every tech investment leads to innovation, but it should...
Join your hosts, Alex Galbraith and Jason Gray, for the new Optimise to Innovate podcast from SoftwareOne.
SoftwareOne helps organisations maximise the value of their investments in technology. Over this podcast series, Alex, Jason and guests explore how technology investments designed to solve business problems and drive innovation can often transform from enablers into obstacles, and what you can do to navigate this challenge.
Together with industry experts, they will examine challenges in cloud investment, AI innovation, software estate management, cloud cost control and much more, providing practical advice and guidance on how these challenges can be overcome and how you can transform your technology investments into powerful drivers of business acceleration.
Want to suggest a topic for discussion on a future episode? Simply email Alex and Jason at o2i@softwareone.com.
Don't forget to hit the subscribe button on your favourite platform to get the latest episodes first.
Optimise to Innovate
Agentic AI is not really about agents
In this episode of Optimise to Innovate, we cut through the hype around agentic AI and ask the big question: is this the future of business process transformation, or just generative AI with a very enthusiastic to-do list?
Expert guests Alex Waldhaus and Seva Shchepanskyi explore why agentic AI is not really about agents at all, but about better process design, stronger data foundations, sensible governance, and knowing when to keep humans firmly in the loop.
From data platforms, governance, and change management to observability, human-in-the-loop design, and the risks of treating AI like magic instead of engineering it like a system, this conversation grounds the topic in real-world execution and practical examples.
A great listen for anyone who wants smarter automation and fewer meetings where someone says the word transformation 14 times...
You can follow our guests on LinkedIn:
- https://www.linkedin.com/in/alexander-waldhaus-5a5174115/
- https://www.linkedin.com/in/vsevolod-shchepanskyi/
Welcome to Optimize to Innovate, a show where we help organizations stop wasting money on things that don't add value to their businesses and understand the technologies that actually will. Join us as we share practical insights into the latest trends and innovations with industry experts across everything from software and FinOps to cloud, data and AI. We've got some fantastic guests with us today, and it's actually Friday as we record this. So happy Friday to you all, and let's jog enthusiastically into this week's episode.
Jason: AI is a multifaceted thing, but it can feel like AI is just a huge bucket into which we throw everything, resulting in it becoming confusing and a bit intimidating. I like to think of AI in terms of how you might use it, and describe it in three ways. You can use AI to help your customers, like in a chatbot or greater personalization through data. AI can help individuals and teams become more effective, something we call workplace AI. And AI can help improve your business processes. And it's that last one, improving business processes through AI, which we're unpacking today. We're going to explore what agentic AI is. And in the room here with me, I've got two guests who are perfectly placed to help us explore this topic. We have Alex and Seva. Would you guys like to introduce yourselves to the audience, please?
Alex W: Happy to. Thank you for having us. I'm Alex Waldhaus, with the company for roughly four years. Right now I'm leading our strategic priority for data and AI, but I also take care of sales strategy all up, and I look after our legacy Crayon data and AI team out of Jena, the former center of excellence. So that's me in a nutshell. My background is actually cloud infrastructure, and I started my career prior to cloud, when we had tin boxes and data centers. I'm a pre-sales guy by default. So that's me.
Seva: All right, and I'll take over. Happy Friday, everyone. Happy to be here. Vsevolod Shchepanskyi, but everyone calls me Seva, for an obvious reason: no one can pronounce my name properly, so feel free to call me Seva. I have been with SoftwareOne for five years. I've been working for the last 10 years in the digital transformation industry in various positions, mainly as a consultant, and then moved slowly into software project management and product management. Now, with SoftwareOne, after successfully delivering a couple of the biggest projects in the AI area, I am leading the sales and business development strategy for AI globally. Happy to be here.
Alex G: So, it sounds like you guys know a little bit about this whole data and AI thing, is what I'm getting from that. Thank you very much for joining us. So the topic today, as Jason was saying, is this whole agentic AI thing. Now, I'm a guy who's been to AWS re:Invent a few times. Two years ago I went, and all I heard was generative AI, generative AI, generative AI. Then last year I went, and all I heard was agentic AI, agentic AI, agentic AI. So for those people in the audience who are not so familiar with the topic: what's the difference between this whole agentic AI thing and the generative, let's say, web-based AI that we've been using for the last few years?
Alex W: Yeah, I think it's interesting if you look at how quickly buzzwords and hype cycles evolve, and also at how innovation behaves these days. If you look at the innovation curve, we have more innovation every day. The cycles shorten, but the amplitude of innovation actually increases. Let's talk about generative AI first. Before I was in my role here, my first touch point with it was actually Midjourney, where I started to use it. A friend of mine showed it to me on a Discord server. I mean, it can't get any nerdier, you need Discord to actually interact with it. And I saw the power behind it and I was blown away. This was still before OpenAI was a big thing, when it was still a research lab, and before we had Copilot, Q, Gemini, all those kinds of things. With GenAI, we have the ability to provide instructions, prompts, and the further we go in terms of innovation, we can literally use natural language to have conversations with AI systems based on large language models, and it's the conversational piece that makes it so easy and appealing for users from all walks of life. When we start to think about agentic AI and agents, there is a difference. We use the terms almost interchangeably, but there is a difference, and it's actually nothing new under the sun. With agentic AI, we go back to the thought process of how we automate business processes. That's the idea. How do we introduce automation? And obviously, we equip this with a grand vision of having fully autonomous systems without any human interaction at some point. But when we think about agentic AI, we first want to distinguish between what an agent is and what the meaning behind agentic is. Agents are systems or entities that have the ability to act.
While agentic is more like an adjective: it's a property, a kind of behavior that allows systems, ideally composed of many different agents, to act independently and, first and foremost, intelligently.
Alex G: So if I was to put that in layman's terms: generative AI is like a brain and a voice, and with agentic AI you're giving it hands, so it can actually do stuff.
Alex W: Oh, I like that. I have not looked at it like that. Yeah, you make the puppet more complete, right?
Jason: So, Alex and Seva, question for you. Thinking about this idea of setting AI loose, giving it autonomy, letting it have agency, effectively, like Alex was saying, giving it hands. When we think about organizations replacing established business processes with AI-driven solutions, we're talking about processes which are robust and well understood. Well, maybe consistency isn't exactly there, because obviously you've got humans involved, so there can be an element of inconsistency, but reliability in business processes is critical. So if you think about agentic AI as some kind of secret sauce to make business processes run better, what approaches have you seen work best to ensure processes involving agentic AI deliver consistently?
Alex W: I feel in your question there is a slight tone of concern about how much leniency and autonomy we want to give, and how we design processes. The underlying issue actually doesn't really come down to agentic AI. This entire domain is not really about agents or the agentic part; it's about how we design business processes in general. And if we want to make sure that we do not introduce too much too early, we need to go back to good practices for modeling, designing, or changing business processes. It's about layering. The mistake we want to avoid is trying to replace a full process end-to-end. The right approach is to introduce agentic capabilities in layers; that's what I mean by layering. We start with augmentation, not full automation. This could look like introducing decision support, not execution, and always keeping a human in the loop, so that we retain human decision making and observability, and can tell whether the introduction of automation actually serves the greater good of the process or not. So ideally you layer your business processes and you layer the introduction of automation. You start with decision support, not with execution, you keep humans in the loop, you avoid full autonomy in the beginning, and you design for failure, not for perfection. It's about retaining the possibility to interject at every small step of the process, understand the decision making, use decision logs, and ensure that things happen in an orderly fashion according to the desired outcome and the desired state of the system.
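The "decision support, not execution" layering Alex describes can be sketched in a few lines of code. This is a minimal, hypothetical illustration (all class and function names are invented, not from any real agent framework): the agent may only propose actions, every proposal is written to an audit log, and high-risk actions stay blocked until a human explicitly signs off.

```python
# Hedged sketch of a human-in-the-loop layer: the agent proposes, a human
# approves. Names here are illustrative, not a real library's API.
from dataclasses import dataclass

@dataclass
class Proposal:
    action: str     # what the agent wants to do
    rationale: str  # why, so a human can audit the reasoning
    risk: str       # "low" or "high" -- set by process design, not by the model

class HumanInTheLoopRunner:
    """The agent may only *propose*; high-risk actions wait for a human decision."""

    def __init__(self):
        self.audit_log = []

    def propose(self, action, rationale, risk):
        p = Proposal(action, rationale, risk)
        self.audit_log.append(f"PROPOSED: {action} ({risk} risk) -- {rationale}")
        return p

    def execute(self, proposal, human_approved=False):
        # Boundary: high-risk steps always require an explicit human sign-off.
        if proposal.risk == "high" and not human_approved:
            self.audit_log.append(f"BLOCKED: {proposal.action} awaiting human review")
            return "blocked: awaiting human review"
        self.audit_log.append(f"EXECUTED: {proposal.action}")
        return f"executed: {proposal.action}"

runner = HumanInTheLoopRunner()
p = runner.propose("refund order #123", "duplicate charge detected", risk="high")
print(runner.execute(p))                       # blocked until a human signs off
print(runner.execute(p, human_approved=True))  # now allowed to run
```

The design choice mirrors the conversation: autonomy is a property you grant per step, starting from "suggest only", rather than a switch you flip for the whole process.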
Jason: And it's really interesting, Alex, that you say designing for failure, because I'm not saying I'm a pessimist, maybe a realist, but I have a concern that if we change something that runs, even if it doesn't run in the most efficient way it could, and we replace some of the people-driven steps, it can seem great. But then when something breaks, if it's complex, it takes a lot of time to troubleshoot. We automated the process, maybe for some kind of business advantage, though a lot of companies will just do it for efficiency. And if you spend a long time troubleshooting a core process, well, you've kind of just lost your gains there. So is that what you mean when you say design for failure?
Alex W: Exactly. I think we all fully understand the goal of going after efficiencies and taking the decelerating steps out of every process. But let me give you three examples here. It's about how we log decisions, to ensure that if something breaks, we understand where it actually broke. It's about traceability of steps: how do we trace how the process actually went? And it's about boundaries, meaning what we allow the automation to actually do. The real risk is not that agentic AI fails; the real risk is that companies deploy it like magic instead of engineering it like a system.
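Those three examples, decision logs, traceability, and boundaries, can be made concrete with a small sketch. This is an invented illustration, not a real framework: each process step records its inputs, timestamp, and outcome to a structured log (so a broken run can be traced after the fact, which is Jason's troubleshooting concern), and an allow-list bounds what the automation may execute.

```python
# Illustrative sketch (all names invented) of decision logging, traceability,
# and execution boundaries for an automated process step.
import json
import datetime

ALLOWED_ACTIONS = {"lookup_customer", "draft_email"}  # boundary: execution allow-list

decision_log = []

def run_step(step_id, action, inputs, decide):
    """Run one process step, recording enough context to trace a failure later."""
    entry = {
        "step": step_id,
        "action": action,
        "inputs": inputs,
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }
    if action not in ALLOWED_ACTIONS:
        entry["outcome"] = "rejected: outside allowed boundary"
    else:
        entry["outcome"] = decide(inputs)  # the (possibly model-driven) decision
    decision_log.append(entry)
    return entry["outcome"]

print(run_step(1, "lookup_customer", {"id": 42}, lambda x: f"found customer {x['id']}"))
print(run_step(2, "delete_account", {"id": 42}, lambda x: "deleted"))  # blocked by the boundary
print(json.dumps(decision_log, indent=2))  # the full trace of what happened and why
```

The point is the structure, not the specifics: every decision leaves a record, and anything outside the declared boundary is refused rather than attempted.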
Jason: Yeah, that's a really good point. So do you think we're dealing with black box components, or are there some out there that companies should be aware of? Because what you're talking about there is understanding, an insight into what's happened and why, almost the ability for a company to justify each decision and each step that's been taken. Are we still working with black box components, or is there enough insight that companies can get into what's actually going on?
Alex W: This comes down to system architecture again, right? Agentic doesn't have to mean opaque. It becomes a black box if we design and build it like that. But if we allow observability and line of sight into every single step and every part of the process, we avoid creating a black box that operates in a way where we have no idea what is actually happening under the hood.
Alex G: That's an interesting point, because we still have a lot in the AI space which is a black box. Even the most intelligent AI researchers on the planet are still not 100% sure how all of this actually works under the hood. Seva?
Seva: Yeah, I'd like to add that in order to avoid the black box challenge, it all actually comes down to the data, right? Agents still operate on data like any other AI application, and the principle of garbage in, garbage out is still valid. Coming back a little to the beginning of the conversation, when you said agentic AI, agentic AI, agentic AI: everyone is talking about it. When we talk to our customers, many follow the hype. Everyone wants to do agents, but no one wants to speak about data platforms. And whenever customers tell us, oh, by the way, I would like to have 10 agents, we say: wait, wait, wait. Let's stop here and dig a little deeper into why you need agents and whether agents are actually the proper solution. And before that, we ask: which data would the agents operate on? Let's start with cleaning the data first, understanding the data, and digging into the topic of the data platform. That is the baseline for the future success of an agentic implementation, and therefore it is extremely important to address the data platform before actually implementing the agents.
Alex G: I think you're 100% right. It's so easy to get caught up in the shiny shiny of the art of the possible and this amazing stuff, because it really is amazing. When you hear some of the use cases that people have got, it blows minds. And in fact, we've got a couple of interesting ones we might talk about today. But without that data foundation in place, you're literally building on sand, aren't you? Because if you don't have the governance in place, you don't have the access, the security, you don't have your data. I mean, how many organizations out there have their data in silos, you know, simple little silos: this is my data, nobody needs to touch this data, we're just gonna keep it over here and it'll be fine. So there is that massive gap between the vision of where we can go with AI and the foundation we actually have in place underneath it. So what do you think are some of the activities an organization should think about to get the right foundations in place?
Alex W: The first one I like to call "get your house in order". We have joined many, many clients over recent years on their move towards the data and AI space, and we started to look at this in a quite structured and specific way. Right now I would say there are four different maturity levels, starting from what we internally, never externally, call the uninformed beginner: clients who understand they need to do something. We quite often see situations where the board of directors is pushing the C-suite: hey, we need AI, we want to stay in business, otherwise our competitors will take over. And then they are sitting in front of a massive problem, because they don't fully understand it, and they might start to experiment themselves, get a ChatGPT Pro or a Claude subscription, and then they are overwhelmed with the art of the possible, to quote you. And this goes up to the fully AI-native company. But in all honesty, when we look at the companies we have dealt with and our client base, there are no fully AI-native companies. I would say there are maybe 10 in the world; it's usually the research labs, the Anthropics and OpenAIs. And it's quite interesting, I will answer your question in a second, but if you look at what they actually did first, across all the innovation, you saw that they first focused on improving how they actually develop code. It was all about Claude Code, GitHub Copilot, all those kinds of tools, because they understood early on that the amount of coding required is massive. So they introduced a big change there first. But let's bring this back to what a company should think about first. It's: what do I want to get out of it, and do I have all my ducks in a row in order to utilize that?
Because otherwise you invest heavily into a desired business outcome where the foundation is not even there. So get your house in order, start to think about what your data structure looks like, and think about how you actually approach data platforms. This always sounds a little terrifying, because it seems to be a huge project. But when we look at the recent innovation in the data and AI space, we can see that things that a couple of years ago would have been a huge custom delivery project, highly customized, highly complex, lots of time and materials, because the majority of providers would not even dare to go into a fixed-price engagement, a lot of those things have actually been commoditized. So if you think about the hyperscalers and what they offer, and the completeness of platform capabilities you have, just think about AWS here, it actually becomes easier, and it allows customers of all sizes and maturity levels, in terms of their cloud infrastructure and technical capability, to jump on the bandwagon.
Seva: I'd like to add to what Alex just said the part about data governance, and understanding the data and IT estate. Before deciding whether agents or agentic AI is the proper way to automate business processes, let us first understand the processes, and that includes understanding the IT estate and the data estate. What are the processes? Who are the people involved, the business users, the people affected? In which tools do those processes happen? What kind of data is being created and shared? Who are the data owners and the data stewards? How is this data being stored? How is it being cleaned? And what is the quality, which data quality KPIs are we using? All of these topics and measures are very important, and they usually come with the data platform itself. As the other topic, it's also important to touch on the change management process. If we are implementing agents, what are the peculiarities and the impact on the people involved? Will those agents actually be used? Do those people understand how to use them? That is a major component of the success of an agentic AI implementation.
Alex G: So working backwards from what you just said: if we look historically, say five years ago, loads of people would just go and say, right, we're gonna do a data project or a machine learning project or an AI project. And they'd go, right, let's get all the data we've got in the company and throw it into this massive bucket, and we'll call that a data warehouse or a data lake or a data lakehouse or whatever. And then maybe we'll extract some value from that. But what you're saying, Seva, is that it's much more targeted than that, and much more careful, I think. So you might say: I've got this outcome I want to achieve. What is the specific data I need to achieve it? And then what are the guardrails I need to wrap around it from a governance perspective? And then I get an outcome. After that I build a bit of confidence and a bit more trust in the system, and then I pick my next use case or my next business case, and again I go through that same iterative cycle, rather than the blat-it-all-in-a-bucket method. Is that fair? Absolutely.
Seva: Yes, and when we implement a data platform, we, or rather the customers, usually think mostly in terms of technical capabilities. But that is only one component of the success of such a project, because the technical capabilities, if you talk about the hyperscalers and the out-of-the-box technologies they offer, are more or less the same, right? It's just a different ecosystem, different infrastructure, but it doesn't ensure the project will be successful. There are many more components to that: the governance, the processes, the understanding of the data and IT estate, change management activities, and there are many of them. And by the way, this tends to be one of the most underrated parts, but I think it is as important as the technical part itself.
Alex G: Absolutely.
Seva: The people bit in people, process and technology, isn't it? Of course, you can implement whatever technology and whatever data platform, but if it's not going to be properly used, then the outcomes will definitely not be what you expected.
Jason: So I'm thinking of things like increased context windows, AI models being able to retain more context and understanding of the tasks they've been given as part of that process. Having models trained to specialize in key tasks, like orchestration of multiple AI agents running process steps. Standardization of protocols like agent-to-agent, and especially the speed with which that's happened. Or making it easier for AI to utilize tools and data sources through the Model Context Protocol, which gives us effectively a proper toolbox instead of just an LLM hammer. Or, back to Alex's earlier point, it gives it some proper hands. So, from your point of view, what do you think has changed most to make agentic AI-driven processes really usable?
Seva: Very interesting question, thank you. The short answer is: none of these alone, but the whole combination of those topics is what has made agentic AI more capable in the last months. When it comes to what has the highest impact, definitely tooling and protocol standardization like MCP has the biggest impact. The core issue was that LLMs were smart but trapped in a text box. There were no integrations, so agents could not really execute the full planning, decision-making and action-taking loop. With standards like MCP, models now have a standardized way to discover and use various tools, and there are specific structures for enforcing the actions and the inputs. When we talk about better orchestration, that's definitely the second most important factor. What is now emerging are plan-and-execute patterns, task-specific models for routing, planning and verification of various actions, and structured reasoning loops. The key shift here is that you stop relying on emergent reasoning and start engineering the reasoning process itself. That also provides better traceability of the agent's decision making, an understanding of where drifts or errors happened, and the ability to find the performance leakage in the chain and what to work on and improve for better accuracy. When we talk about larger context windows, this is not a silver bullet yet. There are still many limitations: yes, of course, a bigger context provides more history, more documents, fewer retrieval calls, but at the same time it's garbage in, garbage out again.

So if the quality of that history, that data and those documents is not good enough, it doesn't fix reasoning errors, for example. And last but not least, as you said, agent-to-agent protocols. This, I would say, is very promising, but we are still very early here, so we should see how those protocols evolve in the future. As of now, multi-agent protocols and multi-agent conversation are useful, but only in very specific and controlled scenarios. The reliability layer here is still something that must be watched very cautiously, I would say.
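The tooling-standardization point can be illustrated with a toy sketch. This is deliberately not the real MCP SDK (the registry, schema format and function names below are all invented): it only shows the shape of the idea Seva describes, a standard way for an agent to discover what tools exist and call them with structured, validated inputs instead of free-form text.

```python
# Toy illustration of standardized tool discovery + structured tool calls.
# This mimics the *idea* behind protocols like MCP; it is not the MCP API.
TOOLS = {
    "get_weather": {
        "description": "Current weather for a city",
        "input_schema": {"city": str},  # a stand-in for a real JSON schema
        "fn": lambda args: f"sunny in {args['city']}",
    },
}

def discover_tools():
    """What an agent sees first: tool names and descriptions it can choose from."""
    return {name: t["description"] for name, t in TOOLS.items()}

def call_tool(name, args):
    """Enforce the structured input contract before executing the tool."""
    tool = TOOLS[name]
    for key, typ in tool["input_schema"].items():
        if not isinstance(args.get(key), typ):
            raise ValueError(f"{name}: '{key}' must be {typ.__name__}")
    return tool["fn"](args)

print(discover_tools())
print(call_tool("get_weather", {"city": "Jena"}))
```

The contrast with the "text box" era is the validation step: a malformed call is rejected before anything runs, rather than a model emitting prose that some brittle parser has to interpret.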
Alex G: You make a really interesting point there around context windows, right? A bigger context window can sometimes just mean you're holding more junk in memory. So for those people who are less familiar with this, Alex, can you explain what a context window is, in the, I was about to say, in the context of this conversation? But also, the bit for me, and I think we've all seen this: sometimes the longer you work, and forget the agentic bit, even in a plain generative AI session, the longer you sit having a conversation with a bot, the worse the responses get. You give it back information, you're having this ongoing conversation, you're like, no, no, change this, or no, I don't like that. And every single time it comes back, it gets progressively worse. So, Alex, can you touch a bit on this whole context piece? Because I think it's something that, although we've heard of it, isn't necessarily familiar to everybody.
Alex W: Yeah, and it's not just getting worse, it's getting slower. If you have a very long conversation with one of those chatbots or conversational bots, you see that the performance decreases on many fronts. Why is that? Let's translate it into layman's terms: let's translate context window as memory. Picture it: if you carry on a very long conversation, every time you prompt and give an instruction to the model, it goes through the history of the chat, through the memory, which decreases performance. And do yourself a favor: ask the tool, ask the model, hey, why is that, and see what the recommendation is. Yeah: start a new chat. And then, depending on the provider, the context windows are connected, so they can reference information, bits and pieces from other context windows, or they can't, depending on who you work with and how strong the model is.
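The context-window-as-memory picture can be sketched numerically. This is a rough, assumption-laden illustration (the word-count "tokenizer" and the 50-token budget are invented for the demo): each new turn re-sends the whole chat history, so the tokens processed per turn grow until you hit the window, and a common mitigation is a sliding window that drops the oldest turns.

```python
# Rough sketch of why long chats slow down: every prompt re-sends the whole
# history. A sliding window caps the "memory" the model must re-process.
def count_tokens(text):
    # Crude stand-in for a real tokenizer: roughly one token per word.
    return len(text.split())

history = []

def send(prompt, max_history_tokens=50):
    """Append a turn, trim oldest turns over budget, return tokens processed."""
    history.append(prompt)
    # Drop the oldest turns once the accumulated history exceeds the budget.
    while sum(count_tokens(t) for t in history) > max_history_tokens:
        history.pop(0)
    context = " ".join(history)
    return count_tokens(context)  # tokens the model must process this turn

for i in range(10):
    tokens = send(f"turn {i}: " + "some fairly long user message " * 3)
print(tokens, "tokens processed on the last turn (capped by the window)")
```

Without the `while` loop, the per-turn token count would grow linearly with the conversation, which is exactly the slowdown (and, as discussed next, the cost) Alex describes; "start a new chat" is the manual version of emptying `history`.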
Alex G: Yeah. So if we take that into the agentic AI world, the bigger the context window, potentially the more intelligent the responses I may get, if it has the correct context. But at the same time, I'm playing a bit of a balancing act with performance. Because, Seva, you were talking about how we optimize performance, and I could absolutely see, and we're not on an Optimise session this week, but we are Optimise to Innovate, that the bigger the context window gets, the more performance is required to handle these massive queries. And then as an organization, I'm paying more for that, aren't I, on my AI bill?
Alex W: Yeah, it comes down to tokens, right? You might not do as many retrievals, but you still need to put a lot of information into the tokens available. For instance, a recent example: I got myself a Claude Pro subscription, I wanted to play around with it, and I have to admit, the last time I coded was back in university. I'm not dating myself, but it's almost 20 years ago. But I had an actual business problem I'm facing myself which I wanted to solve. And within this subscription I was able to build an application, and I burned through tokens like hell. All the different tool calls, and, now we could switch into software engineering discipline here: everything was written into one large JSON file, which means every time you make an adaptation to the code, the model has to go through the entire file, and you burn through all the tokens until you receive the message: well, you ran out, you have to wait four hours, or you upgrade from 20 bucks a month to 100 bucks a month. Well, and then I tended my garden. So, yes, there is an economic impact, to answer your question. If the context window gets quite large, you obviously burn through more tokens. There is also another side to it. When you think about how you operate with all of this, even today many of the agents you use are quite reactive and prompt-based, meaning full autonomy is not included just yet. You need to give instructions to them through prompts. And what we saw is that the results get exponentially better if people know how to prompt. It is actually a skill you need to learn and pick up if you start to look for very specific information.

So when companies decide to go into the AI era, start utilizing it, and expect the workforce to actually adopt it, you need to make sure you provide proper training to that workforce. Give them the reason why first, that's like the first rule of change management and all those kinds of things: awareness and desire, create desire. But also tell them how they are supposed to do it, equip them with the right skills, because then the results will satisfy.
Jason: So, Alex, is there a danger that when it comes to improving processes through AI, we end up just tinkering around the edges and making small incremental gains, versus actually looking at how an AI system naturally runs best and re-engineering a process to mold it to that shape for optimum effect?
Alex W: Yeah, there is, I agree. We need to look at it from two perspectives here. Just throwing agentic AI or an agent at a business problem is not going to solve it. We talked earlier about system architecture and process design; we need to think about business processes like systems we want to build. Today we already see that if you want to do this properly, you deal with a lot of edge cases anyway, and that is where some of the risk is. Creating a general system of business processes that includes all the different edge cases is highly complex, and we might not even be there yet. So we should focus on the business processes that we fully understand and where we have the ability to do a complete re-engineering. It gives us an interesting view of how we want to think about agentic AI in terms of what it means for other business areas, because agentic AI doesn't only look at business processes; it also gives us the opportunity to fully rethink how we want to do application innovation. At the end of the day, an application is a solution to business processes and business problems, and it is architected in a certain way. But thinking about introducing more automation here begs the question: why not rebuild application architectures completely on an agentic AI framework and create a multi-agent system? But that already brings us to the other side: we might not even be there yet. The majority of agents you see in marketplaces today, no matter which publisher or vendor we look at, are reactive agents that fulfill only one purpose. Most of them connect the hyperscaler ecosystem with an ISV, because ISVs are usually very quick to adopt those new trends, produce IP and publish it on a marketplace. The number of true multi-agent systems, where a very complex business process architecture is actually implemented, is still quite low.
Jason: That's interesting. So that suggests, in a way, that companies are going to have to build rather than buy for some time yet.
Alex W: Oh, absolutely. It is a challenge to identify a provider that actually delivers the desired outcome, because in the end you need to think about business process management as something that is always highly custom to a client. They have their own processes. There might be similarities to other businesses, but the way they implement them, and the combination of different systems they use, is unique to them. That has an impact on what kinds of roles we will need in the future, but I don't want to go too deep into that right now.
Seva: To add to what Alex said, it is very important to understand whether the processes we are trying to automate with agents are the right processes, because we can end up automating processes that shouldn't have been there in the first place. So again, it is very important not to think about agentic AI implementation only in terms of technology. You need to understand which business processes are affected, which users are affected, and which KPIs you will measure the success of the implementation with. And if the processes themselves aren't right, maybe it makes sense to redesign the system, to redesign the process, rather than incrementally bolting AI automation onto a wrong process just to make it work.
Alex G: Yeah, that's a really good point. I also thought Alex's point about build versus buy was really interesting. For me, the nature of the beast with AI is that we are all more likely to build than buy. The buy in the future will come in the platform: the platform that makes it as easy as possible to connect your data sources, to secure things, to govern things, and to plug them into the appropriate models, as we're seeing with the hyperscalers and many third-party organisations building these AI platforms and connectors. I jokingly refer to it as AI middleware, pulling all these different bits together. But the intelligence, the customisation, the thing that's going to give us the benefit, is ultimately always going to be bespoke to the business, and therefore I think that's almost always going to be a build. So we'll end up with two camps: one is the platform, and the other is the actual bespoke element. That's where I think it's going. But we've been talking about theoreticals and some practicals, so let's get really practical. Can you give us a couple of examples of how agentic AI is really being used in the real world by customers?
Seva: For sure, I can start with that. I would even go a step further and tell you how we are using agentic AI for our own core business. As companies and customers know us, we are the biggest software asset management partner in the world, with AI capabilities. We're not an AI-first company ourselves, but with our AI capabilities we are addressing our own needs as well, so eating your own dog food is a very important topic for us. We are implementing various agentic AI solutions to run our core business more efficiently, and I would like to give the example of an agent called the EULA Analyzer. Our customers purchase software and we are the reseller, and with every piece of software they must read the EULAs, the end user license agreements. Nobody ever reads these things privately, right? Whatever we install on our personal machines, no one reads the terms and conditions; we just tick the box and install quickly. But enterprise customers and organisations actually must read them, which is why they have procurement departments, software asset management departments and so on. With every piece of software, they must read the EULA. That's use case number one: reading the EULAs themselves. Use case number two: on average, every second software provider changes its terms and conditions at least once per year, so every second EULA gets a new version every year. With the introduction of amendments and changes, you need to understand whether new risks are being introduced, or topics that are non-compliant with your organisation's internal policies. That means the software asset management, procurement or legal department, whoever is responsible on the customer side, must additionally read every second EULA at least once a year.
Now imagine an organisation buying a software bundle of 100 vendors, so 100 different software tools in one transaction. They're reading 100 EULAs, and on average 50 of those EULAs will change per year. Imagine the cost of that. So we have developed an agent that, first of all, has built-in knowledge from our 20-plus years of expertise in software asset management, where our colleagues know exactly what to look at when analysing EULAs: the financial, technical and legal risks, and exactly which points you have to keep in mind. That prompting layer, or intelligence layer, analyses the EULAs automatically. For the second use case, it also compares two versions of the same EULA to find differences and flag additional points that may introduce new risks for the customer. And beyond those standardised cases, various organisations have their own specific topics. For example, in the financial industry it's important to be DORA compliant, that's the Digital Operational Resilience Act, while for others it's important to comply with other regulatory sources. So we have also built an agent that does a web search and checks the specific EULA you are analysing for compliance issues against those third-party sources. If those sources are openly available on the web, the tool will check the compatibility and compliance of the specific document you are analysing against them. And this is exactly how we are optimising our own core business with agents.
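To make the version-comparison use case concrete, here is a minimal illustrative sketch in Python of the "compare two EULA versions and flag risky changes" step. The risk terms, sample clauses and helper names are our own invention, not SoftwareOne's actual EULA Analyzer, which encodes licensing expertise in an LLM prompting layer rather than keyword matching.

```python
import difflib

# Illustrative risk areas a SAM team might watch for; the real agent
# carries far richer, expert-curated knowledge in its prompting layer.
RISK_TERMS = ("audit", "termination", "liability", "indemnif", "data transfer")

def changed_clauses(old_eula: str, new_eula: str) -> list[str]:
    """Return clauses added or modified between two EULA versions."""
    diff = difflib.ndiff(old_eula.splitlines(), new_eula.splitlines())
    return [line[2:].strip() for line in diff if line.startswith("+ ")]

def flag_risks(clauses: list[str]) -> list[str]:
    """Keep only the changed clauses that touch a known risk area."""
    return [c for c in clauses if any(t in c.lower() for t in RISK_TERMS)]

old = "1. Licence is perpetual.\n2. Support included for 12 months."
new = ("1. Licence is perpetual.\n2. Support included for 12 months.\n"
       "3. Vendor may audit usage annually.")

added = changed_clauses(old, new)
print(flag_risks(added))  # surfaces the new audit clause for human review
```

Even this toy version shows the shape of the workflow Seva describes: isolate what changed between versions, then judge only the delta against known risk areas, instead of re-reading the whole document.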
Alex G: That must be saving us literally thousands of hours, even looking at just a couple of hundred vendors, never mind the thousands of vendors we work with. That's really interesting.
Alex W: Yeah, another example is also something internal, but eventually partner-facing, something we will actually hand out to other parties. Internally, we call it Project Aura. It's an AI agent that we use to better understand all the different data our hyperscalers provide to us, based on actual transactions, and the recommendations on what to sell next, and to bake this into our planned partner and end-customer conversations. What we are doing here is pretty much building a recommender engine that aggregates the different data sets available to us. Think about it like this: within the Microsoft CSP world, we know who the partner is, we know who the end customer is, we know what entitlements are included in the subscription and how many users they have. Microsoft also tracks this through their infrastructure. With the ability to aggregate all these data points, we have a really good understanding of what the next logical product conversation should be for our reselling motion. To begin with, we have rolled this out to 100 users, and the next step is to extend the rollout to the complete ANZ region. Doing this for Microsoft is just the beginning. If you think about the magnitude of our reselling business, taking that data into account and building recommendations is a very interesting journey.
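As a toy illustration of the recommender idea behind a system like Project Aura (whose actual data sources and logic are not public), here is a minimal co-occurrence sketch in Python: recommend products that similar customers already own. The customer and product names are invented.

```python
from collections import Counter

# Toy entitlement data: customer -> set of subscribed products.
# Illustrative only; a real engine would aggregate CSP transaction data.
entitlements = {
    "cust-a": {"M365 E3", "Defender", "Copilot"},
    "cust-b": {"M365 E3", "Defender"},
    "cust-c": {"M365 E3", "Copilot"},
}

def next_best_products(customer: str, data: dict[str, set[str]]) -> list[str]:
    """Rank products that co-occur with the customer's current stack."""
    owned = data[customer]
    co_counts: Counter[str] = Counter()
    for other, products in data.items():
        if other == customer:
            continue
        if owned & products:  # overlapping stack -> a similar customer
            for product in products - owned:
                co_counts[product] += 1
    # Most frequently co-owned missing products first.
    return [product for product, _ in co_counts.most_common()]

print(next_best_products("cust-b", entitlements))  # both peers own Copilot
```

This is the simplest possible "what should the next conversation be about" signal; the production version Alex describes layers in subscription details, user counts and hyperscaler-side telemetry.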
Alex G: That's super interesting, and both of those are very tangible. Especially, Alex, the example you gave there is the kind of thing where, okay, you could change the underlying data sources, but that's a use case that can apply to almost any business. If you have customers, and most businesses have some form of customer, you have data about those customers, and you can use it intelligently to help grow your business. That's a fantastic use case for agentic AI.
Jason: So let's move on to wrapping up as we come to the end of the podcast. I want to ask you, Alex: which capabilities of agentic AI do you think are mature enough for implementation today, and which ones are just around the corner, but not quite ready yet for production use?
Alex W: We touched on that slightly already. What do we have today? Everything that works in a reactive fashion: we use a prompt to generate an activity or action. That's something we can solve easily today, and the functionality is baked into a lot of the products you can subscribe to, either as a private citizen or from a business perspective. What's around the corner is connecting the large context windows of different agents to create memory. The next logical step is moving from memory into learning. And the next big leap would be the ability to implement the different edge cases we talked about earlier, which is what still separates us from fully autonomous multi-agent systems. So that would be the next big leap: to have a lot of connective tissue in between, and to have end-to-end processes depicted and built as agentic AI systems.
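The step Alex describes, from reactive, stateless agents to agents with memory, can be sketched in a few lines. This is a conceptual illustration only, with invented class names; real agent memory involves context-window management, retrieval and summarisation rather than a plain list.

```python
class ReactiveAgent:
    """Stateless: each prompt is handled in isolation.

    This is where most marketplace agents are today; the agent has no
    recollection of anything it did before.
    """
    def act(self, prompt: str) -> str:
        return f"action for: {prompt}"

class MemoryAgent(ReactiveAgent):
    """Next step: carry prior turns forward so later actions build on earlier ones."""
    def __init__(self) -> None:
        self.memory: list[str] = []

    def act(self, prompt: str) -> str:
        # Bound the carried context, mimicking a finite context window.
        context = " | ".join(self.memory[-3:])
        self.memory.append(prompt)
        if not context:
            return super().act(prompt)  # first turn behaves reactively
        return f"action for: {prompt} (context: {context})"

agent = MemoryAgent()
agent.act("check licence usage")
print(agent.act("summarise the risks"))  # second turn sees the first turn
```

The "learning" stage Alex mentions would go one step further: not just remembering past turns, but changing future behaviour based on them.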
Jason: And looking ahead, what do you see as the next major development or breakthrough in agentic AI that organisations should be preparing for? Do you think it will be technological, or do you think it will actually be more about business process redesign?
Alex W: Well, it depends on how we phrase the question. I think the next thing we should all prepare for is how we think about our business processes, and getting a new type of role into those conversations. If you start to think about using agents and agentic systems to redesign such processes, you need to bring data and AI experts into business process conversations, which introduces its own challenges. Think about how we hired data and AI experts ten years ago: we hired out of academia. Are those groups very close to business processes? Not today. That brings us to new types of job roles. I hear the term, and I quite like it, "forward-deployed engineer": someone who understands system design and system architecture, but also understands business processes and speaks business. We need to prepare ourselves for that. It's all about how we redesign processes.
Seva: I'd like to add to that. If the question is what the major development or breakthrough will be, I'm really looking forward to the AGI era, which is coming sooner than we all think: agent-to-agent communication, agent-to-agent payments, forums where agents talk to each other, platforms where agents can hire humans to do the things they are not capable of doing themselves. That last one exists already, half as a joke, as of now. But the core message is that innovation cycles in AI are getting so fast that it is really not easy to predict beyond three months. You can plan for the next three months, but if you plan for the next three years, you will probably revisit that plan in six months; or in six months you will understand why whatever you planned for the next three years is already not relevant. That is just the sheer speed of innovation in this space that we are not fully aware of. Here I would like to refer to the famous "Situational Awareness" paper by Leopold Aschenbrenner. He is one of the early OpenAI engineers, and he dedicated the paper to Ilya Sutskever, back during that internal chaos when people were leaving OpenAI and then being rehired, when strange things were happening there. He published the paper in June 2024, where he tried to predict, based on the history of the GPT models, how performance improved from the first GPT models to the next one and the next, and how much more performant they became. He provided really interesting examples.
He linked the development and the exponential growth in performance of the GPT models to the development of GPUs, because obviously they need more and more compute. And if that tempo holds, then AGI is coming somewhere in the mid-2020s, earlier than we think. It's very interesting and, in my opinion, still relevant. I would recommend everyone read it.
Alex G: Oh, that's awesome advice. I will definitely be having a look at that this weekend. So with that, I think we've gone a little longer than usual, but it's been such an awesomely interesting conversation. Gentlemen, thank you very much for joining us. Before we wrap up, if people would like to continue hearing from you and see the kind of stuff you're posting about all these topics, Seva, is there somewhere they can stalk you online?
Seva: Sure, you can follow me on LinkedIn, Twitter, Facebook, whatever. Usually I'm active on LinkedIn, so just find Seva Shchepanskyi; there aren't many Seva Shchepanskyis there.
Alex G: And Alex?
Alex W: Yeah, go to LinkedIn. If you find my Instagram or Twitter, that's more private, so not much AI content there. If you want to have a jobs, AI or technology chat, either ping me on Teams or find me on LinkedIn.
Alex G: Listen, we will post your LinkedIn profiles in the show notes. So with that, that's a wrap, I guess. For those of you really enjoying the show so far, this was episode three. Our next episode in the Optimise to Innovate cycle is going to be about how to save money and stop wasting money on SaaS subscriptions; the question you've got to ask yourself is, are we oversubscribed? I'm really looking forward to that particular episode. In fact, I may have to re-evaluate my bank statements, and my Netflix subscription, between now and then. But if you did like the episode, please hit subscribe on your podcast app and leave us a review; it really helps people find us. And if there's something you want us to cover in the future, don't hesitate to leave a comment or let us know via the socials; you can catch us at SoftwareOne just about everywhere. You can even email us at o2i@softwareone.com; that's o, the number 2, i, at softwareone.com. With that, thank you again to our guests, thank you for listening, and we'll see you in the next one.