Reliability Calls 3 - AI Automation that actually works: $100M, messy data, zero surprises

Every month, we unpack what it takes to build reliable AI – AI that doesn’t just demo well, but works in production, handles edge cases, and earns trust over time.
Here’s this month’s agenda: AI Automation that actually works: $100M, messy data, zero surprises
PromptQL CEO Tanmai Gopal shares how enterprise teams are using PromptQL to drive over $100M in expected annual impact through AI-powered automations of business-critical workflows.
For AI to take on business tasks, it must meet a very high bar for reliability. Most GenAI systems fall short—too brittle, too unpredictable for production. PromptQL changes that.
In this session, Tanmai walks through how teams are using PromptQL to deploy AI that actually works in the enterprise—reliably.

What's discussed in the video

Today we're talking about AI automation that actually works: $100M, messy data, and zero surprises. We have a good agenda for you today, including our CEO and co-founder, Tanmai Gopal, who's going to run through a few things with us, along with some of the other folks from the engineering side of the organization. We're also going to talk about automations with PromptQL. So let's go ahead and jump into it with the man, the myth, the legend, Tanmai Gopal. Tanmai, if you'll join me on the stage, please. Hey, how are you? The man, the myth, the legend. I think I've used that before, but for me it's always the turtleneck that makes me think of it. Hey, everybody. Thanks for having me, Rob. I'll dive right in.

I titled this talk "Your best programmer is Karen from accounting." It could be anybody, but that's the deliberately provocative framing: we want the best programmers in our organizations to be the domain experts who live in the organization and work on the specific parts of the business that can benefit from a lot of AI-driven automation.

I don't know how many of you saw the tweet that went viral when Andrej Karpathy said that the hottest new programming language is English. Does anybody remember how long ago that was? It was a few years ago, and Karpathy has this oracle-like property of calling things out early. Since then a lot has happened, especially a lot of improvements in vibe coding. And then, two weeks ago, I don't know if you saw this, an AI agent deleted the production database of somebody who was building a pretty complicated application. So there is excitement, but there is also, even for people who know what they're doing, the risk that if we screw things up, it can have serious consequences. And I think that's one of the biggest challenges, not just for developers, but especially for non-developers trying to build things that are business-critical in production.

The goal for us is to take the expertise that non-technical people already have, whether it's in finance or accounting or operations or supply chain, and give them the power to build and ship stuff in production. So let's talk about why that doesn't work today and how we can make it work.

Think about vibe coding the way it is today: using an AI assistant like Claude or ChatGPT, or something that has a development environment baked in, like Lovable or even Cursor. Now read up on the most popular tips for vibe coding, the ones that are supposed to help us ship vibe-coded things to production. If you look at popular guides from people who have been doing this for a while and have developed the tribal knowledge of how to make vibe coding work, let's look at some of those top tips and see how they stack up from the point of view of somebody like Karen, a non-technical person.
"Vibe coding isn't dumb, you're just doing it wrong." This is a pretty popular Reddit post on the top vibe coding tips, and the tips are actually pretty good. But the post opens with "vibe coding isn't dumb, you're just doing it wrong," which is probably not going to fly for non-technical people, because they already have a lot of work to do in their own domain, and learning a whole new domain is complicated. This works if you're a developer who has to learn the adjacent skill of prompting to help your AI code better. But it doesn't apply if you're not a developer; it becomes really hard to figure out what that adjacent skill of technical, developer-style prompting even is.

Take an example as simple as the zero-effort tip: pick a mainstream tech stack. We've already lost Karen. What is the best tech stack? What does that even mean? That already creates an environment that doesn't set our non-technical people up for success. They don't know what a tech stack is, especially for their work. Think about somebody on the finance or accounting team who wants to automate approval workflows; that's one of the examples we'll take a look at. When invoices are submitted in the organization, there's a lot of custom logic depending on what part of the world you're in, what kinds of invoices you handle, the latest expense policy, which invoices you want to auto-approve and which you want to review manually. So you want to automate that workflow. And as part of automating it, asking me to even pick a tech stack means I'm lost already.

Another piece of advice for successful vibe coding is to write a simple PRD, a product requirement document. This is really good advice, and as people get into it, it's almost a more structured form of prompt engineering: explain what you want done step by step, then iteratively improve it. That's fair, and it's one of those things we can expect the non-technical users we want to enable in the org to learn gradually: start small, understand what a requirement description looks like, then improve it.

A third piece of advice is to use version control. That obviously doesn't track as well. Git is a bit of an art for somebody who has never used it; it takes a while to wrap your head around, and it's a lot of burden to use in a way that is foolproof.

Provide working code samples. Nope, that's not going to work either, because I don't know how to code, so I don't have any working code samples to point at.

When stuck, start a new chat with better info. This one sort of works, because sometimes you just have way too much in your context. We all wish we didn't have to do this with AI, but at this point almost everybody who has used AI even a little bit understands that as conversations get longer, they get less reliable.
So that's an expectation that's fine for our non-technical users to have. On the security side, a lot of this entered the conversation especially after the Replit production database incident: these are the security tips to keep in mind before you ship something to production. And if you look at them, they make no sense to a domain expert who just wants to get something done. It's really hard to tell them, check that your code is safe from these things. The development part was hard; the security hardening is impossible. There's no way Karen is going to sit there and make sure her work is protected from SQL injection or has the right authorization and row-level security policies. Understanding server-side versus client-side? It's not going to happen. It's not easy, given the amount of change in the front-end and full-stack ecosystem; it's hard for people to keep a clear sense of server-side versus client-side anyway, especially now with a lot of SSR. So this kind of vibe coding advice is not really going to work, and we need to rethink how to make it possible for non-technical people to ship something into production.

One of the things we've been working on at PromptQL is this idea of automations. The core idea is that if Karen is writing code, Karen's code is secure by design. Whatever logic or work is shipped, it is secure. It's not possible for it to be insecure no matter what the user does; the user could write absolutely nonsensical things and it still would not be insecure.

The user also gets a full non-dev software development life cycle. I don't have to be technical, but I can build, test, deploy, debug, and troubleshoot in production, so I can actually take responsibility and ownership of my business logic. I don't have to learn version control and CI/CD and whatnot. I write a piece of logic, I ship it, and if something goes wrong, I find out why and go improve it.

And perhaps the most important piece: as a user in a particular domain, whether I'm in finance, accounting, supply chain, or an operator for a healthcare contact center, I don't want to learn software engineering so that I can tell an AI how to write software better. I'd rather the AI understand my language: my domain terminology, my data, the things I care about, my business context, so that I can give precise instructions in the language of my domain. I want to use my vocabulary, the way I say invoices and POs and approved vendors. I want to use these business concepts, which in software engineering might have been classes or modules or domain objects, describe what I want done, and have that work.
That flips the equation: instead of the user having to learn how software engineering works and then providing precise instructions to guide an AI agent to build production-quality software, the AI meets the user in their own domain. That's what automations allow you to do. We've been previewing this with some customers, and we'll show you some recent updates we've been working on.

Here's the example we'll look at. Imagine a scenario in your organization where lots of invoices are submitted and you have to review them: is the invoice actually from an approved vendor, is it below or above a certain amount, and depending on certain criteria it's either approved or flagged for a human to review. This piece of logic changes pretty frequently and requires a fair bit of domain expertise and context. It's a piece of logic inside a larger workflow or application that is best understood by a domain expert: Karen in Accounting understands what the review logic should be, and we want her to be able to own that logic as part of the workflow. So let's take a look at what that looks like. I'll hand over to Suraj for a quick example.

Hey, Suraj. Hey, folks, so nice to be here. Feel free to share your screen and take us through a quick example. Awesome. I can give you a quick look at how this is all shaping up with a very early demo, a basic one, but it should help you understand where this is going. As you all might have tried, automations are accessible through a chat interface, and you can create an automation just by asking for it. So in this case I can say, hey, create an automation to add 2 numbers. Suraj, can you zoom in a little bit? Is this readable enough? Yeah.

Just to set context for folks looking at this interface for the first time: this is the regular PromptQL interface, a conversational interface to interact with your business's data and your business's concepts. You can use it for analysis, and you can use it for automation, which is what we're showing you. You can have free-flowing conversations with everything inside your business and your data as an end user, even a non-technical one, to understand what's happening, ask questions, and get answers. The example Suraj is showing here is a small piece of logic that you wrote by prompting, without really needing to understand the code, and now you want to convert it into an automation: you want to ship that logic into production. Sorry, Suraj, go ahead.

Yeah, sure. For now I'm using a basic version to cover the end-to-end flow. In this example I have a really basic automation that processes the input and spits out a result. When you say, hey, create an automation to do whatever, that could be actual database operations, a complicated data analytics task, generating a weekly summary, or even a dashboard.
Once this is done, you can simply click deploy, give it a name, say number-adder-one, and deploy. As soon as you deploy, you immediately get a curl command to execute this automation (an example invocation is sketched below). What I mean is: if you are a non-technical person, this is the point where you hand it over to somebody who wants to integrate it into an external tool, or set it up as a daily task. You asked a question, you got an answer, and then you turned it into an automation, a repeatedly askable question; you verified the logic and you deployed. That was all possible up until last month.

The interesting part is that now you can also see the execution logs. To show that, I'll use an automation where I already have a bunch of execution logs from past runs. This is slightly technical, but think of it like this: you created a daily job, you came back the next day, and you want to see what last week's runs looked like, because it executes every day. Or you created an automation and handed it to different personas who are using it in different systems, maybe a daily cron job, maybe a dashboard. Once that automation is out there, you might be curious, say if you're leading marketing in your organization: who is reading the weekly report? What kinds of questions are being asked through this particular automation? Here you can see the question that was asked, the input to the automation, and the automation's response. And sometimes, looking at those logs, you might notice that the weekly summary could use one more piece of logic.

So: I created the automation, I deployed it, and some other developer goes and integrates it. I don't take care of integration. I just built the logic, deployed it, and my work as a non-technical person is done. I tested it conversationally, since it's already connected to my databases and understands my language; I verified it, maybe ran some test cases, and once I said deploy, I was done. That piece of logic can now be integrated into a larger application or workflow, which is the black terminal screen you saw just before. Once that integration is done, the automation is actually running. And as a non-technical person, I can come back and see the past runs of that automation. If I open up one of those execution logs, is this the divider example? OK, cool, maybe it's a divider example; I can see that it took some input and gave me an output.
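To make that hand-off concrete, here is a minimal sketch of what invoking a deployed automation over HTTP could look like from the integrating developer's side. The URL, token header, and payload fields are hypothetical; the talk only says that deploying gives you a curl command to run the automation.

```python
# Hypothetical sketch: invoking a deployed "number-adder" automation over HTTP.
# The URL, auth header, and payload field names are assumptions for illustration only.
import requests

AUTOMATION_URL = "https://example-org.promptql.example.com/automations/number-adder-one"
API_TOKEN = "replace-with-your-token"  # assumed to be issued when the automation is deployed

def run_automation(number_1: float, number_2: float) -> dict:
    """Call the deployed automation with two numbers and return its JSON response."""
    response = requests.post(
        AUTOMATION_URL,
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        json={"number_1": number_1, "number_2": number_2},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    print(run_automation(2, 3))  # e.g. {"result": 5, "fun_fact": "..."}
```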
And now, if I, as the owner of this logic, feel uncomfortable about something and think, hey, this is wrong, that's when I can go in and edit and update that automation. That's the experience for the non-technical person. Let's show an example: say that if the input is a negative number, always return 0. Maybe that's my very custom logic: in case the input is negative, we always return 0. Exactly. In this case I made it a little more interesting, because there is an AI step involved: you provide number one and number two, they get added, and the automation also comes up with a fun fact about the result, just to show there are AI capabilities in the loop as well.

Now to the part you mentioned, which is important for non-technical users: understanding and adding conditions. So let's try: hey, if there is a negative number in the input, make the result always return 0. That's it. You hit enter, you're taken to the familiar chat interface, and PromptQL starts its work: understanding what you mean, understanding what that specific automation does and what its original inputs were, and then modifying the logic. Let's see what it does this time. See, it has already added a condition: whenever at least one input is negative, simply return 0 (the resulting logic is sketched below). This view is slightly technical, but it's there to give you visibility and confidence in what is happening.

As a non-technical user, you can also say, hey, test this with more inputs. You just talk to PromptQL the way you'd talk to an engineer; it tries to understand what you mean, creates some additional test cases, and runs them. You can see it has created six test cases and made sure the responses come out right. And notice the inputs are not only negative, it's a mix, because it understood that the reason you're asking for more tests is that you want more confidence, so it generated a decent spread of cases to verify that the logic works.

When you want to put this back into your deployment, you redeploy, using the same name to swap it in place. It tells you what has changed: whether only the internal logic changed, or whether the shape changed. By shape I mean the contract between your automation and the other systems it's connected to. In this case, let's just deploy it and execute it once more. Oh, I should have put in a negative number. See, the response is 0.

So you did not touch the code at all. You looked at the execution logs, figured out something was wrong, figured out a condition that would fix it, gave PromptQL English instructions, tested it, gained enough confidence, and deployed. That's the end-to-end workflow. No, that makes a lot of sense.
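The generated logic lives inside PromptQL, but as a rough illustration of the behavior described in the demo (not the code PromptQL actually emits), the updated automation amounts to something like this:

```python
# Illustrative sketch of the behavior described in the demo, not PromptQL-generated code.
def number_adder(number_1: float, number_2: float) -> float:
    """Add two numbers, but return 0 if either input is negative (the custom rule from the demo)."""
    if number_1 < 0 or number_2 < 0:
        return 0
    return number_1 + number_2

# The kind of mixed test cases the assistant generated to build confidence:
test_cases = [
    ((2, 3), 5),
    ((0, 7), 7),
    ((-1, 5), 0),   # negative input -> always 0
    ((4, -2), 0),
    ((-3, -6), 0),
    ((10, 15), 25),
]

for (a, b), expected in test_cases:
    assert number_adder(a, b) == expected, f"unexpected result for {(a, b)}"
print("all 6 test cases passed")
```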
We can take questions as they come up, but thanks a lot to Suraj for the demo. I'll take over the screen share, continue, and stitch that together with what the end-to-end stack looks like; do hang around in case there are questions. Thanks a lot, Suraj. Absolutely, thanks. Awesome.

The important piece of what Suraj was demoing is that as a non-technical user, I'm able to prompt and write business logic against a purely computational layer, like the simple add-two-integers example, but that entire layer is also connected to the core data layer, including core transactions that already exist: financial data, user data, and so on.

One of the important pieces of how the stack is set up is that domain experts can write, test, debug, and own their logic, their bits of the application, in a way that is completely decoupled from what is happening at the core data layer. In the way PromptQL is architected, its data layer goes through what some of you may know as DDN, a federated query engine with an authorization policy engine that provides an API to the PromptQL layer. That layer holds all of the data modelling and all of the authorization rules, along with protections such as rate limiting. All of the security concerns are taken care of at that core data and transactional layer.

If you were thinking of it as a banking application, you have your account models, your transaction models, and a transaction method that does a credit and a debit. Those are set up securely, and APIs are exposed. The APIs that are exposed work entirely in user space, not in system space. That means that after prompting, when a piece of logic or an application component is created, it can only ever run in the context of the end user. It's almost like the way you think about front-end code versus back-end code: all of this code, even though it's technically executing on a server, is effectively front-end code. Whether the logic is approving an invoice, summing two integers, or scheduling an appointment for a patient, it runs in the security context of the end user. That means any escalation or mistake is contained to a single user's context, as opposed to the context of the entire system, which is what can massively screw things up. It also means a bunch of guarantees around security and infrastructure are baked in and are not something a non-technical user can interfere with, either deliberately or by mistake.

To help visualize this: if I have a large billing database or system, I might have lots of different entities in it, and each data model has fine-grained permissions that define who can access it.
So there's an admin, and there's a user, and the user can only access something, say an invoice, if it belongs to them, if the user ID matches. Only then can they read the invoice. You might have pretty complicated permission rules like this for different models in your system, and once that is set up, that is what exposes the API to the PromptQL layer where these quote-unquote vibe-coded programs are written (a sketch of such a permission rule follows below).

The core bit of the approach is that a non-technical user doesn't have to think about an SDLC. They just write pieces of logic and ship them, view previous runs of that logic, and update it as they want. That's all they think about. For them it's very first-principles: their universe is their logic, their test cases, and past runs of that logic, which they can update or not. In our case this logic did something on the back end, but it could also generate a UI component, a visualization, or a small form. It doesn't matter what the piece of code does.

The idea and philosophy here, which is what we've been working on with our customers, is to look at a large software pipeline, an application or workflow, and to carve out safe holes inside that SDLC. You carve out a hole that's safe, and that is the piece the domain expert can now own end to end. Of course, the underlying AI has to be set up with enough context and enough training to understand the domain. But if the underlying AI model understands the domain and the business context well, and is securely connected to data, then a safe hole is carved out in that SDLC, and that lets us give full end-to-end control to the domain expert. They can take responsibility for writing good logic, writing bad logic, and fixing bad logic, even to the degree that if bad logic did something on behalf of a user, they can go back for that user and fix things, because they own the entire piece and their AI is securely connected to that data.

You can apply the same idea in many places. Take software that's pretty common in a contact center: somebody calls the operator and says, I need to schedule an appointment. The operator enters a lot of data about the patient and then typically follows a manual, maybe partially assisted by software, to determine the best appointment slot. There's a lot of variation: some clinics are shut on particular days depending on the region, and some clinics don't accept a certain insurance vendor or provider. That tribal knowledge is in the operator's head, and they have to use it to select the right appointment slot. That piece in the middle is a safe hole that can be carved out and owned by an administrator who understands the domain well.
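Picking up the permission rule mentioned above, here is a minimal sketch of the idea in plain Python rather than DDN's actual policy syntax: an invoice is readable only when it belongs to the requesting user, so any logic a non-technical user ships only ever sees that user's rows.

```python
# Illustrative sketch of row-level permissions evaluated in the end user's context.
# This is NOT DDN's real policy syntax; it only mirrors the rule described in the talk:
# "a user can only read an invoice if it belongs to them."
from dataclasses import dataclass

@dataclass
class Invoice:
    invoice_id: str
    owner_user_id: str
    amount: float

def can_read(invoice: Invoice, requesting_user_id: str, is_admin: bool) -> bool:
    """Admins see everything; ordinary users only see their own invoices."""
    return is_admin or invoice.owner_user_id == requesting_user_id

def visible_invoices(all_invoices: list[Invoice], user_id: str, is_admin: bool = False):
    """Every automation runs 'in user space': it is handed only the rows this user may see."""
    return [inv for inv in all_invoices if can_read(inv, user_id, is_admin)]

invoices = [
    Invoice("inv-1", "karen", 4200.0),
    Invoice("inv-2", "rob", 950.0),
]
print([inv.invoice_id for inv in visible_invoices(invoices, "karen")])  # ['inv-1']
```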
And that domain expert can author this business logic, how an appointment slot is selected, what's allowed, what's not allowed, entirely on their own. For the operator, the experience just gets simpler: they talk to the patient, click a button, an appointment slot gets suggested, they eyeball it, and they approve it. That saves a lot of time, a lot of error-proneness, and a lot of training, because you no longer have to maintain that manual for the operator.

That's one example, but you can apply this to almost everything. Tax filing you need to do every month. On the HR side, computing people's PTO so the right amount shows up in their compensation for the month. Sales calls where you automatically want to extract the right properties according to your sales playbook and then update your CRM: who are the champions, who is not convinced, who is the economic buyer, who owns the budget, what is the budget line item. You might have a playbook that requires you to capture these things according to certain criteria, and that logic might change continuously. Now a domain expert on the sales team can just write that logic, and the hole that determines this extraction gets filled. The rest of the SDLC pipeline does not have to change; it just continues to work, and the integration and security are not something they have to think about.

So that's a quick look at what automations look like. If you want access, please feel free to reach out to us and we'll hook you up with a demo that's relevant to your data.

Awesome, thanks Tanmai, thanks Suraj. We do have a few questions in the chat, and before we jump to the first one that Harsh is going to put up on the screen, I have to say Tanmai and I have talked about this and I have a moratorium on the word vibe, not just vibe coding, but vibes in general. But I was at Crissy Field last week and I saw a guy sitting there, Golden Gate Bridge in the background, waves crashing in, sitting with his laptop and noise-cancelling headphones. And all I could think was: if there is a vibe coder, this is him. Yeah, that's the vibe coding we all want to get to, not the vibe coding that is hope and prayer that something works.

Answering some of the questions that came up: for technical users, can you see the underlying code for the automation? Yes, and that's what Suraj was showing you, where you had a look at the code continuously. There is, of course, a code view. And the entire experience you see is customizable: PromptQL is essentially like an LLM API, so the front end, the AI assistant or conversational portal you give to your people, is up to you. Maybe your non-technical users don't see code at all, while your technical users get the full level of detail: looking at the code, editing it, fixing it, to whatever depth they want.
Since we're talking about the code piece, here's a question that came up tangentially: how does PromptQL actually know what its capabilities are, what it can do to construct the automation itself? That's the piece we glossed over today, but it's the PromptQL 101, the quick start. When you set PromptQL up, you connect it to your data, your API systems, your sources of knowledge, and you give it a bit of business context. As you interact with it, it learns more and more about your business. That's the initial setup: it connects to your data and understands what your business and your data are. Without that context, nothing we do with automations is even possible or interesting, because the first requirement is that the system understands my world, so that I can talk to it and say, do something for me.

If I'm setting up some kind of invoice auto-approval, I should be able to ask things like, how many invoices do we have that are less than five thousand dollars, so I can look at the data and decide what my algorithm even is. For a domain expert, the algorithm is often a two-line sentence in your head for the use case you want, but as you interact with your data the algorithm gets fleshed out. It's like a process you want to put in place, and the process gets fleshed out once you go through all of the cases. So you interact with the AI in your language, with your data entities, and say, help me figure this out, do this, do that. And finally, when you're in a good place, you say, OK, let's create an automation that does exactly this, and then you can ship that.

That's a good segue to another question, which speaks to what you talked about in the beginning with the do's and don'ts of vibe coding. Harsha, can we throw up the question about what to build? It says: what advice would you give to someone like Karen from accounting who doesn't know what to build? Because this is a brave new space, and there are a lot of things out there. What are the possibilities?

That's a good question. The first, and perhaps most natural, step is to give somebody like Karen an accurate AI assistant, an interface to her data and her processes, and to just start with that, to build confidence that this AI knows what she's talking about and that she can use it for small things. Maybe extract data from an invoice. Maybe somebody asks about an invoice where something happened, so she pulls it up, filters, and sorts. Things she could have done a bit more manually otherwise, but she gets a little familiar with it first.
Once you get familiar with that, it very naturally leads to: I've been doing this for a while, shouldn't I just automate it? For example, you might ask your AI, is this invoice within 5 percent of the approved purchase order? You ask that because your approved vendor list and purchase orders live in three or four different places. And then one day you realize: shouldn't this just be automated? Why should I keep asking the AI the same thing? Exactly. We're all engineers at heart in this brave new world, where everything should be automated. So I've been doing this every day, I ask this question, I get an answer, why don't I just automate it? That becomes the first step.

Then you automate it for yourself. You create the automation and use it every single day, because you don't even want to write that prompt again: if it is an approved vendor PO with less than 5 percent variation from what is approved in the PO, and the total amount is less than ten thousand dollars, and the vendor is not one of our vendors in the security space where everything has to be approved even if it's over two thousand dollars. It's a complex prompt. And then you realize that if you just save this as an automation called Karen's approval checker, you can run it every day: use Karen's approval checker on invoice 123 (the rule is sketched below). You just say that to the AI, because the AI now knows you've been doing this again and again. That's the second step: automating it for yourself.

The third step is to ship this automation to other people, or as part of your workflow. That's where the dev team, the engineering team, or the IT team comes in and says: cool, we're going to let you own this hole in the approval workflow, and we'll stitch up the rest of it. We have an invoice management system with a webhook that is called every time an invoice is submitted on our portal, and it goes to an approval check that has to return a true or false. That's the webhook the IT team is ready to have plugged in. Karen becomes the owner of that webhook: yeah, cool, I already have this approval thing, let me just ship it. She types deploy, and now it's a live webhook. Karen doesn't care that it's a webhook or how it's integrated; for her, it's that logic that's always running. And that's step three, where the automation is actually plugged into a whole SDLC that works.

From Karen's point of view, she can own that automation and make it as complicated as she needs it to be: there are a bazillion edge cases, it doesn't apply if the invoice is from a particular vendor, it only applies in certain cases, and so on. And it's all secure, all production-grade, integrated in a way that actually works. Karen doesn't have to care about the SDLC, the software development life cycle, and doesn't have to worry about versioning overhead, because all of that is just persisted in her AI system.
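As a rough sketch of the rule Karen ends up owning: the thresholds below come from the example in the talk, but the field names, the function shape, and the reading of the security-vendor clause (such invoices over two thousand dollars always go to manual review) are assumptions for illustration.

```python
# Illustrative sketch of "Karen's approval checker" as plain Python.
# Field names and the webhook contract are assumptions; only the thresholds come from the talk.
def karens_approval_checker(invoice: dict) -> bool:
    """Return True to auto-approve the invoice, False to flag it for human review."""
    is_approved_vendor = invoice["vendor"] in invoice["approved_vendor_list"]
    within_po_variance = invoice["amount"] <= invoice["po_amount"] * 1.05  # within 5% of the PO
    under_limit = invoice["amount"] < 10_000
    is_security_vendor = invoice["vendor_category"] == "security"
    needs_manual_security_review = is_security_vendor and invoice["amount"] > 2_000

    return (
        is_approved_vendor
        and within_po_variance
        and under_limit
        and not needs_manual_security_review
    )

# The webhook IT wires up simply returns this boolean for each submitted invoice:
example_invoice = {
    "vendor": "Acme Supplies",
    "approved_vendor_list": ["Acme Supplies", "Globex"],
    "amount": 4_800,
    "po_amount": 4_700,
    "vendor_category": "office",
}
print(karens_approval_checker(example_invoice))  # True -> auto-approve
```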
For her, it's just one single system that works.

That's actually the last question we have here. It says: it kind of sounds like PromptQL lets Karen control the software development lifecycle using prompts. Is that a fair statement? Exactly, that's exactly it. Not the whole lifecycle, but the key bits of it that matter for that hole: build, test, deploy, and then debug. And debug is real, because you look at older runs of your logic that did not work the way you wanted. You can do real-world debugging in prod: you look at the log and think, that should not have been the output for this input. Then you interact with the AI and say, that should not have been the output for this input, it should have been Y, and let PromptQL figure out which piece of the logic was missing or which edge case we didn't handle. Karen doesn't care about the mechanics; she's like, whatever, I need to update my algorithm, go for it. Let's update the algorithm from a business logic point of view.

Awesome. Looks like that sparked another question. It says: are there no-code and low-code tools with PromptQL that allow non-technical users to create MVP automations, like syncing data with Google Sheets, sending scheduled emails, and building other custom workflows or dashboards? Exactly, that's exactly what automations let you do. Our connectors let you connect to these different sources, say Google Sheets or your email service. Once these things are connected, they're connected in a way that is secure and scoped to you. If I'm the user, Tanmai, using PromptQL, it's connected to Tanmai's email, not everybody's email, and it's connected to Google Sheets with my access.

From my point of view, I can now set up ETL pipelines, move data from one place to another, and schedule emails. For example, I can set up an automation where I first write the prompt myself: look at my CRM, and for my older customers who are using the previous product version, help me write an email introducing them to the new version, based on everything the CRM knows about them and what they're building. I run that as a daily job: it picks up that information, goes to my Gmail, and writes each email as a draft and keeps it there, so that at the end of the day I just review those drafts, maybe make a few edits, and send them (this daily job is sketched conceptually below). That entire automation can be written safely, securely, and production-grade just by me. I don't even need to know how any of those integrations work.

The cost here is that the IT team, the dev team, and, if you're working with us, the PromptQL team set up the platform for you. We set it up internally for you, connect it to your data sources, and make sure the authorization rules are set up. Once that's done, that's it: everybody who's non-technical is good to go and can start creating their own automations.

That's very exciting. If you folks are curious and want to try this out for yourself, we have a QR code up on the screen. You can learn more about automations and see the different use cases we have.
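Conceptually, the daily CRM-to-Gmail-drafts job described above reduces to something like the sketch below. The helper functions fetch_customers_on_old_version, draft_upgrade_email, and save_gmail_draft are hypothetical placeholders, not real PromptQL or Google APIs.

```python
# Conceptual sketch of the daily "CRM -> draft emails" automation described above.
# All three helpers are hypothetical placeholders standing in for the CRM connector,
# the AI drafting step, and the Gmail connector.

def fetch_customers_on_old_version() -> list[dict]:
    """Placeholder: pull customers still on the previous product version from the CRM."""
    return [{"name": "Ada", "email": "ada@example.com", "building": "billing dashboards"}]

def draft_upgrade_email(customer: dict) -> str:
    """Placeholder: an AI-drafted email personalized with CRM context."""
    return (
        f"Hi {customer['name']},\n\n"
        f"Since you're building {customer['building']}, the new product version "
        f"adds a few things you'll likely care about..."
    )

def save_gmail_draft(to: str, body: str) -> None:
    """Placeholder: save the message as a Gmail draft for human review before sending."""
    print(f"Draft saved for {to}:\n{body}\n")

def daily_job() -> None:
    # Runs once a day; a human reviews and sends the drafts at the end of the day.
    for customer in fetch_customers_on_old_version():
        save_gmail_draft(customer["email"], draft_upgrade_email(customer))

if __name__ == "__main__":
    daily_job()
```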
Tanmai, Suraj, thank you very much for joining us today. Anything you want to say before we go? That's all. I'm super excited to have shown you the preview, and please get in touch if automations are on your mind. Yeah, if you want to be lazy like an engineer, go ahead. Alright, thanks everybody.