
AI that speaks your company’s language


How the Agentic Semantic Layer Makes AI Understand Your Business Data: We will demonstrate an approach that eliminates months of preparation typically required before AI can deliver value, enabling immediate reliability on mission-critical tasks without demanding perfectly prepared data. The Agentic Semantic Layer represents a fundamental shift in AI reliability by autonomously building a unified view of enterprise data in real-time. It introspects existing schemas, documentation, and code to bootstrap understanding, then continuously improves through interactions, turning messy, distributed data into a coherent knowledge graph that AI can reliably reason about.

What's discussed in the video

What's the secret weapon behind reliable AI? It's basically an AI that can speak your company's language, as I said. And how do we enable your AI to speak your company's language? With a semantic layer that your engineers don't have to build. So I have said a lot, but I'll dive deep into what I mean by all of this and show you how this works. But before that, I would love to know from all of you. Imagine if you had a hundred percent accurate AI for any analysis, intelligence, decision making, or automation that you needed. What kind of massive business value, and when I say massive, I really mean massive, it could even be hundreds of millions of dollars, could you unlock if your AI, if your large language models, could reliably do something for you, for your customers, or for your employees? So please scan this QR code and just type it out: what value would you be able to unlock if you had reliable AI? Predict what deals are going to close this quarter. Makes sense? Create sales use case. Get detailed analysis on marketing campaigns. Create go-to-market use case. I'm assuming the first one needs to connect to Salesforce and a few other of your sales systems and then answer a question reliably on this. Detailed analysis on marketing campaigns: so pull a bunch of data from Marketo and all the other different tools that you're using and get insights on that, right? Which is fair. This is what you want AI to do, and what it claims it can do. Yeah, but you don't use AI for any of this, do you? You don't. And why is that? Why is there this value gap between the claims of AI and the reality of AI? So let's try to break it down. Before there was AI, for any of these things that you wanted to do, like decision making, you wanted intelligence and insight on your data, you wanted to create automations or write software, generate reports, create applications, you relied on these humans who are experts in their own domain.
These are your analysts, your engineers, your data scientists who know how to work with a bunch of different systems under the hood. These would be your databases, your SaaS applications, different microservices that you have, a bunch of documents that you might be working with, or just being able to look up information on the open web. So they are the ones gatekeeping the value in these underlying data sources for the business user. And that is what we have been trying to replicate with AI. But AI isn't there yet. AI is not as dynamic, as flexible, as knowledgeable as these people are at this data work. Why? Because we have been building AI with certain design patterns in mind, certain architectures in mind. We as humans are very flexible. We don't just process things in our head, right? We consume a bunch of information from different sources. We operate with different types of systems using the different types of tools available at hand. We dynamically think about strategies that are required for a certain business problem that we are trying to solve. We are very explainable: if someone asks how we did something, we can tell them how we did it. We can course-correct our own mistakes. And finally, we were not hired for a single use case. We didn't say, hey, you are a sales forecasting guy, your entire job is to run the same thing every month. I am a general analyst: whatever problem you throw at me, I'm going to adapt and solve that problem based on all of my expertise. So why are we building AI like this? Tell me if I'm wrong, but this is the architecture, the superficial architecture, of all of the AI applications that you or your team has been building.
You ask a question to an agent; this agent might have ten other agents under the hood, and each agent might have ten different tools under the hood. These agents are talking to each other in natural language, and there are certain tools which limit the abilities of each of these sub-agents. There is no guarantee that there is reliable orchestration between these agents. There's no guarantee that they are able to transfer a lot of context, a lot of data between each other. If my question steps out of line, and there isn't a tool or a sub-agent available for it, the system just completely falls to its knees. And that is the problem, right? Because we are so rigid in our thinking of how to build AI systems like this. So let's start from first principles. What makes any human or any AI reliable is the fact that it is predictable and explainable. Think of your traditional software systems. You know exactly what they're going to do. And you can ask the developer who built them exactly how they work under the hood. And that's reliable. What makes a human reliable is that you trust an employee, trust a colleague, if they are predictable, if they're consistent in what they do. They don't fail in unexpected ways. And whatever they're doing, they can completely explain what they're doing. And hence, you can exercise control over them. You can steer them. They can fix their own mistakes. This is the same property we want from AI. But consider the standard architectures that we've all been thinking about. I'm sure you've all heard of RAG, retrieval-augmented generation. You've all heard of tool calling, tool composition, the Model Context Protocol. You've heard of generating SQL queries from natural language, or generating database queries from natural language. All of these are different approaches to connecting data or external systems to your AI, then creating agents on top using these methods, and then you orchestrate these agents somehow.
But let's look at a RAG system. This is an email provider. You ask it: when was my last Uber trip and how much did I spend? It says April 15, spent 14 dollars. Okay, cool, makes sense. Then I ask it: what time exactly was this trip? Now it says that this trip was on June 29, 2024. Why have you lost your context? Because every time I ask a question, it's doing a separate semantic search under the hood, and whatever is the top semantically relevant email, it surfaces that here. It has no idea about my context, it doesn't understand the follow-up question I'm asking, and it can't realistically sift through the billion emails that I have and answer my question. And then I ask it to explain itself, like, how did you get this answer? You said my last trip was in April. And it says: each product has its own strengths; deciding which one is best for you depends on what you're trying to do. A completely irrelevant response. Right. So there is no guarantee how you will be able to enable any breadth or depth of tasks. This is what we've been hearing from all of our customers as well. RAG is not the solution. Maybe for finding the right document for one question, RAG is great. But for real enterprise use cases, RAG is not the solution. Let's look at tool composition. This is the AI assistant on one of the biggest CRMs. Ask it this question: can you calculate the average length of our sales cycle? It says: can you please provide more details on the specific data report that you have in mind? It gives me a suggestion. I click on that suggestion. It just tells me how to create a report myself. I refresh the context, ask it again. This time it says: your average sales cycle is 71.6 days. And I ask it, can you explain that? It says: I arbitrarily took the average stage length from stage 1 to stage 4. But we have 7 stages, can you look at all of them? It never came back with a response. Refresh the page again, ask it the same question.
This time it says 2.21 days. No consistency, no repeatability, no explainability into what it's really doing under the hood. And I can't work with this anymore. I don't even know what to prompt to get the right answer. So it's not predictable for complex tools or for too many tools. Even though we have this amazing standardized protocol for connecting tools to AI systems, the Model Context Protocol, if there are a hundred MCP tools that I have connected to my AI, there is still no guarantee that my AI is picking the right tool, passing the right context, orchestrating reliably between these tools. So let's look at text-to-SQL. What about structured data, or the other data systems that we have, where we translate natural language queries into specific database queries? We asked a question like: how many albums from the metal genre have a positive and happy sounding title? That's a genuine question I might want to ask. How do I translate "positive and happy sounding" into SQL? There is no way for me to do that, right? Even though we say SQL is a Turing-complete language, I can't translate "positive and happy" into SQL. And this is one of those copilots which just gives up: no, I can't do that. And second, as a business user who does not even understand SQL, I can't work with these systems, right? I can't reliably steer it or understand what it's doing. This is limited to the analysts, to the SQL experts, and I am not one of them. So one of our customers said that only analysts can use something like text-to-SQL. And it's limited to what's in the database. What if I want to connect it to external systems? I can't have a text-to-SQL-based solution. So what's the solution?
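To make the text-to-SQL limitation concrete: a predicate like "positive and happy sounding" has no SQL translation, but a plan-based system can split the work, a deterministic filter for what SQL can express, and a separate semantic-judgment step for what it can't. The sketch below is illustrative only; the keyword classifier is a trivial stand-in for what would really be an LLM call, and the album data is invented.

```python
# Hypothetical split of a mixed query: deterministic filtering (genre)
# plus a semantic predicate (does the title "sound positive"?).
albums = [
    {"title": "Sunshine of Joy", "genre": "metal"},
    {"title": "Doom Eternal Night", "genre": "metal"},
    {"title": "Happy Thunder", "genre": "rock"},
]

# Stand-in for an LLM classification step; a real system would delegate
# this judgment to a model rather than a keyword list.
POSITIVE_WORDS = {"sunshine", "joy", "happy", "smile"}

def sounds_positive(title: str) -> bool:
    return any(word in POSITIVE_WORDS for word in title.lower().split())

metal = [a for a in albums if a["genre"] == "metal"]       # SQL-expressible step
happy_metal = [a for a in metal if sounds_positive(a["title"])]  # semantic step
```

The point is the decomposition: the parts SQL can handle stay deterministic, and only the genuinely semantic judgment is delegated to a model.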
You mix and match all of these and you create agents. You can increase the quality of the output, but this will keep increasing the complexity. Tool calling can integrate search-based retrieval and text-to-SQL, sure, but it has the same pitfalls as tool composition: the complexity of tools. What about multiple agents, which is just a bunch of tools talking to one AI and then multiple AIs talking to each other? But then they get caught in collaboration loops, they forget to stick to the specific plan that the orchestrator agent came up with, and there is no reliable way of transferring a bunch of data and context between these different agents. So see, this is what the current approaches are doing. Whatever AI system you have built, that system takes an input and generates a result. This AI system was created for a use case, and the result was generated. But if you expect the LLM to generate the result, you are expecting the LLM to hallucinate, and you just hope that the hallucination is correct. If you let the AI generate the final result, there is no guarantee; there are no guardrails you can put on it such that there will be no hallucinations, that there will be guaranteed correct results. Second, you will be confined to the specific use case that your AI system was designed for. So it's okay if you have a very clear use case, you know exactly why you're building what you're building, and there's only a very tiny breadth of queries you want to run on it. Great, these systems work incredibly well. But for any enterprise use case, for any real-world use case, you need a general-purpose system. You want a system which can adapt to whatever you need it to do, right? So that's what we have tried to build with PromptQL, which is a catch-all domain-specific language that decouples planning and execution.
And what I mean by that is that whatever LLM you're using, the best state-of-the-art LLM, it's responsible only for creating a plan. Think of it this way. If I ask an analyst: hey, here's a business problem, why do you think our sales are dropping month over month? The analyst is not going to just answer the question back to you. They are going to say: OK, to answer your question, what I need to first do is go to these 7 different systems and pull out all of this data; these are the kinds of analysis, data composition, and aggregations I need to run; and then finally I'll structure my response in a certain format. And then I will implement this somehow. I'll write code, I'll do something, and then click a button called Execute and let my computer execute it. And that will fetch all the data from the different systems, run the code, do the analysis, and then come back with a response. I, as the human, as the non-deterministic human, am coming up with the plan. I am not executing, because I can't read the entire database in my head, I can't do math in my head, I can't do these things in my head. LLMs are exactly the same way. So let the LLM come up with this plan, and you can put guardrails on these plans. And this plan is in a deterministic language, which you can execute in a deterministic runtime. So now think about all the deterministic guardrails you can put there. And that execution is what is creating the result, which is not AI-generated. And that's the entire idea behind PromptQL: decoupling planning and execution and letting the LLM be responsible only for the planning. If you try to map out all of the AI technologies, from narrow to general and unreliable to reliable, what you realize is that these early AI systems, which have very narrow use cases, they can set a timer, they can change your thermostat, but they're also very unreliable.
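The planning/execution split described above can be sketched in a few lines. This is not PromptQL's actual DSL or runtime, just a minimal illustration of the architecture: the "LLM" only emits a plan as data, and a deterministic interpreter executes it, so the same plan over the same data always yields the same result.

```python
# Sketch of plan/execution decoupling. Step names ("fetch", "aggregate",
# "sort") and the plan shape are invented for illustration.

def llm_generate_plan(question: str) -> list[dict]:
    """Stand-in for the LLM: it returns a plan as data, never a final answer."""
    return [
        {"op": "fetch", "source": "invoices", "into": "rows"},
        {"op": "aggregate", "over": "rows", "group_by": "org",
         "sum": "amount", "into": "totals"},
        {"op": "sort", "over": "totals", "by": "amount", "desc": True,
         "into": "result"},
    ]

def execute_plan(plan: list[dict], data: dict) -> list[dict]:
    """Deterministic runtime: same plan + same data => same result, every time."""
    env = dict(data)
    for step in plan:
        if step["op"] == "fetch":
            env[step["into"]] = env[step["source"]]
        elif step["op"] == "aggregate":
            totals: dict = {}
            for row in env[step["over"]]:
                key = row[step["group_by"]]
                totals[key] = totals.get(key, 0) + row[step["sum"]]
            env[step["into"]] = [{"org": k, "amount": v} for k, v in totals.items()]
        elif step["op"] == "sort":
            env[step["into"]] = sorted(env[step["over"]],
                                       key=lambda r: r[step["by"]],
                                       reverse=step.get("desc", False))
    return env[plan[-1]["into"]]

invoices = [{"org": "acme.com", "amount": 120},
            {"org": "globex.com", "amount": 300},
            {"org": "acme.com", "amount": 80}]
plan = llm_generate_plan("Which org has the highest billings?")
result = execute_plan(plan, {"invoices": invoices})
```

Guardrails then become checks on the plan itself (allowed sources, allowed operations) rather than hopes about a generated answer.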
Then you have these agents which have been custom-built, which are highly reliable, but only for that specific use case. Then you have these other assistants that are getting popular, everyone is talking about them, every third LinkedIn post is about them. These are more general purpose, but they just don't work, as we just saw. So it's all hype. Then there are some of these super general-purpose assistants, like Claude, Manus, ChatGPT. All of these are human-in-the-loop, for non-enterprise-grade use cases, but they're still a little bit reliable and a little bit general. But if you look, there is this one line which is blocking us from reaching the top-right corner, which is a completely general-purpose, hundred percent reliable AI system. And that's what we believe humans are, right? We are very close to being general purpose. We can handle any task possible, and we are pretty reliable at what we do. And that is the goal. So let us look at an attempt towards building a highly general-purpose and highly reliable AI, which is what we call PromptQL: an accurate AI for any analysis or automation. So let me jump into a demo of PromptQL for those who haven't seen it before, and then I'm going to talk about what makes it so reliable. OK, so I'm just going to refresh the page to make sure my internet is working. OK, so this is PromptQL running on top of an enterprise SaaS company's data systems. So now I can ask any free-form question. Can you help me find the organization that has brought us the highest billings over all time? Find unique orgs based on the email domains of the users, something like that. I can ask a free-form, natural language query and make it as specific or as general as I want. And I want an AI system which understands what I'm trying to do and breaks it down into this plan. This is what I was talking about, the plan that PromptQL generates. This is for non-technical users to see.
But if you want to look at the actual DSL that PromptQL has generated, it's right here. Hey, Anushrut, could you just bump up the font size a little bit? It's a little hard to see. There you go. Lovely. Thank you so much. Of course. Okay. So, yeah, you see how PromptQL came up with this plan. I'm just going to refer to the natural language plan here because it's easier to talk about and explain. So it says: okay, first I'll get all the users, extract their email domains, join this with the invoice items to get the billing amounts. Great, I didn't even have to tell it that the invoice amounts are where the billings are. Group the emails by domain, sort by total billing, and then store it in an artifact for you to see. That's it. That was my AI's job. It's done. Now, this completed in 11 seconds, right? The underlying deterministic programmatic runtime executed this plan and created this result. And my AI doesn't even answer my question at the end. It just says: hey, this is the answer that the system gave back, you look at it. And that's perfect, because now I am very sure that this is correct, because I understand the plan, I understand that this execution happened deterministically, and this actually came from my data systems. There is no AI generation happening here. Another great thing is I can edit this plan. I'm like, this is good, but when you are finding the orgs, can you ignore all orgs with less than, let's say, ten projects between their users? So I can just do that. I can edit my query plan and say: hey, do this instead. And it's like: I get what you're saying, same thing as before, but I need to fetch the project data as well, and then ignore those orgs. Perfect. Nice. Now I can do data analysis. I can ask semantic questions like: how is the first org feeling about our product? Can you look at the support tickets? So make it call Zendesk, make it call our ticketing system, right?
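The plan from the demo (get users, extract email domains, join with invoice items, group billings by domain, sort) is simple enough to render deterministically in plain code. The table and column names below are assumed for illustration; the point is that every step is mechanical once the plan exists.

```python
# Deterministic rendering of the demo's plan, with invented sample data.
users = [
    {"id": 1, "email": "ann@acme.com"},
    {"id": 2, "email": "bob@acme.com"},
    {"id": 3, "email": "cho@initech.io"},
]
invoice_items = [
    {"user_id": 1, "amount": 500},
    {"user_id": 2, "amount": 250},
    {"user_id": 3, "amount": 600},
]

# Step 1-2: get users and extract their email domains.
domain_by_user = {u["id"]: u["email"].split("@")[1] for u in users}

# Step 3-4: join with invoice items and group billing totals by domain.
billing_by_domain: dict[str, int] = {}
for item in invoice_items:
    domain = domain_by_user[item["user_id"]]
    billing_by_domain[domain] = billing_by_domain.get(domain, 0) + item["amount"]

# Step 5: sort to find the top-billing org (here, the top email domain).
top = max(billing_by_domain.items(), key=lambda kv: kv[1])
```

With real data this would run inside the execution engine, not in the LLM, which is why the result is not subject to hallucination.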
It's like: yeah, OK, so for Williams.com, I need to get all the support tickets, analyze the tickets and also the ticket comments to understand the sentiment, and then share it. This is a more complicated semantic query. So I am going to execute that. It will take a little bit of time, but it will execute. Okay. These are all the 68 support tickets that I have. OK, so based on the status, they show neutral sentiment, suggesting standard technical support interactions rather than major frustrations. Perfect. So see, this is what a human-level AI looks like: it tells me exactly what it's going to do, does that deterministically, and keeps me in control as the user. I can ask it to do whatever I want, however I want it done. I can ask a vague question, make it come up with a plan, and then steer that plan to become more specific. So this is what PromptQL is. PromptQL is this AI platform that delivers human-level reliability for any natural language analysis or automation on your data and systems. What it really does under the hood to ensure this reliability is learn and codify the unique language of your business, and I'm going to get into this next, which it uses for its LLMs to create these plans and execute them deterministically. Think of it as a staff-level analyst or engineer in your company that anyone in your org, or your customers, can trust. So how do we make the plans reliable? Some of you might have a question: you still have AI generating the plans; that's still non-deterministic, still prone to hallucinations; so what is making your plans reliable? What makes our plans reliable is that they are generated by an AI that understands your company's language. Because you speak your company's language. All your employees speak your company's language. There's so much tribal knowledge that you have, tacit knowledge that you have, right? There is so much domain context that you already know.
You know what constraints you're working with. You know what different terminologies mean, what kind of KPIs you're optimizing on. All of this you know. And hence, whatever actions you take are all based on this knowledge. But your AI does not have that. My AI does not know that, hey, if I need to answer a question on how to improve my sales forecasting, I also need to watch these 5 other guardrails, other KPIs, which should not go down if I want to improve a certain metric. This is knowledge that you have; your AI does not. And I can't be expected, as the engineer building this AI system, to think of every single thing and put it as context into the AI, right? So there is something missing between your AI and your data, which your AI should be able to understand so that it speaks the same language that you speak in your business. An AI can speak your company's language with what we call an agentic semantic layer. Think of this agentic semantic layer as a completely bootstrapped, self-improving metadata or context layer, which keeps capturing this business context and business knowledge as you keep using the AI. Every time you had to correct the AI, saying no, no, no, this is not what I asked you, this is what I meant by that, this is what the terminology means, all of this context you were giving the AI, the AI should be learning from it and improving its own semantic layer. And that's exactly what we have built with something called Autograph. So let me show you Autograph. Actually, let me show you this recorded demo first, and then I'll show it to you live as well. This is a good demo to understand what Autograph really means. What I did is I connected a database to PromptQL which has very poorly named tables and columns, completely absurdly named tables and columns. And I asked a question like: what employees are working in departments with more than ten thousand dollars in budget?
And the 3 tables which I've connected have completely meaningless names. The AI has no idea what they even mean, right? So the AI says: I don't see any information about employees, departments, or budgets; all I see are 3 tables with meaningless names; I have no idea what to do. Right? That's already a reliable response, because I know the data is very bad. So I'm like: can you sample a few rows from each table and figure out which table contains employees? So PromptQL says: okay, cool, I'm going to look at every table and sample a few rows. And now I understand: zork contains employee information, plug contains department information, and the third is like a junction table. So now I can execute this; I can answer your question, because I understand. I, as the developer, didn't give any context to the AI. I let the AI figure it out itself. OK, this is awesome. But one more caveat: the budget in our data is in cents, not in dollars, but I asked the question in dollars. So you'll have to divide by a hundred, right? Can you do that, please? So PromptQL says: cool, I'm going to divide by a hundred. And now it filters down to just 2 employees. Now, what Autograph lets us do is capture this steering we had to do, right? We had to give context about a certain table; we had to give context about how our data has been saved. All of this, my AI should be learning. So Autograph runs automatically in the background as well, but this is a manual way of executing it, to show what it's doing under the hood. So all I'm saying is: suggest metadata improvements based on the recent threads. That's all we ask Autograph to do, automatically, again and again. It says: okay, cool, I'll look at the last one hour of conversation, and I see that there are 3 models, so let me analyze the thread state to see if there are any meaningful interactions that we had with these models.
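The cents-to-dollars steering from the demo reduces to a unit conversion that has to happen before the budget filter is applied. A minimal sketch, reusing the demo's deliberately meaningless names (zork, plug, qwerty) and invented sample values:

```python
# Departments ("plug") with budgets stored in cents in a cryptic column
# ("qwerty"); employees ("zork") reference a department. Data is invented.
departments = [
    {"plug_id": 1, "qwerty": 1_500_000},  # 1,500,000 cents = $15,000
    {"plug_id": 2, "qwerty": 800_000},    #   800,000 cents =  $8,000
]
employees = [
    {"name": "Ann", "dept": 1},
    {"name": "Bob", "dept": 2},
]

THRESHOLD_DOLLARS = 10_000

# The key correction: convert cents -> dollars (divide by 100) BEFORE filtering.
qualifying_depts = {d["plug_id"] for d in departments
                    if d["qwerty"] / 100 >= THRESHOLD_DOLLARS}
matching = [e["name"] for e in employees if e["dept"] in qualifying_depts]
```

Once Autograph records "qwerty holds the department budget, in cents" in the semantic layer, this conversion no longer needs to be supplied by the user.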
I can now generate meaningful descriptions for these different tables. And that's exactly what it did: it created these descriptions for the tables. It also added the context that, hey, this column called qwerty has the department budget, which is in cents. So it not just described every single table, but also every single column. And now all I have to do is click on Apply Suggestion. And this semantic layer, this metadata layer, is a version-controlled layer. It has this concept of immutable builds. So now it creates a new immutable build on top of that, which I can then run my evals on, to make sure everything is working fine. So you see, zero seconds ago a new build was created. And now if I ask the same question, which employees are working in departments with more than ten thousand dollars, this time the AI does not have to say: hey, I don't understand what you're talking about. No, it understands now, because it has all the context in the semantic layer. And it also understands that the data is in cents and not in dollars, right? So this is what the agentic semantic layer looks like. Let me just show it to you here as well. Let me ask a question. Let's see. So this is a finance database with a lot of transaction data, anti-money-laundering data. I'll ask it a question: find accounts with the maximum suspicious AML outgoing amounts for the first quarter, and print out the account ID and name. Let me refresh this page again just to make sure it's working fine. So it's like: OK, this is the query plan, this is what I need to execute; I'm going to execute this query plan under the hood and then come back with a response. That's perfect. So I asked the question for this quarter, and it assumed the quarter starts in January and ends in March. But let's say this data is for some country where the quarter starts in February and ends in April.
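The "immutable build" idea mentioned above can be sketched very simply: applying a metadata suggestion never mutates the current build, it produces a new versioned build, so evals can be run against either version before switching over. The function and field names below are invented for illustration, not Autograph's real API.

```python
import copy

def apply_suggestion(build: dict, suggestion: dict) -> dict:
    """Return a NEW build with the suggestion merged in; the old build is untouched."""
    new_build = copy.deepcopy(build)
    new_build["version"] += 1
    new_build["descriptions"].update(suggestion)
    return new_build

# Build v1 has no column descriptions yet.
v1 = {"version": 1, "descriptions": {}}

# Autograph suggests a description for the cryptic "qwerty" column;
# applying it yields an immutable new build rather than editing v1.
v2 = apply_suggestion(v1, {"qwerty": "Department budget, stored in cents"})
```

Because v1 survives unchanged, a failing eval on v2 means you simply keep serving v1, which is the operational payoff of immutability.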
So I say: the table values are in a country where the financial quarter is offset by a month, so Q1 is from February to April. Now, this is tribal knowledge that I have about my domain; the AI just did what it thought was right. So I had to steer it, and it says: okay, cool, I'm going to do the same thing, but with the filters from February to April. Perfect. So this is your answer. Now, if I go to Autograph, and I'll skip this question, let's just do the last thread: improve my metadata using insights from the last thread. So again, Autograph runs agentically, automatically, under the hood. You can also manually execute it if you want to run it for a certain set of users, certain types of threads, certain sets of queries. So it says: OK, I'll first get the schema information, let me understand what the schema is; I'll calculate the time range of this thread, from the last 24 hours, and get the top thread. I can identify several insights to improve the metadata descriptions. So let me create these improvements, focusing on the anti-money-laundering and accounts tables. And these are the improvements it's suggesting: okay, I need to add context here that the financial quarters are offset by a month; and something else which I didn't even realize, some information that it needed to do some kind of joins, it has that as well. Again, apply the suggested improvements, and a new build gets created here, generated by Autograph. There you go. Next time you ask a question, no more steering, no more nothing. The AI just keeps learning. So that is what we have been able to achieve with this agentic semantic layer: a highly reliable AI that speaks your language, your company's language. Oops. Yeah, so let's just quickly see what customers have been saying about an AI that speaks their language. So think of this as like a company QL, your own QL.
One of the directors of data for a Fortune 500 international food chain: we tried building it, couldn't do it; then we evaluated a hundred vendors, nothing worked; but then finally we saw PromptQL, and it was just completely reliable AI. A VP of AI for a Global 1000 internet services company: no other tool has come even close to meeting the expectations; PromptQL has met and exceeded them. The CEO of a high-growth fintech company: PromptQL was able to demonstrate a hundred percent accuracy on the hardest questions in our eval set. One of the things we do with our customers is we ask them: give us a set of your hardest questions. No matter what kind of assumptions they require, what kind of data sources they might require, how complicated the analysis is, give us the hardest questions. And we promise that we'll get you a hundred percent accuracy on top of that. So with that, I ask one last question: what would you call your company's language? We call ours PromptQL, which is a generic term, but think of your own QL, right? What would you call your company's language? Before we jump into the questions that we have in the Q&A section, I have one. So we were speaking at a conference last week, and I know that you're back in Vegas this week with another conference that's going on right now, as a lot of us are. One of the things that people presented as a barrier to getting involved with AI solutions was their concern about not only data hygiene, but also just getting things rectified in advance and all the work that would go into it before actually connecting to an AI system. So could you speak just a little bit to the Autograph use case, and how essentially you can sidestep all that work completely and farm it out to a service, as opposed to having to do all that chore work yourself? Great, great, great. So I would talk about both Autograph and PromptQL here.
See, if you think that you can have perfect data to build perfect AI, you are kidding yourself, right? You will never have perfect data, and you will never be able to train a perfect AI on top of it, or make an AI work perfectly on top of it. So your AI needs to be adaptable, as you are as a human, right? You might run into unexpected problems, unexpected data messes, but then you improvise, adapt, and overcome. That's what PromptQL has been built to do: improvise, adapt, and overcome. And then Autograph is built to learn that this is the problem, so that next time it can account for this problem. That's one. Second, PromptQL is great for data prep and data investigation. You can ask it: hey, can you find inconsistencies in my data, like X, Y, Z? And then you can ask it: okay, can you go fix it, please? And it'll be like: okay, cool, this is what I'm going to do to fix it, does that look good to you? Okay, cool, I'm going to go ahead and do it. It'll take about 20 minutes and fix as much of it as it can, working with your data engineers, of course. Yeah, and I think that's the big point, right, is that we're saving a tremendous amount of time to market, or to implement a solution, because instead of having to do all the manual labour of, you know, sanitizing data and making sure that things are semantically like they should be, let's have PromptQL and Autograph do it instead. Easy peasy. Nice. Awesome. Harshad, you want to start throwing some questions up here that we have from the Q&A section, please? Lovely, so the first one says: is PromptQL working within the GraphQL Hasura interface, or is this a completely separate product which is also working through standard SQL queries as well as searching other data points? Is PromptQL working within the Hasura GraphQL?
Okay, so think of PromptQL as the evolution of GraphQL. One of the use cases that we have is generative APIs, right? You want to consume data APIs in your traditional applications. Previously, you had to write these APIs yourself. With GraphQL, we solved this: your GraphQL API is just there; we connected the sources, and we have a GraphQL API. What's next? What's next is natural language. I should be able to build my API endpoints using natural language. My business logic is in natural language. So, I want to consume X, Y, Z data in a certain JSON format, and this data should come from X, Y, Z sources, and before it is rendered to the front end, it needs to go through some business logic. All of this I can just say, PromptQL will write the PromptQL program, and then we have something called the programs API. Every PromptQL program that you run is itself an API. So you can just call that API every single time you want to do exactly what PromptQL did. No more AI there; it's just a static program that gets executed. That's the evolution of GraphQL. That's what the next thing is. And that's PromptQL. And it is a completely separate product, which also works through standard queries. The underlying data engine, the data delivery network, is shared by GraphQL and PromptQL. PromptQL does not use GraphQL under the hood to send requests to this data engine. We use a certain dialect of SQL, because SQL is closer to a Turing-complete language and LLMs are really good at generating it. But that does not mean the underlying data sources have to be SQL. It can be anything, the same things that GraphQL was supporting: SQL, NoSQL, SaaS APIs. It's essentially an implementation detail that you as a user don't have to know about. Exactly. Yeah. Awesome. Excellent. Thanks so much. All right. Next question coming up. We got a few of these. This is great. Can you manually provide semantic data, like synonyms, definitions, et cetera?
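The "programs API" idea above, a generated program saved and re-executed verbatim with no model in the loop, can be sketched as a simple registry of stored programs. All names here are invented; this is just the shape of the design, not PromptQL's implementation.

```python
# Hypothetical registry: once a program has been generated and reviewed,
# it is stored and re-run as a plain function; no AI is involved on replay.
saved_programs: dict = {}

def save_program(name: str, fn) -> None:
    saved_programs[name] = fn

def call_program(name: str, **kwargs):
    """Pure re-execution of a stored, static program."""
    return saved_programs[name](**kwargs)

# Store the once-generated "top billing org" program.
save_program("top_billing_org",
             lambda invoices: max(invoices, key=lambda i: i["amount"])["org"])

# Later callers (or an HTTP endpoint wrapping this) just invoke it by name.
org = call_program("top_billing_org",
                   invoices=[{"org": "acme.com", "amount": 120},
                             {"org": "globex.com", "amount": 300}])
```

The design point is repeatability: the natural-language step happens once at authoring time, and every subsequent call is deterministic.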
So my first thought, Anushrut, is the LSP that we have inside of VS Code and things like that. Do you want to jump in? Yeah. Autograph was invented a couple of months ago; before that, this is what we were doing. So yes, you can provide semantic context manually. First of all, whenever you connect a new data source to PromptQL, you run a command called introspect. Introspection pulls the schemas out of your data sources, and if you have comments on your Postgres schemas, they all get pulled into the semantic layer. Then you, as the developer, can manually add a lot more context: "this table is about X, Y, Z", "use this table only for certain types of questions", "this column has information about X, Y, Z". So you can annotate your data a lot more manually, or you can let Autograph do it. So yes, to answer the question: totally.

All right, next question coming up here: can you provide more detail on how to provide context on the data models, the type of data, and how the data is related to other pieces of data? Brilliant question again. I should share my screen again, I think. For sure.

Let's go back to a project, say the project of a telecom company, and go to the latest build. Please pardon my hotel Wi-Fi. Okay, perfect, I'm just going to make this a little easier to see. So whenever you connect a bunch of data sources, for example here we have Mongo, ClickHouse, Aurora, and Atlas, you run the introspect command and it creates the supergraph on its own. Any relationship that existed inside a specific data source automatically gets picked up. So all of these connections that you see, all of these are relationships.
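As a rough sketch of what one of those relationships can look like in the metadata, here is a hypothetical Hasura DDN-style `Relationship` definition linking a `Customers` model to an `Orders` model. The model names, field names, and exact schema here are illustrative assumptions, not taken from the demo; consult the metadata reference for your version:

```yaml
# Hypothetical metadata sketch: one customer has many orders.
# Names and exact structure are illustrative and may differ by metadata version.
kind: Relationship
version: v1
definition:
  name: orders
  sourceType: Customers
  target:
    model:
      name: Orders
      relationshipType: Array   # array relationship: customer -> many orders
  mapping:
    - source:
        fieldPath:
          - fieldName: customerId
      target:
        modelField:
          - fieldName: customerId
```

Because the target model can live in a different subgraph or data source, the same shape also covers the cross-domain relationships the demo turns to next.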
Then the great thing about the supergraph architecture is that you can also create relationships that are cross-domain, across data sources. For example, this customer link table connects the Aurora customers table to this ClickHouse table. That is a cross-domain relationship, and it gives your AI, and our data layer, the context that this is a relationship it can operate on. That works when you can clearly define a foreign key relationship. Sometimes you can't define a clean foreign key relationship, but you can still say, "hey, this is similar data", and that goes into the semantic layer. So if it's a hard relationship that you can easily quantify, you just define it explicitly.

I was going to say, with the visualization as well: one thing I like to explain to people is that this visualization is powered by a very human-readable YAML format, and that's exactly what eventually gets passed to PromptQL to understand how everything maps together. Yeah, exactly. Awesome.

All right, we have more questions coming. Next one up: "I have a RAG project that uses a semantic DB for data storage, and I have also added re-ranking to it. Is there any other way to improve my model's accuracy?" So this is a RAG-specific question, and I don't want to pretend I'm a RAG expert. I'm assuming what you're asking is how to improve your semantic search capabilities. We are not a semantic search company or a semantic search product, so I'll refrain from commenting on that. But the most reliable way of orchestrating your vector database and your semantic search functions will still be something like PromptQL. So yeah.

Next: is there a way to audit or log every SQL query generated by PromptQL for compliance? Sharing my screen again. Yep. Okay, so if I go back to my project and go to Insights, that's for this specific thread.
For this thread, you can trace every single thing: when the LLM was called, which step took how long, and the exact SQL query that was generated. You can pretty much track every single step. So yes, you have complete visibility into what's happening. Finally, total observability into LLMs. Thanks, thanks, thanks.

All right. And in Hasura, are we able to control access to realms of data based on data in a token? So I guess the question is, do we still have RBAC with PromptQL? Yes, it's at the core of it. Without that, you can't power enterprise use cases. So you have role-based access control, and you have token-based access control.

I was going to say, let's make this the last question for this section. I know there are a lot more, Anushrut, and even though you probably have to get to the conference floor after this, I'm going to ask you to stick around after you answer this question live and answer any questions we haven't gotten to over in the Q&A via text. So the last question we have here is: can PromptQL be self-hosted for privacy and data concerns? Yes. There are multiple ways of hosting PromptQL: you can use our cloud, you can bring your own cloud, you can host the data plane while we host the control plane, or it can be completely self-hosted. So there are a bunch of different deployment options. Totally.

And I would say we probably got through 75 percent of the questions, so you have a few over there to answer before you hop off. Thanks so much for being here today, and thanks for the great demo. Yeah, thank you so much. Thanks, folks.