Past, Present, and Future of Data Analysis Technology & Automation with Ravi Shankar of Denodo

In this edition of the UpTech Report, host Alexander Ferguson meets with the SVP & CMO of Denodo, Ravi Shankar, to discuss data virtualization and the future of big data.

Through various use cases and examples, Shankar walks us through the exciting history of data analysis, what his company is currently up to, and where this is all going in the future. With Denodo's data virtualization platform, companies can use modern data integration and management to sift through mountains of data and gain a unified view of the information faster, more cheaply, and with less effort.

Under the surface, data analysis is quite complicated, but Shankar explains how his company aims to make it as simple as turning on a water faucet — you turn the knob and the water simply flows out the end of the pipe.

Denodo’s powerful logical data integration and management enables data to flow from databases, data warehouses, and data lakes, on-premises or in the cloud, to applications used by business users, where ultimately, they use the most up-to-date data to analyze everything and deliver useful, business-accelerating results.

Ravi Shankar is the Senior Vice President and Chief Marketing Officer at Denodo. He is responsible for Denodo’s global marketing efforts, including product marketing, demand generation, field marketing, communications, social marketing, customer advocacy, partner marketing, branding, and solutions marketing.

Ravi brings to his role more than 25 years of proven marketing leadership and product management, business development, and software development expertise within both large and emerging enterprise software leaders such as Oracle, Informatica, and Siperian. His deep expertise in data-related technologies facilitates increased global awareness of the Denodo Platform and accelerates its growth.

Ravi holds an MBA from the Haas School of Business at the University of California, Berkeley, and an MS and an Honors BS degree in Computer Science. He is a published author and a frequent speaker on data management and governance.

Prior to joining Denodo, Ravi was the Vice President of Product Marketing at Informatica and was instrumental in positioning the company as a leader in the Master Data Management (MDM) market. He helped accelerate MDM revenue and customer acquisition, and propel Informatica into a $1B company.

Denodo is the leader in data virtualization, providing agile, high-performance data integration, data abstraction, and real-time data services across the broadest range of enterprise, cloud, big data, and unstructured data sources at half the cost of traditional approaches. Denodo's customers across every major industry have gained significant business agility and ROI by enabling faster and easier access to unified business information for agile BI, big data analytics, web, cloud integration, single-view applications, and enterprise data services. Denodo is well-funded, profitable, and privately held. For more information, visit the Denodo website or call +1 877 556 2531 / +44 (0) 20 7869 8053.

DISCLAIMER: Below is an AI generated transcript. There could be a few typos but it should be at least 90% accurate. Watch video or listen to the podcast for the full experience!

Ravi Shankar 0:00
The amount of data that you need to analyze could go into millions or billions of rows, depending on the number of transactions.

Alexander Ferguson 0:12
Welcome to UpTech Report. This is our applied tech series. I'm your host, Alexander Ferguson. UpTech Report is sponsored by TeraLeap. Learn how to leverage the power of customer stories. Today, I'm excited to be joined by my guest, Ravi Shankar, who's based in Palo Alto, California. He's the Senior Vice President and Chief Marketing Officer at Denodo. Welcome, Ravi, good to have you on.

Ravi Shankar 0:36
Thank you, Alex, very good to be here.

Alexander Ferguson 0:38
Now, Denodo is a data integration and data management platform; you guys are focused pretty specifically on the enterprise space. So for those of you out there, if you're a cloud architect, business leader, CIO, data engineer, or data scientist, this will be a conversation you're going to want to listen to. Help me understand, Ravi, at a very high level: let's start with what's the problem that you guys have set out to solve?

Ravi Shankar 1:02
The problem is about having a unified view of information across multiple different systems that are dispersed across different regions, different business units, and so on. Our business users are not able to gain a cohesive view of the information to conduct their business. It could be executives, it could be business analysts or data analysts who are providing this information to the executives to make timely decisions; they are not able to do that. And quite often, to gain that unified view, these users will go to IT to request the information, but IT uses age-old methodologies, technologies, and processes that basically hamstring them in terms of delivering it very quickly. So we have a methodology, a product, that will deliver data much faster to these business users.

Alexander Ferguson 1:56
I'm curious: in our pre-chat before the interview, you gave me a nice overview of the history of data, and I'd love to hear that again. Take me back, maybe all the way to the 1980s. What was data like then, and how has it changed over the years?

Ravi Shankar 2:11
That's right. Well, this has basically been a constant battle between centralization and decentralization. The industry came up with this notion of data gravity, and the data gravity belongs to where the data sources are. Going back, the 80s were the age of the databases; vendors like Oracle and IBM invented the database to keep the data in a central place so business users could actually get access to the data. That became very popular, and the databases started multiplying in multiple different ways. Soon, centralization yielded to decentralization, where the data was distributed all over the place, and by the end of the 80s people were not able to gain that unified view. So in the 90s, the data warehouse came along as a unifying mechanism for pulling the data together from all these disparate databases into one central place. That was a great technology, and once again we had a unification of the data. But along with the benefit of that technology came, once again, multiple different data warehouses and data marts, so the data started becoming decentralized and people were not able to gain a cohesive view. By the start of the millennium, we started seeing unstructured information: social media information, streaming information, cloud information, and so on, and there was no place to put it. So on came the notion of Hadoop and big data systems, where you can house both structured and unstructured information, so you have a unified place to hold it. Once again, the centralized data warehouses, data marts, and unstructured data came to be unified in the millennium in the Hadoop system, the big data system, data lakes, whatever you call it. But that again did not pan out to be really good, because it never became the unified view. People started having data warehouses, data lakes, operational data stores, again a myriad of them.
And in the 2010s, when the cloud came on board, with AWS, Azure, and Google, they started creating these cloud-based repositories, or cloud object stores, in which you can store the information. So now the data is distributed in the cloud, on premises, in the data warehouse, in the data lakes; it has become a complete mess. So what we are talking about is: you need to unify the data, but physically moving the data into a central place, or collecting the data in a central place, is not working, because the gravity keeps pulling. It's like throwing a ball up in the air: it is going to come down. Same thing here; artificially trying to pull all the data physically into a central location is not working. Leave the data where it is, but connect to it in a unified view through a virtual mechanism, in a logical fashion. That way, you're not spending the time and cycles fighting against the gravity.

Alexander Ferguson 5:19
And now going into the 2020s: you said in the 2010s data virtualization happened. What's the next phase even beyond that? Can you speak to the 2020s and beyond?

Ravi Shankar 5:30
Yeah, so the 2020s are going to be the age of the machines. For example, take our own household; in my own household itself, everything, from a smartphone perspective, is automated. I have a Nest thermostat that, you know, generates all the data about, and understands, the temperature in my home. I have a video camera that monitors both inside and outside the home. I have a garage door opener which is automatic; from halfway around the world, I can open or close the garage door. But all these devices are generating so much data. And take that to an industrial setting. We have customers who are creating fab machines to generate the circuit boards, and those generate a lot of data, and they want to understand it to service those machines. We have heavy machinery customers that are building machines for mining and all those aspects, and those devices are sending data. We have connected cars coming up. In the future, all this data is going to come, and you need to be able to analyze it in order to react to something and provide the information faster. For example, an automated car that is driving down the road: somebody crosses the road, and it uses all the data to quickly determine, I need to stop the car right now. Right? So that's where it is going to go; the data is going to be out of control. But how do you bring that data to a point of unification so that you can actually understand it from a business perspective, and take the action that is needed for whatever business process you're doing?

Alexander Ferguson 7:19
It's the challenge that the quantity of data is only increasing. And if you really want to compete in today's age, it sounds like you have to be able to use that data and become a more data-driven organization. So is it really possible to visualize and analyze all this complex data?

Ravi Shankar 7:37
It is possible; the systems are coming up in order to be able to analyze it and provide that. But the problem is in delivering the data to the point at which it can be consumed. If you go back from the 80s till now, two things have increased multiple-fold, and one has not. Look at the processing power: back in the 80s, I used to use the IBM XT machines, which had only something like 640 kilobytes of memory. Today, the cell phone has a lot more than that. So processing power has increased much more, and the storage capacity has increased. Again, back in those days we used to use floppy disks, the big floppy disks; those of you who were born after that might not know about them, but they had limited capacity. Now in the iPhone itself we can go up to, you know, gigabytes, and I have a very small thumb drive that can actually hold a terabyte of data. So storage has increased. The thing that has not increased is the bandwidth. We still don't have 5G here in America; right now, even though the new iPhones are 5G-capable, they're asking you to turn off 5G because there is not enough 5G coverage and it's sucking up the battery of your phone. So the bandwidth has not increased. This voluminous data from the sources, you need to squeeze into this narrow pipe and deliver at the point of consumption so that people can analyze it, and that has become the bottleneck. So what we are trying to do is reduce the data set at the sources and bring back the results to the point of consumption. Let me give you an example. Let's say you're a retailer and you want to analyze which are your 10 most profitable products in the last year or last couple of years. You're asking a question, and your result data set is only, like, 10 rows of data: here are my 10 products, okay?
But the amount of data that you need to analyze could go into billions of rows, depending upon the number of transactions. So why do we need to provide the consumer all these billions of rows of data? We can run the analysis down at the sources and just bring back the results and deliver them to the consumer. That's basically what we are trying to do with the data virtualization technology: reduce the data set at the sources and provide just the results, which can be transported much faster to the point of consumption.
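The pushdown idea Shankar describes can be sketched in a few lines. This is a minimal illustration, not Denodo's actual implementation; the table and column names are invented, and an in-memory SQLite database stands in for a remote source system. The key point is that the aggregation runs at the source, so only the small ranked result travels back, never the raw transaction rows.

```python
import sqlite3

# Stand-in for a remote source system; in practice this would be a
# connection to a data warehouse or operational database.
src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE sales (product TEXT, amount REAL)")
src.executemany(
    "INSERT INTO sales VALUES (?, ?)",
    [("widget", 120.0), ("gadget", 75.0), ("widget", 30.0),
     ("gizmo", 200.0), ("gadget", 50.0)],
)

def top_products(conn, n=10):
    """Push the aggregation down to the source: at most n rows come back,
    no matter how many transaction rows the source holds."""
    return conn.execute(
        "SELECT product, SUM(amount) AS revenue "
        "FROM sales GROUP BY product "
        "ORDER BY revenue DESC LIMIT ?", (n,)
    ).fetchall()

print(top_products(src, 3))
```

The same query shape works whether the source holds five rows or five billion; the network only ever carries the top-n summary.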

Alexander Ferguson 10:22
Ravi, some of our viewers may be very technical, so let's take it even a step further, if we can. You say at the source; are we talking at, like, the cash register? What do you mean when you say the source, and how is that implemented?

Ravi Shankar 10:37
Sure. So for example, if you use a cash register, you're scanning your items, and that records the transaction. At the same time, it is deducting from the inventory systems. So you have two systems: you have the inventory system, and you have your point-of-sale system, right? You want to get a unified view across that information, and you're analyzing the most profitable products. The point-of-sale system can contain billions of rows of data, depending upon the transactions that are happening on a daily basis, and the inventory system might not have that many, but it also has data to reduce. So we would run the queries down at the sources, asking them: give me the top 10, ranked by the sale amount, and the inventory available, something like that. And then we would bring back just that result to the analyst who's trying to understand the answer to that question.

Alexander Ferguson 11:38
Gotcha. And is this data that you're accessing, for point of sale or inventory, probably already in the cloud? So are you just integrating with these different cloud warehouses of data, doing this virtualization, and then providing just the results back to the end client?

Ravi Shankar 11:55
It could be in the cloud, it could be in the data center; it really doesn't matter. If you think about it, for the person who's analyzing the data, they don't need to worry about the location of the data, they don't need to worry about the format of the data, structured or unstructured, and they don't need to understand the latency of the data, whether it is streaming data that is coming in or data at rest in a database. You abstract those out: I just need the data that I need for this particular question, and let the technology figure it out. So the technology has to be location-agnostic, format-agnostic, and latency-agnostic. That way, it figures out where the data is and what format it is in, and provides it in the way the consumer needs.
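The "location-agnostic, format-agnostic" idea maps naturally onto an adapter pattern. The sketch below is hypothetical (the class names, the `fetch` method, and the stubbed rows are all invented for illustration, not Denodo's API): each source hides where and how its data lives behind a common interface, and the virtual layer fans a request out and merges the results.

```python
from abc import ABC, abstractmethod

class DataSource(ABC):
    """A source adapter: the consumer never sees where or how data lives."""
    @abstractmethod
    def fetch(self, query: str) -> list:
        ...

class CloudWarehouseSource(DataSource):
    def fetch(self, query):
        # Would call a cloud warehouse's API; stubbed here for illustration.
        return [{"product": "widget", "revenue": 150.0}]

class OnPremDatabaseSource(DataSource):
    def fetch(self, query):
        # Would query an on-premises database; stubbed here for illustration.
        return [{"product": "gizmo", "revenue": 200.0}]

def unified_view(sources, query):
    """Fan the query out to every source and merge the results; the caller
    is agnostic to the location, format, and latency of each source."""
    rows = []
    for s in sources:
        rows.extend(s.fetch(query))
    return rows

rows = unified_view([CloudWarehouseSource(), OnPremDatabaseSource()],
                    "SELECT product, revenue FROM sales")
```

Adding a new kind of source (a streaming feed, a SaaS API) would then mean writing one more adapter, with no change to the consumers of `unified_view`.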

Alexander Ferguson 12:42
If I'm imagining myself as a business leader who wants to get this data, or maybe a data architect who's trying to figure something out: I want this data, but I don't want to go to all the sources and look through those millions of data points. So they come to you and say, give me this data. Is this a simple request that they're entering, and then the system just executes it? How much of it is truly automated, versus having to create a unique process each time a request is made?

Ravi Shankar 13:13
No, it is automated. Think about it like, you know, water: you and I turn on the faucet and then we collect the water and drink it. But there is somebody who has already laid the pipe, somebody who's treating the water so that it is potable and we can drink it. That work has already been done. So we have the IT teams that are setting up the views and the systems within the data virtualization layer, and that allows a consumer, when they go and say, I want this data, for Denodo to just give them the data. It's like opening the tap and getting the water; it's the equivalent of that.

Alexander Ferguson 13:56
I like the illustration, because people want to be able to get the content they want right away, and that's what you're doing: simply turning on the tap. That process of laying the pipes, what does it look like? If someone says, yes, we need to get you connected to all these different data sources so you can start to virtualize, is it a long effort? I mean, months of work? What does that look like?

Ravi Shankar 14:21
No, it's a pretty fast effort. It is like connecting to the data repositories and then collecting the data that is actually needed. And again, right now we consume the water; do you know which reservoir it's coming from? We don't, right? We don't care, as long as the water is potable. Same thing from a consumer perspective. They delegate it to IT, and IT takes a few days or weeks to basically set up which systems they're connecting to and what data needs to be pulled in. And once the business user runs a chart or a report, that informs the system; the system automatically knows, okay, I need to go to the cloud to get this data, or I need to go to the data center to get this data, and then it will bring it back and give it to the data user in real time.

Alexander Ferguson 15:09
How is data virtualization different from other approaches to data integration within an enterprise?

Ravi Shankar 15:15
Sure, there is one key difference from all other approaches. There are about seven different styles of data integration; the most dominant one is called ETL, or extract, transform, and load. Then there are other things like ESB, data replication, and so on. All of these rely on physical replication, the centralization of the data; they replicate the data in order to work. Remember, the problem is the data is distributed and you need a unified way of getting it. So they bring the data physically into another repository, whether it is a data warehouse, data lake, operational data store, or another database; they try to replicate the data. Replication is bad. The moment you replicate the data, one, it gets out of sync, because the data in the source is continuously changing; the cash register keeps scanning items. The second thing is that the data quality degrades over a period of time. The difference with the data virtualization approach is that it does all this without replicating the data. It connects to the systems wherever they are, and it understands which data is in what system and in what format. When you come and ask the data virtualization system, I need the data to understand my most profitable products, it knows which systems to connect to, wherever they are, in whatever format, unifies that in the format you want to consume, and gives it to you in real time. So by avoiding replication, it saves on the cost of storage, it saves on time, and it saves on effort. It doesn't take much effort to virtually transport the data.
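The staleness problem Shankar describes can be shown in a tiny sketch. This is an illustration only, with an in-memory SQLite table playing the source system: an ETL-style replica is a snapshot frozen at copy time, while a virtual view is just a deferred query, so it always reflects the source at the moment it is asked.

```python
import sqlite3

source = sqlite3.connect(":memory:")
source.execute("CREATE TABLE register (item TEXT, price REAL)")
source.execute("INSERT INTO register VALUES ('apple', 1.0)")

# ETL-style: physically copy the data once into another store.
replica = list(source.execute("SELECT * FROM register"))

# Virtualization-style: a view is a deferred query against the source.
def virtual_view():
    return list(source.execute("SELECT * FROM register"))

# The cash register keeps scanning items after the copy was taken...
source.execute("INSERT INTO register VALUES ('bread', 3.5)")

# ...so the replica is already stale, while the virtual view is current.
print(len(replica))         # 1
print(len(virtual_view()))  # 2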

Alexander Ferguson 17:01
Of the enterprises you're working with, do most of them already have something in place that they're now trying to upgrade, or are there newer enterprises that don't have anything in place at all? I'm curious, what are you seeing as the status quo?

Ravi Shankar 17:16
Sure, data integration technologies have been around for the last 20 to 30 years, so there is nothing new there. All these companies do have multiple data integration technologies; they might have ETL, they might have ESB, and so on. But they are thinking in the age-old process way of how they have been doing things for the last 20 to 30 years. If a business user comes and asks, they're going to use these existing technologies to physically replicate the data. They're not thinking, I'm going to use a much more efficient technology and process, like data virtualization, or a data fabric, as we talk about it, in order to provide the data very quickly. They don't understand that they have to use these modern methods. That's where things break down, and that's where we are in the process of educating our users: you might have these technologies, which is fine; keep using them for whatever you're using them for. They might be using them to move data into a data warehouse. But to gain the unification of the data across your data warehouse, your data lake, cloud systems, and your other ERP systems, turn to data virtualization: keep the data wherever it is, connect it virtually, and provide that integrated view to your business users and your executives, who can analyze the data and make timely decisions. That's basically what we're preaching to the world out there.

Alexander Ferguson 18:46
Can you give me another example? I appreciate the retail example; can you give me another one from a different industry in particular? What are the pros and cons if they weren't using it versus if they are? How does it play out differently?

Ravi Shankar 19:00
Sure, I can give you an example from one of my customers. We recently did an ROI study; we chartered Forrester Research, one of the analyst companies, to go talk to a few of our customers and find out about the challenges they faced and how they used the data virtualization technology. One key piece of information came out of that, which I will relay right now. A business user came and said, I need the answer to this particular question, and I need the data related to it. So the IT team had to go work on unifying the data from multiple systems to provide the data to the analyst so they could get the answer. And it took them a month and a half to do it, because they were using these age-old processes and technologies that did not make them very agile in terms of being able to deliver the data. As a result, they were basically hamstrung, a bottleneck in terms of being able to provide the data in a timely fashion. Think about it: in a month and a half, that data analyst has moved on. That question is not relevant anymore. That was the eye-opener for them: how do I get the data within a week or less to the customer, which is the business user, so that the answer is relevant? That's basically what we're talking about here. Data virtualization provides the data instantaneously, and it makes people more efficient, so they use fewer resources and less time, and the time to insight, or time to data, for the business user is greatly improved.

Alexander Ferguson 20:51
What's an enterprise, to you? I'm just curious about those who can best utilize what you're doing. If you were to give some sort of definition, what do they look like?

Ravi Shankar 21:03
Sure, it is anyone who has trouble getting to unified data in a timely fashion. Most of these tend to be large businesses; that's where we have focused so far, on companies over a billion dollars in revenue, and they tend to have global operations. If you take Denodo itself, we are a midsize company, but we are present in about 18 or 19 different countries already. You take the JPMorgans of the world, the Boeings of the world: all these companies have very complex operations, and they need these separate systems in order to make the business function. The challenge is getting that unified view. So an enterprise, for me, is anyone who has the challenge of bringing the data together. Now, we are also starting to see that in the mid-market and in the lower end of the market as well. They might not have the volume and the complexity of the business of a large enterprise, but nevertheless they need the data as well. A simple use case could be migrating to the cloud: they want to use cloud services because they don't have a data center to run their operations, and they don't even have the IT staff. They still have multiple applications; think about it, Salesforce could be one application, Workday could be another. The proliferation exists there as well, and nevertheless, we are trying to provide a unified view across them too.

Alexander Ferguson 22:29
Gotcha. I'm also curious about your journey. Have you always been in the enterprise data space?

Ravi Shankar 22:37
Mostly, yes. I come from both a development and a business background. I have a master's in computer science, and I was a developer for the first 10 years of my career; then I did my MBA and moved on to the business world. So I have done the technology side, from the data integration perspective, and then gone into Master Data Management and now data virtualization. I would say I've been in the data space for the last 20 to 25 years, and in the technology space for the last 30-plus years.

Alexander Ferguson 23:16
Gotcha. Because I think you were at Oracle for a while, is that right?

Ravi Shankar 23:20
I was there for more than 10 years. Oracle was a great journey; that's where I made the transition from being a developer to a marketing person.

Alexander Ferguson 23:30
How would you describe that, for anyone who's maybe on the more technical side and considering that transition to the business side? What has your experience been like?

Ravi Shankar 23:41
It has been great; I really loved it. What makes me better on the business side is my underpinnings on the technology side, because I can understand the technology in depth. Right now, when my product managers come and tell me how the product works, I can understand it. But the skill that you need on the business side, on the product marketing side, is to be able to articulate the value of it, because ultimately the customer does not buy a technology, they buy a solution. If I tell them data virtualization, the first question is, so what? Why would I use it? Being able to articulate it in value terms they would understand is the skill that is required on the business side. So that journey has enabled me to be very successful on the business side.

Alexander Ferguson 24:34
I find it interesting when I dig into topics around big data and data with enterprises. I cover a lot of topics here on the series, and sometimes when I get to this topic, I think, can you really be excited about this every day? Then, when you dig in and start to talk about the implications of it, and with the examples that I start to get, I think, all right, I can see now where it takes you. But I mean, do you still get excited about this every day?

Ravi Shankar 25:03
Definitely, Alex. You know, I say that every solution we create, in and of itself, creates its own set of problems, right? Think about the story I related: the database was a great way to house data, which before databases was all on spindles and cards, if you remember. That was a great solution, but it created the problem of having multiple databases. Then we brought in the data warehouse; that was a great solution, but it created its own problem in terms of multiple data warehouses, and it could house structured data but not unstructured data. Then we brought in the data lakes, and they had their own set of problems. So you can get excited on a daily basis, because there is so much movement happening on the technology side; it moves so fast. And look at where we are from the pandemic perspective, right? From a testing perspective, we have problems; from a treatment perspective, we have problems. All of these require the ability to move faster. We are a digitally connected economy; we cannot just make things and deliver them fast, we still need the technology to be able to track this and deliver it, and so on. It's not moving much faster. So that's why you get up every day, and you see these new problems, and you say, I can solve this one, I have the means to do it, I'm going to find the ways to do it. So yes, I don't think there is any restriction on how long you can go solving these problems, and I think we have great technologies and solutions, and we can solve very many problems.

Alexander Ferguson 26:46
When I look at some of the concerns that come up from the consumer side, because I always try to process how what's happening in the enterprise space plays down to the consumer: when you talk about data, there's also privacy. How is the data managed? How is it tracked? People want to know, what are you doing with my data? Do you play in any of that space with consumer data? How is it looked at and protected, especially with the focus on it recently?

Ravi Shankar 27:16
Definitely. So we have customers who are using our technology for GDPR, which is the General Data Protection Regulation. Within California itself, we have a segment of that called the CCPA, the California Consumer Privacy Act, and there are similar ones in Canada and other regions. They use our technology to provide that information. In a lot of these cases, if you look at a regulation, it says that I, as a consumer, can ask you: where is all my data held? And you should be able to identify it. If the data is distributed across multiple different systems, that's a very difficult question to answer. But they use our technology to get a good overall view of where, say, Alex is actually present in the systems, and tell you, these are the different systems. And when you say, I want my name removed completely, they can remove it. Or you can say, okay, you can remove me from your marketing materials, but anything that is important, like account changes, let's say my account has been hacked, I want to hear about that. They should be able to do that. So we do provide the ability to support such privacy regulations across our customers.
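The data-subject request Shankar describes ("where is all my data held?") can be sketched over a unified view. Everything here is hypothetical: the system names, record shapes, and helper functions are invented for illustration, and three in-memory lists stand in for real source systems. The point is that one layer answers the "where am I held?" question and honors a scoped deletion, instead of someone searching each system by hand.

```python
# Toy stand-ins for three source systems holding personal data.
systems = {
    "crm":       [{"name": "Alex", "email": "alex@example.com"}],
    "marketing": [{"name": "Alex", "opted_in": True}],
    "billing":   [{"name": "Dana", "account": 42}],
}

def systems_holding(subject):
    """Answer a data-subject access request: every system with records
    for the given person, found through the one unified layer."""
    return [name for name, rows in systems.items()
            if any(r.get("name") == subject for r in rows)]

def erase(subject, scope):
    """Honor a deletion request, limited to the systems in scope
    (e.g. remove from marketing but keep account/billing records)."""
    for name in scope:
        systems[name] = [r for r in systems[name] if r.get("name") != subject]

print(systems_holding("Alex"))   # ['crm', 'marketing']
erase("Alex", {"marketing"})
print(systems_holding("Alex"))   # ['crm']
```

The scoped `erase` mirrors the distinction in the interview: opting out of marketing materials while still receiving important account notices.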

Alexander Ferguson 28:35
When it comes to how you see people using this data, how much of the insight coming out of it is generated automatically? Meaning, whether it's you or something they run on top of your platform, once the information has been provided, is there a recommendation, like, hey, here's what to do? Are we at the point where insights can be generated automatically? Or does it still come back to a human, who takes the data and then has to actually work with it to come back with insights and realities from it?

Ravi Shankar 29:09
No, we're just getting there: AI and machine learning are actually providing the recommendations. You know, if you ask somebody to draw a picture of an alien, they will still draw it with a head and two googly eyes and that kind of stuff. We can't imagine anything beyond that, right? We can understand things only in the way we know them. But a system that's analyzing on the basis of billions of rows of data can come up with recommendations. That's what we also do: we provide recommendations to our users, like, hey, this is a particular way you can set up the data so you can get better results, or, Bob here is using a particular way of analyzing the data which might be of interest to you. In large organizations, things can get lost. I have heard a customer, a pharmaceutical company, say that we have a cure for cancer, but it's lost somewhere in the data. You can only understand so much from a plethora of data. But if the system can recommend, that's where it is moving in the future: to be able to provide the intelligence and recommendations which humans, with their limited capacity, cannot process.

Alexander Ferguson 30:27
What popped into my head just now, and I don't know if this is a perfect analogy, is the danger of removing the human and just letting an AI make the decisions on the data that comes in. Zillow, I think it was, had that home-buying debacle recently, where they ended up spending all this money buying homes and it turned out not to be good. Can you speak to that experience? How did that happen, and how can an enterprise keep it from happening?

Ravi Shankar 30:56
Sure. Artificial intelligence is not something that thinks by itself; programmers program it to be what it can be. And just as with any software, there are bugs: you roll out software and some things don't work because you haven't anticipated all the edge cases. You programmed it to function a certain way, but there are edge cases you haven't considered. The same is true in the AI world, in terms of its learning capability and so on; you need the proper base. Programming is still not so much a science as an art, and it depends on the person who actually built it. Things can go wrong if people don't program it correctly or test it correctly. They need to take the time to roll it out and make sure it is all proper before they actually deliver it. It cannot go beyond the capacity of the humans who program and roll it out; we can restrict it, it's not going to run away. If something goes wrong, it's not the technology's fault; it's how the humans approached it and rolled it out. It has to be controlled by the humans. In this case, the company might not have properly tested the systems before they rolled them out.

Alexander Ferguson 32:30
I want to end by looking ahead, for those who are new to or just getting into being a cloud architect or data engineer and are wondering where this space is going to go. What should they keep thinking about? Look ahead five years from now, ten years; actually, ten years from now we'll be in the 2030s, the next decade. You've given us a nice history of where we've come from, and you've shown us where we are right now, which is interesting given the amount of data that's coming in. Can you share any of your own predictions of where we're heading next?

Ravi Shankar 33:08
Okay, all right. Well, I'll be bold enough to say it, and it might not come true, but I'll base it on the fundamental paradigm: the time to information is getting narrower and narrower. In the past, when I was working at Oracle, I used to do analysis of the data, and the data would be loaded on a monthly basis. So in January I could only analyze up to December's data; if I wanted to analyze the January data, I had to wait until February. It cannot take so long nowadays; things need to move much faster. We're constantly on the phone, checking emails; it's not nine to five anymore. So the time to information is going to get narrower and narrower, and everything is going to be compressed. In terms of travel, we need faster flights that can get us from here to New York, not in five or six hours but maybe in a couple of hours. The same compression applies to information: when you call for a service, it gets enabled immediately; you don't have to wait three or four days for a technician to come turn it on, it will be automatic. Time is going to get compressed, and technologies will enable such capabilities. All of us will move toward a model where things are consumed very rapidly, whatever it is, whether it's travel or procuring things, like Amazon delivering within the same day or the next day. That's basically what is going to happen in the next few years, and you can extrapolate it to anything you want as a consumer: things are going to be pretty much instant, and you need to keep up.
So, one, as an entrepreneur, you need to be able to deliver such a capability. And two, from a consumer perspective, you need to be prepared to consume it at that level of speed. Education is going to go the same way. For example, physicians have to be board certified every ten years, and between exams the books they study basically double in size, because the diseases, and the knowledge they need, have increased so much. The information volume is going to increase, the velocity is going to increase, the variety is going to increase, and we need to adapt ourselves to process it all and deliver it in a much faster fashion.

Alexander Ferguson 35:58
Ravi, thank you so much for sharing your insights, what you're doing at Denodo, and also the history and the future when it comes to data and data management.

Ravi Shankar 36:07
This was awesome. I enjoyed the discussion. Alex, thank you very much.

Alexander Ferguson 36:11
And we'll see you all on the next episode of UpTech Report. Have you seen a company using AI, machine learning, or other technology to transform the way we live, work, and do business? Go to and let us know.
