India GameChanger recorded an insightful conversation with Satvik Jagannath, CEO & Co-founder of Vitra.ai. Vitra.ai helps translate Videos, Podcasts & Subtitles to 50+ languages with just 1-click.
- The big edge of being in India
- How the last two decades changed the game in India
- The importance of analysing internal factors across a time scale
- The metaverse and why it matters to Vitra.ai
- Understanding how audio signals work
- The genesis of Vitra.ai
Some other titles we considered for this episode:
- Our Goal Is to Build Experiential Translation
- It’s About the Process
- Translating With One Click
This episode was produced by Stephanie Ng.
Read the best-effort transcript below (This technology is still not as good as they say it is…):
Michael Waitze 0:04
Hi, this is Michael Waitze, and welcome back to India GameChanger. Today we are joined by Satvik Jagannath, a Co-founder and the CEO of Vitra.ai. Satvik, it’s great to have you on the show. How are you doing?
Satvik Jagannath 0:16
Thanks. Thank you so much, Michael. I mean, it’s incredible to be on your show, right? It’s been a while since we first spoke. Yeah, happy to finally be on the show and talk.
Michael Waitze 0:25
Thank you so much. And it’s almost time to wish you a happy new year, but we’ll get to that in a bit. Anyway, before we get to the main part of our conversation, give our listeners a little bit of your background, for some context.
Satvik Jagannath 0:36
I’m a technologist primarily, right, a software engineer turned entrepreneur. I have 10 years of experience in the industry; I worked with companies like Cisco, Avi Networks, VMware, etc. And obviously I figured out the problem that there is a language barrier in the content through which people communicate, and, you know, went out and built Vitra.ai for that.
Michael Waitze 1:01
So this is interesting to me. But look, a lot of people don’t know this. You’re in India, obviously, right? And how many states are there in India, 30, 31? Right. And a lot of people I talk to in India tell me that those 31 states could in a way be 31 different countries, because if you go 100 to 200 miles away from where you are, it’s different food, different culture, and potentially a different language. Is this a thing you think that helps, both internally, do you know what I mean, and also externally?
Satvik Jagannath 1:29
Yeah, absolutely. I mean, in India there are 28 primary states, and with union territories included, each has its own languages, right. So basically, the culture is different across every single state, and it’s even different within a given state, right. I live in the state of Karnataka, right, and north, south, east, west, all of them have different cultures, even though the language that you’re speaking is the same. That’s how different and vibrant it is, if you want to call it that way.
Michael Waitze 2:00
No, I love it. But do you think it gives you a different window onto the world? In other words, if in your own community, right, in just your own state, there are different cultures and different languages and different food traditions, then when you look abroad and see all the differences, it doesn’t look so scary to you. It just looks like, yeah, I know how to deal with that. Does that give you a little bit of an edge, no?
Satvik Jagannath 2:20
Absolutely, I am. From that perspective, I very much agree with you on that. Yes, all these countries are trying to bring in diversity and inclusion as a part of learning. For us, it’s fundamentally ingrained, right, from childhood. So there’s a big edge that we get, absolutely.
Michael Waitze 2:37
So at the beginning of this, you said, and then we noticed this problem. Besides the fact that it’s in India itself, there is this global issue with content creation. And you and I could probably spend hours talking about, I mean, look, I do this every day, right, how the building of media is changing, and how the distribution of it means that you have to have the ability to have it in other languages. In other words, I speak English, I record in English, but I’d love to have everybody in Indonesia who speaks Bahasa be able to understand what I’m saying. I can’t do that on my own, right?
Satvik Jagannath 3:13
Sure. Absolutely. Yeah, absolutely. I agree with you on that.
Michael Waitze 3:17
Are you familiar with this company called Viki? Because I can’t get it out of my head, right. And Viki was this company that was started somewhere in like 2012 or ’13, but sold to Rakuten. And Viki was founded by this Korean couple, Jiwon Moon and Changseong Ho, right, and they were living and studying in the United States, and they couldn’t understand all the content that was there. So they basically built this crowdsourcing thing, and then they were able to monetize it. But that was kind of the old way to do it. I wanted to lead that into what’s happening now, because that took time. There were benefits to this too; it built this whole community of people that then supported the videos that were out there. But from a time perspective and an efficiency perspective, I really want to know, what changed technologically to make this possible to do in a way that’s useful? Do you know what I mean?
Satvik Jagannath 4:13
Yeah, absolutely. I feel the two great improvements that have happened over the decades are: one is the compute, right? The overall cloud infrastructure and the ability to run high-end programs, if you want to call them that, is possible today. And if you look at it, obviously, a lot of these AI algorithms, which were mostly mathematical formulas, you know, like four or five decades back, it’s like we were able to bring those to life, right, because of the compute. I would say that’s the biggest game changer, because, you know, back in the day it was 32 KB or 32 MB of RAM, right? Today you have 256 GB of RAM even on some of the AWS systems, right. So that’s the big difference, is what I feel. And there has been, obviously, a lot of revolution in how open source and technology itself has grown, especially over the last two decades, a lot of improvements in various fundamentals of technology. I think these are the two biggest reasons why we are able to do this today, and not like 10 years back.
Michael Waitze 5:30
So do you want to talk at all about ChatGPT, GPT-3, and what’s happening at OpenAI? Because I don’t think it’s possible to do this without that technology. Do I have that right?
Satvik Jagannath 5:40
Absolutely, the technology. And even if you look at it from the perspective of digital data, having digital data today is very important. To train AI, you need a lot of data, right? And today the web has so much data. Imagine in ’99, you only had so little data that was digital, right? Nothing, actually. So that’s the big difference. You have data today, you have compute today, right, like incredible servers and everything, and obviously algorithms have improved to that efficiency. So all of this put together, you’re able to build phenomenal systems.
Michael Waitze 6:16
I want to get back to the OpenAI stuff in a second, but I want to back up a little bit. I think about this a lot. You mentioned all three of these things, compute, cloud, but then also the throughput, right? Because I can send stuff to the cloud, but if it takes me a day and a half to get it back, you know, I’m exaggerating, then it’s kind of meaningless, right? But compute in a way has almost become infinite, and we’re so close. I’ve got friends working on quantum, actually, in Switzerland, and once we get quantum it’s like a completely different story. But I want to understand how this works, really simplistically, right? So I go out to ChatGPT, I put a little title into it, I even give it some information, and I can watch it write in real time. In a way it’s mind blowing. It’s still not good enough to replace humans, and we can talk about this in a second. But how about the translation? How does this work? Let’s do the easiest, and I put it in quotes, part of the translation. I have like a two-paragraph text, and I want to translate it from English into, I don’t know, Japanese, and I say Japanese because I speak Japanese, so I could check it, essentially. But let’s say I want to do that. What’s the process? And where is it going, like, technologically? Does it go out over my internet connection through to a cloud where all the compute happens, and then it gets sent back to me? Like, what is that like?
Satvik Jagannath 7:34
Yeah, absolutely. If we’re talking about simple text, that’s exactly what happens, right? You type in a text, click on the translate button, and it goes to the cloud where a lot of these engines are running, right? We call this the context-aware engine, because your translation is contextual, right? So the processing happens, and you get back the response in less than a second. That’s the text translation. Let’s talk about the next step, your podcast and video translation. This is slightly more complex, or I will even say it’s very complex when compared to simple text, right? In this case, a lot of processing happens across videos, across frames, contextual translation, voiceover translation, we try to retain the background music, right? A lot of things happen in the backend, but you still get the response back in a few seconds to a minute. That’s how fast the processing happens. When the video itself is dubbed, right, to another language, it does go through the internet, to the cloud for processing, and you get the response back.
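For the curious, here is a minimal sketch of the text-translation round trip Satvik describes: the client ships text out over the internet to a cloud endpoint where the translation engines run, and reads the finished result back. The URL and payload field names here are hypothetical placeholders, not Vitra.ai’s actual API.

```python
# Sketch of the text-translation round trip described above.
# The endpoint and field names are hypothetical, not Vitra.ai's real API.
import requests

def translate(text: str, source: str, target: str) -> str:
    # Ship the text to the cloud, where the context-aware translation
    # engines run, and wait for the reply.
    resp = requests.post(
        "https://api.example.com/v1/translate",  # placeholder endpoint
        json={"text": text, "source": source, "target": target},
        timeout=10,
    )
    resp.raise_for_status()
    # All the heavy contextual processing happened server-side; the
    # client just reads the translation out of the response.
    return resp.json()["translation"]

print(translate("Hello, world", "en", "ja"))  # typically back in under a second
```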
Michael Waitze 8:42
But you said dubbed. So this is where, I mean, obviously the whole thing is really interesting to me. But let’s say I do a video, right? And I’m a native English speaker, I like to think, and I like to think that I’m relatively easy to understand in my native language, and I speak with a certain cadence, and, you know, some people like the voice as well. Then I have a guest on, let’s say it’s somebody who’s not a native English speaker, let’s say it’s an Italian person speaking English in the video. If I want to get that translated into Portuguese, which is not a very commonly used language, sure, it’s in Brazil, it’s in Portugal, I get it, what happens? What voice gets dubbed in for me, and then for the other guest? Do you… I mean, how does that get chosen?
Satvik Jagannath 9:27
So unlike most other softwares, which just have one or two male and one or two female voices, interestingly, we’ve worked a lot on speech synthesis, right. What this means is, you have 20-plus voices for every language that you can choose from. So currently we support, interestingly, male, female, and, very interestingly, baby voices as well.
Michael Waitze 9:54
Well, so why baby voices? Why? I think you’re just showing off now, right?
Satvik Jagannath 10:00
A lot of our customers are enterprises, and think of ads, right, advertisements. All of these short clips that are getting dubbed… and adding baby voices itself has given us a huge edge versus anyone in the space, right. So, like I said, you have plenty of voices to choose from, and we are working on this technology called cross-language voice cloning, right, which should be out in a Feb-March kind of timeline in 2023, which is when, once you have it, you’re going to listen in your own voice.
Michael Waitze 10:38
I got it. Yeah. That’s scary. Does this stuff scare you? Do you know what I mean?
Satvik Jagannath 10:44
Yeah, it does. But it fascinates me more.
Michael Waitze 10:48
So can I ask you this? Yeah, no, but this is really super interesting. So you’ve got a company, I believe it’s in India, also called Uniphore. And what Uniphore does is voice analysis, right, obviously video analysis too, and it also provides some call centre services, a whole bunch of other things that are voice based, right. But it does a big analysis, thinking, oh, Michael sounds excited now, or Michael sounds sad now, kind of thing, right? Emotion, yeah, it can check your emotional state or whatever, using their technology. But what does it say for your ability then to have somebody use technology like theirs to understand the mood that somebody is in, do all the voice analytics, but then produce output in another language for somebody who has all this technical knowledge but doesn’t speak English, and yet I have a problem with my thing, my gadget or whatever? Like, can all these things get tied together? Do you know what I mean?
Satvik Jagannath 11:43
Absolutely, absolutely. I mean, that is absolutely a part of our technology development pipeline next year. Again, we’re calling this emotion transfer, right? Emotion transfer is where, let’s say you’re laughing, you’re crying, you’re sad, however your emotions are in English, all of that would get translated, or transferred, to the other language as well. Seamlessly, it would be tied up. And that is the state-of-the-art technology, only when you bring emotions plus your voice plus translation, everything together, right? That is our vision. We call it experiential translation, because what you translate or dub is not just a voice; what you’re dubbing is an experience. So at Vitra.ai, we internally say our goal is to build experiential translation, not just dubbing.
Michael Waitze 12:37
Yeah, I mean, there were so many problems with dubbing in the past; it’s been very difficult to do and very expensive. But one of the hardest things to elicit is the emotion that’s involved in the way somebody’s speaking. It would be insane to hear your voice, right, like speaking Swedish, in the proper cadence, but using your vocal cords to do that. Can you tell me how that works, just technically? How do you test that it works? And what kind of code do you write to do it? Like, what’s the code trying to figure out? I’m so curious about the tech behind this.
Satvik Jagannath 13:19
So this technology happens at the waveform level, right, like core audio synthesis, audio processing. That’s where it happens. The first step is we try to figure out the voice structure. So whatever we are talking is not just the audio that you hear, right? There are so many factors like frequency, amplitude, the variations of those, right, and there is pitch, and a lot of these factors internally. What we try to do is analyse all of this across the time scale, right? How it varies with time. So when I sound excited, it’s not just about the voice being excited, it’s actually the process of getting excited, if that makes sense. So we understand all of that on a time-series kind of scale, and analysis happens. And we try to replicate all of this, again, you know, in the voice. And how do we clone the voices? Basically by extracting attributes from your existing language, your existing voice, to the new language. So a lot of internal complexity goes into it, but this is, at a high level, how it works.
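To make the time-series idea concrete, here is a minimal sketch of extracting a pitch contour and a loudness contour from a recording and checking how they trend over time. It uses librosa as one illustrative library choice; the filename is a placeholder, and this is not Vitra.ai’s actual pipeline.

```python
# Sketch: track pitch (f0) and loudness (RMS energy) across time, then
# look at their trends -- "getting excited" is a process, so it shows up
# as rising contours over time, not as any single value.
import librosa
import numpy as np

y, sr = librosa.load("speech.wav", sr=None)  # placeholder file

# Fundamental-frequency (pitch) contour via probabilistic YIN.
f0, voiced_flag, voiced_probs = librosa.pyin(
    y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7"), sr=sr
)

# Short-time energy contour: how loudness varies frame by frame.
rms = librosa.feature.rms(y=y)[0]

pitch_trend = np.nanmean(np.diff(f0))  # > 0 suggests rising pitch
energy_trend = np.mean(np.diff(rms))   # > 0 suggests rising loudness
print(f"pitch trend {pitch_trend:+.3f} Hz/frame, "
      f"energy trend {energy_trend:+.6f}/frame")
```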
Michael Waitze 14:38
First of all, what kind of software are you using? What language are you using to do the sound analysis and the emotion analysis? What are you writing in?
Satvik Jagannath 14:48
So we’re writing in C++ and Python. C++, okay, at a very, very core level. So you need a lot of C++ kind of code, right, plus Python bindings to kind of make it more usable with other processes, right?
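As a sketch of that C++-core-plus-Python-bindings pattern: the number-crunching lives in a compiled library, and Python wraps it so it can talk to the rest of the system. The library and function names below are hypothetical, assuming a shared object built separately from C++ with extern "C" exports.

```python
# Sketch of Python bindings over a hypothetical C++ audio core.
# Assumes libaudiocore.so was compiled from C++ and exports:
#     extern "C" double rms(const float* samples, size_t n);
import ctypes

lib = ctypes.CDLL("./libaudiocore.so")  # placeholder library name
lib.rms.argtypes = [ctypes.POINTER(ctypes.c_float), ctypes.c_size_t]
lib.rms.restype = ctypes.c_double

def rms(samples: list[float]) -> float:
    buf = (ctypes.c_float * len(samples))(*samples)
    # The heavy math runs in the C++ core; Python just marshals data.
    return lib.rms(buf, len(samples))
```

(pybind11 is a common alternative to ctypes when the bindings need richer C++ types.)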
Michael Waitze 15:04
I mean, that’s the whole thing for people that know this, right? With C++, you’re getting closer and closer to machine language, and the closer you can get to the machine, the easier it is for you to be able to analyse the sound that’s coming out of it, I guess. There’s so much to talk about here, particularly from a software development standpoint. But from an analysis standpoint, you take that code, like, how do you, in software… are you going back and analysing a whole bunch of speech patterns historically, right? So going out, and literally, you can just take everything that’s ever been on YouTube and run it through an analysis processor, right? And then say, here’s the way people speak in this language at this time. You can hear it in my voice, like I’m getting a little bit energetic. But nobody lives in this energetic state, you have to get there, and part of the process is scaling up the sound, the speed and the volume, right? How do you know when all this stuff is happening when you’re analysing this?
Satvik Jagannath 15:55
I mean, like you said, these go in two dimensions. One is statistical analysis. The other one is visual analysis, in the form of graphs and stuff, right? All of these basically are spectrograms that get mapped on a graph, where we try to analyse it literally like how you do in high school mathematics, right? You just try to figure things out on a graph. And most of this is even done manually in the initial process.
Michael Waitze 16:23
So you have to manually figure out how it works.
Satvik Jagannath 16:26
Yeah, exactly. So a lot of time goes into that. And then once we start figuring out the patterns, right, we try to automate it step by step, and then figure out more patterns and try to build the whole system out. I mean, it’s a time-consuming process. We’ve spent three and a half years so far to be where we are today, and we still feel it’s day one.
Michael Waitze 16:48
So that’s what I was gonna ask you. You spent three years to get to here; where is here? Like, where are we, just so I know?
Satvik Jagannath 16:54
I mean, like I said, I still feel it’s day one.
Michael Waitze 16:58
Progress, right. So day one would be just like a sound wave, I have no idea. Where are we today?
Satvik Jagannath 17:04
I mean, we have made very significant progress. I would say we are easily the top guys in the world when it comes to how the technology has progressed and where we are at the moment, okay, at least, right. And what stands in the future is even more exciting. So where we are today is, I feel, where dubbing is seamless and can be done with one click, for real. That was our thesis, that it can be done with just one click. Video dubbing, podcast dubbing can be done with one click, and we have achieved that today. What are we looking at in the future? It’s that experiential dubbing, right, and it goes to the meta level. And I’m just talking about content, videos, podcasts, images, text so far. Our goal for 2023 and ’24 is to translate a metaverse with one click. Whether it’s a VR or AR system, the whole ecosystem will be translated with one click. So even though you’re in the same, I mean, with people globally visiting a metaverse, each one of you will see the metaverse in your own language.
Michael Waitze 18:16
But what does that mean? From your perspective, when you say we can translate the metaverse, what does that mean?
Satvik Jagannath 18:24
So for me, the metaverse is an alternate universe that anyone can visit sitting on your couch, right? That’s what I call it; that’s my definition of the metaverse. And why would we want to translate it? Just like any other internet system, like a website or a mobile app, right, the metaverse is a global thing, not a local thing. It’s a global thing for which you absolutely need localization, because eventually billions of people will be on the metaverse from across the world, and you can’t just have it in English. Simple as that.
Michael Waitze 19:06
Just in English, yeah, it’s not going to work anymore, right? Because the key thing to the metaverse is, like, look, I’m lucky because I was born in California, and, you know, the British ran around the world before I was born and forced everyone to speak English. So I got lucky, right? Because I don’t have to do anything differently. A lot of people do. In a way you’re fortunate, right, because you’re multilingual by birth, because you have to be, you just have to be, otherwise you couldn’t even do regular things. Actually, I really want to see this work. Can you talk a little bit more about spectral sound, okay, and how it gets analysed?
Satvik Jagannath 19:46
Absolutely. So basically we’re talking about spectrography itself, the spectrogram, and it is a visual representation of a frequency signal, actually. And when you try to apply it to an audio signal, right, you kind of get what we call a voiceprint. Even in the health sector, you might have heard of it as sonographs, right? So people do all of this stuff. This helps in not just our technology; spectrograms are used in music, in smart energy, you know, in trying to understand the brain, earthquakes, and various other things. It’s used in various fields to understand how sound works, or how an audio signal works. And we’re using similar technology to understand how spoken words are phonetically being spoken. We analyse that to create graphs and maps and stuff, try to comprehend that, and represent that three-dimensional data on a 2D axis, and then try to convert that again on the time-series scale, to kind of replicate the dubbing, etc., whatever happens on the other end. So this is the actual process that happens with the spectrogram, and the whole of the internals that go with it.
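Here is a minimal sketch of that idea: a one-dimensional audio signal becomes three-dimensional data (time, frequency, intensity), drawn on a 2D axis with colour as the third dimension. The toy signal below is just an illustration, standing in for a voice whose pitch and loudness rise over time.

```python
# Sketch: compute and plot a spectrogram -- 3D data (time, frequency,
# intensity) rendered on a 2D axis.
import numpy as np
from scipy import signal
import matplotlib.pyplot as plt

fs = 16_000                      # sample rate, Hz
t = np.arange(0, 2.0, 1 / fs)
# Toy "voice": a tone whose pitch and loudness both rise over time.
audio = np.linspace(0.2, 1.0, t.size) * np.sin(2 * np.pi * (150 + 100 * t) * t)

f, times, Sxx = signal.spectrogram(audio, fs=fs, nperseg=512)

plt.pcolormesh(times, f, 10 * np.log10(Sxx + 1e-12))  # intensity in dB
plt.xlabel("Time [s]")
plt.ylabel("Frequency [Hz]")
plt.title("Spectrogram: frequency content as it varies with time")
plt.show()
```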
Michael Waitze 21:24
That’s a really good description. The deeper you get, personally, into the understanding of how sound works, how it gets developed, how it turns into words… you know, I always say, if you’re really great at what you do, it’s hard to separate it from yourself. We talked about this during our prep call, right? And just to give you a perspective on this, I remember when I was in Tokyo working in the stock market, I’d go for a run, and I’d look at every building that had the name of a company that was listed, and I knew the stock code for it. Like, I just couldn’t get the thing out of my head because I enjoyed it so much. You spend so much time analysing sound, building software to analyse sound, and then building output to do this. What is it like in your day-to-day life, separate from doing all this stuff, when you’re talking to your mom or listening to your team? Do you know what I mean, though, right? Like, do you feel like now you’re constantly processing and trying to understand what’s going on with that sound, and what would this sound like, you know, if it were in Danish, kind of thing? Do you experience that?
Satvik Jagannath 22:24
Absolutely. I mean, I absolutely resonate with what you just said. The most interesting part is, the whole concept of voice cloning that I just mentioned, how it actually struck me was, one day I was actually talking to my mom and trying to do a mimicry, right, trying to mimic one of the personalities. And that is when it struck me: oh my god, if I’m able to mimic someone with their actions and their voice, why shouldn’t I try this using computers? That’s how the idea itself came. I mean, when you’re so involved at work, right, your brain thinks about this irrespective of the context you are in, whether you’re going in a metro train or talking to your mom. You try to pick up new ideas from general life context. So that was the context when I felt, oh my god, ultimately my mind is the generating engine too, right? If I can do this, how can I try and get a system to do this? And that was my thought process.
Michael Waitze 23:36
I mean, haven’t you had this thought before? I just had this happen recently, and I cannot remember this guy’s name, but there’s this comedian who does these incredible, incredible impressions, like real-sounding voices. And I just thought, if you can understand how to manipulate your own vocal cords, and the air that comes in, and the way you put your tongue, and all the ways you speak, you should be able to sound like anybody else. And if you can do it, then the computer’s got to be able to do it, right? So you say you spent all this time, and I know the way this feels, right, just in your day-to-day life you have these little moments where you’re like, I have to work that in, this is fascinating, I hadn’t thought about sound this way. And I’m gonna say this metaphorically, but when you go back to the office and talk to the team, and you say we need to fix this thing or develop that thing, it’s hard, right? And again, on day one, when you write your first line of code, it’s more just like setting stuff up. But once you finally get it, or you get close, do you have these moments in the office, or in the virtual office, where you guys kind of all look at each other and just go, I think we got that? Do you know what I mean? Where it’s so powerful, done, kind of thing?
Satvik Jagannath 24:51
Absolutely, it feels incredible. Especially, the reason is, most products, or most softwares, may not be AI driven, so what happens is the software development lifecycle is shorter and faster, and people release quickly. I mean, there are products like ClickUp where their release cycle is a week, right, every week they release new features. But in our case, we wait for months, years, right, just to see things work. And when that happens, we go on a team outing, because we’re extra excited: oh my god, so much of the effort that has gone in has finally become fruitful, or valuable, right? Until we are like 90% done, we don’t even know if what we’re doing is the right thing. That’s the point. That’s how blindly we work in most cases, right? We’re just blind, we’re just trying out stuff, whether it works or it doesn’t. Only when we hit the 90% mark do we then have clarity: oh, this is how it works, okay, there is a chance that this will work. That’s how it is, right? So at the 90% mark, we’re already excited. When we hit that 100% mark, we’re like, wow.
Michael Waitze 26:16
Lots of high-fiving going on. As you can tell, I just find this so interesting. I deal with sound every single day, and also video too, but it’s just so interesting to see the way that software and OpenAI and just all the artificial intelligence stuff is driving the way this works. The machine learning too is huge. I was having a conversation yesterday with a friend of mine about this, and we can use ChatGPT as an example, but I mean, even the voice stuff, I think, can answer this question, or is relevant, even if I don’t write this code, right? So if I understand this correctly, you can go out and analyse all the stuff that’s ever been written until today, and then you can categorise it, you can tag it, you can add metadata, all this kind of stuff, right? And then you put all this processing power behind it and you say, okay, as everything new also gets written, and we have access to it, add that into all the stuff that we know. So we know everything at some level. But is there some risk, right, if we start using GPT-3, or GPT-4, GPT-5 as it gets super good? Because I look at it, and it’s kind of interesting; it’s not there yet, for sure, and they know that, they’re not saying that it is. If I understand what it does, it goes back and takes everything that’s been written up until today and then uses that as the basis for its writing today. So if from now on all we do is use ChatGPT to write, is there ever going to be anything new learned from the new writings, since it’s just based on everything we’ve already done? The answer to that, obviously, is that’s not going to happen. But don’t you think it’s going to mean that real writing is going to get way more valuable, because so few people will do it? And do you think the same thing is true with voice and spoken stuff?
Satvik Jagannath 28:02
And that’s exactly true. I was actually talking to a co-founder of mine today: already, content writing is a highly valuable thing, because quality content writing is very valuable today. But it’s going to be 100 times more valued in the next five years, because there’s going to be a lot of crap that’s going to be auto-generated. So real human writing is only going to be more valued, and it’s not going to get diluted. That said, there is this panic in the content community, oh my god, I might lose my job. But what I feel is, you’re going to be paid 100x more, if you’re good quality, right? Yes, that’s what I believe. So what actually is going to be replaced is the low-quality content; that is what’s going to be replaced by ChatGPT-3 content.
Michael Waitze 28:59
I agree completely, and I saw this happen. Actually, let me give you one of my favourite examples of this. You know, 100 and something years ago, I mean, we’re in 2022, so 100 and something years ago, you know, people were riding horses everywhere, right? And when the car came along, people were like, oh, God, you know what I mean? Everyone’s gonna have a car. But you know what happened to horses? They became way more expensive to maintain, so much so that only wealthy people had them. And the same thing is gonna happen, I think, when technology replaces anything, because now I don’t need a horse, but boy, do I want one really badly.
Satvik Jagannath 29:42
Absolutely, absolutely, I agree with you. I mean, a great example. But that’s exactly what’s gonna happen with ChatGPT or any technology that comes, right. I mean, I’ll be very honest, right, though it’s my own domain: now that we have AI-based voice dubbing, right, which will serve the masses, the real voice dubbing will have even more value.
Michael Waitze 30:10
Well, I agree. I agree, the desire to listen to the real thing, yeah, is so much higher. In a way it’s like, do I want to go see the Mona Lisa at the Louvre in Paris, or do I want to get a baseball cap that has the Louvre tattooed on the back of it, kind of thing?
Satvik Jagannath 30:31
Absolutely. And it’s especially happening with images, yeah, right. If you’ve seen Stable Diffusion, image generation engines, right? I mean, there’s again a panic in the artist community saying, oh my god, I’m gonna lose my job. It’s never gonna happen. There’s so much auto-generated imagery and pictures, just crap. But people now want to see real painting, yeah, real art. The value is going to shoot up; if you’re selling it for $100, it’s going to be worth 10,000.
Michael Waitze 31:05
Yeah, I mean, look, the value of a Monet painting went up 100-fold when people started getting cameras, because very few people painted anymore, and nobody painted in that style. And people are like, I need that thing, and I want a Van Gogh too, because it ain’t never gonna happen again. So it’s the same thing. So as we leave, you can see the fireworks in the background here; I just wanted it to be a little bit festive today. What do we expect in 2023 from you guys? What’s going to be huge?
Satvik Jagannath 31:33
Like I said, we are very focused on voice and dubbing today, and overall translation as an ecosystem, right? But we want to get into creation and distribution too. So what that means is, at Vitra, right, if you are a creator or want to be a creator, from creation to translation to distribution, we will build the entire suite, or the stack, for you, where you get an end-to-end experience of content. So that’s what we’re trying to build for creators today.
Michael Waitze 32:05
I can’t… I mean, you can see the look on my face. I’m just thinking, like, how can I use this? Anyway, we can talk about that under separate cover. Okay, I’m gonna let you go. That was awesome. You have to come back. Satvik Jagannath, a Co-founder and the CEO at Vitra.ai, thank you so much for doing that today. That was awesome.
Satvik Jagannath 32:10
Thank you so much, Michael.
Follow Michael Waitze Media here:
Facebook – Michael Waitze
LinkedIn – Michael Waitze
Twitter – Michael Waitze