The difference is that a user’s voice is synchronized with the video at the instant the animation event is created. This lets users be spontaneous about the content they’re looking at, do everything in one take, and be inspired by what they see. It’s much like being face-to-face with somebody who is pointing at something they like and getting excited as they talk to you. This is a major time saver for video creation.
This kind of video creation is more spontaneous, versus going into a studio, doing many takes, and being an ultra-perfectionist to get everything just right for the video. It’s like those times you prefer your family’s home cooking and say to yourself you’d rather have this any day than go to a restaurant. But then there are times to go to a restaurant and appreciate what a great chef can do. We are more in the home-cooking realm. That’s not to say vidThat can’t be used in a perfectionist way, with many takes and studio-quality execution, but we designed the product for everyday usage.
The other thing is that our video is different. We enable a user to immediately create a video from a still image and even combine that with a premade video they can talk over, optionally leaving the existing audio in the background. And staying true to our ready-set-go speed orientation, everything is instantly transcribed, which is great for people who don’t want to turn up the volume and for the hearing impaired. So bottom line, our key points of difference are speed, spontaneous voiceover execution, and immediate deployment.
And we created a draft system so that the video process can be chunked out. If, for example, you were flying somewhere on a plane and saw three good pieces of content, you could save each as a draft; see three more, and save those as drafts too. Then you can sit down somewhere, make two or three quick videos, and immediately send them to your contacts. The draft system is also useful when you’re using the same imagery or video content but sending it to multiple people: you can record multiple voiceover versions, each personalized for a different contact.
Finally, we tied all this into an instant messenger like iMessage. Where this is different is that our video creation process is part of the messaging system, and our auto-reply system automatically deploys to the recipient the same visual assets the sender used in their original video message. This speed-to-reply system is unique and one of a kind, and it will be the reason we are able to dovetail into Web3. Our ultimate vision is to be able to NFTize any user’s videos.
I know that should’ve been a short answer, but there’s a lot packed into vidThat that is unobvious.
To make the whole product vision fun, I always said this should be like James Bond’s car: easy to drive with a steering wheel, brake, and gas pedal. We have the three circles, with hold-and-release for talking and pausing, for instant video creation. If we left it there, videos would be made fast and super easy, just like driving James Bond’s car. But on top of that we have many features tucked away, much like the gadgets in James Bond’s car. It makes it all kind of fun.
And I do believe our approach to the UX and UI, and its uniqueness, is our defensibility, because ultimately users get used to a new way of doing things, and that is an important brand truth. But for a new way of doing things, it is also important to leverage the comfort of familiarity: what users are already used to doing for certain tasks. We built on that a little without going too far.