>I want to build something with a low floor and a high ceiling; a tool that uses AI to smooth out the hardest parts of traditional animation, while letting creators keep full ownership of what they make.
And your terms of service says:
>We provide an online platform that allows users to upload images, drawings, and artwork and use our proprietary AI-powered technology to transform those images into animated videos. You may upload image files, customize certain settings related to the animation output, view and download the resulting videos, and manage your uploaded content and account. We reserve the right to establish limits on the file types, file sizes, and number of uploads permitted per user, and to modify those limits at any time. All video generation is performed automatically through a combination of AI and computer graphics technology.
That's a good question. The distinction is between generative AI (which takes in a prompt or image and generates every pixel of a new video) and non-generative AI models (e.g. classifiers, segmentation models, and pose estimation models). The second category helps us to infer characteristics about the input drawing, but it doesn't try to 'recreate' anything.
We use non-generative AI models to quickly auto-rig the character when it's uploaded. In a traditional computer graphics animation pipeline this would be done by hand and would be a slow process; we use these models to speed that step up. The resulting animations don't use any AI at all (generative or otherwise).
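For the curious, the shape of that idea can be sketched in a few lines. Assume a pose-estimation model emits named keypoints for the uploaded drawing (the keypoint names, rig structure, and coordinates below are hypothetical, not DoodleMateStudio's actual pipeline); after that one-time inference, playback is plain forward kinematics with no AI in the loop:

```python
import math

# Hypothetical keypoints, as a pose-estimation model might emit them
# for an uploaded stick figure (names and coordinates are illustrative).
keypoints = {
    "hip":      (0.0, 0.0),
    "shoulder": (0.0, 2.0),
    "elbow":    (1.0, 2.0),
    "hand":     (2.0, 2.0),
}

# The "rig" is just a parent map: each joint moves relative to its parent.
parents = {"shoulder": "hip", "elbow": "shoulder", "hand": "elbow"}

def pose(angles):
    """Forward kinematics: rotate each joint's rest offset (from its
    parent) by the accumulated angle down the chain. Deterministic
    computer graphics -- no AI involved in playback."""
    world = {"hip": keypoints["hip"]}   # root joint stays put
    acc = {"hip": 0.0}                  # accumulated rotation per joint
    for joint in ["shoulder", "elbow", "hand"]:
        parent = parents[joint]
        a = acc[parent] + angles.get(joint, 0.0)
        acc[joint] = a
        ox = keypoints[joint][0] - keypoints[parent][0]
        oy = keypoints[joint][1] - keypoints[parent][1]
        px, py = world[parent]
        world[joint] = (px + ox * math.cos(a) - oy * math.sin(a),
                        py + ox * math.sin(a) + oy * math.cos(a))
    return world

# With no angles set, the figure sits in its rest pose; bending the
# elbow 90 degrees swings the hand up, with no model call at render time.
rest = pose({})
bent = pose({"elbow": math.pi / 2})
```

The point of the sketch is the division of labor: the model runs once at upload to produce the keypoints, and everything after that is ordinary, fully controllable animation math.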
Would you say your approach is less flexible and creative than gen AI, then? You're bounded by what the pipeline can rig/interpret, versus open-ended generation from gen AI. I suppose it does preserve the original authorship better, though.
That's a good question. I would say that the two approaches are quite different and bring different strengths to the table. The major strength of genAI is that it is open-ended.
But it comes with costs. GenAI video is expensive to generate, and most tools constrain your animation to a handful of seconds, not long enough to tell a real story. You can generate multiple clips and stitch them together, but then you'll run up against another limit of GenAI: subject consistency (especially with non-realistic subjects, like doodles).
It's also difficult to finely control genAI outputs, which I argue limits the creative expressivity of the human. And if you generate numerous clips to try to get things perfect, it can get expensive.
Our approach is limited by the motion/visual/audio assets we have access to.
But when we release DoodleMateStudio, users will be able to upload their own visuals, record their own audio, capture their own motions, and specify their own high-level story scenes. This should be enough to let people tell expressive and personal stories. And if we get things right, it will also be a lot more fun than refining a prompt.
My older daughter draws so many funny looking characters and stick figures. I’ll show her and I’m sure she will light up seeing her drawings come to life.
Please let me know how it goes and if there's any functionality she/you would want.