#RunwayML showcases their Gen-3 Alpha #video model in answer to #LumaAI’s release of their #DreamMachine. BUT…👇
As I pointed out in prior posts, and particularly given Luma’s history, expect updates to be small but to arrive quickly.
In other words, no one is going to push out one massive update. The iterations between competitors will be incremental, but fast and furious.
I would expect the control challenge to be mostly figured out, with native 4K quality (and that means HDR, not just resolution), before the end of the decade.
Presently, none of the American companies compare to what #SenseTime is offering. As I pointed out months ago, it’s Sora-level quality at 1080p, and it renders through a combination of cloud and edge compute, on a #smartphone.
The American companies, to put it bluntly, are woefully behind: 2-3 years at least in time to market. The difference is simply that China doesn’t have the same concerns, or legislation, that exist in the West.
Imagine the datasets of prompts, images, and video samples that SenseTime is collecting as a consequence, from hundreds of millions of users making content. All of that feeds back into making the foundational model that much better, every hour of every day.
That’s why the American companies are behind, and if I were in #politics or #cybersecurity, that would make me sweat.
Why? Because it’s entirely possible to have their model running on my smartphone.
Introducing Gen-3 Alpha: Runway’s new base model for video generation.
Gen-3 Alpha can create highly detailed videos with complex scene changes, a wide range of cinematic choices, and detailed art directions. Gen-3 Alpha will be available for everyone over the coming days.
Learn more at runwayml.com/gen-3-alpha