Haiper is the latest artificial intelligence startup to launch its own generative video model, joining the likes of Pika Labs, Runway and OpenAI's Sora in tackling AI storytelling. The company is focused on quality and is currently producing short two-second clips, but it has big ambitions, including setting its sights on Sora-like realism in the future. Like OpenAI and Anthropic, Haiper is working towards artificial general intelligence (AGI), building what it calls a "powerful perceptual foundation model".
Haiper's model is particularly good at inferring the correct motion for a short clip, capturing the expected movement within its four-second videos. It performs best with shorter prompts and without manual motion-control adjustments, as it often anticipates motion better than a human operator would. In one demonstration, a video of a forest at twilight looked more realistic when motion was left to the AI rather than manually set to full: the model accurately depicted the sky drifting behind the trees and the trees rustling, producing a more natural, immersive result.
Overall it didn't do a bad job. What it did particularly well was the original visual, creating a much more realistic depiction than some models manage; the image quality was on par with some dedicated AI image generators. The problems came with the motion of larger objects. It handled small motions, or animating one element within a larger scene, well, but it struggled when the object being animated was the dominant feature of the frame. Haiper is an impressive addition to the AI video market, but it has some way to go before it can produce clips as long as Sora's. For now it also trails the motion consistency of models from Runway, Stability AI and Pika Labs, though it is very close.
Reference: Ryan Morrison, "I just tried Haiper AI hyper-real video — and it's going right after Sora", last updated March 13, 2024.