3 ways to automatically detect and capture video game highlights – TechCrunch

With the rise of live streaming, games have evolved from consumer goods, like toys, into legitimate platforms and media for entertainment and competition.

On Twitch alone, average concurrent viewership has grown from roughly 250,000 to nearly 3 million since Amazon acquired the platform in 2014. Competitors like Facebook Gaming and YouTube Live are following a similar path.

The audience boom is fueling an ecosystem of supporting products as today’s professional streamers push technology to its limits, adding value to their content production and automating the repetitive parts of the video production cycle.


Streaming games online is demanding, with full-time creators spending eight, if not 12, hours a day on camera. Twenty-four-hour marathon streams are not uncommon in the fight for viewers’ precious attention.

However, those hours in front of the camera and keyboard are only half the streaming grind. Maintaining a constant presence on social media and YouTube fuels the growth of a stream channel and brings more viewers to watch live, where they can subscribe monthly, donate and view ads.

Distilling the most compelling five to 10 minutes of content out of more than eight hours of raw video is a serious time commitment. At the top of the food chain, the biggest streamers can hire teams of video editors and social media managers to handle that part of the job, but growing part-time streamers struggle to find the time to do it themselves or the money to outsource it. Between other life and work priorities, there simply aren’t enough hours in the day to carefully review all of that footage.

Computer vision analysis of the game UI

The emerging solution is to use automated tools to identify key moments in longer broadcasts. Several startups are competing to own this new niche, and the different approaches they take to the problem are what differentiate the competing solutions from one another. Many of those approaches follow the classic computer science dichotomy of hardware versus software.

Athenascope was one of the first companies to execute on this concept at scale. Backed by $2.5 million in venture capital funding and an impressive team of Silicon Valley Big Tech alumni, Athenascope developed a computer vision system that identifies highlight clips within longer recordings.

In principle, it is not so different from how a self-driving car operates, but instead of using cameras to read nearby street signs and traffic lights, the tool captures the player’s screen and recognizes the indicators in the game’s user interface that signal important in-game events: kills and deaths, goals and saves, wins and losses.
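To make that idea concrete, here is a minimal sketch of what such a screen-reading loop could look like. This is not Athenascope’s actual pipeline; the HUD coordinates, the `kill_icon.png` template and the sampling rate are hypothetical placeholders, and it uses plain OpenCV template matching as one simple way to spot a fixed UI indicator.

```python
import cv2

# Hypothetical template of the in-game "kill" indicator, cropped once by hand.
KILL_ICON = cv2.imread("kill_icon.png", cv2.IMREAD_GRAYSCALE)

# Hypothetical fixed HUD region (x, y, width, height) where the indicator appears.
HUD_REGION = (1500, 50, 300, 100)

def detect_kill_events(video_path, threshold=0.8, sample_every=30):
    """Scan a recording and return timestamps (in seconds) of likely kill events."""
    capture = cv2.VideoCapture(video_path)
    fps = capture.get(cv2.CAP_PROP_FPS) or 30.0
    timestamps = []
    frame_index = 0

    while True:
        ok, frame = capture.read()
        if not ok:
            break
        # Only inspect a subset of frames; UI indicators stay on screen long enough.
        if frame_index % sample_every == 0:
            x, y, w, h = HUD_REGION
            region = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)
            # Template matching: a high score means the kill icon is visible.
            score = cv2.matchTemplate(region, KILL_ICON, cv2.TM_CCOEFF_NORMED).max()
            if score >= threshold:
                timestamps.append(frame_index / fps)
        frame_index += 1

    capture.release()
    return timestamps
```

The returned timestamps could then be padded by a few seconds on either side to cut candidate highlight clips from the recording.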

These are the same visual cues that traditionally tell players what is happening in the game. In modern game UIs, this information is high-contrast, clear and unobstructed, and it almost always appears in predictable, fixed positions on the screen. That predictability and clarity lend themselves well to computer vision techniques such as optical character recognition (OCR), which reads text from images.
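As a rough illustration of why fixed, high-contrast UI text is such an easy target for OCR, the snippet below crops one HUD region out of a frame and runs it through the open-source Tesseract engine via `pytesseract`. The region coordinates and the “VICTORY”/“DEFEAT” keywords are invented for the example; a real system would be tuned per game.

```python
import cv2
import pytesseract

# Hypothetical fixed region where the game prints the match result, at 1920x1080.
RESULT_REGION = (760, 200, 400, 120)  # (x, y, width, height)

def is_match_result_frame(frame):
    """OCR the result banner of a single frame and flag an end-of-match highlight."""
    x, y, w, h = RESULT_REGION
    banner = frame[y:y + h, x:x + w]

    # High-contrast UI text reads best after grayscale conversion and thresholding.
    gray = cv2.cvtColor(banner, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    text = pytesseract.image_to_string(binary).strip().upper()
    return "VICTORY" in text or "DEFEAT" in text
```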

The stakes here are also far lower than in self-driving cars: a false positive from this system merely produces a less-exciting-than-average video clip, not a car crash.
