
Impact of AI and ML on video streaming devices for a better TV experience


Artificial intelligence (AI), with machine learning (ML) and deep learning (DL) technologies, promises to transform every aspect of businesses, devices, and their usage. In some cases it enhances current processes, and in others it replaces them.

Video streaming devices (including TVs and STBs) vie to be the AI assistant in the home. There are several touchpoints in regular streaming devices and video processing that are ripe for TV transformation with AI advancements.

The quality of experience is expected to increase with these enhanced models, both visually and with respect to ease of use. Video streaming devices with these features will surely outperform those without.

Transformation in Video Streaming Devices

This blog discusses how different AI/ML models running in video streaming devices can enhance the TV user experience.

Super Resolution

Super-resolution is a technique that upscales a given low-resolution image to a higher resolution. People naturally incline toward high-quality, visually appealing images and video.

Therefore, the super-resolution feature can today be found in many video streaming display devices, producing resolutions 2-4 times higher than the pixel count of the sensor.

Neural-network-based scaling performs a lot better than traditional bilinear scaling, as can be seen in the example pictures below.

Super Resolution with and without Neural Networks

As can be seen in the pictures scaled with NN models, the details are preserved or enhanced.

Among the neural networks used for scaling, there are two types of models:

• Convolution/Deconvolution (Deconv) based
• Generative Adversarial Network (GAN) based.

Deconv-based models are simpler and generalize better, whereas GAN-based models seem to perform very well for the scaling function. The Deconv model has an initial section similar to familiar networks such as ResNet, which reduces the size of the image while increasing the number of features; the latter part is an expansion network that uses the deconvolution operation to increase the resolution from those features. The residual connections in the network seem to help it generalize better.
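To make the expansion step concrete, here is a minimal NumPy sketch of a stride-2 transposed convolution, the core "deconv" operation. The fixed 2x2 all-ones kernel is a hypothetical stand-in for weights a real model would learn; with it, the operation reduces to nearest-neighbour upsampling.

```python
import numpy as np

def conv_transpose2d(x, kernel, stride=2):
    """Single-channel transposed convolution: each input pixel 'stamps' a
    scaled copy of the kernel into a larger output grid."""
    h, w = x.shape
    kh, kw = kernel.shape
    out = np.zeros(((h - 1) * stride + kh, (w - 1) * stride + kw))
    for i in range(h):
        for j in range(w):
            out[i * stride:i * stride + kh,
                j * stride:j * stride + kw] += x[i, j] * kernel
    return out

lowres = np.arange(16.0).reshape(4, 4)
upscaled = conv_transpose2d(lowres, np.ones((2, 2)))  # 4x4 -> 8x8
```

A learned deconv layer replaces the fixed kernel with trained weights (and stacks many channels), which is what lets it reconstruct detail rather than merely replicate pixels.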

Generative Adversarial Networks (GAN) consist of a system of two neural networks — the Generator and the Discriminator — dueling each other. Given a set of target samples, the Generator tries to produce samples that can fool the Discriminator into believing they are real. The Discriminator tries to distinguish real (target) samples from fake (generated) samples.

Using this iterative training approach, we eventually end up with a Generator that is really good at generating samples similar to the target samples.

GANs in action
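The adversarial game can be sketched end to end with a toy example: a generator G(z) = a*z + b learning to match a 1-D Gaussian "target" against a logistic discriminator. Everything here (the data, the linear models, the learning rates) is illustrative, with gradients written out by hand rather than using a deep-learning framework:

```python
import numpy as np

rng = np.random.default_rng(0)

def real_batch(n):
    # "target" samples: a 1-D Gaussian centred at 4
    return rng.normal(4.0, 1.0, n)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

a, b = 1.0, 0.0   # generator G(z) = a*z + b
w, c = 0.0, 0.0   # discriminator D(x) = sigmoid(w*x + c)
lr, batch = 0.05, 64

for _ in range(2000):
    # discriminator step: push D(real) toward 1 and D(fake) toward 0
    xr, z = real_batch(batch), rng.normal(0.0, 1.0, batch)
    xf = a * z + b
    dr, df = sigmoid(w * xr + c), sigmoid(w * xf + c)
    w -= lr * np.mean(-(1 - dr) * xr + df * xf)
    c -= lr * np.mean(-(1 - dr) + df)
    # generator step: push D(fake) toward 1 (non-saturating loss)
    z = rng.normal(0.0, 1.0, batch)
    xf = a * z + b
    grad_x = -(1 - sigmoid(w * xf + c)) * w   # dLoss_G / d(fake sample)
    a -= lr * np.mean(grad_x * z)
    b -= lr * np.mean(grad_x)

# after training, generated samples cluster near the target mean
fake_mean = float(np.mean(a * rng.normal(0.0, 1.0, 10000) + b))
```

The same alternating-update structure carries over to image-scale GANs, where both players are deep convolutional networks rather than two-parameter linear models.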

InPainting with Neural Networks

There are several cases where a part of the image is damaged, distorted, or obscured by a previous watermark. A neural network can repair such blemishes quite well by guessing the missing parts of the image. As with super-resolution, this can be accomplished with Deconv networks and GANs.

However, Deconv networks seem to perform better owing to their better generalization. A single network can be trained to do super-resolution, inpainting, and other noise reduction together.

InPainting with Neural Networks
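For intuition, a classical diffusion fill, which repeatedly averages each missing pixel with its neighbours, shows the basic idea of propagating surrounding context into the hole. The neural versions learn far richer priors, so this NumPy sketch is only a baseline for comparison:

```python
import numpy as np

def inpaint(img, mask, iters=500):
    """Fill pixels marked True in `mask` by repeated 4-neighbour averaging
    (classical diffusion inpainting, not the neural approach itself)."""
    out = img.copy()
    out[mask] = out[~mask].mean()   # rough initial guess for the hole
    for _ in range(iters):
        p = np.pad(out, 1, mode="edge")
        avg = (p[:-2, 1:-1] + p[2:, 1:-1] +
               p[1:-1, :-2] + p[1:-1, 2:]) / 4.0
        out[mask] = avg[mask]       # only update the damaged region
    return out
```

Where diffusion can only smear boundary colours inward, a trained network can hallucinate plausible texture and structure, which is why it handles watermarks and larger holes so much better.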

Frame Rate Conversion

Interpolation and upscaling are hard for animated graphics, which have sharper edges; simple interpolation tends to blur them, and the resulting soft edges are not visually pleasing at high resolution. Again, neural networks seem to outperform traditional methods in frame rate conversion (and up-sampling). The paper linked below shows how Variational Autoencoders, GANs, and deep convolution networks can be used to perform this function, and deep convolution networks seem to have performed better than the others and than traditional upsampling.

Frame Rate Conversion example
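The weakness of naive interpolation is easy to demonstrate: blending two frames linearly in time produces two half-intensity "ghost" copies of a moving object instead of one object at the intermediate position, which is exactly what motion-compensated and NN-based interpolators are trained to avoid. A tiny 1-D illustration:

```python
import numpy as np

def interpolate_frame(f0, f1, t=0.5):
    """Naive frame-rate conversion: a plain temporal blend of two frames."""
    return (1.0 - t) * f0 + t * f1

f0 = np.zeros(9); f0[2] = 1.0   # bright object at position 2 in frame 0
f1 = np.zeros(9); f1[6] = 1.0   # the object has moved to position 6 in frame 1
mid = interpolate_frame(f0, f1)
# the blend leaves half-intensity ghosts at positions 2 and 6,
# instead of a single object at the true intermediate position 4
```

A motion-aware interpolator would instead estimate the displacement and warp the object to position 4, preserving sharp edges.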

Integrated Home Assistant in TV

AI-integrated video streaming devices in the home can recognize which family members are watching and keep track of each individual's preferences, viewing habits, and specific programming.

With several applications providing content, it takes quite some time to tune to specific programming: switching between applications and browsing through the menus.

As per the latest Nielsen report, the average viewer takes 7 minutes to pick what to watch. Just one-third of them report browsing the menus of streaming services to find content, and 21% say they simply give up watching if they cannot make up their minds.

A background application that knows the specific user can perform this task. Video viewing through TV-connected devices has increased by 8 minutes daily, in large part due to the penetration of those devices and the falling away of older, less nimble setups.

Seven in ten homes now have an SVOD service, and 72% use video-streaming-capable TV devices. The report also asserts that customers are not in a comfortable groove when it comes to choosing what to watch.

A background application running on the TV can recommend specific content according to each individual's preferences and viewing habits.
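Such a background application could be sketched as a simple content-based recommender: it keeps a per-user genre-preference vector (learned from viewing history) and ranks the catalog by similarity to the detected viewer. The user names, genre weights, and titles below are all made up for illustration:

```python
import numpy as np

# Hypothetical per-user genre-preference vectors over [drama, sports, kids, news]
profiles = {
    "alice": np.array([0.8, 0.1, 0.0, 0.1]),
    "bob":   np.array([0.1, 0.7, 0.0, 0.2]),
}
# Hypothetical catalog, tagged with the same genre dimensions
catalog = {
    "Match of the Day": np.array([0.0, 1.0, 0.0, 0.0]),
    "Evening Bulletin": np.array([0.1, 0.0, 0.0, 0.9]),
    "Period Drama":     np.array([0.9, 0.0, 0.0, 0.1]),
}

def recommend(user, k=2):
    """Rank titles by cosine similarity to the detected viewer's profile."""
    p = profiles[user]
    def score(v):
        return float(v @ p / (np.linalg.norm(v) * np.linalg.norm(p)))
    return sorted(catalog, key=lambda t: score(catalog[t]), reverse=True)[:k]
```

A deployed system would learn these profiles implicitly from watch history rather than store hand-written vectors, but the ranking step is the same shape.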

Anonymized Person Detection

All the above functions must be performed without intruding on users' privacy and without recording a specific user's video, face, etc.

Several new AI techniques are also available to anonymize the incoming video stream at the source and perform person recognition at the ball/stick (stick-figure) level. Moreover, the network can identify the user from the person's gait without intruding on privacy.

Person detection from ball/stick models
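As a rough sketch of the idea (not a production gait recognizer), identification can be reduced to matching a sequence of stick-figure joint descriptors against enrolled profiles; only these abstract vectors are stored, never faces or raw video. The three-number "joint angle" descriptors below are entirely synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)

# Enrolled profiles: the mean of 50 frames of synthetic 3-number
# joint-angle descriptors per viewer (no images are kept)
enrolled = {
    "viewer_a": rng.normal([0.9, 0.2, 0.5], 0.05, (50, 3)).mean(axis=0),
    "viewer_b": rng.normal([0.3, 0.8, 0.1], 0.05, (50, 3)).mean(axis=0),
}

def identify(gait_sequence):
    """Match an incoming descriptor sequence to the nearest enrolled profile."""
    probe = gait_sequence.mean(axis=0)
    return min(enrolled,
               key=lambda u: float(np.linalg.norm(enrolled[u] - probe)))
```

In a real system the descriptors would come from a pose-estimation network running on each frame; the privacy property comes from discarding the pixels and keeping only the skeleton features.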

Noise Reduction

Post-processing can remove encoder-introduced noise (blocking, blurring, ringing, and quantization artifacts) and improve PSNR.

Encoder-introduced impairments are removed with specific filters such as the deblocking filter. However, neural networks can improve noise-reduction performance, and all of these tasks can be performed by one algorithm.
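A classical stand-in makes the contrast clear: artifact-specific filters each target one impairment, whereas one trained network can handle blocking, ringing, and general noise in a single pass. A minimal 3x3 median filter in NumPy, a traditional impulse-noise remover:

```python
import numpy as np

def median_denoise(img):
    """3x3 median filter: a classical per-artifact filter of the kind
    that a single trained network can subsume."""
    p = np.pad(img, 1, mode="edge")
    h, w = img.shape
    out = np.empty_like(img)
    for i in range(h):
        for j in range(w):
            out[i, j] = np.median(p[i:i + 3, j:j + 3])
    return out
```

The median rejects isolated outlier pixels outright, but it also softens genuine detail, which is the trade-off learned denoisers are trained to avoid.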

TVs include post-processing options to perform MEMC (Motion Estimation and Motion Compensation). MEMC, which has its staunch critics, can also be enhanced with neural networks.

The network architecture of the proposed MEMC-Net
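The motion-estimation half of MEMC can be illustrated with classical exhaustive block matching; a learned approach such as MEMC-Net estimates motion and compensation jointly instead. This NumPy sketch finds, by minimum sum of absolute differences (SAD), where a block from the previous frame moved:

```python
import numpy as np

def motion_vector(prev, cur, y, x, bs=4, search=3):
    """Exhaustive block matching: find the (dy, dx) within +/-search that
    best relocates the bs x bs block at (y, x) from `prev` into `cur`."""
    block = prev[y:y + bs, x:x + bs]
    best, best_sad = (0, 0), np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            yy, xx = y + dy, x + dx
            if yy < 0 or xx < 0 or yy + bs > cur.shape[0] or xx + bs > cur.shape[1]:
                continue
            sad = float(np.abs(cur[yy:yy + bs, xx:xx + bs] - block).sum())
            if sad < best_sad:
                best_sad, best = sad, (dy, dx)
    return best
```

The compensation half then warps blocks along these vectors to synthesize the intermediate frame; it is this whole pipeline that a neural MEMC model replaces end to end.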

Automatic Scene Detection

There are several applications that can use information about the currently playing image or scene. Video analytics that detect specific actions can be used to rate specific sections of the video. Similarly, the audio can be analyzed for language rating (for kid-proofing).

Every frame can be classified to detect specific objects and locations, recognize characters' faces, and automatically classify nudity.

The insights can be used for automatic captions carrying regulatory warnings, such as for smoking or drinking.

Voice Controls

Much like home AI assistants such as Amazon Alexa and Google Home, video streaming devices in the home are expected to respond to voice commands. Today, many remote controls are already voice-activated.

Having a high-sound-output device double as a voice input device is quite complex, since it must cancel its own output audio.

In addition, there will be a variable delay in the ambient noise if the audio is enhanced with external speakers. Wake-word detection and Automatic Speech Recognition (ASR) are implemented today using complex neural networks, and for TVs the ambient noise needs to be compensated for the device to hear the commands.
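Cancelling the device's own output is conventionally done with an adaptive filter that estimates the speaker-to-microphone path and subtracts the predicted echo. A minimal NLMS (normalized least mean squares) sketch in NumPy, with synthetic signals and a made-up echo path, shows the mechanism:

```python
import numpy as np

rng = np.random.default_rng(0)
n, taps, mu = 20000, 8, 0.5

far = rng.normal(size=n)                      # audio the TV itself is playing
echo_path = np.array([0.6, 0.3, -0.1, 0.05])  # made-up speaker->mic response
mic = np.convolve(far, echo_path)[:n]         # what the microphone picks up

w = np.zeros(taps)        # adaptive estimate of the echo path
err = np.zeros(n)         # residual after echo subtraction
for i in range(taps - 1, n):
    x = far[i - taps + 1:i + 1][::-1]         # most recent far-end samples
    err[i] = mic[i] - w @ x                   # mic minus predicted echo
    w += mu * err[i] * x / (x @ x + 1e-8)     # NLMS weight update
```

Once the filter converges, the residual `err` contains mostly the near-end voice, which can then be passed to wake-word detection and ASR; real devices add double-talk detection and handle the variable speaker delay mentioned above.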


With proper neural-network acceleration hardware in TVs/STBs, all or some of the above functions can be implemented for a better user experience than current post-processing provides.

In addition, this processing can allow chip designers to do away with the current expensive filters and post-processing hardware in favor of a unified hardware element that performs neural-network processing.

One of the problems in AI algorithm development is dataset availability and annotation. For most of the AI problems mentioned in this blog, however, it is relatively easy to develop datasets, as raw, higher-resolution, error-free data is available.

Gyrus provides AI Models as a Service, working with customers to clean up, normalize, and enrich their data for data science. Gyrus develops custom models based on the cleaned customer data and datasets/models.

Above all, Gyrus has worked with several customers on developing similar ML/AI algorithms for vision processing.



