Video transcoding optimization based on input perceptual quality
Abstract
Today's video transcoding pipelines choose transcoding parameters based on rate-distortion curves, which mainly focus on the relative quality difference between original and transcoded videos. By investigating the recently released YouTube UGC dataset, we found that viewers are more tolerant of quality changes in low-quality inputs than in high-quality inputs, which suggests that current transcoding frameworks could be further optimized by considering input perceptual quality. We propose an efficient machine-learning-based metric to detect low-quality inputs, whose bitrate can be further reduced without hurting perceptual quality. To evaluate the impact on perceptual quality, we conducted a crowd-sourced subjective experiment and provide a methodology for evaluating the statistical significance of differences among treatments. The results show that the proposed quality-guided transcoding framework is able to reduce the average bitrate by up to 5% with insignificant quality degradation.