Detecting radicalization in video datasets

Does YouTube’s recommendation engine display increasingly radical, polarising, or sensational videos on AutoPlay in order to maintain user engagement, rather than more accurate, factual content? We took up this project to answer this question through data analysis.

With the rise of hateful comments, it is necessary to protect the decorum of these platforms and the users who visit them. By identifying the emotion conveyed by each word, we can classify comments and other content into different categories, as sketched below.
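A minimal sketch of this word-level emotion tagging, assuming a small hand-built lexicon; the lexicon entries here are illustrative placeholders, and a real run would use a full emotion lexicon (e.g. an NRC-style word–emotion mapping).

```python
from collections import Counter

# Placeholder word -> emotion mapping; not a real lexicon.
EMOTION_LEXICON = {
    "hate": "anger",
    "attack": "anger",
    "afraid": "fear",
    "great": "joy",
    "truth": "trust",
}

def emotion_profile(comment: str) -> Counter:
    """Count the emotion categories triggered by individual words in a comment."""
    tokens = comment.lower().split()
    return Counter(EMOTION_LEXICON[t] for t in tokens if t in EMOTION_LEXICON)

print(emotion_profile("I hate this attack, it makes me afraid"))
# Counter({'anger': 1, 'fear': 1}) -- "attack," keeps its comma here; real code
# would strip punctuation before lookup.
```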

Here, the video data is collected and an emotion analysis of that data is carried out. After training a model on those emotion labels, the system can be used to address the question posed in the problem statement above. A sketch of the training step follows.
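A minimal sketch of training such a classifier, assuming a labeled dataset `comments_labeled.csv` with `text` and `emotion` columns (the file name and column names are hypothetical, not part of this project’s data):

```python
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline

# Hypothetical labeled dataset of video comments and their emotion labels.
df = pd.read_csv("comments_labeled.csv")

X_train, X_test, y_train, y_test = train_test_split(
    df["text"], df["emotion"], test_size=0.2, random_state=42
)

# Simple baseline: TF-IDF features fed into a logistic regression classifier.
model = Pipeline([
    ("tfidf", TfidfVectorizer(max_features=20000, ngram_range=(1, 2))),
    ("clf", LogisticRegression(max_iter=1000)),
])
model.fit(X_train, y_train)

# Report per-emotion precision/recall on the held-out split.
print(classification_report(y_test, model.predict(X_test)))
```

Once trained, the same pipeline can score new comments or transcripts pulled from an AutoPlay chain, so the emotion trajectory across successive recommendations can be compared.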
