[GSoC2020][Meeting] Definition of the project

I have a meeting with my mentor today to define the project for this GSoC 2020.

Ideas:

  • Emotion analysis from video
    • Existing work:
      • Emotion recognition in debate videos (preliminary work).
      • Audio and visual information are used separately. For audio, the pitch and loudness are used to predict the emotion (see the sketch after this list).
    • My work:
      • Background research on this area. [See new post]
      • Come up with new directions for this task
      • Get familiar with the dataset IEMOCAP
  • Character interaction in movies
    • Existing work:
      • A team at UCSB is working on this. They have completed the text-based analysis and now want to integrate the visual information.
    • My work:
      • Analysis of interaction between characters:
        • Gaze estimation
        • Emotion analysis
    • There will be a meeting with this team this week; the exact time is to be defined.
  • Creating a dataset for emotion analysis using the news data of Red Hen
    • Existing work:
      • The current emotion analysis datasets are mostly old and small. A larger, up-to-date dataset would greatly help the research.
    • Extract interviews/debates/discussions from the dataset
      • Select the parts where people show clear emotional behavior
    • Remarks:
      • We need to consider privacy and copyright concerns.
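
A minimal sketch of the audio feature extraction mentioned above, using librosa (the notes do not name a toolkit, so this is an assumption); the file path and the choice of summary statistics are placeholders, not part of the project plan:

```python
# Sketch only: pitch and loudness features from one audio clip.
# librosa and the statistics below are illustrative choices, not a fixed pipeline.
import numpy as np
import librosa

def pitch_loudness_features(wav_path, sr=16000):
    """Return a small feature vector summarizing pitch and loudness."""
    y, sr = librosa.load(wav_path, sr=sr)

    # Pitch (fundamental frequency) track; unvoiced frames come back as NaN.
    f0, voiced_flag, _ = librosa.pyin(
        y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7"), sr=sr
    )
    f0 = f0[~np.isnan(f0)]  # keep voiced frames only

    # Loudness proxy: frame-wise RMS energy.
    rms = librosa.feature.rms(y=y)[0]

    return np.array([
        f0.mean() if f0.size else 0.0,  # mean pitch (Hz)
        f0.std() if f0.size else 0.0,   # pitch variability
        rms.mean(),                     # average loudness
        rms.std(),                      # loudness variability
    ])
```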

Starting points:

Background research on audio-visual emotion analysis.

Let’s go!

Note:

  • It is unnecessary to take additional courses on Signal Processing.
  • The speech signal is transformed into a vector/matrix after an embedding stage (see the sketch below).
  • I might need to get familiar with this processing step.
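
As an illustration of the embedding note above, here is a minimal sketch of turning a waveform into a feature matrix. The notes do not specify which embedding the project will use, so a log-mel spectrogram stands in purely as an example, and the file name is a placeholder:

```python
# Sketch only: one common "speech signal -> matrix" front end.
import librosa

y, sr = librosa.load("utterance.wav", sr=16000)              # placeholder file
mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=64)  # mel spectrogram
log_mel = librosa.power_to_db(mel)                           # shape: (64 mel bands, n_frames)
print(log_mel.shape)
```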
