synthesis of human faces in a human-computer dialogue system, or detecting the presence of subtle micro-expression movements in the face (Rosiani et al., 2018). One real-world application of facial expression analysis is recognizing the expressions of e-learning users, since the lack of learner interaction is a weakness that must be addressed in e-learning (Husdi, 2016). Facial expression recognition has also been applied in the MOODSIC music player application, which plays music according to the user's emotion as detected from their facial expression (Wijaya et al., 2018). Emotion detection through facial expression recognition plays an important role in daily life, for example in responding appropriately to emotional expressions during social interaction, which helps establish and maintain verbal and nonverbal communication with others. Another advantage is the ability to perceive and understand the intention of the interlocutor, which minimizes deception and falsehood. The inability to recognize facial emotional expressions can lead to misinterpretation of other people's emotions and feelings, which in turn leads to ambiguity and inaccurate responses in decision-making (Hartanto, 2019).
Facial expressions are changes in the face that occur in response to a person's emotional state, intentions, or social communication. Face detection is the first step that must be performed in any facial analysis, including facial expression recognition. Face detection aims to determine whether a face is present in an image and, if so, the location and size of each face in the image (Budiyanta et al., 2021). Face detection involves several challenges, such as faces not directly facing the camera, variations in face scale, facial expressions, faces occluded by other objects, and lighting conditions (Prasetyawan, 2020). Many methods can be used to carry out face detection. Jatmoko implemented the Viola-Jones algorithm for face recognition and, across all experiments, obtained an average accuracy of 65% (Jatmoko et al., 2020). Another study, by Putra and Krishna, used the eigenface and Haar cascade classifier methods for face recognition and achieved 63% accuracy at a maximum distance of 3 meters from the camera (Putra et al., 2023). To address these problems, the You Only Look Once (YOLO) method can be used. YOLO reframes object detection as a single regression problem, mapping image pixels directly to bounding box coordinates and class probabilities. With YOLO, the network only needs to look at the input image once to predict which objects the image contains and where they are located (Redmon, 2016).
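To illustrate this single-pass formulation, the sketch below shows how a YOLO model could be applied to face detection in Python. It is a minimal example, assuming the open-source ultralytics package and a hypothetical weights file ("face_detector.pt") trained to detect faces on a hypothetical test image ("classroom.jpg"); the implementation used in this study may differ.

# Minimal sketch of single-pass YOLO face detection (assumed ultralytics API).
# "face_detector.pt" and "classroom.jpg" are hypothetical file names.
from ultralytics import YOLO

model = YOLO("face_detector.pt")  # load a YOLO model assumed to be trained on faces

# A single forward pass over the image yields bounding boxes and class scores.
results = model("classroom.jpg")

for result in results:
    for box in result.boxes:
        x1, y1, x2, y2 = box.xyxy[0].tolist()  # bounding box coordinates
        confidence = float(box.conf[0])        # predicted class probability
        print(f"Face at ({x1:.0f}, {y1:.0f})-({x2:.0f}, {y2:.0f}), "
              f"confidence {confidence:.2f}")

Because detection is posed as one regression over the whole image, the bounding boxes and their probabilities are produced in a single network evaluation rather than by scanning the image with a sliding window.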
Several similar studies that have been conducted previously are summarized below:
Table 1
Previous Research