Speaker 1: My name is Si-Chun, a PhD student at the University of Illinois Urbana-Champaign. I am excited to share our work on inclusive video commenting, introducing Sanmaku for the deaf and hard of hearing. Danmaku makes video learning more interactive with text comments, but it is less accessible to deaf and hard of hearing (DHH) users, who prefer sign language and may have lower reading literacy. To address this, we introduce Sanmaku, sign language Danmaku, which allows users to view and share sign language-based comments that convey visual information through facial expressions and hand movements while watching a video.

Our research evaluated three design styles: real human faces, cartoon, and robotic. The realistic style is unfiltered, the cartoon style is a filtered face with the real torso and hands, and the robotic style is a filtered face and torso. Different styles offer different levels of privacy preservation, which needs to be taken into consideration when designing for online participation in sign language. The complexity here is that facial expression reveals identity; however, it is an important aspect of sign language and cannot simply be masked. We answered the following research questions. RQ1: How does viewing the three styles of Sanmaku impact DHH learners' video-based learning? RQ2: How do DHH learners perceive the creation and sharing of the three styles of Sanmaku?

For our study design, each participant completed two activities. First, they watched a video about augmented reality with error-free captions and Sanmaku, to answer RQ1. Second, they provided their own comments as Sanmaku, that is, ASL comments, or as text comments, to answer RQ2. Then they completed a post-study survey and interviews.

Now for the overall experience. Interview findings showed that while Sanmaku video and captions might compete for DHH learners' attention, they provide complementary visual information that disambiguates captions, which are tedious to follow. Additionally, creating and sharing Sanmaku promoted a sense of learning community. Next, the time taken for traditional text comments is longer than for ASL comments without redos. In other words, commenting in ASL takes less time and is more expressive for DHH users to create, though they would spend more time on the setup for self-representation when sharing realistic Sanmaku, for example choosing the right clothing, and some of them would do multiple redos to improve recording quality in the realistic version. After including the redos, time spent on ASL comments was comparable to text comments.

Then, for the robotic style, participants' survey results show that viewing robotic Sanmaku imposed the highest workload in terms of mental demand, physical demand, and time pressure. According to the interview feedback, the robotic Sanmaku was the least accurate in delivering hand movements and facial expressions. Participants also reported a sense of being left out when viewing robotic Sanmaku. A more detailed piece of feedback on robotic Sanmaku follows: "I don't like a robot avatar at all. It is useless because it is chunky and it is hard to understand its signing. I would not recommend using it at all for ASL. I learned an interesting tidbit from Dr. Annalise Guster, who mentioned how you perceive sign language and texture. That got me thinking about ASL's texture and got me to understand that ASL is very smooth, flexible like water. So an avatar for ASL should be curvy and smooth, whereas a robot is the opposite." The sharing and creating preferences were similar to viewing.
When creating and sharing Sanmaku, the realistic and cartoon styles were far preferred over the robotic one. Turning to the cartoon style: cartoon Sanmaku were favored by participants for their entertaining effect, which aligned with participants' facial expression changes captured by the automatic emotion recognition tool. While offering good understandability, the cartoon style also preserves an acceptable balance of anonymity and understandability for some participants. Additionally, it retains some facial movement, especially the eyebrow movements.

Our findings have the following design implications for DHH-inclusive online learning. First, the differences between the three styles show the importance of expressive facial expressions and hand-body movements in signing avatars, especially the use of space in hand-body movements as fundamental to supporting basic language understanding. These filters and styles underscore the potential of generative AI technology for producing understandable and expressive ASL content.

Second is the expression and self-representation needs of DHH learners. These needs should be better supported in future research, especially since our RQ2 also found that the quality of ASL comments influences participants' willingness to share. Some opted for re-recording to enhance quality, and T4 emphasized the importance of reviewing and improving, particularly facial movements. Therefore, makeup filters that automatically improve recording quality could increase willingness to share by reducing creation effort. For example, filters could make signed comments show more accurate facial expressions, such as more vivid facial and eyebrow movements. Other non-linguistic features could be useful as well, such as lighting improvements.

Third, the more commonly seen public comments made in English text could be translated into ASL for DHH learners, and vice versa, ASL comments could be translated into English for hearing users. With recent advancements in AI that make such generation possible, DHH learners could break the cycle and communicate with users of other languages in their own preferred modality.

We want to emphasize that the parameters we selected for each design aspect are solely for our evaluation studies, which are based on our selected video. We do not claim these parameters generalize to broader contexts. Importantly, future research should investigate how participants' experiences are impacted when multiple comments are presented, as Danmaku by nature allows multiple users to comment on the same video scene. Our findings show the promise of Danmaku in enhancing inclusive video-based learning, especially through its edutainment and social interaction features. Thank you all. We can be reached at the following information.