METHOD OF DEFINING FREE PLACES IN VIDEO CONTENT FOR IMPOSITION TYPHLOCOMMENTS
DOI: https://doi.org/10.15588/1607-3274-2020-4-13

Keywords: audio description, typhlocomments, video content for visually impaired persons, scale, wavelet transform of signals.

Abstract
Context. The problem of accessibility of video content is one of the most pressing problems for people with visual impairments. To solve it, methods and means of constructing, editing, and adapting video content for visually impaired persons are being developed.
Objective. The goal of the work is to develop a method of searching for silent areas in the audio scale for the imposition of typhlocomments and to improve the search modules in the software-algorithmic complex for adaptation of video content for visually impaired persons.
Method. A method of searching for places in video content that are free from dialogue and other important sounds is implemented; these places are used for inserting typhlocomments. Algorithms for the scanning and filtration modules, which process the signal arrays and search for places available for the imposition of typhlocomments, are developed. This allows additional smoothing of the spectrum: smoothing runs first in the forward direction, then in reverse. After computing the correlation of the two smoothed arrays, almost all short spurious signals are removed; for the useful signal, the values smoothed in the forward and reverse directions overlap and therefore remain in the array. Next, the correlation between the smoothed arrays is compared with a set threshold, and if it does not reach the set value, the corresponding element of the array is set to 0. As a result, the algorithm produces a list of places; for each item in the list, the beginning of the silent place and its length are specified.
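The bidirectional smoothing and thresholding step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes exponential moving-average smoothing for the forward and reverse passes and an element-wise product of the two smoothed arrays as the "correlation" that is compared with the threshold; the function name and the `alpha` and `threshold` values are illustrative.

```python
import numpy as np

def find_silence_places(energy, alpha=0.5, threshold=0.05):
    """Return (start_frame, length) pairs of places suitable for typhlocomments.

    energy    -- per-frame signal energy (1-D array)
    alpha     -- smoothing factor (illustrative value)
    threshold -- level below which a frame counts as silence (illustrative)
    """
    n = len(energy)
    fwd = np.empty(n)
    bwd = np.empty(n)

    # Smooth in the forward direction ...
    fwd[0] = energy[0]
    for i in range(1, n):
        fwd[i] = alpha * fwd[i - 1] + (1 - alpha) * energy[i]

    # ... then in the reverse direction.
    bwd[-1] = energy[-1]
    for i in range(n - 2, -1, -1):
        bwd[i] = alpha * bwd[i + 1] + (1 - alpha) * energy[i]

    # Combine the two smoothed arrays: a short spike survives only one
    # smoothing direction, so its product stays small, while a sustained
    # (useful) signal overlaps in both directions and remains large.
    combined = fwd * bwd

    # Frames whose combined value falls below the threshold are treated
    # as silent; collect consecutive silent runs as (start, length) pairs.
    silent = combined < threshold
    places = []
    start = None
    for i, is_silent in enumerate(silent):
        if is_silent and start is None:
            start = i
        elif not is_silent and start is not None:
            places.append((start, i - start))
            start = None
    if start is not None:
        places.append((start, n - start))
    return places
```

For example, an energy profile with ten loud frames, ten silent frames, and ten loud frames yields a single candidate place starting at frame 10 with length 10.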
Results. On the basis of the developed method of finding places to insert typhlocomments and the improved modules, the software-algorithmic complex was tested in the standard configuration and with the additional smoothing module. The first version of the software-algorithmic complex gave the following result: 120 useful pauses. The version with the additional module found 140 useful pauses.
Conclusions. The results of the experiment make it possible to evaluate the developed method and the improved scanning, filtration, and smoothing modules. These modules give a significant gain (about 13%) in searching for places for the imposition of typhlocomments, which improves the adapted video content for people with visual impairments.
License
Copyright (c) 2020 A. B. Demchuk, O. V. Lozynska
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.