
Empowering the visually impaired by revolutionizing tactile text conversion using effective character calibration algorithm

  • ORIGINAL ARTICLE
  • Published:
International Journal of System Assurance Engineering and Management

Abstract

In the context of rapid population growth, the World Health Organization indicated in 2024 that nearly 285 million individuals are visually impaired. Education is widely recognized as essential to global development, yet the visually impaired population faces major hurdles in accessing educational material. To improve content accessibility, especially for visually impaired individuals, two automated algorithms have been developed. The first uses advanced segmentation methods to decompose images into independent elements such as lines, words, and individual characters; this precise decomposition enables correct processing of the image information. The second calibrates the correspondence between the segmented characters and their Braille counterparts, enabling accurate translation of complex visual detail into tactile form. Both algorithms were tested rigorously in two languages, English and Hindi, and two distinct output modes were provided to meet differing user needs. In experimental trials across a wide variety of characters, the algorithms scored highly on every measure. They were also evaluated on varied handwriting styles to simulate different case studies and improve generalizability, and the model performs well in terms of execution time. Performance was assessed over multiple iterations in diverse scenarios; accuracy varied with factors such as the amount of content translated into tactile text and the accuracy of content retrieval from the tactile system. The experimental results demonstrate that the introduced algorithms are highly effective and reliable without applying Optical Character Recognition procedures.
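To illustrate the second stage described above, the sketch below shows one conventional way a segmented Latin character can be mapped to a Grade-1 Braille cell via its dot pattern. This is an illustrative sketch only, not the paper's calibration algorithm: the function names (`braille_cell`, `to_braille`) and the structure are assumptions, while the dot patterns themselves follow the standard English Braille alphabet encoded in the Unicode Braille Patterns block (U+2800–U+28FF, where dots 1–6 map to bits 0–5).

```python
# Illustrative sketch (not the paper's method): mapping segmented Latin
# characters to Grade-1 Braille cells via their standard dot patterns.
BRAILLE_BASE = 0x2800  # origin of the Unicode Braille Patterns block

# Dot numbers (1-6) for the first decade, a-j; later decades derive from it.
_DECADE1 = {
    'a': [1], 'b': [1, 2], 'c': [1, 4], 'd': [1, 4, 5], 'e': [1, 5],
    'f': [1, 2, 4], 'g': [1, 2, 4, 5], 'h': [1, 2, 5],
    'i': [2, 4], 'j': [2, 4, 5],
}

def braille_cell(ch: str) -> str:
    """Return the Unicode Braille cell for one lowercase Latin letter or space."""
    ch = ch.lower()
    if ch == ' ':
        return chr(BRAILLE_BASE)          # empty cell for a space
    if ch == 'w':
        dots = [2, 4, 5, 6]               # w is irregular (added after French)
    elif ch >= 'u':                       # u,v,x,y,z: decade 1 + dots 3,6 (w skipped)
        dots = _DECADE1[chr(ord('a') + 'uvxyz'.index(ch))] + [3, 6]
    elif ch >= 'k':                       # k-t: decade 1 + dot 3
        dots = _DECADE1[chr(ord('a') + ord(ch) - ord('k'))] + [3]
    else:                                 # a-j: decade 1 as-is
        dots = _DECADE1[ch]
    bits = sum(1 << (d - 1) for d in dots)  # dot n -> bit n-1
    return chr(BRAILLE_BASE + bits)

def to_braille(text: str) -> str:
    """Map a run of segmented letters and spaces to a string of Braille cells."""
    return ''.join(braille_cell(c) for c in text if c.isalpha() or c == ' ')
```

For example, `to_braille('hi')` yields the two cells ⠓⠊. A table-driven mapping like this covers the lookup step only; the paper's calibration additionally aligns each segmented glyph image with its tactile counterpart, which a fixed table cannot capture for handwritten input.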


Figures 1–9 and Algorithms 1–3 appear in the full article.



Funding

The authors received no funding for this research.

Author information


Corresponding author

Correspondence to Sushilata Devi Mayanglambam.

Ethics declarations

Conflict of interest

The authors declare that they have no conflicts of interest.

Research involving human participants and/or animals

This manuscript does not contain any studies involving human participants or animals performed by any of the authors.

Informed consent

No human or animal samples were used in this study.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Sindhu, P., Mayanglambam, S.D. Empowering the visually impaired by revolutionizing tactile text conversion using effective character calibration algorithm. Int J Syst Assur Eng Manag (2025). https://doi.org/10.1007/s13198-025-02959-2

