Image analysis and error detection in source software code


Abstract

Subject of study. We study the cognitive mechanisms by which a human analyzes images of source software code and detects errors in it.

Aim. The aim of this study is to determine how professional skill in visually detecting errors in Python source code influences the control of oculomotor processes, so that the identified principles can subsequently be modeled in artificial intelligence systems.

Method. Human eye movements during error detection in images of software code were investigated with eye-tracking technology using the Russian software and hardware suite “Neurobureau,” which supports a complete set of psychophysiological studies. The test subjects completed two tasks: explaining a piece of software code and finding an error in it. Each task comprised 10 stimuli containing syntax-highlighted Python code normalized for length and complexity. Task completion time was not limited. Eight programmers with 1 to 13 years of professional experience participated in the study.

Main results. As professional skill in image analysis increases, humans develop eye movement strategies that allow tasks to be performed more effectively with minimal effort. These strategies involve dividing the code as a whole into individual units relevant to the analysis. We found that experienced programmers exhibited fewer fixations, shorter scanpaths, and larger saccade amplitudes than novice programmers. Notably, the speed of saccades, in particular of the broad searching eye movements made during the code-explanation task, increases with professional experience. We showed that visual error detection is determined mainly by recognition of details in the text that can be found only on the basis of the semantics and grammar of the programming language. We established that reading software code differs both from viewing visual scenes and from reading texts written in a natural language. As in any other type of professional activity, professional experience minimizes the effort this work requires. Eye movement control during code reading is guided by knowledge of the programming language, understanding of the context, and knowledge of the language's semantics and grammar, as well as by a low-spatial-frequency description of the images of lines and words, the same representation that underlies the skill of reading and recognizing the overall configuration of lines and words in a natural language.

Practical significance. The conclusions of this study can be used to describe existing techniques and to create new neuromorphic algorithms, based on strategies developed over the course of human evolution, for generating and correcting software code.
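The abstract reports aggregate eye-movement metrics (number of fixations, scanpath length, saccade amplitude) but does not describe the analysis pipeline. The following is a minimal illustrative sketch in Python of how such metrics could be computed once gaze recordings have been segmented into fixations; the Fixation type and eye_movement_metrics function are hypothetical names introduced here and are not part of the study or the Neurobureau suite.

    import math
    from dataclasses import dataclass
    from typing import Dict, List

    @dataclass
    class Fixation:
        # Gaze position (e.g., in degrees of visual angle) and dwell time of one fixation.
        x: float
        y: float
        duration_ms: float

    def eye_movement_metrics(fixations: List[Fixation]) -> Dict[str, float]:
        # Approximate each saccade as the straight line between consecutive fixations.
        amplitudes = [math.hypot(b.x - a.x, b.y - a.y)
                      for a, b in zip(fixations, fixations[1:])]
        n = len(fixations)
        return {
            "fixation_count": float(n),
            "scanpath_length": sum(amplitudes),
            "mean_saccade_amplitude": sum(amplitudes) / len(amplitudes) if amplitudes else 0.0,
            "mean_fixation_duration_ms": sum(f.duration_ms for f in fixations) / n if n else 0.0,
        }

Under the pattern reported in the results, experienced programmers would show a lower fixation_count and scanpath_length and a higher mean_saccade_amplitude than novices on the same stimuli.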

© 2022 Optica Publishing Group

