[LAGRAM] Gesche Westphal-Fitch, Artificial grammars and visual patterns

June 6, 2016
14:00–16:00

Room 159

Artificial grammars and visual patterns

Gesche Westphal-Fitch

University of Vienna

Human language relies on complex syntactic rules. How these rules are acquired, and which cognitive mechanisms are needed to acquire and apply them, are hotly debated questions. Artificial grammar learning (AGL) is a useful method for exploring which regularities in sensory input can be acquired without explicit instruction, using precisely controlled string sets ("grammars").
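To make the AGL idea concrete, here is a minimal sketch of how strings might be generated from a finite-state artificial grammar. The states, symbols, and transitions below are invented for illustration; they are not the grammars used in the experiments described here.

```python
import random

# Illustrative finite-state grammar (hypothetical, not from the talk):
# each state maps to a list of (emitted symbol, next state) transitions;
# a next state of None marks a legal end of the string.
GRAMMAR = {
    "S0": [("A", "S1"), ("B", "S2")],
    "S1": [("C", "S2"), ("A", "S1")],
    "S2": [("D", None), ("B", "S1")],
}

def generate(max_len=10):
    """Walk the state machine from S0, emitting one symbol per transition.

    The max_len cutoff simply truncates derivations that loop for too long.
    """
    state, out = "S0", []
    while state is not None and len(out) < max_len:
        symbol, state = random.choice(GRAMMAR[state])
        out.append(symbol)
    return "".join(out)
```

In a typical AGL study, participants are first exposed to strings produced this way and are then asked to judge novel strings as grammatical or ungrammatical, without ever being told the rules.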

Formal language theory is a theoretical framework that allows grammars to be divided into categories of varying complexity. Most AGL work has been conducted using finite state languages. However, it is now widely acknowledged that human language requires computations that go beyond the finite state. Examples of crossing dependencies in natural language suggest that human syntactic competence reaches beyond even the next level, that of context-free languages, and it is currently categorised as "mildly context-sensitive". Although crossing dependencies in theory require a more sophisticated memory capacity than centre-embedded (nested) dependencies, some research suggests that they are easier to process, counter to what formal language theory would predict.
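The contrast between the two dependency types can be sketched schematically. In the standard notation, centre-embedded (nested) dependencies pair the first opener with the last closer (a1 a2 ... an bn ... b2 b1), while crossing dependencies pair elements in the same order (a1 a2 ... an b1 b2 ... bn). The two toy generators below illustrate only this abstract pattern, not the actual stimuli used in the experiments:

```python
def nested(n):
    """Centre-embedded (context-free) pattern: a1 a2 ... an bn ... b2 b1.

    Dependencies are nested like matched brackets: a1 pairs with b1,
    which closes last.
    """
    return [f"a{i}" for i in range(1, n + 1)] + [f"b{i}" for i in range(n, 0, -1)]

def crossed(n):
    """Crossing (mildly context-sensitive) pattern: a1 a2 ... an b1 b2 ... bn.

    Dependencies cross one another: a1 pairs with b1, which closes first.
    """
    return [f"a{i}" for i in range(1, n + 1)] + [f"b{i}" for i in range(1, n + 1)]

# nested(2)  -> ['a1', 'a2', 'b2', 'b1']
# crossed(2) -> ['a1', 'a2', 'b1', 'b2']
```

Recognising the crossed pattern requires keeping track of which opener goes with which closer across intervening pairs, which is why formal language theory places it in a strictly more powerful class than the nested pattern.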

Crossing dependencies and centre-embedded dependencies are hard to process in the auditory domain because of the heavy load they place on short-term memory. These restrictions may obscure our picture of the true extent of human processing abilities. Unlike auditory stimuli, stimuli in the visual domain can be presented either sequentially or simultaneously, allowing memory load to be titrated. I will present data from experiments using visual artificial grammars containing crossed and nested dependencies. Results show that whether or not the underlying rule is acquired depends strongly on short-term memory load, and that performance depends not only on the type but also on the number of dependencies that have to be processed. The implications for our understanding of human processing abilities in the visual and auditory domains will be discussed.