Conceptual Abstraction in Humans

PI Justus Piater (Università di Innsbruck)



Duration: 31/05/2024 to 30/05/2027
Funded by: Provincia autonoma di Bolzano - Alto Adige (Autonomous Province of Bolzano - South Tyrol)
Budget: 449,380.00 Euro

Description

Everyday concepts, such as the ability of a glass to hold a liquid or of a plate to support a cake, are learnt effortlessly by children in early childhood through simple interactions with the physical world: everyday activities within an environment they encounter and learn to understand. Bringing this level of effortless learning to artificial systems, both physical (robots) and non-physical (software artefacts), is one of the great challenges of AI. This project seeks to develop a proof-of-concept theoretical framework and implementation for robots to learn conceptual abstractions through autonomous sensorimotor interaction with objects and through observing and interacting with humans. These abstractions will allow the robot to reason about and solve tasks irrespective of their concrete sensory manifestation, to transfer skills to novel tasks, and to communicate with humans on the basis of shared conceptualizations. Inspired by cognitive science, the key innovation and enabling technology is a logical formalisation of such interactions based on image schemas (simple yet abstract notions, such as containment and support, that humans learn in early childhood and use for conceptual and metaphoric thinking) and on affordances (actions an object offers an actor, such as putting the cake on the plate).
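To make the idea concrete, the following is a minimal, purely illustrative Python sketch of the kind of formalisation described above. It is not the project's actual formalism: the object attributes, schema predicates, and affordance names are hypothetical, chosen only to show how image schemas (here, CONTAINMENT and SUPPORT) can license the affordances an object offers an actor.

```python
from dataclasses import dataclass

# Hypothetical toy representation: objects carry schema-relevant
# properties; image schemas are predicates over objects; affordances
# are the actions those schemas license.

@dataclass(frozen=True)
class Obj:
    name: str
    is_container: bool = False   # participates in the CONTAINMENT schema
    is_surface: bool = False     # participates in the SUPPORT schema

def contains(container: Obj, item: Obj) -> bool:
    """CONTAINMENT image schema: only a container can hold an item."""
    return container.is_container

def supports(surface: Obj, item: Obj) -> bool:
    """SUPPORT image schema: only a surface can support an item."""
    return surface.is_surface

def affordances(obj: Obj) -> set[str]:
    """Actions the object offers an actor, derived from its schemas."""
    acts = set()
    if obj.is_container:
        acts.add("pour-into")
    if obj.is_surface:
        acts.add("place-on")
    return acts

glass = Obj("glass", is_container=True)
plate = Obj("plate", is_surface=True)
cake = Obj("cake")

print(affordances(plate))       # the plate affords placing things on it
print(supports(plate, cake))    # the SUPPORT schema holds for plate/cake
```

In a sketch like this, abstraction and transfer amount to reasoning over the schema predicates rather than over any particular sensory description of the glass or plate.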

The specific core aims of the project are therefore fourfold: 

  1. To define the basic ontological structure and terminology for experiential learning and abstraction, and to extend and modify formal logical approaches to image schemas and affordances to enable robotics-specific representation and reasoning capabilities; 
  2. To extract higher-level conceptual descriptions from observed human-robot interaction data, supporting algorithms for the automatic recognition and labelling of actions and plans, and algorithms for orchestrating actions using information about the capabilities of the agents involved; 
  3. To develop a workflow and layered architecture for extracting higher-level conceptual descriptions from sensory data and robotic actions that can be linked to automatically learned as well as human-curated formalisations of image schemas; 
  4. To provide a detailed validation of the approach in a carefully designed, simple robotic world involving interaction with objects and humans, in which transfer learning and the acquisition of conceptual abstractions can be systematically verified.

The key deliverable of the project will thus be a proof-of-concept theoretical framework and implementation providing an interface between robotic environments, the learnability of abstractions derived from the embodied interpretation of interactions in such environments, and logical reasoning approaches for such interactions.

Partners

Lead partner: Università di Innsbruck; partner: Libera Università di Bolzano