Education in the AI era: a long-term classroom technology based on intelligent robotics


Research contribution

The current paper aims to fill the gap exposed in the previous section by proposing a technological didactic tool, called Robobo, developed within the scope of the Robobo Project (The Robobo Project 2024), to teach AI topics at different education levels, from K12 to higher education, aligned with the latest recommendations on AI literacy. This tool is grounded in the idea of using robots to learn about AI, as they perfectly embody the concept of an intelligent agent and can therefore be used to teach all the core topics of the discipline. The Robobo robot contains all the hardware and software features needed to support this goal, together with a range of materials that can be adapted to different educational levels and skills, leading to a long-term educational proposal.

To simplify the terminology of this paper, the application of AI techniques to robots will be referred to here as Intelligent Robotics, and the following sections are devoted to the presentation and analysis of the Robobo Project within this scope.

Theoretical basis

The intelligent agent perspective of AI

The intelligent agent perspective of AI is typically illustrated through a cycle like the one shown in Fig. 2 (green, blue, pink, and orange blocks) (Russell and Norvig 2021). It is based on four main elements: the environment, real or simulated, where the agent “lives” and interacts with humans; the sensing stage, where it perceives information from the environment; the acting stage, where it executes actions on the environment; and the agent itself, which encompasses the different internal elements and processes that allow it to reach goals autonomously. In a more general setup, the agent can be situated in an environment with other intelligent agents, or even people, with which it can communicate, so that a multiagent system arises (displayed in Fig. 2 with three agent cycles). In this scope, robots can be naturally perceived as intelligent agents, embodied and situated in real or simulated bodies and environments (Murphy 2019). In addition, multirobot systems can also be easily presented to students.

Fig. 2

The multiagent system interaction cycle.
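In programming terms, this interaction cycle reduces to a simple sense-decide-act loop. The following minimal sketch (written in Python, with purely illustrative names that are not part of any Robobo library) shows the structure that the topics below build upon:

    # Minimal sense-decide-act loop of a single intelligent agent (illustrative only).
    def sense(environment):
        # Sensing: obtain raw data from the environment.
        return environment["distance_to_goal"]

    def decide(percept):
        # Internal processing: choose an action according to the current percept.
        return "stop" if percept <= 0 else "move_forward"

    def act(environment, action):
        # Acting: the selected action modifies the environment.
        if action == "move_forward":
            environment["distance_to_goal"] -= 1

    environment = {"distance_to_goal": 10}
    action = None
    while action != "stop":
        percept = sense(environment)
        action = decide(percept)
        act(environment, action)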

Taking this scheme as the background approach to the type of AI perspective we aim to teach students in the Robobo Project, the following fundamental topics have been considered: (1) Sensing, (2) Acting, (3) Representation, (4) Reasoning, (5) Learning, (6) Collective AI and (7) Ethical and legal aspects. Most of these topics are those of the AI4K12 initiative (AI4K12 2023), but some modifications have been made to adapt them to ER, and a new one has been included: collective AI.

Sensing the environment

Artificial agents obtain data from their sensors and from direct communication channels (see Section “Collective AI”). In the first case, students must learn the difference between sensing (the information provided by the sensor) and perception (that information placed in context), and how the properties of real environments imply some degree of uncertainty in perception. This is a key concept for real AI systems, as it imposes one of the classical limits of symbolic approaches (Section “Introduction”). In the realm of AI literacy, the fundamentals of cameras, microphones, distance sensors and touch screens must be covered.
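As a simple illustration of this difference (a generic sketch, not tied to any particular sensor), the raw value returned by a distance sensor carries noise, whereas the perception layer places several readings in context before extracting meaning from them:

    import random

    def read_distance_sensor(true_distance=50.0, noise=5.0):
        # Sensing: the raw measurement includes uncertainty.
        return true_distance + random.uniform(-noise, noise)

    def obstacle_detected(threshold=55.0, samples=10):
        # Perception: readings are aggregated and interpreted in context
        # ("is there an obstacle close enough to matter?").
        mean_reading = sum(read_distance_sensor() for _ in range(samples)) / samples
        return mean_reading < threshold

    print(obstacle_detected())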

Agent actions

Students must be aware of the main types of actuators and effectors that AI systems may have, such as motors, speakers, or screens, and how they are found in smartphones, autonomous cars, or videogames. The intelligent agent internally selects the actions to be applied in order to fulfil its tasks through a series of complex processes that will be explained later. Consequently, it is necessary to understand the types of actions that can be executed and their effects on the environment.

The agent

Inside the agent itself, many processes are executed, but following the five big ideas of the AI4K12 initiative, we can reduce them to three: representation, reasoning, and learning.

Knowledge representation

How digital data is stored, and what it represents in the computational system that constitutes the core of the agent, must be addressed. It is a new topic for most students who do not belong to the computer science field. It encompasses one of the main issues in AI, how to represent knowledge, and the differentiation between symbolic and sub-symbolic approaches. This topic includes learning about the representation of images, sounds, speech, and other structures, such as graphs or trees, that make up core aspects of AI.
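A brief illustration of this distinction (generic Python, independent of Robobo): a grayscale image is a sub-symbolic representation, a plain grid of numbers whose meaning only emerges after processing, whereas a graph of rooms is a symbolic representation in which every element has an explicit meaning:

    # Sub-symbolic representation: a tiny 4x4 grayscale image as a grid of intensities.
    image = [
        [0,   0,   255, 255],
        [0,   0,   255, 255],
        [255, 255, 0,   0],
        [255, 255, 0,   0],
    ]

    # Symbolic representation: a graph of rooms, where nodes and edges
    # have an explicit, human-readable meaning.
    building_map = {
        "entrance":  ["corridor"],
        "corridor":  ["entrance", "lab", "classroom"],
        "lab":       ["corridor"],
        "classroom": ["corridor"],
    }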

Reasoning and decision making

Reasoning comprises all the methods involved in action selection and decision making. It is related to computational thinking and problem solving, although it includes specific algorithms and approaches to achieve an autonomous response. Considering the objective that the agent must achieve (the task to solve), it must reason how to fulfil it in an autonomous way, using the available sensory information and the established representation. Function optimization, graph search, probabilistic reasoning or rule-based reasoning are examples of topics to be learned in this scope.
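For instance, a breadth-first search over a symbolic map like the one sketched in the previous subsection is one of the simplest reasoning procedures students can implement (a generic example, not part of the Robobo libraries):

    from collections import deque

    def shortest_path(graph, start, goal):
        # Graph search: reason over a symbolic representation to find
        # a sequence of moves that reaches the goal.
        frontier = deque([[start]])
        visited = {start}
        while frontier:
            path = frontier.popleft()
            if path[-1] == goal:
                return path
            for neighbour in graph[path[-1]]:
                if neighbour not in visited:
                    visited.add(neighbour)
                    frontier.append(path + [neighbour])
        return None

    building_map = {
        "entrance":  ["corridor"],
        "corridor":  ["entrance", "lab", "classroom"],
        "lab":       ["corridor"],
        "classroom": ["corridor"],
    }
    print(shortest_path(building_map, "entrance", "lab"))  # ['entrance', 'corridor', 'lab']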

Machine learning

This is the most popular topic in AIEd, as machine learning (ML) algorithms have evolved and improved notably in recent decades. They allow intelligent agents to create models from data, which can then be used for decision making and prediction. There are three main types of methods in this realm that students must understand: supervised, unsupervised and reinforcement learning. Learning about these methods must be approached in a general way, because new algorithms constantly arise and it is not possible to cover them all. Thus, AI literacy should focus on the general, global process: data capture, data preparation, algorithm selection and parameterization, the training stage, the testing stage, and results analysis.
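A minimal sketch of this general workflow, using the freely available scikit-learn library and its built-in iris dataset (a generic example, independent of Robobo), could look as follows:

    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    # Data capture and preparation.
    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

    # Algorithm selection and parameterization.
    model = DecisionTreeClassifier(max_depth=3)

    # Training stage.
    model.fit(X_train, y_train)

    # Testing stage and results analysis.
    print("Test accuracy:", model.score(X_test, y_test))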

ER is especially suitable for teaching reinforcement learning, because the algorithms behind this approach are designed to operate in agent-based systems, interacting with the environment in a continuous cycle of state, action, and reward. This is a very relevant property of ER, because it is not easy to find practical application cases in which to work on reinforcement learning with students, especially at lower educational levels.
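The state-action-reward cycle can be made concrete with a tabular Q-learning update on a toy corridor world (a generic sketch; in an ER setting, the states and rewards would come from the robot's sensors and task):

    import random

    # Toy corridor: states 0..4, goal at state 4; actions move left (-1) or right (+1).
    actions = [-1, +1]
    q_table = {(s, a): 0.0 for s in range(5) for a in actions}
    alpha, gamma, epsilon = 0.5, 0.9, 0.2

    for episode in range(200):
        state = 0
        while state != 4:
            # Epsilon-greedy action selection.
            if random.random() < epsilon:
                action = random.choice(actions)
            else:
                action = max(actions, key=lambda a: q_table[(state, a)])
            next_state = min(4, max(0, state + action))
            reward = 1.0 if next_state == 4 else 0.0
            # Q-learning update based on state, action, reward and next state.
            best_next = 0.0 if next_state == 4 else max(q_table[(next_state, a)] for a in actions)
            q_table[(state, action)] += alpha * (reward + gamma * best_next - q_table[(state, action)])
            state = next_state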

Collective AI

The general perspective of an AI system illustrated in Fig. 2 implies that individual agents are situated in an environment where they interact with other agents and with people. It is important for students to understand such an AI ecosystem, because in the future we will live surrounded by intelligent agents (smart houses, buildings, or cities) and interact naturally with them, so the fundamentals of these global systems, such as communication modalities, cloud computing, cybersecurity, interface design or biometric data, must be addressed in AI literacy.

Ethical and legal aspects

The ethical and legal aspects of AI are still under study and development, but it is important to foster among students a critical view of AI technologies that is supported by their own knowledge of AI principles, so they can form an independent and solid opinion about the autonomous decision making of these systems. Many questions arise about the types of activities that should be addressed with AI or restricted, the benefits and drawbacks of the commercial use of this technology, and the central role that humans should play in this scope (Huang et al. 2023).

All the previous topics are relevant in the scope of AI, with enough impact to be considered independent research areas. Depending on the educational level, they must be covered in more or less technical detail, so the bibliography that teachers use will differ. A general recommendation is to follow classical textbooks on AI, such as (Russell and Norvig 2021) or (Poole and Mackworth 2010), as the starting point for learning the fundamentals of these seven topics at all levels. Textbooks adapted to pre-university levels are still under development, whereas those for the highest levels are very common.

To sum up, within the Robobo Project, the previous seven topics have been considered to frame AI literacy. Naturally, their particular contents have been adapted to the different education levels, as will be explained later.

Methods and materials

To reach the research goal established in Section “Research contribution”, and considering the theoretical framework explained in Section “Theoretical basis”, the research methodology applied has been a combination of educational and engineering methods. The materials developed in this work were created in the scope of the Robobo Project, a long-term educational and technological initiative composed of three main elements: (1) a mobile robot, (2) a set of libraries and programming frameworks, and (3) specific documentation and tutorials to support teachers and students in AIEd.

To develop the first element, mainly hardware, a classical engineering cycle of conceptualization, design, prototyping, testing, and refinement was applied during the first year of the project, in 2016 (Bellas et al. 2017). The validation was carried out in controlled sessions with students until reliable hardware functioning was achieved. To develop the software components, the Unified Process for software development was applied. This methodology proposes an iterative and incremental scheme, in which the different iterations focus on relevant aspects of the Robobo software. In each iteration, the classic phases of analysis, design, implementation, and testing were carried out, so that by the end of the process, weaknesses and performance issues to be addressed in the next iteration had been detected. Each iteration incorporated more functionalities than the previous one, until the last iteration ended with the Robobo software implementing all the functionalities (Esquivel–Barboza et al. 2020). The software development was mainly carried out between 2016 and 2019, although improvements have continued to be incorporated up to the present day. Finally, for the educational materials, we followed a typical educational design research methodology. Specific teaching units and classroom activities were created and evaluated by studying the learning outcomes they provided to students, as well as the acceptance level of teachers, in an iterative cycle of testing and improvement during real sessions carried out in formal and informal education from 2017 to 2023. To this end, quantitative and qualitative analyses were performed using surveys and direct teacher feedback, leading to a set of materials that have been validated over several cycles (Bellas et al. 2022).

Focusing now on the main materials used in this work, it must be pointed out that the key element of the Robobo Project is the smartphone used in the real platform, which provides most of the features established in Section “Theoretical basis” to support AIEd. The robotic base has a holder on its top (Fig. 3) to attach a standard smartphone. Communication between the smartphone and the robotic base is handled via Bluetooth. Students can use their own smartphones, which reduces the investment cost for schools and promotes a positive use of this type of device at school, as recommended by UNESCO (Global Education Monitoring Report Team 2023)Footnote 1.

Fig. 3: Representation of the Robobo sensors and actuators.

Black typeface represents those of the smartphone and blue ones those of the platform. Left: General view of the front of Robobo. Top right: Detailed view of the pan-tilt unit. Bottom right: Representation of the back of Robobo, with LEDs and infrared sensors.

Robobo: the real robot

The electronic and mechanical design of the mobile physical platform is detailed in (Bellas et al. 2017). The core concept behind the Robobo robot is to combine the simple sensors and actuators of the platform with the technologically advanced sensors, actuators, computational power, and connectivity that the smartphone provides. From a functional perspective, the different components of the Robobo hardware (displayed in Fig. 3) can be classified into five categories: sensing, actuation, control, communications, and body.

Robobo sensors:

On the base:

  • Infrared (IR): The base has 8 IR sensors, 5 at the front and 3 at the back, used for distance detection, as well as for avoiding collisions and falls from high surfaces.

  • Encoders: They provide the position of the motor shaft. They can be used for odometry, to correct the trajectory of a movement, or simply to check whether the robot is actually moving.

  • Battery level: It indicates the remaining autonomy of the robot, which is useful for long-term experiments or for challenges related to energy consumption.

On the smartphone:

The characteristics of the smartphone’s sensors change depending on the model, but these are the most common:

  • Camera: Perhaps the most relevant Robobo sensor for AIEd, due to the number of applications it supports, such as colour detection, object identification, or face detection, among others.

  • Microphone: It allows the robot to detect specific sounds, ambient noise, and of course the user’s voice for speech recognition.

  • Touch screen: This sensor detects different types of touches, such as taps or flings, which are typically performed by users to enable natural HRI.

  • Illumination: This sensor provides the ambient light level, which is useful in different applications to adapt the robot’s response accordingly, or for energy-saving purposes.

  • Gyroscope: It identifies the orientation of Robobo in space. It can be used for navigation, map tracking, and detecting inclinations and changes in slope.

  • Accelerometer: It measures Robobo’s acceleration, identifying the real movement of the robot even when no actuation is performed by the hardware.

  • GPS: It provides the global position of the robot using this popular technology, although it only works outdoors.

Robobo actuators

On the base:

  • Motors: Perhaps the most relevant Robobo actuators. Two motors are attached to the wheels for navigation purposes. Two more motors are in the pan-tilt unit, enabling the horizontal and vertical rotation of the smartphone, which gives Robobo a wider range of possible movements. Students usually perceive these as “head” movements, increasing the personality of the robot’s expressions.

  • LEDs: They are used to transmit simple information to the user in a visual way, for instance, a warning when the battery is low, or a different colour depending on the distance to the walls.

On the smartphone:

The characteristics of the smartphone’s actuators change with the model, but again, some common ones can be found:

  • Speaker: It can be used to play sounds or produce speech, which is fundamental in natural interaction.

  • Torch: An adjustable light that is useful in many cases, for instance, to increase the illumination of a scene and improve the camera response.

  • LCD screen: It is very useful for displaying visual information. Usually, the screen shows Robobo’s “face”, which can be changed to display different emotions.

Robobo control unit

Robobo’s control unit is the smartphone. It runs all the processes related to receiving information from the base and sending commands to its actuators through Bluetooth. In addition, it receives and sends commands to an external computer, as detailed in Section “Software and development tools”. Finally, it runs some algorithms onboard, related to image and sound processing. The computational power of smartphone models can vary greatly, but most existing models have processors with more capability than required for most of the educational challenges, as has been tested with students.

Robobo communication system

Current smartphones are equipped with WiFi, 5G and Bluetooth connections. The first is the most relevant in the educational scope, as it allows the robot to be connected to the internet through the school’s network, which is very common nowadays. As a consequence, students can use information taken from internet sources in their programs, such as weather forecasts, news or music, and they can also carry out direct communication by sending and receiving messages or emails.

Robobo body extensions

Finally, it must be highlighted that the Robobo base has a series of holes in its lower part to attach different types of 3D-printed accessories (Fig. 4). Only the holes are provided, serving as structural support for the accessories and leaving the design completely free to the users, which opens the possibility of multiple solutions to the challenges proposed to students while learning AIEd concepts under a STEM methodology.

Fig. 4: Top left: Lower part of Robobo with the holes for attaching customized accessories.

Top middle: Example of 3D-printed accessories. Top right and bottom: Different applications that can be performed with Robobo and the accessories, such as pushing, drawing, or even developing an outdoor version with bigger wheels.

Software and development tools

The Robobo software includes an entire ecosystem of applications, developer/user libraries, and simulators that allow easy adaptation to different learning objectives (Esquivel–Barboza et al. 2020). The software has been designed following a modular architecture that facilitates the addition of new capabilities in the future, as well as the configuration of which of these capabilities are available in a particular learning context. Therefore, it provides the technological foundations and functionalities that make Robobo an adaptable learning tool for different levels which is also in continuous evolution.

The core software runs on the smartphone and provides all the intrinsic sensing, actuation, and control capabilities of the robot. It also provides standard programming interfaces for local or remote access to these capabilities. Having the core software of the robot run on a regular smartphone allows the hardware (the smartphone) to be upgraded in an almost unlimited way, making it a long-term investment for educational institutions.

Figure 5 shows the software architecture from the perspective of a user of the Robobo platform. In this context, a user is a student who uses the robot to solve AI tasks by means of programming. However, teachers can also be considered users, focused on designing tasks for the students (including the necessary teaching units and physical or simulated environments). As can be seen in the figure, depending on the educational level, a student can use different software libraries and programming languages to develop the challenges with Robobo. For working in the real environment, the programming computer must be connected to the same local network as the robot (no cables are required). At any given time, students can choose to run their program in the simulated environment or on the real robot. Moreover, the robot is entirely functional in a context where no Internet connection is available (a restricted local network), since even functionalities such as voice recognition have an implementation suitable for offline operation. Finally, it should also be noted that the problems to be solved usually involve a single robot interacting with its environment, but several robots collaborating to solve a common task are also supported.

Fig. 5

Representation of the software architecture of the Robobo platform from the user’s point of view.

What makes Robobo a suitable platform for long-term AIEd are the functionalities it provides and how they have been adapted to different skills and educational levels, together with the set of teaching units designed to exploit them. A high level of semantic and conceptual homogeneity is always maintained in order to facilitate a progressive learning experience. Thus, at K12 levels, we propose the use of a block-based programming model, supported by Scratch, with a limited set of available functionalities that nevertheless allow experimenting with a multitude of AI topics. For K16 and higher education levels, the use of the Python programming language is proposed, along with a growing set of sensing, actuation and control functionalities. At this point, it is important to mention that the use of Python is justified by the fact that it is currently the language of preference for data scientists and artificial intelligence developers (TIOBE Index 2024), and by the abundance of freely available AI libraries and tools that can be used together with the software provided by Robobo.

Table 1 shows a simplified view of the main functionalities provided by the Robobo software platform, classified into three categories: perception (including self-perception and sensing of the environment), actuation, and control. As can be seen, even at the lowest level, the robotic platform provides a range of capabilities that are not usually available in other robots commonly used for educational purposes. Thus, from an early learning stage, students can explore fundamental AI concepts by creating adaptive behaviours supported by the variety of possibilities for sensing the environment (through visual, acoustic, or tactile inputs) while interacting with it and even expressing internal emotional states (using speech, sounds or facial expressions).

Table 1 Overview of the main capabilities provided by the Robobo software platform, indicating the target education level as well as which programming language can be used.

In the Scratch 3 environment, an extensive set of new blocks has been defined that allows students to work with all the available Robobo functionalities. Figure 6 displays a subset of the programming blocks that provide access to some of the sensing capabilities, as well as a simple program that sets the speed of the robot in response to its environment (namely, depending on the distance at which a green object is detected using the camera).

Fig. 6: Robobo programming example using Scratch 3.

Left: a subset of the new programming blocks we have designed for Robobo. Right: a simple program that uses some of these new blocks together with the generic Scratch blocks.

Regarding the Python language, Robobo provides advanced capabilities that allow teachers to devise an almost unlimited number of tasks to solve in different scenariosFootnote 2. For example, they can propose challenges in the scope of computer vision, autonomous driving, or natural interaction, which can be tackled in a simple fashion. Furthermore, it is always possible for students to combine these intrinsic capabilities of Robobo with other AI technologies and tools that are common in the field and available through external libraries.

To briefly exemplify how to program using the Python interface, a code fragment is included in Fig. 7. First of all, it is necessary to import the classes that provide access to the functionalities we want to use. Likewise, if desired, any external library commonly used in the field of AI, such as TensorFlow, OpenCV, or Scikit-learn, can be imported. Then, remote communication with the Robobo robot (in a real or simulated environment) is established. At this point, it is possible to start programming the behaviour of the robot using the functions available in our Robobo.py library. It is important to clarify that if we want to switch the execution of our program between the real robot and the simulator, it is only necessary to modify the line of code that establishes communication with the robot (Fig. 7).

Fig. 7

Simple Python code illustrating how to program a controller for Robobo that can be executed both on a real robot and on a robot in a simulated environment.
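For reference, the general pattern described above looks roughly as follows. This is a sketch, not a verbatim reproduction of Fig. 7: the import path, method names and IP address are illustrative placeholders, and the exact API is given in the library reference mentioned in Section “Documentation and tutorials”.

    from Robobo import Robobo          # the Robobo.py library; the exact import path may differ
    # import cv2, tensorflow, sklearn  # optional external AI libraries can be imported as well

    # Establish remote communication with the robot. Switching between the real robot
    # and RoboboSim only requires changing this line (the address of the smartphone or
    # of the machine running the simulator).
    robobo = Robobo("192.168.1.20")    # hypothetical local IP address
    robobo.connect()

    # Example behaviour: move forward until the front IR sensor reports a nearby object.
    while robobo.readIRSensor("Front-C") < 200:   # illustrative sensor identifier and threshold
        robobo.moveWheels(20, 20)                 # both wheels forward at moderate speed
    robobo.stopMotors()
    robobo.sayText("Object reached")
    robobo.disconnect()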

As shown in Fig. 5, a dedicated simulator, called RoboboSim, was developed and adjusted to the needs of schools and studentsFootnote 3. It was built on top of the Unity engine and designed to be easy to use, requiring minimal training. Students and teachers only need to download the application, choose one of the available virtual environments (worlds) and start a remote connection with the simulated robot, in the same way as with a real robot. The user interface requires minimal configuration and follows a video-game-like design aimed at being user friendly for young students. Students can solve different challenges in the available worlds using Scratch 3 or Python, since both are supported by RoboboSim.

RoboboSim can be configured with two levels of realism. (1) The standard level includes basic physical modelling of the robot (weight, friction, motors). The real robot’s response was empirically characterized, and different models were created. Therefore, at this realism level, RoboboSim is reasonably faithful to the real robot, and programs developed at this level do not require much adjustment to run on the real platform. (2) The simplified realism level does not perform physical simulation, and the response is deterministic. It is recommended for students who are starting with robotic simulations, or for those who prefer to focus on programming concepts, facilitating the accomplishment of the proposed challenges.

A key property of RoboboSim in the realm of autonomous robotics is that it includes an optional random mode of operation. In this mode, particular objects in the simulation appear in slightly different positions each time the environment is restarted. The purpose of this property is to force students to develop more robust programs, leading to autonomous behaviours that can adapt to changes in the environment.

Educational resources

Two main types of educational resources have been developed and tested over recent years in the scope of the Robobo Project. First, Teaching Units (TU), which include specific activities, guides, and solutions for learning about different AI topics. Second, documentation, which allows students and teachers to use the robot in a more independent fashion, adapting it to their particular needs and exploring new possibilities.

Teaching units and lessons

In the case of the TUs, the Robobo Project has been mainly applied to formal education. Consequently, these materials have been designed to be aligned with the latest AI literacy recommendations introduced in Section “Introduction” and to include the AI topics explained in Section “Theoretical basis”, although they can also be used as independent lessons in informal education, such as extracurricular activities, workshops, specific AI courses or summer courses. The main target of these units has been the teachers, who are the main actors in the real introduction of AI in education.

For secondary school and high school (from 14 to 18 years old), 7 TUs based on Robobo have been developed in the scope of the AI+ educational project (Bellas et al. 2022), specifically focused on intelligent robotics. In higher education, more specialized teaching units have been developed for Vocational Education and Training (VET) through the AIM@VET project (Renda et al. 2024), and others for the different courses on intelligent robotics taught at the University of A Coruña (Llamas et al. 2020). Table 2 includes a set of representative TUs. It aims to provide an overall view of the type of activity that can be implemented in class with this robot. They have been organized in order of increasing complexity, although they could be adapted to increase or decrease it.

Table 2 Different challenges proposed in the realm of the Robobo Project, indicating the AI topics addressed in each, the educational level for which we consider it appropriate, the Robobo functionalities involved, and the link where all the information is available for download.

Documentation and tutorials

The Robobo Project includes solid documentation adapted to different educational levels, including manuals, reference guides and programming examples. Table 3 shows a description of these support materials and the links to them.

Table 3 Documentation of the different components of the Robobo Project mentioned in this article.

All the documentation is open access and available on the Robobo WikiFootnote 4. New users should start by downloading and configuring RoboboSim. It is recommended to try the first programs using the Scratch framework, for instance, by trying the available sample projectsFootnote 5. A next step would be to try them on the real robot, which requires following the documentation about the Android app and the platform’s initial configuration. Finally, students at higher levels should review the Python documentation. It includes a complete library reference, as well as specific documents for the most advanced features, such as object recognition, lane detection, or video streaming.

Teaching methodology

The methodology used in all the TUs of the Robobo Project is based on Project-Based Learning (PBL), as it best fits the STEM approach in which educational robotics has shown clear advantages. Each TU presents a challenge that students must solve with the robot, organized in teams or individually, depending on the specific learning objectives and the teacher’s criterion. The challenge addresses a real problem, which must be solved in a real or simulated environment, as established by the teacher depending on the learning focus (dealing with the real robot implies spending more time on technical issues). To apply the PBL methodology in the Robobo TUs, students carry out the five typical phases of an engineering project:

  1. Problem analysis and requirements capture: understanding the problem to solve and its relevance.
  2. Organization and planning: dividing the whole problem into subproblems.
  3. Solution design: programming the solution.
  4. Solution validation: testing the solution on the robot.
  5. Presentation of results and documentation: showing the final behaviour, submitting the solution, and answering any questions from the teacher.

The teacher’s role in each of these steps is very relevant, as he or she must monitor the students’ progress and answer their questions, which can be open-ended since this type of project allows for different valid solutions. The teacher must evaluate the students’ final solution, which can be based on the robot’s performance, on the code, on the progress and attitude shown, or on a specific exam or test. In terms of specific background on AI topics, teachers should have previous training according to the educational level, which is out of the scope of the Robobo Project. In the case of secondary school teachers, to support them in this new discipline, the TUs developed in the realm of the AI+ project include a teacher guide, with a recommendation about the theoretical contents to be taught, a possible organization of the TU into activities and tasks, and all the code solutions to the challenge. As an example, Fig. 8 displays the organization of a TU focused on natural interaction with Robobo, as included in the teacher guide. It can be observed how the final goal is divided into activities and tasks, including an initial stage for theoretical contents.

Fig. 8

An example of TU organization that is provided to teachers in a project based on Robobo.

