Developments and Achievements

The project was structured into ten main tasks. The following sections report the main activities performed and the main outcomes of each task.


User needs, design and specifications, and scenarios definitions

The project encompasses the development of several prototypes: the Orthodia system, an HMI based on BCI and eye-tracking data, and electric vehicles.

A user-needs and scenarios definition task ran in parallel with all the other tasks, both providing inputs and receiving feedback. In this task, user needs and case-study scenarios for the different aspects of the project were proposed, developed and detailed. In summary, studies were carried out to define the specifications and functionalities of the prototypes, covering:

  • User needs and definition of application scenarios (in cooperation with APCC and rehabilitation services);

  • Mechanical design and specifications of mobile platforms;
  • Power drive and regenerative system specifications, battery capacity, recharging methodology for a given performance and range.

Activity 1.1 – Assessment of user needs

Concerning the evaluation of user needs, a characterization of the mobility, accessibility and safety of individuals with severe motor impairment, such as users with Cerebral Palsy (CP), was conducted at the Coimbra Cerebral Palsy Association (APCC). The assessment was carried out on a sample of 16 individuals with CP. Each individual completed an evaluation protocol comprising a form with clinical and sociodemographic data and a questionnaire on mobility, accessibility and safety, with particular focus on the use of powered wheelchairs.

Main result: The main limiting factors include building/vehicle access, difficulty in reverse driving and lack of safety. The most valued features of a powered wheelchair are comfort and structure, easy navigation and wheelchair control, and safety. Lack of safety outdoors was a relevant limiting factor. Almost all individuals requested improvements to the powered wheelchair, most of them related to safety or to navigation problems.

Activity 1.2 – Assistive Navigation System according to user needs

From the user needs assessment we concluded that the lack of safety and the difficulty in navigating powered wheelchairs were, in general, the most limiting factors, and also the improvements most often suggested (57.14%). An Assisted Navigation System (ANS) based on collaborative control, able to deal with the reported limitations, was improved and tested both in simulated and in real conditions.

Activity 1.3 – Orthodia systems specifications

For the Orthodia component, three prototype systems were developed: two for human gait characterization (a vision system and a pair of instrumented shoes) and one for human gait rehabilitation (an electrically assisted pedal). Before implementing the prototypes, however, it was necessary to specify the requirements of each system in order to achieve its main functionalities.

For the vision system: since the main goal is to characterize each user's gait by extracting the angle profiles of the main human joints, and given that the users will mainly be persons with low mobility, the main aspects considered were:

  • create an inexpensive and portable system;
  • reconstruct the main human limbs in 3D;
  • use two or more video cameras able to acquire at least 30 fps;
  • acquire joint angles;
  • characterize users at different walking speeds;
  • create a set of markers that are easy to attach to the user.

For the instrumented shoes: the instrumented shoes also characterize human gait, but through the center of pressure (CoP) and the vertical and horizontal forces exerted by the feet. For the development of these innovative shoes, the following requirements were identified:

  • create a wireless solution;
  • measure vertical and horizontal forces;
  • allow acquisitions at 100 Hz;
  • allow integration with an Android platform;
  • adjust to different foot sizes.

For the electrically assisted pedal: the goal is a faster and more efficient rehabilitation of persons with mobility problems. To that end, an electric motor assists the user's limbs with different profiles, according to the severity level on each side (right or left, and the corresponding percentage of severity). The main requirements identified were:

  • measure the exerted force in both pedals;
  • measure the user heart rate;
  • allow different types of control;
  • easy use by wheelchair users;
  • intuitive user interface that replicates a 3D virtual outdoor scenario.

Together with the CHUC rehabilitation team, case studies were outlined in which the developed prototypes are to be used, aiming at a more efficient diagnosis of deviations from the normal gait pattern. These studies include the comparison of bionic and hydraulic prostheses, and the assessment of 11 persons who underwent orthopedic surgery in the past two years, to determine whether these patients have fully recovered.

Activity 1.4 – EcoAMobility component: autonomous electric vehicle

During this project an autonomous electric vehicle, the ISRobotCar, was designed and implemented. The ISRobotCar is an electric vehicle platform targeted at research in the following fields: traction, energy storage, vehicle-to-vehicle (V2V) communication, localization and perception. To achieve this objective, a modular architecture (in both hardware and software) was designed, enabling the integration of components related to these research subjects (battery management system, traction system, environment perception, accurate navigation system, etc.).


Orthodia: Software development for data pre-processing and feature extraction

Activity 2.1 – Vision system for human gait acquisition and characterization

The purpose of this system is to determine the angle patterns of each user's joints during walking and then relate them to standard profiles, in order to support a pathological diagnosis. After the system was developed, tests were first performed with people with healthy gait to obtain the typical patterns of human walking, and then with people with specific mobility problems to obtain the corresponding pathological patterns. A user's walking pattern can thus be compared to the normal pattern and to the pattern of each disease, associating the user with the pathology most similar to his or her pattern (and the percentage of similarity), or concluding that the user walks normally. Another possible application is gait-recovery monitoring: a patient with a motor disability undergoing physiotherapy can be examined with the system at several points during recovery, to assess whether there is progress and whether the treatment is actually effective. The following sub-tasks were undertaken:

  1. Vision-based acquisition system specification and implementation;
  2. Calibration of the system's cameras;
  3. Camera alignment;
  4. Specification of marker placement on the user;
  5. User characterization menu;
  6. Pelvis calibration;
  7. Data processing and data visualisation software.
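As an illustration of the data-processing stage, the angle at a joint can be recovered from three 3D marker positions: two adjacent limb segments meeting at the joint. The function below is a minimal sketch of that computation under this assumption, not the project's actual processing pipeline:

```python
import numpy as np

def joint_angle(proximal, joint, distal):
    """Angle (degrees) at `joint` formed by the two adjacent limb segments,
    each endpoint given as a 3D marker position."""
    u = np.asarray(proximal, float) - np.asarray(joint, float)
    v = np.asarray(distal, float) - np.asarray(joint, float)
    cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    # clip guards against rounding just outside [-1, 1]
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
```

Applying this per frame to, e.g., hip–knee–ankle markers yields the knee-angle profile over the gait cycle, which can then be compared against the stored standard profiles.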

Activity 2.2 – Instrumented shoes for human gait acquisition and characterization

The developed product is a pair of shoes instrumented with three-axis force sensors, which measure the vertical, lateral and longitudinal forces and the center of pressure exerted on the shoes. These features are generally absent from currently marketed products in this area, which makes the developed product innovative. The product is designed to allow the characterization of human gait based on ground reaction forces, and was developed taking into account the dynamics, mobility and flexibility characteristic of human gait. The analysis of human walking is of great interest in several areas: biomechanics, interactive computer games, sport, and physical and rehabilitation medicine. This versatility translates into strong economic viability, as these areas invest heavily in new technologies. In rehabilitation, the product's potential customers in Portugal include about 500 centers of Physical Medicine and Rehabilitation. The product is also interesting for applications dedicated to the elderly (checking gait stability and its evolution). Given its simplicity of use and affordable price, it can be purchased for use in gyms, nursing homes or at home, for physical rehabilitation, for physical training and, in the future, for interactive computer games.

As with the vision system, tests were performed with healthy and impaired persons, using the same pattern-comparison procedure to conclude whether a user's gait is normal, and its percentage of similarity to normality.

The following main software packages were developed:

  1. Gait acquisition software module;
  2. Gait characterization software module;
  3. Normal gait profile generation;
  4. Gait patterns comparison software.
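The center of pressure used by the characterization module can be illustrated with a minimal sketch: given the vertical force read by each sensor and each sensor's known position under the sole, the CoP is the force-weighted average position. The sensor layout and units below are assumptions for illustration only:

```python
def center_of_pressure(forces, positions):
    """CoP under one foot: force-weighted average of sensor positions.
    forces: vertical force readings (N); positions: (x, y) of each sensor (m)."""
    total = sum(forces)
    if total == 0:
        return None  # foot off the ground: CoP undefined
    x = sum(f * p[0] for f, p in zip(forces, positions)) / total
    y = sum(f * p[1] for f, p in zip(forces, positions)) / total
    return (x, y)
```

Tracking this point at 100 Hz over a stride produces the CoP trajectory, one of the gait features the comparison software evaluates.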


Orthodia: prototype of the automatic device for gait rehabilitation

Activity 3.1 – Static rehabilitation pedal

To achieve a gradual recovery of the mobility of the lower limbs (and, potentially, the upper limbs) of persons with this problem, a rehabilitation exercise pedal was developed. It consists of a stationary exercise bicycle modified by coupling a motor to the pedals, inserting force sensors into both pedals, and adding a heart-rate monitor.

Activity 3.2 – Instrumented shoes prototypes

Two instrumented shoes prototypes were developed during the project. The second version was proposed to Ignition Exchange InovC and approved in September 2014. In partnership with Active Space Technologies, this new prototype was developed with a more appealing and robust design and more compact hardware. Thus, unlike the first prototype, no external communication box is needed: all the hardware is installed inside the footwear.


P300-based BCI and neurofeedback applications

Activity 4.1 – Self-paced control of a P300-based BCI-speller using one-time calibration

A self-paced P300-based BCI approach that relies on a short user-specific calibration was developed. The variability across BCI sessions conducted on different days was analyzed in three different domains: raw P300 event-related potentials, feature space, and classifier projections. A core methodology based on statistical spatial filtering was tested, and two approaches were implemented to tackle the performance decay: distance threshold adjustment (DTA), to dynamically adapt the separation boundary between target and non-control classes, and a semi-supervised self-training (SSST) approach for automatic recalibration using unlabeled samples collected during online usage.

Significance: The combination of self-paced control and one-time calibration is a relevant issue to improve the usability and acceptance of BCI. The results achieved with the proposed methods give positive indicators that these two issues can be simultaneously attained without performance loss for most subjects.
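The distance-threshold idea behind DTA can be made concrete with a toy sketch: the speller emits a symbol only when the best classifier projection clears a threshold (otherwise it reports a non-control state), and the threshold is nudged after errors. The update rule below is a hypothetical simplification, not the project's actual DTA algorithm:

```python
import numpy as np

def classify_self_paced(scores, threshold):
    """Self-paced decision rule: pick the symbol with the highest classifier
    projection, but report non-control (None) when even the best score stays
    below the distance threshold."""
    best = int(np.argmax(scores))
    return best if scores[best] >= threshold else None

def adapt_threshold(threshold, best_score, user_intended, rate=0.05):
    """Toy DTA update (hypothetical): tighten the boundary after a false
    activation, loosen it after a missed command."""
    if user_intended and best_score < threshold:
        return threshold - rate   # missed a real command: lower the boundary
    if not user_intended and best_score >= threshold:
        return threshold + rate   # false positive: raise the boundary
    return threshold
```

The real DTA operates on classifier projections across sessions; this sketch only shows why a movable boundary lets one-time calibration survive day-to-day variability.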

Activity 4.2 – Reliable gaze independent hybrid BCI combining visual and natural auditory stimuli

New method: A hybrid visual and auditory P300-based BCI (HVA-BCI) combining simultaneous visual and auditory stimulation was developed. Auditory stimuli are based on natural meaningful spoken words to increase stimuli discrimination and to decrease user’s mental effort in associating stimuli to the symbols. The visual part of the interface is covertly controlled ensuring gaze-independency.

Results: Four conditions were experimentally tested by ten healthy participants: visual overt (VO), visual covert (VC), auditory (A), and hybrid visual covert and auditory (HVA). The average online accuracy obtained with the hybrid approach was 85.3%, more than 32% above the VC and A approaches. The questionnaire results indicate that the HVA approach was the least demanding gaze-independent interface. Interestingly, the grand average of the P300 evoked by the hybrid approach coincides with an almost perfect sum of the P300s evoked by the VC and A tasks.

Comparison with existing methods: the proposed fully gaze-independent P300-based HVA-BCI achieves 32% higher online accuracy than unimodal gaze-independent approaches based on visual covert and on auditory control.

Conclusions: The proposed approach shows that the simultaneous combination of visual covert control and auditory modalities can effectively improve the performance of gaze-independent BCIs.

Activity 4.3 – Developments on BCI-based robotic wheelchair steering under human-machine shared control

ROBCHAIR is a brain-actuated Robotic Wheelchair (RW) under development at ISR – University of Coimbra since the mid-1990s. The latest research developments aimed to offer a degree of autonomy to users with severe motor disabilities who are unable to steer the RW by means of a conventional Human-Machine Interface (HMI). A Brain-Computer Interface (BCI) is a plausible HMI for these users, since it does not require any muscular activity. The RW relies on a new collaborative-control-based assistive navigation system, designated CollabNAV, and is now able to operate in real, dynamically changing environments. We experimentally tested a self-paced P300-based BCI as the system's HMI, allowing the user to asynchronously issue two types of commands: global goals in a map, and local goals in the form of discrete steering commands. Although RobChair is able to navigate autonomously between goals in the map, the proposed paradigm aims to enhance the user's capabilities, allowing him/her to freely decide where to move at any point of the navigation task. Based on the experiments carried out with four able-bodied users, we can conclude that the proposed collaborative navigation system allows safe navigation of the robotic wheelchair in real indoor environments. It is also able to deal with sparse steering commands provided asynchronously through a self-paced P300-based BCI, showing a higher overall task performance compared with alternative synchronous P300-based approaches.

Activity 4.4 – Application of neurofeedback BCI to serious games – Tests with a plug-and-play system towards wearable setups

A study on presenting neurofeedback to users while retaining a gamification approach was conducted in the MSc thesis of Ivo Baptista [DEEC-UC, 2015]. In this study, the stimulus response was obtained through Steady-State Visually Evoked Potentials (SSVEP): in SSVEP, each direction or instruction can be associated with a repetitive stimulus blinking at a predetermined, accurate frequency. When a user looks at a stimulus, neuronal signals in the visual cortex synchronize with the stimulus being gazed at. This allows a continuous frequency-domain analysis of the signal, as opposed to the time-domain analysis undertaken in the case of P300.

BCI systems based on electroencephalography (EEG) are of special interest, since EEG is a noninvasive technique with high temporal resolution, low cost and easy acquisition. However, the EEG signal has a low signal-to-noise ratio (SNR) and low spatial resolution. In this work, a computer game controlled via BCI was developed. The control is done through the detection of steady-state visually evoked potentials (SSVEP), which are evoked as a response to a repetitive visual stimulus. The game consists of a spaceship traveling along a track where obstacles appear; the player's objective is to avoid those obstacles by looking at the correct stimulus. The commands considered were UP, DOWN, LEFT and RIGHT, each with an associated visual stimulus of a distinct frequency. To detect the user's desired command, feature extraction and classification methods were implemented. Online tests with five participants were performed; for each participant, a gaming session was run with each classification algorithm. The online system attained a mean classification success rate of 89.5% with the choice of the feature with the highest value, and 91% with the Bayes linear discriminant analysis classifier. Comparative tests between single graphic stimuli and pattern-reversal stimuli, performed with four of the participants, concluded that pattern-reversal stimuli are the better choice for controlling the game.
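The frequency-domain detection underlying SSVEP control can be sketched as follows: compare the spectral power of an EEG segment at each candidate stimulus frequency (plus its first harmonic) and pick the largest. This is the simplest possible SSVEP feature, shown here only to make the principle concrete; the actual game used more elaborate feature extraction and classification:

```python
import numpy as np

def detect_ssvep(segment, fs, stim_freqs):
    """Return the stimulus frequency with the largest spectral power
    (fundamental plus first harmonic) in a windowed EEG segment."""
    n = len(segment)
    spectrum = np.abs(np.fft.rfft(segment * np.hanning(n))) ** 2
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    power_at = lambda f: spectrum[np.argmin(np.abs(freqs - f))]
    scores = [power_at(f) + power_at(2.0 * f) for f in stim_freqs]
    return stim_freqs[int(np.argmax(scores))]
```

With four stimuli at distinct frequencies, the detected frequency maps directly to one of the UP/DOWN/LEFT/RIGHT commands.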


Multimodal HMI fusing BCI eye-tracking/head-tracking data

Activity 5.1 – Extracting and classifying EOG and head pose information

The ability of an intelligent system to recognize the user's emotional and mental states is of considerable interest for human-robot interaction and human-machine interfaces. In this activity, an automatic recognizer of facial expressions around the eyes and forehead, based on electrooculographic (EOG) signals, was developed. Six eye movements, namely up, down, right, left, blink and frown, are detected and reproduced in an avatar, aiming to analyze how they can contribute to the characterization of facial expression. The recognition algorithm extracts time- and frequency-domain features from the EOG, which are then classified in real time by a multiclass LDA classifier. The offline and online classification results showed sensitivities of around 92% and 85%, respectively. This research work is part of a wider system in the context of human-robot interaction and human-machine interfaces: the overall system integrates EEG, EOG, EMG and GSR, and also aims to characterize the user's mental state.

Activity 5.2 – Development of an eye-tracking algorithm

While an eye-movement detector can indicate a user's gaze direction, a multimodal HMI should also be able to detect and measure individual saccades and their amplitudes in order to provide accurate head-pose information. For this purpose, a training-free algorithm for measuring ocular movements was developed, based on the characteristics of the signal's first and second derivatives.

Activity 5.3 – EOG and EMG for facial expression detection towards emotion recognition

While the two previous activities concern only the processing of signals acquired from eye and head movements, translating a person's emotional states into states usable by, e.g., a Human-Machine Interface also entails classifying signals obtained from the facial features involved in expressing emotions. The developed approach was based not on the more popular computer-vision techniques but on electromyographic (EMG) signal processing and classification. Since EMG signals are generated upon muscular activation, they can be used to pinpoint which muscles a user employs to produce a particular facial expression (or any other type of muscular movement). As humans use different facial muscles to convey different emotional states, a classification method can be built on these premises. The work developed here has as its cornerstones the research works of [Fridlund and Cacioppo, 1986] and [Ekman and Friesen, 1978]. Experimental tests of the developed system were conducted in which a single participant was asked to make several expressions voluntarily while the system attempted to classify them.


Electric vehicle platform development and instrumentation

Activity 6.1 – Instrumentation: Electronic Controller Unit

The developed vehicle instrumentation box is composed of two main modules: (1) the Electronic Controller Unit (ECU) and (2) the General Power Module (GPM). The ECU collects data from all sensors installed on the platform, such as battery-state sensors and encoders, and provides a USB connection to control other vehicle components. The ECU is based on an STM32F407VG microcontroller from ST Microelectronics. The high-level software was developed using the Robot Operating System (ROS). A prototype vehicle was equipped with navigation and sensory-based perception systems, namely vision cameras, laser rangefinders, GPS, IMUs and encoders for odometry. The proposed objective of building an onboard hardware/software infrastructure that can both control the vehicle and collect data for research tasks was attained.


Energy storage and management, steering and traction systems

In this task the steering, traction and energy management systems of the electric vehicles were designed and developed, namely:

  • Energy storage and management systems: Lithium-Ion battery pack, onboard battery monitoring system, battery management system, and state-of-charge (SOC) estimation;
  • Energy models, state of charge estimation and battery management systems;
  • Traction system: electric actuator with generative braking capability; power drivers and controllers;
  • Steering system: electric actuators, power drives and controllers;
  • Experimental test of each component/sub-system; Performance analysis by simulations and laboratory testing of controllers and energy models.

Activity 7.1: Energy storage and management systems

Battery Pack Assembly: The Lithium-Ion battery cells were fully charged individually prior to their assembly into a battery pack with 24 cells in series. In this way the pack was guaranteed to be balanced in terms of cell SOC. Extreme care was taken during the assembly of the battery to avoid any short circuit, and the mounting tools were completely isolated.

Battery Management System: The Energy Storage System (ESS) is a key component of EVs. It includes the battery and all the management and monitoring devices that compose the Battery Management System (BMS). Most commercial BMSs use proprietary SOC estimation algorithms that cannot be changed or improved; others take overly simplistic approaches, such as direct voltage measurements, that lead to unacceptable accuracy. To overcome those limitations, a new BMS with an open and flexible architecture, allowing the implementation of new SOC estimation algorithms, was developed at ISR-UC and named ISR-BMS. This new setup also provides the freedom to test different charge, discharge and cell-balancing algorithms.

Activity 7.2: Energy models, state of charge and battery management systems (BMS)

An accurate model of the battery's characteristics is essential for SOC estimation accuracy. Battery models range from complex models that reproduce the electrochemical reactions inside the battery, requiring very high computational effort, to simple approaches like Coulomb counting that have very good run-time performance but can be imprecise due to the accumulation of current-measurement errors. An offline technique was used to model the cell, based on datasets acquired in the laboratory. The chosen model is an Electrical Equivalent Circuit (EEC) [Huria et al., 2012]. For this purpose, an automated test setup was used that allows the cell to be characterized under different charge and discharge conditions at controlled temperature.
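The Coulomb-counting baseline mentioned above can be sketched in a few lines; the sign convention (positive current = discharge) and the saturation to [0, 1] are assumptions of this illustration:

```python
def coulomb_count(soc0, currents, dt, capacity_ah):
    """Coulomb-counting SOC estimate: integrate measured current over time.
    soc0: initial SOC in [0, 1]; currents: samples in A (positive = discharge);
    dt: sample period in s; capacity_ah: rated capacity in Ah."""
    charge_used_ah = sum(currents) * dt / 3600.0   # amp-seconds -> amp-hours
    soc = soc0 - charge_used_ah / capacity_ah
    return min(1.0, max(0.0, soc))
```

Any bias in the current sensor accumulates linearly in this estimate, which is precisely why the project moved to an EEC-based model for accurate SOC estimation.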

Battery Management System: the developed ISR-BMS implements balancing on charge or discharge via individually controlled passive balancing resistors. This method removes the excess charge from the most charged cell(s) until their charge matches that of the least charged cells in the pack. The implemented cell-balancing algorithm allows the parameterization of different variables. The ISR-BMS was fully integrated in the ISRobotCar; it communicates with the Electronic Controller Unit (ECU) via CAN bus using a proprietary protocol, and all the state variables of every cell and of the BMS are available on request.
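The passive-balancing decision described above can be sketched as a simple rule: bleed every cell whose voltage exceeds the weakest cell by more than a tolerance. The voltage-based criterion and the tolerance value are assumptions of this illustration (the ISR-BMS exposes such quantities as parameterizable variables):

```python
def balancing_commands(cell_voltages, tolerance=0.01):
    """Return one on/off command per bleed resistor: True switches the
    resistor on for cells more than `tolerance` volts above the weakest cell."""
    v_min = min(cell_voltages)
    return [v - v_min > tolerance for v in cell_voltages]
```

Re-evaluating this rule periodically during charge or discharge gradually converges the 24 series cells toward the same level.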

Test setup for Lithium-Ion cell modelling: to fully characterize the Lithium-Ion cells it is essential to measure their key parameters at different temperatures, since their performance strongly depends on environmental conditions. Several climatic chambers are commercially available, but their cost can be prohibitive, they are not easy to parameterize, or they lack an appropriate interface for external control. The test setup also requires a controllable DC current load to set the cell's discharge current, and a CC/CV charger to fully characterize the charge/discharge cycle. To achieve this goal, a climate chamber with temperature control and an integrated electronic DC load was designed and assembled.

Activity 7.3: Traction system: electric actuator with regenerative braking capability, power drivers and controllers

The following developments were undertaken:

  1. A test bench was developed at ISR-UC to model the powertrain of small EVs such as the ISRobotCar; the EV powertrain was tested to measure the efficiencies of the motor power driver, motor, gearbox and tires.
  2. A fuzzy logic model of regenerative braking (FLmRB) for modeling EVs' regenerative braking systems (RBSs) was developed. The model takes the vehicle's acceleration and jerk and the road inclination as input variables, and outputs the regeneration factor, i.e. the ratio of regenerative braking force to total braking force. The regeneration factor expresses the percentage of braking energy recovered to the battery. The purpose of the FLmRB is to create realistic EV models using as little of the manufacturers' proprietary data as possible, and avoiding the use of EV onboard sensors. To tune the model, real data was gathered from short- and long-distance field tests with a Nissan LEAF and compared with two types of simulations: one using the proposed FLmRB, and another assuming that all the braking force/energy is converted to electric current and returned to the battery (100% regeneration). The results show that the FLmRB can successfully infer the regenerative braking factor from the measured EV acceleration and jerk and the road inclination, without any knowledge of the EV's brake control strategy.
  3. A brushed DC motor was installed on the electric vehicle, adapted, and configured with different drive modes.
  4. A second motor, a high-efficiency Permanent Magnet Synchronous Motor, was installed in the electric vehicle; a mechanical coupling was designed and installed to mount it.
  5. A third motor technology, a high-efficiency SynRM (Synchronous Reluctance Motor), was chosen to replace the DC motor in the future. A control algorithm for the SynRM was developed using the maximum-torque-per-ampere strategy. A flux observer based on the voltage and current motor models was implemented to improve the behavior of the SynRM at low and high speeds. A test bench was used to couple the SynRM to an induction motor acting as a load, fed by an inverter connected to the grid.
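The input/output relation of the FLmRB described in item 2 can be made concrete with a toy stand-in. The crisp piecewise rules below are purely illustrative and are not the project's tuned fuzzy rule base; they only show the mapping from acceleration, jerk and inclination to a regeneration factor in [0, 1]:

```python
def regeneration_factor(accel, jerk, incline):
    """Illustrative stand-in for the FLmRB: map acceleration (m/s^2), jerk
    (m/s^3) and road inclination (rad) to a regeneration factor in [0, 1].
    All thresholds and gains here are hypothetical."""
    if accel >= 0.0:
        return 0.0                              # not braking: nothing to recover
    factor = max(0.0, 1.0 - (-accel) / 6.0)     # hard braking blends in friction brakes
    if jerk < -2.0:
        factor *= 0.5                           # abrupt braking relies less on regeneration
    if incline < 0.0:
        factor = min(1.0, factor + 0.1)         # downhill braking recovers slightly more
    return factor
```

In the actual FLmRB these crisp rules are replaced by fuzzy membership functions tuned against the Nissan LEAF field data.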

Activity 7.4: Steering system and power driver module

A steering system was developed for ISRobotCar, which is electrically assisted by a 150W, 12V, brushed DC motor controlled by a Roboteq MDC2230 Motor controller.

A power driver was designed and developed, allowing position, speed and torque control of two motors. Physical human-robot interaction requires reliable low-level force-control layers that allow persons to interact directly with the robot: force control bounds the robot's contact forces, ensuring low interaction forces and safe physical human-robot interaction. Since no commercial power drives were found with the capability to accurately control the current provided to the motor, a custom power driver was developed to achieve reliable force control of the developed robotic platforms.


Real-time distributed control architecture, vehicular communication and magnetic sensor ruler

Activity 8.1: A real-time distributed control architecture (RT-DCA), with its hardware and software components, suited to autonomous and semi-autonomous vehicles, was built and installed in the ISRobotCar.

Activity 8.2: Inter-vehicle communication requires regulation and standardization to enable communication among several vehicles. This requirement led to the development of several standards. These standardization initiatives appeared in different geographies to address common regulations, such as radio-spectrum regulations, leading to the development of standards and protocols in Europe, the United States of America and Japan. ISRobotCar communications are based on IEEE 802.11p and ITS ISO CALM (Continuous Air-interface Long and Medium range). The ISRobotCar vehicular network is based on the Grand Cooperative Driving Challenge (GCDC) communications stack [Jongh, 2011].

Activity 8.3: ISR-MSR2 magnetic sensor ruler: localization is a central topic in autonomous systems such as driverless cars. Localization is commonly based on laser scanners, GPS and vision systems; these are, however, unsatisfactory under some circumstances. A complementary system was developed for magnetic guidance based on the detection of magnetic markers. The emergence of new sensors with better resolution, low noise, high gain, high sampling rates and no hysteresis motivated the development of the ISR-MSR2 (Institute of Systems and Robotics Magnetic Sensor Ruler 2), which allows the simultaneous measurement of 3-axis magnetic field strength at multiple points in space.

Activity 8.4: Conventional joysticks are still the most frequent interface between a human and a Powered Wheelchair (PW), even though many users find it extremely difficult to use a joystick in the standard way, i.e. for direct control of the PW. A solution to this problem is an assistive robot capable of providing safety and mitigating some of the difficulties the user may have. Human-machine interface hardware/software modules to support indoor navigation of a robotic wheelchair were developed, aiming to provide safety, mitigate hand tremors, and offer discrete driving of the PW to users incapable of providing a continuous steering command. All the algorithms were implemented and tested under the ROS (Robot Operating System) framework.


Technologies and algorithms for autonomous vehicle navigation

Activity 9.1: Collaborative Vehicle Self-Localization using Multi-GNSS Receivers and V2V/V2I Communications

A collaborative self-localization approach using a multi-GNSS receiver setup and V2V/V2I communications was designed, implemented and tested through field experiments. The purpose is to develop a low-cost alternative (in equipment and installation) without compromising the localization estimation. The proposed method uses two GPS receivers installed on the vehicle, arranged longitudinally on the roof (to maximize the distance between receivers) in order to estimate the yaw and pitch angles. To estimate the roll angle, the approach uses information from a receiver on the road infrastructure or on a nearby vehicle, via V2V and/or V2I communication (compliant with the 802.11p standard). Experimental results revealed very good performance in both absolute positioning and pitch-angle estimation. The method is accurate in yaw-angle estimation, enabling acceptable results in heading computation; therefore, the use of this algorithm for risk assessment in crossroad-approaching scenarios is achievable with an affordable setup. The roll-angle estimation can reach satisfactory results if there is a third receiver in the vicinity, in a non-collinear position with respect to the onboard receivers.
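The yaw/pitch estimation from a longitudinal two-receiver baseline can be sketched as follows, assuming both antenna positions have already been converted to a local ENU (east, north, up) frame; the frame and mounting details are assumptions of this illustration:

```python
import math

def yaw_pitch_from_baseline(front, rear):
    """Yaw and pitch (degrees) of the vehicle from two GNSS antenna positions,
    each given as (east, north, up) in metres; `front` and `rear` are the
    receivers mounted longitudinally on the vehicle."""
    de, dn, du = (f - r for f, r in zip(front, rear))
    yaw = math.degrees(math.atan2(de, dn))                  # 0 deg = north, 90 deg = east
    pitch = math.degrees(math.atan2(du, math.hypot(de, dn)))
    return yaw, pitch
```

Roll is unobservable from a single longitudinal baseline, which is why the method needs a third, non-collinear receiver reached over V2V/V2I.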

Activity 9.2: Detection and Tracking of Moving Objects Using 2.5D Motion Grids

Autonomous vehicles require a reliable perception of their environment to operate in real-world conditions, and awareness of moving objects is one of the key components of environmental perception. Under activity 9.2, a method was developed for detection and tracking of moving objects (DATMO) in dynamic environments surrounding a moving vehicle equipped with a Velodyne laser scanner and a GPS/IMU localization system. First, at every time step, a local 2.5D grid is built using the last sets of sensor measurements. Over time, the generated grids, combined with localization data, are integrated into an environment model called the local 2.5D map. In every frame, a 2.5D grid is compared with the updated 2.5D map to compute a 2.5D motion grid. A mechanism based on spatial properties is used to suppress false detections caused by small localization errors. Next, the 2.5D motion grid is post-processed to provide an object-level representation of the scene. The detected moving objects are tracked over time by applying data association and Kalman filtering. Experiments conducted on different sequences from the KITTI dataset showed promising results, demonstrating the applicability of the proposed method.

Activity 9.3: Polar-grid representation and kriging-based 2.5D interpolation for urban environment modelling

A novel spatial interpolation approach based on a polar-grid representation and a kriging predictor was proposed for 3D point cloud sampling. Discrete grid representation is a widely used technique because of its simplicity and its capacity to provide an efficient and compact representation, enabling subsequent applications such as artificial perception and autonomous navigation. Two-dimensional occupancy grid representations have been studied extensively over the past two decades, and 2.5D and 3D grid-based approaches now dominate current applications. A key challenge in perception systems for vehicular applications is to balance low computational complexity with reliable data interpretation. To this end, we contribute a discrete 2.5D polar grid that upsamples the input data, i.e. a sparse 3D point cloud, by means of a deformable kriging-based interpolation strategy. Experiments carried out on the KITTI dataset, using data from a LIDAR, demonstrate that the proposed approach allows a proper representation of urban environments.
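For readers unfamiliar with kriging, the predictor at the heart of this approach can be sketched as ordinary kriging with an exponential variogram: heights at empty grid cells are predicted as weighted sums of nearby samples, with weights obtained from a linear system. This is a textbook dense-solve sketch with assumed variogram parameters, not the deformable polar-grid variant developed in the project.

```python
import numpy as np

def variogram(h, sill=1.0, rng=5.0):
    """Exponential variogram model (illustrative sill/range values)."""
    return sill * (1.0 - np.exp(-h / rng))

def ordinary_kriging(xy, z, query):
    """Predict the height at `query` from scattered samples (xy, z).

    Solves the ordinary kriging system: variogram values between all
    sample pairs, plus a Lagrange multiplier row forcing the weights
    to sum to one (which makes the predictor unbiased)."""
    n = len(z)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = variogram(d)
    A[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = variogram(np.linalg.norm(xy - query, axis=1))
    w = np.linalg.solve(A, b)
    return float(w[:n] @ z)
```

Because the variogram is zero at distance zero, the predictor interpolates the samples exactly, and the unit-sum constraint reproduces constant fields, two properties that make kriging attractive for upsampling sparse LIDAR returns.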

Activity 9.4: Static/Dynamic Environment Modeling Using Voxel Representation

Perception of dynamic environments is one of the key components enabling intelligent vehicles to operate in real-world environments. A novel approach for static/dynamic modeling of the environment surrounding a vehicle was proposed, comprising two main modules: (i) a module that estimates the ground surface using a piecewise surface-fitting algorithm, and (ii) a voxel-based static/dynamic model of the vehicle's surrounding environment using discriminative analysis. The proposed method was evaluated on the KITTI dataset. Experimental results provide evidence of the applicability of the proposed method.
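The voxel-based module can be illustrated with a minimal sketch: quantise each scan into voxels, accumulate observation counts over time, and label voxels of the current scan as static or dynamic from their observation history. The voxel size, the `min_hits` rule, and the function names are assumptions for illustration; the project's discriminative analysis is more elaborate.

```python
import numpy as np
from collections import defaultdict

def voxelize(points, voxel=0.2):
    """Quantise a 3D point cloud into voxel indices with per-voxel
    point counts. `voxel` is an illustrative edge length in metres."""
    counts = defaultdict(int)
    for p in points:
        idx = tuple(np.floor(np.asarray(p) / voxel).astype(int))
        counts[idx] += 1
    return counts

def classify_voxels(current, accumulated, min_hits=3):
    """Label each occupied voxel of the current scan 'static' if it has
    been observed at least `min_hits` times in the accumulated model,
    otherwise 'dynamic' (a simple stand-in for the discriminative
    analysis used in the project)."""
    return {v: ("static" if accumulated.get(v, 0) >= min_hits else "dynamic")
            for v in current}
```

In the full method, ground points found by the piecewise surface-fitting module would be removed before voxelization, so that the static/dynamic labels apply only to obstacles above the road surface.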


Feasibility studies, field tests of prototypes in several environments, and dissemination of results

Diversified and intense experimental activity was conducted during the project to assess the performance of the prototypes developed.
Details can be found in the pdf file (QREN-ProjectB-Report-FI.pdf), which contains a more extended report on the project.


The main prototypes are listed below (images can be found in the pdf file QREN-ProjectB-Report-FI.pdf):

  • Electric vehicle fully instrumented during the project, and with autonomous navigation and environment perception modules (ISRobotCar);
  • Vision system for human gait characterisation;
  • Instrumented shoes;
  • Active pedal for mobility rehabilitation;
  • Battery management system of Lithium-Ion batteries;
  • Power-drivers for DC motors control (voltage and current based);
  • Cell characterisation setup;
  • Test bench for EV powertrain modelling;
  • Experimental setup for SynRM control;
  • ISR-MSR2 magnetic sensing ruler;
  • Robotic walker.