Our goal was to implement a hardware-software vision system designed to support home care of elderly people or convalescents.
The project is based on visual analysis of a person's behaviour. The system can detect health- or life-threatening situations such as falls or fainting.
We have prepared a short YouTube clip to demonstrate the results obtained so far.
The aim of the project was to implement a hardware-software vision system supporting home care of elderly people or convalescents living alone. The system performs real-time analysis of the behaviour of a person inside a room and detects health- or life-threatening situations such as falls or fainting.
The vision system is based on foreground object segmentation and moving object detection. We use an algorithm able to provide correct results under typical indoor conditions, i.e. sudden and gradual illumination changes, moved background objects (e.g. a chair) and stopped foreground objects (people). In the next step, the position of the person is determined, based on centre-of-gravity or bounding-box analysis.
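The pipeline above can be sketched in software as follows. This is a minimal illustration in pure NumPy, not the project's FPGA implementation or its OpenCV model: a running-average background model (which absorbs gradual illumination changes), thresholded differencing for foreground segmentation, and centre-of-gravity / bounding-box extraction. All function names and thresholds are illustrative.

```python
import numpy as np

def update_background(bg, frame, alpha=0.05):
    """Running-average background model; alpha controls adaptation speed
    (gradual illumination changes are slowly absorbed into the model)."""
    return (1.0 - alpha) * bg + alpha * frame

def segment_foreground(bg, frame, thresh=30.0):
    """Pixels differing from the background by more than `thresh`
    are marked as foreground."""
    return np.abs(frame - bg) > thresh

def object_position(mask):
    """Centre of gravity and bounding box of the foreground mask."""
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return None
    cog = (float(xs.mean()), float(ys.mean()))
    bbox = (int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max()))
    return cog, bbox

# Toy example: empty 10x10 scene, then a bright object appears.
bg = np.zeros((10, 10))
frame = np.zeros((10, 10))
frame[2:6, 3:5] = 255.0          # object occupying rows 2..5, cols 3..4
mask = segment_foreground(bg, frame)
cog, bbox = object_position(mask)
print(cog)    # (3.5, 3.5)
print(bbox)   # (3, 2, 4, 5)
```

In a real sequence, `update_background` would be called on every frame for background pixels only, so that a stopped person is not absorbed into the background model.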
In the future, more types of health- or life-threatening situations will be detected. We are planning to use a microphone as a second source of information, which should allow us to detect screams or the sound of falls. Moreover, the detected objects will be classified (on the basis of simple shape features or using shape matching) in order to distinguish human silhouettes from other objects, especially the equipment inside the room. The problem of shadows will also be addressed.
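The planned silhouette classification could, for instance, use simple shape features such as the bounding-box aspect ratio and the extent (fill ratio). The sketch below is only a hedged illustration of that idea; the feature set, function names and thresholds are our assumptions, not the project's final design.

```python
import numpy as np

def shape_features(mask):
    """Simple shape features of a binary object mask: bounding-box
    aspect ratio (height/width) and extent (fill ratio of the box)."""
    ys, xs = np.nonzero(mask)
    h = int(ys.max() - ys.min() + 1)
    w = int(xs.max() - xs.min() + 1)
    extent = len(xs) / (h * w)
    return h / w, extent

def looks_like_standing_person(mask, min_aspect=1.5, max_extent=0.9):
    """Hypothetical rule: a standing person is tall and does not fill
    its bounding box completely, unlike e.g. a rectangular cupboard."""
    aspect, extent = shape_features(mask)
    return aspect >= min_aspect and extent <= max_extent

# A rough person-like blob: tall torso plus a narrower head.
person = np.zeros((10, 10), dtype=bool)
person[2:8, 3:6] = True    # torso/legs
person[0:2, 4:5] = True    # head
print(looks_like_standing_person(person))   # True
```

Shape matching against silhouette templates would be a more robust alternative, at a higher computational cost.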
In the first stage, we analysed related work in order to assess possible algorithmic solutions and their hardware implementations. Finally, the project team discussed the options and determined the system details.
In the second stage, a software model was created. It was implemented in the MATLAB environment and in C++ using the OpenCV library. The model was used for testing candidate solutions and as a reference for the designed system, especially for the hardware modules. At this stage, test sequences for system evaluation were also recorded.
In the third stage, the vision system was partitioned between hardware resources and the processor. Image acquisition, preprocessing, filtering, foreground and moving object segmentation, and connected component labelling are carried out in hardware. The software part operates only on metadata (e.g. parameters of the detected objects).
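To illustrate what "operating only on metadata" can look like, here is a hedged sketch of a software-side fall heuristic driven purely by per-object parameters of the kind the hardware pipeline could emit (centre of gravity and bounding box). The record layout, field names and thresholds are illustrative assumptions, not the project's actual PL-PS interface.

```python
from dataclasses import dataclass

@dataclass
class ObjectMeta:
    """Per-object metadata as it might arrive from the hardware pipeline
    (field names are illustrative, not the project's actual interface)."""
    cx: float   # centre of gravity, x
    cy: float   # centre of gravity, y (image rows grow downwards)
    w: int      # bounding-box width
    h: int      # bounding-box height

def fall_suspected(prev: ObjectMeta, curr: ObjectMeta,
                   drop_px=40, aspect_flip=1.0):
    """Heuristic: the centre of gravity dropped quickly and the silhouette
    changed from 'tall' to 'wide' between two analysed frames."""
    dropped = (curr.cy - prev.cy) >= drop_px
    was_tall = prev.h / prev.w > aspect_flip
    now_wide = curr.h / curr.w <= aspect_flip
    return dropped and was_tall and now_wide

standing = ObjectMeta(cx=160, cy=100, w=40, h=120)
lying = ObjectMeta(cx=170, cy=170, w=120, h=40)
print(fall_suspected(standing, lying))   # True
```

Because only a handful of numbers per object cross the PL-PS boundary, such analysis is cheap enough to run on the embedded processor at full frame rate.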
This part of the project required implementing communication between the PL (programmable logic) and external RAM, data exchange between the PL and the PS (processing system), image acquisition in the HDMI standard and auxiliary visualisation on a VGA output.
For now, we are running a bare-metal application on the PS. In the future, we plan to use the PetaLinux OS for:
- the second part of the vision system (using data received from the PL),
- audio signal processing,
- data logging (a simple database),
- a simple web service (giving authorised persons access to statistics and the current image).
In the last stage, the solution was tested under simulated conditions. A report and a video demonstrating the performance and capabilities of the system were also prepared.
Enclosed is a general scheme of the system.