We looked into 3-D printing a case for the iRobot to hold the Raspberry Pi. Printing the box would take 4-6 days and cost around $23 for the filament (1.75 mm PLA). Due to these cost and time constraints, we have decided to opt for another storage solution.
Also, this week our Raspberry Pi camera suddenly stopped working, likely because the connecting cable was faulty. We bought a new camera, and it is now working.
We are currently working on code to integrate the OpenCV camera module with the iRobot: the iRobot will turn based on the angle detected by the camera. In parallel, we are working on detecting cars on the road with OpenCV for cruise control.
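A minimal sketch of how the camera angle could drive the turn: a proportional controller that converts the detected road angle into left/right wheel speeds. The angle convention (degrees from the robot's Y axis, positive meaning the road bends right) and the gain value are assumptions for illustration, not the values from our repo.

```python
# Map a camera-detected road angle to iRobot wheel speeds (mm/s, the
# unit the iRobot's drive commands use). Gain and convention are
# illustrative assumptions.

def wheel_speeds(angle_deg, base_speed=200, gain=4.0, max_speed=500):
    """Proportional steering: speed up one wheel and slow the other
    by an amount proportional to the detected angle, clamped to the
    iRobot's speed range."""
    correction = gain * angle_deg
    left = max(-max_speed, min(max_speed, base_speed + correction))
    right = max(-max_speed, min(max_speed, base_speed - correction))
    return left, right
```

With a zero angle both wheels run at the base speed; a positive angle slows the right wheel so the robot turns toward the road.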
Since our last post we have added two main modules to our project, and we organized our various blocks (e.g., camera, sonar, iRobot sensors) into a Python class (autonomousController).
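The idea of grouping the blocks behind one controller class can be sketched as below. The real layout lives in pureRobotics/autonomousController on our GitHub; the class, module, and method names here are illustrative stand-ins, not our actual API.

```python
# Sketch: one controller class that owns the camera, sonar, and iRobot
# blocks. Each block is passed in, so the controller can be exercised
# with stubs instead of real hardware.

class AutonomousController:
    def __init__(self, camera, sonar, robot):
        self.camera = camera
        self.sonar = sonar
        self.robot = robot

    def step(self):
        """One control cycle: read sensors, decide, act."""
        angle = self.camera.road_angle()      # degrees from the Y axis
        distance = self.sonar.distance_cm()   # nearest obstacle, in cm
        if distance is not None and distance < 20:
            self.robot.stop()                 # obstacle too close
        else:
            self.robot.turn(angle)            # steer toward lane center
```

Injecting the blocks through the constructor keeps each piece (camera, sonar, motor control) independently testable.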
The first was a web server module using CherryPy (http://www.cherrypy.org/). This was implemented to provide an easier way of visualizing the analysis and sensor feedback performed on the Raspberry Pi. It has been tested displaying live images from our custom camera module. See pureRobotics/autonomousController/__init__.py.
The second was modifying the line detection algorithm to calculate the angle from center that we will use to turn the iRobot. This algorithm was then added to our Camera module and tested against paper lines: it was able to detect, in real time, a simulated road (parallel paper lines) from the Raspberry Pi camera and report the angle relative to the iRobot's Y axis (the Y axis running front to back on the iRobot). See pureRobotics/autonomousController/camera.py.
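The angle computation itself reduces to a little trigonometry. A sketch, assuming the detected line is given as two image points and that "angle from the Y axis" means the deviation from vertical in the image (our camera.py may use a different sign convention):

```python
import math

# Angle of a detected line segment (x1, y1)-(x2, y2), in degrees,
# relative to the image's vertical axis -- i.e. the iRobot Y axis
# running front to back. A perfectly vertical line gives 0 degrees.

def angle_from_y_axis(x1, y1, x2, y2):
    dx = x2 - x1
    dy = y2 - y1
    # Swapping the usual atan2 arguments measures from the vertical
    # axis instead of the horizontal one.
    return math.degrees(math.atan2(dx, dy))
```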
Our next step is to integrate our custom autonomousController framework with the PyCreate framework.
NOTE: all code references are provided on our GitHub (https://github.com/shiboo18/pureRobotics).
We set up the sonar sensor with the Raspberry Pi. Thanks to ModMyPi for their great getting-started tutorial and code: http://www.modmypi.com/blog/hc-sr04-ultrasonic-range-sensor-on-the-raspberry-pi.
We have built on their examples to create a sonar class that continuously retrieves distance measurements, using the Python threading framework to handle the background process. The code can be downloaded from our GitHub: https://github.com/shiboo18/pureRobotics.
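The threading pattern can be sketched as follows. This is not our repo's exact class: on the Pi, the injected `measure_fn` would time the HC-SR04 echo pulse via RPi.GPIO (as in the ModMyPi tutorial); here it is passed in so the class runs without hardware.

```python
import threading
import time

# Background sonar reader: a thread repeatedly calls a measurement
# function and caches the latest distance, so the main control loop
# never blocks waiting on the sensor.

class Sonar(threading.Thread):
    def __init__(self, measure_fn, interval=0.1):
        super().__init__(daemon=True)      # don't block program exit
        self._measure = measure_fn
        self._interval = interval
        self._distance = None
        self._stop_event = threading.Event()

    def run(self):
        while not self._stop_event.is_set():
            self._distance = self._measure()
            time.sleep(self._interval)

    @property
    def distance(self):
        """Most recent reading in cm (None until the first measurement)."""
        return self._distance

    def stop(self):
        self._stop_event.set()


def echo_to_cm(pulse_seconds):
    # Sound travels ~34300 cm/s; the echo pulse covers the round trip,
    # so halve the distance.
    return pulse_seconds * 34300 / 2
```

`echo_to_cm` is the same conversion the HC-SR04 tutorials use: time the echo pin's high pulse, then multiply by the speed of sound and divide by two.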
This week we worked on two aspects of our project. The first was the computer vision needed to predict a lane from the Raspberry Pi camera. To do this, we created a function in OpenCV to detect lanes in an example road image, which produced the image below.
In our simulated case, the surrounding environment will be less complex. From the two detected edges, we will compute an average line that predicts the centered direction of the road relative to the iRobot. This will be used to keep the iRobot centered on the road.
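The averaging step is simple once the two edges come out of the detector as line segments. A sketch, assuming each edge is an endpoint pair in image coordinates (the function names are illustrative, not from our repo):

```python
# Average the two detected lane edges into a single center line, and
# measure how far that center sits from the middle of the image --
# the quantity we would steer on.

def center_line(left, right):
    """Each line is ((x1, y1), (x2, y2)); returns the averaged line."""
    (lx1, ly1), (lx2, ly2) = left
    (rx1, ry1), (rx2, ry2) = right
    return (((lx1 + rx1) / 2, (ly1 + ry1) / 2),
            ((lx2 + rx2) / 2, (ly2 + ry2) / 2))

def lateral_offset(center, image_width):
    """Pixels the lane center is right (+) or left (-) of image center."""
    (x1, _), (x2, _) = center
    return (x1 + x2) / 2 - image_width / 2
```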
This leads to the second component we worked on this week: we set up the PyCreate library (https://github.com/mgobryan/pycreate) on our Raspberry Pi and iRobot. To debug it, we worked our way up from manually sending hex codes to the iRobot to using the automated char-to-binary conversion in the PyCreate library. One roadblock was that the serial rate used throughout the PyCreate project was different from the rate our iRobot uses. Once we modified the PyCreate module to use the correct serial rate, we were able to successfully control the iRobot motors and gather sensor data through the library.
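For reference, the hex codes we were sending by hand follow the iRobot Create Open Interface format: each drive command is an opcode byte followed by big-endian signed 16-bit parameters. A sketch of building the Drive packet (opcode 137, velocity in mm/s and turn radius in mm; radius bytes 0x80 0x00 mean "drive straight") — the serial port name and baud rate below are illustrative, since finding the rate our robot actually used was exactly the roadblock:

```python
import struct

# Build the 5-byte Open Interface Drive packet: opcode 137, then
# velocity and radius as big-endian signed 16-bit values.

def drive_command(velocity_mm_s, radius_mm=None):
    if radius_mm is None:
        radius_bytes = b"\x80\x00"   # special value: drive straight
    else:
        radius_bytes = struct.pack(">h", radius_mm)
    return bytes([137]) + struct.pack(">h", velocity_mm_s) + radius_bytes

# On the Pi this would be written to the robot's serial port, e.g.
# (port and baud are assumptions -- match them to your robot):
#   import serial
#   with serial.Serial("/dev/ttyUSB0", 57600) as port:
#       port.write(drive_command(200))   # forward at 200 mm/s
```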
This library will allow us to connect the OpenCV image edge analysis to iRobot control in our simulated environments.
We are using the following links to help us install OpenCV and configure Raspberry Pi camera:
Also, we have created a GitHub repository for version control and sharing our work:
We were able to control the iRobot from the Raspberry Pi connected through serial USB. Huge thanks to Pai-Ying and Wei (https://cnit581wkph.wordpress.com/2016/03/05/id-like-to-record-and-share-how-to-control-the-irobot-from-raspberry-pi/) for their tutorial.