Main Page


| Robots | Software | Videos | Simulation | Research | Publications | Resources | Tutorials | About |


KameRider Team Videos for RoboCup 2019

KameRider OPL Team Video for RoboCup 2019
KameRider SSPL Team Video for RoboCup 2019


Robots

First Prototype

[Photos: Front, Rear, Robot Arm (with Kinect), Switch Box with Emergency Button (3D printed)]

Open Robot Platform

[Photos: Open Robot Platform]

Social Standard Robot Platform

[Photos: RoboCup 2017 Nagoya @Home SSPL Open Challenge]

Software

2D Navigation

[Screenshot: 2D navigation]

Object Detection and 3D Point Cloud

[Screenshot: object detection with 3D point cloud]


Videos

  • FollowMe Task
  • SLAM Map Building
  • Robot Arm Object Manipulation
  • Tele-operation via Android
  • Voice Interaction
  • Sound Source Localization by HARK
  • [Demo] Travel to a designated location to pick up and bring back an object
  • [Demo] Travel around several locations with dynamic obstacle avoidance


Simulation

Simulation Development with SIGVerse

To speed up our robot development, we are also developing robot simulations with SIGVerse. SIGVerse is a robotics simulator that can simulate human-in-the-loop human-robot interaction and represent the various task scenarios in RoboCup @Home.

We have developed a virtual Handyman task that resembles the GPSR task in @Home. We develop and improve our gesture detection system in the virtual Interactive Clean Up task for better human gesture recognition. In the virtual Human Navigation task, we train our robot system to understand the kind of sentences generated in GPSR. We also use the simulation system to conduct repetitive robot learning over large amounts of data collected via crowdsourcing.

  • Handyman task
  • Interactive Clean Up task
  • Human Navigation task
  • Collaborative Learning task


Research

The KameRider team is a collaborative effort that aims to develop an open robot platform for service robotics. Since the team started in 2013, limited development resources and manpower have motivated us to pursue a more affordable yet functional solution for participating in the RoboCup @Home league and for service robot development in general. The current team objectives are as follows:

A. Utilize open source solutions for both hardware and software development, for low cost and large community support, to build an open robot platform for service robot research and development.

B. Open-source the developed robot platform, with a support wiki, source code on GitHub and 3D-printable parts, to ensure easy reproducibility and build up a community-driven development effort.

C. Develop for and participate in RoboCup @Home challenges to benchmark the robot's performance.

D. Support the educational initiative – RoboCup@Home Education http://www.robocupathomeedu.org

The technical challenge of this work is to reduce the complexity and standardize the requirements of the robot system without compromising too much of the technical challenge intended in RoboCup @Home. The expected impact is to significantly broaden participation in the RoboCup @Home league and thereby foster service robot development.

Open Source Robot Platform Development

The open robot platform consists of a basic robot hardware configuration as the fundamental platform, plus add-on modular component systems for customized applications. For example, a manipulator system (with top vision) and an extended top vision system were added to the hardware configuration at RoboCup Japan Open 2015 and RoboCup 2015 Hefei for the Restaurant and Follow Me tasks.

TurtleBot as the basic robot hardware platform. TurtleBot is a low-cost (the basic kit is approximately USD 1,000) personal robot kit with close integration with the popular open source framework ROS (Robot Operating System). This open source robot kit is adopted as the basic mobile platform for our development. The vertical range of mobile manipulation can be adjusted with an arm elevated by a linear motor; a secondary vision system is paired with the robotic arm for object recognition in manipulation tasks; and the component systems use 3D printed parts. An interactive interface with speech and facial expressions is in development for human-robot interaction. A general laptop PC (we are currently moving to a single-board computer system) with speakers and a microphone serves as the main robot controller.

ROS as the robot software framework. ROS (Robot Operating System) is an open source robot software framework with a large community that provides a huge collection of robotic tools and libraries. With ROS as the fundamental software framework, this work adapts and assembles ROS packages and stacks to realize the navigation, manipulation, vision and speech functions of the robot, in order to perform the tasks in RoboCup @Home.

Cloud-connected. The robot system is controlled by an onboard computer system as the main robot controller, to ensure stable low-level control. Furthermore, the computer system can be connected to cloud systems for extra computing (e.g. image processing), knowledge databases (e.g. a dialogue engine) and online resources (e.g. wearable data).
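
As an illustration of this cloud offloading pattern, the minimal Python sketch below compresses a camera frame and posts it to a remote image processing service. The endpoint URL and response schema are assumptions for illustration, not the team's actual service.

    import cv2        # OpenCV, for JPEG compression
    import requests   # simple HTTP client

    CLOUD_URL = "http://example.com/api/recognize"  # hypothetical endpoint

    def offload_image(frame):
        """Compress a BGR camera frame and send it for remote processing."""
        ok, jpeg = cv2.imencode(".jpg", frame, [cv2.IMWRITE_JPEG_QUALITY, 80])
        if not ok:
            return None
        resp = requests.post(CLOUD_URL, timeout=5.0,
                             files={"image": ("frame.jpg", jpeg.tobytes(),
                                              "image/jpeg")})
        resp.raise_for_status()
        return resp.json()  # response schema is an assumption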

Robot Navigation

With the Kobuki base and MS Kinect sensor as the mobile base hardware configuration, the TurtleBot navigation package is used for robot navigation: map building with gmapping and localization with amcl, while running the ROS navigation stack. With a prebuilt map and predefined waypoint locations, we can then instruct the robot to travel to a specific goal location, with path planning, using actionlib.
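
As a minimal sketch of this waypoint pattern, the Python snippet below sends a goal in the map frame to move_base via actionlib (standard ROS interfaces; the waypoint coordinates are placeholders):

    #!/usr/bin/env python
    import rospy
    import actionlib
    from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

    def goto(x, y):
        """Send a pose goal in the map frame and wait for the result."""
        client = actionlib.SimpleActionClient('move_base', MoveBaseAction)
        client.wait_for_server()
        goal = MoveBaseGoal()
        goal.target_pose.header.frame_id = 'map'
        goal.target_pose.header.stamp = rospy.Time.now()
        goal.target_pose.pose.position.x = x
        goal.target_pose.pose.position.y = y
        goal.target_pose.pose.orientation.w = 1.0  # identity orientation
        client.send_goal(goal)
        client.wait_for_result()
        return client.get_state()

    if __name__ == '__main__':
        rospy.init_node('waypoint_nav')
        goto(1.5, 0.8)  # placeholder waypoint from the prebuilt map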

Navigation in known and unknown environments (Help-me-carry). With the second, top-mounted vision system configuration, we have developed navigation in known and unknown environments for the Help-me-carry and Restaurant tasks. Building on the TurtleBot navigation package, we combine it with a people tracking package to update the map online while following the operator through the unknown environment.

Speech Interaction and Sound Source Localization

For human speech interaction, we use CMU PocketSphinx as our robot speech recognizer. It is a lightweight speech recognizer with a support library called SphinxBase; we build our application with the latest version, "sphinxbase-5prealpha". We use GStreamer to automatically split the incoming audio into utterances to be recognized, and we offer services to start and stop recognition. The recognizer requires a language model and a dictionary file, which can be built automatically from a corpus of sentences. For text-to-speech (TTS), we use the CMU Festival system together with the ROS sound_play package.
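
A minimal sketch of the recognize-then-speak loop, assuming the conventional /recognizer/output topic published by the ROS PocketSphinx wrapper and the sound_play client for Festival TTS:

    #!/usr/bin/env python
    import rospy
    from std_msgs.msg import String
    from sound_play.libsoundplay import SoundClient

    def on_speech(msg):
        """Echo each recognized utterance back through Festival TTS."""
        rospy.loginfo("Heard: %s", msg.data)
        voice.say("You said " + msg.data)

    if __name__ == '__main__':
        rospy.init_node('speech_echo')
        voice = SoundClient()
        rospy.sleep(1.0)  # give sound_play time to connect
        rospy.Subscriber('/recognizer/output', String, on_speech)
        rospy.spin()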

To improve speech recognition efficiency, the robot first listens only for an activation keyword; once the keyword is recognized, it switches to an n-gram search to recognize the actual command. Once the command has been recognized, the robot switches to a grammar search to recognize the confirmation, and then switches back to keyword listening mode to wait for the next command.
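
A sketch of this mode-switching strategy, assuming the SWIG-based pocketsphinx Python bindings, which allow several named searches to be registered and switched at runtime (the model and grammar file names are placeholders):

    from pocketsphinx import Decoder

    config = Decoder.default_config()
    config.set_string('-hmm', 'model/en-us')       # acoustic model (placeholder)
    config.set_string('-dict', 'model/robot.dic')  # dictionary (placeholder)
    decoder = Decoder(config)

    # Register the three search modes described above
    decoder.set_kws('keyword', 'keyphrase.list')      # activation keyword
    decoder.set_lm_file('command', 'command.lm')      # n-gram command search
    decoder.set_jsgf_file('confirm', 'confirm.gram')  # grammar confirmation

    decoder.set_search('keyword')  # start by listening for the wake word

    def advance_mode():
        """Cycle keyword -> command -> confirm -> keyword after each utterance."""
        nxt = {'keyword': 'command', 'command': 'confirm', 'confirm': 'keyword'}
        decoder.set_search(nxt[decoder.get_search()])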

To improve speech recognition accuracy in a noisy environment, we use the SphinxTrain tool in CMUSphinx to train on recordings of sentences taken in that environment. SphinxTrain extracts the characteristics of the noisy environment from a large database of these recordings. We then use the obtained parameters to replace the original acoustic model parameters for better speech recognition.

Sound source localization. Apart from human speech interaction, we have also tested sound source localization using HARK, for locating a person who is speaking outside the robot's visual perception area.

Robot Vision and Person/Gender/Object Recognition System

A second vision system with an MS Kinect is mounted on top of the robot for people/gender/object detection and recognition. The people tracking package is used to track people in the Follow Me task.

Person recognition. We built a Convolutional Neural Network (CNN) with TensorFlow and Keras. To add a new person to our database, we use a digital camera to take about 1,000 pictures of the person, use the Haar-like face detection method to detect the face in each picture, and add the face region of each picture, with its label, to our database. We then train the CNN on a laptop with an NVIDIA GTX 1070 to obtain a model that recognizes the person.
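
A minimal sketch of this pipeline: Haar-cascade face cropping for dataset preparation, and a small Keras CNN classifier. The network shape, input size and class count are illustrative assumptions, not the team's exact model:

    import cv2
    import numpy as np
    from tensorflow.keras import layers, models

    # Haar cascade face detector shipped with OpenCV
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')

    def face_crop(bgr, size=64):
        """Return the first detected face region, resized for the CNN."""
        gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
        faces = cascade.detectMultiScale(gray, 1.3, 5)
        if len(faces) == 0:
            return None
        x, y, w, h = faces[0]
        return cv2.resize(gray[y:y+h, x:x+w], (size, size))

    def build_cnn(num_people):
        """Small CNN over 64x64 grayscale face crops, one class per person."""
        return models.Sequential([
            layers.Conv2D(32, 3, activation='relu', input_shape=(64, 64, 1)),
            layers.MaxPooling2D(),
            layers.Conv2D(64, 3, activation='relu'),
            layers.MaxPooling2D(),
            layers.Flatten(),
            layers.Dense(128, activation='relu'),
            layers.Dense(num_people, activation='softmax'),
        ])

    model = build_cnn(num_people=5)  # class count is a placeholder
    model.compile(optimizer='adam', loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])
    # model.fit(faces, labels, epochs=10)  # faces: (N, 64, 64, 1) in [0, 1]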

Gender recognition uses an online API from Baidu AI. During the competition, we capture a photo and upload it to the Baidu AI cloud server, which returns the gender recognition results labeled on the photo.

Object recognition system. We use YOLO (You Only Look Once) for object detection. In the Storing Groceries task, we use the Kinect sensor for shelf, table and object detection. Before the competition, we take photos of the predefined objects, then label them by adding annotation labels and bounding boxes to each image. We capture the images at different angles, under different lighting conditions and against different backgrounds, so that the model generalizes well enough to handle the competition conditions.
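
One common way to run a trained darknet YOLO model at inference time is through OpenCV's dnn module, sketched below (the config and weight file names are placeholders, not the team's actual files):

    import cv2
    import numpy as np

    # Load a trained darknet YOLO model (file names are placeholders)
    net = cv2.dnn.readNetFromDarknet('yolo-robot.cfg', 'yolo-robot.weights')
    out_names = net.getUnconnectedOutLayersNames()

    def detect(bgr, conf_thresh=0.5):
        """Return [(class_id, confidence, (cx, cy, w, h))] in pixel units."""
        h, w = bgr.shape[:2]
        blob = cv2.dnn.blobFromImage(bgr, 1 / 255.0, (416, 416), swapRB=True)
        net.setInput(blob)
        detections = []
        for out in net.forward(out_names):
            for row in out:          # row = [cx, cy, w, h, obj, class scores...]
                scores = row[5:]
                cls = int(np.argmax(scores))
                conf = float(scores[cls])
                if conf > conf_thresh:
                    box = (row[0] * w, row[1] * h, row[2] * w, row[3] * h)
                    detections.append((cls, conf, box))
        return detections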

Human gesture detection. In our human gesture detection system, we use CMU OpenPose as the skeleton detector. It is a real-time multi-person keypoint detection library for body, face, hand, and foot estimation. The OpenPose demo takes an RGB image and returns the number of people as well as their skeleton positions. To obtain the human pointing direction, the 3D coordinates of the wrist and elbow joints are needed. We combine the OpenPose result with the Point Cloud Library (PCL) to obtain the joint positions in the head RGB-D sensor coordinate system, then use the TF transform to convert them into the map coordinate system. Finally, a space-vector method calculates which point on the ground the human is pointing to.
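
The final step reduces to a ray-plane intersection: extend the elbow-to-wrist ray until it meets the floor plane. A small numpy sketch, assuming both joints have already been transformed into the map frame with the floor at z = 0:

    import numpy as np

    def pointing_target(elbow, wrist, ground_z=0.0):
        """Intersect the elbow->wrist ray with the plane z = ground_z.

        elbow, wrist: 3D joint positions in the map frame
                      (e.g. from OpenPose + PCL + TF as described above).
        Returns the (x, y, z) floor point, or None if the arm points upward.
        """
        e, w = np.asarray(elbow, float), np.asarray(wrist, float)
        d = w - e                        # pointing direction
        if abs(d[2]) < 1e-6 or (ground_z - w[2]) / d[2] < 0:
            return None                  # parallel to, or away from, the floor
        t = (ground_z - w[2]) / d[2]     # ray parameter measured from the wrist
        return w + t * d

    # Elbow at 1.3 m and wrist at 1.1 m height, pointing forward and down:
    print(pointing_target([0.0, 0.0, 1.3], [0.2, 0.1, 1.1]))  # -> [1.3, 0.65, 0.0]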

Robot Arm and Object Manipulation

We use the TurtleBot Arm for object manipulation. It consists of 5 Dynamixel AX-12A servo motors, controlled by an ArbotiX-M controller board / USB2Dynamixel. We use MoveIt! as the arm software framework, and we have integrated arm control with object detection (by color detection) and object recognition (by image processing) for object manipulation. Once an object is recognized, we localize it in the 3D point cloud to obtain its position, then solve the inverse kinematics to move the arm and grasp the object. With MoveIt!, we can also plan the arm movement with obstacle avoidance, to prevent collisions with surrounding objects.
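
A minimal sketch of this grasp pipeline using the MoveIt! Python commander (the planning group name 'arm' and the target coordinates are assumptions, not taken from the team's configuration):

    #!/usr/bin/env python
    import sys
    import rospy
    import moveit_commander
    from geometry_msgs.msg import Pose

    moveit_commander.roscpp_initialize(sys.argv)
    rospy.init_node('grasp_demo')
    arm = moveit_commander.MoveGroupCommander('arm')  # group name is an assumption

    def reach(x, y, z):
        """Plan a collision-free motion to a 3D point, e.g. an object
        position obtained from the point cloud, and execute it."""
        target = Pose()
        target.position.x, target.position.y, target.position.z = x, y, z
        target.orientation.w = 1.0  # identity orientation for simplicity
        arm.set_pose_target(target)
        ok = arm.go(wait=True)      # MoveIt! solves IK and avoids known obstacles
        arm.stop()
        arm.clear_pose_targets()
        return ok

    reach(0.25, 0.0, 0.15)  # placeholder object position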

Elevated arm. An elevated arm is being developed for flexible-height manipulation. The current design targets object manipulation at heights ranging from 0.3 m to 1.8 m.


Publications

Journals and Book Chapters

[35] T. Inamura, J. T. C. Tan, Y. Hagiwara, K. Sugiura, T. Nagai, H. Okada, “Framework and Base Technology of RoboCup@Home Simulation toward Long-term Large Scale Human-Robot Interaction,” Intelligence and Informatics (Japan Society for Fuzzy Theory and Intelligent Informatics), Vol. 26, No. 3, pp. 698-709, 2014 (in Japanese) | PDF

[34] F. Duan, J. T. C. Tan, J. G. Tong, R. Kato and T. Arai, "Application of the Assembly Skill Transfer System in an Actual Cellular Manufacturing System," IEEE Transactions on Automation Science and Engineering, Vol. 9, No. 1, pp. 31-41, January 2012 | PDF

[33] M. Morioka, S. Adachi, S. Sakakibara, J. T. C. Tan, R. Kato and T. Arai, "Cooperation between a High-Power Robot and a Human by Functional Safety," Journal of Robotics and Mechatronics (JRM), Vol. 23, No. 6, pp. 926-938, September 2011 | PDF

[32] J. T. C. Tan, F. Duan, R. Kato, and T. Arai, "Safety Strategy for Human-Robot Collaboration: Design and Development in Cellular Manufacturing," Advanced Robotics, Vol. 24, No. 5-6, pp. 839-860, April 2010 | PDF

[31] J. T. C. Tan, F. Duan, R. Kato, and T. Arai, "Man-Machine Interface for Human-Robot Collaborative Cellular Manufacturing System," International Journal of Automation Technology (IJAT), Vol. 3, No. 6, pp. 760-767, August 2009

[30] F. Duan, J. T. C. Tan, R. Kato and T. Arai, “Operator Monitoring System for Cell Production,” Advanced Robotics, Vol. 23, No. 10, pp. 1373-1391, 2009

[29] F. Duan, M. Morioka, J. T. C. Tan, and T. Arai, "Multi-modal Assembly-Support System for Cell Production," International Journal of Automation Technology (IJAT), Vol. 2, No. 5, pp. 384-389, August 2008

[28] F. Duan and J. T. C. Tan, “Multi-Modal Assembly-Support System for Cellular Manufacturing,” Operations Management Research and Cellular Manufacturing: Innovative Methods and Approaches, ISBN13: 9781613500477, October 2011

[27] J. T. C. Tan, F. Duan, R. Kato, and T. Arai, "Collaboration Planning by Task Analysis in Human-Robot Collaborative Manufacturing System," Advances in Robot Manipulators, Ernest Hall (Ed.), ISBN: 978-953-307-070-4, InTech, Austria, EU, 2010

Conference Proceedings

[26] J. T. C. Tan, Y. Mizuchi, Y. Hagiwara, T. Inamura, “Representation of Embodied Collaborative Behaviors in Cyber-Physical Human-Robot Interaction with Immersive User Interfaces,” in Proc. of The ACM/IEEE International Conference on Human-Robot Interaction (HRI 2018) Late Breaking Results Poster Session, March 2018

[25] S. F. Chik, C. F. Yeong, E. L. M. Su, T. Y. Lim, F. Duan, J. T. C. Tan, P. H. Tan, P. J. H. Chin, “Gaussian Pedestrian Proxemics Model with Social Force for Service Robot Navigation in Dynamic Environment,” in Proc. of the Asian Simulation Conference, August 2017

[24] W. Li, Z. An, S. Jiang, J. T. C. Tan, F. Duan, C. Zhu, H. Yu, “An Incremental Learning-Based Mechanism for Object Recognition in Cloud Robotic System,” in Proc. of the IEEE 7th Annual International Conference on CYBER Technology in Automation, Control, and Intelligent Systems (CYBER 2017), July 2017

[23] S. Sheng, P. Song, L. Xie, Z. Luo, W. Chang, S. Jiang, H. Yu, C. Zhu, J. T. C. Tan, F. Duan, “Design of an SSVEP-based BCI system with visual servo module for a service robot to execute multiple tasks,” in Proc. of the IEEE International Conference on Robotics and Automation (ICRA 2017), May 2017

[22] C. Xu, W. Li, J. T. C. Tan, Z. Chen, H. Zhang, F. Duan, “Developing an identity recognition low-cost home service robot based on turtlebot and ROS,” in Proc. of the 29th Chinese Control And Decision Conference (CCDC), May 2017

[21] J. T. C. Tan, Y. Hagiwara and T. Inamura, "Learning from Human Collaborative Experience: Robot Learning via Crowdsourcing of Human-Robot Interaction," in Proc. of The 12th Annual ACM/IEEE Int. Conf. on Human-Robot Interaction (HRI 2017), Vienna, Austria, March 2017 | PDF

[20] W. Li, P. Song, J. T. C. Tan, C. Zhu and F. Duan, "Verification the feasibility of SIGVerse for human-robot interaction simulation through following task," in Proc. of The IEEE Int. Conf. on Robotics and Biomimetics (ROBIO 2015), December 2015 | PDF

[19] J. T. C. Tan and Y. Suda, “Autonomous Cruise Control and Platooning towards Intelligent Personal Mobility Vehicles,” in Proc. of the 22nd ITS World Congress, Bordeaux, France, October 2015

[18] J. T. C. Tan, Y. Hagiwara, T. Inamura, “Crowdsourcing of Virtual Human-Robot Interaction for Robot Learning of Collaborative Actions and Communication Behaviors,” in Proc. of The IEEE Int. Conf. on Intelligent Robots and Systems (IROS 2015) Late Breaking Results Poster Session, Hamburg, Germany, October 2015

[17] J. T. C. Tan and Y. Suda, “Automatic Vehicle Following of Personal Mobility Vehicles for Autonomous Platooning,” in Proc. of the 3rd International Symposium on Future Active Safety Technology Toward zero traffic accidents (FAST-zero ‘15), September 2015 | PDF

[16] J. T. C. Tan, Y. Hagiwara and T. Inamura, "Perception and Communication in Collaborative Human-Robot Interaction," in The 33rd Annual Conference of The Robotics Society of Japan, September 2015

[15] J. T. C. Tan, Y. Hagiwara, T. Inamura, “Robot Learning Framework via Crowdsourcing of Human-Robot Interaction for Collaborative Strategy Learning,” in the 24th Int. Symposium on Robot and Human Interactive Communication (RO-MAN 2015) Interactive Session, Kobe, Japan, August 2015

[14] X. Wen, F. Duan, Y. Yu, J. T. C. Tan and X. Cheng, “Design of a multi-functional system based on virtual reality for stroke rehabilitation,” in Proc. of the 11th World Congress on Intelligent Control and Automation (WCICA 2014), pp. 2412-2417, June 2014

[13] J. T. C. Tan, K. Okuno and T. Inamura, "Integration of Work Operation and Embodied Multimodal Interaction in Task Modeling for Collaborative Robot Development," in Proc. of the 4th Annual IEEE Int. Conf. on Cyber Technology in Automation, Control, and Intelligent Systems (IEEE-CYBER 2014), Hong Kong, China, pp. 615-618, June 2014

[12] J. T. C. Tan, T. Inamura, Y. Hagiwara, K. Sugiura, T. Nagai and H. Okada, "A New Dimension for RoboCup @Home: Human-Robot Interaction between Virtual and Real Worlds," in Proc. of The 9th Annual ACM/IEEE Int. Conf. on Human-Robot Interaction (HRI 2014), Bielefeld, Germany, pp. 332-332, March 2014

[11] J. T. C. Tan, T. Inamura, K. Sugiura, T. Nagai and H. Okada, "Human-Robot Interaction between Virtual and Real Worlds: Motivation from RoboCup @Home," in Proc. of The 5th Int. Conf. on Social Robotics (ICSR 2013), Bristol, UK, pp. 239-248, October 2013

[10] J. T. C. Tan and T. Inamura, "Embodied and Multimodal Human-Robot Interaction between Virtual and Real Worlds," in Proc. of The 22nd Int. Symposium on Robot and Human Interactive Communication (RO-MAN 2013), Gyeongju, Korea, pp. 296-297, August 2013

[9] T. Inamura and J. T. C. Tan, "Development of RoboCup@Home Simulation towards Long-term Large Scale HRI," in Proc. of The 17th Annual RoboCup Int. Symposium 2013, Eindhoven, Netherlands, July 2013

[8] J. T. C. Tan and T. Inamura, "Integration of Work Sequence and Embodied Interaction for Collaborative Work Based Human-Robot Interaction," in Proc. of The 8th Annual ACM/IEEE Int. Conf. on Human-Robot Interaction (HRI 2013), Tokyo, Japan, pp. 239-240, March 2013

[7] T. Inamura and J. T. C. Tan, "Development of RoboCup @Home Simulator: Simulation platform that enables long-term large scale HRI," in Proc. of The 8th Annual ACM/IEEE Int. Conf. on Human-Robot Interaction (HRI 2013), Tokyo, Japan, pp. 145-146, March 2013

[6] J. T. C. Tan, F. Duan and T. Inamura, "Multimodal Human-Robot Interaction with Chatterbot System: Extending AIML towards Supporting Embodied Interactions," in Proc. of The IEEE Int. Conf. on Robotics and Biomimetics (ROBIO 2012), pp. 1727-1732, December 2012

[5] T. Inamura and J. T. C. Tan, "Long-term Large Scale Human-Robot Interaction Platform through Immersive VR System --Development of RoboCup @Home Simulator--," in Proc. of the IEEE/SICE International Symposium on System Integration (SII 2012), Fukuoka, Japan, pp. 242-247, December 2012

[4] T. Inamura and J. T. C. Tan, "Simulation platform that enables long-term large scale HAI --Development of RoboCup @Home Simulator--," in the International Workshop on Human-Agent Interaction (iHAI 2012), Vilamoura, Algarve, Portugal, October 2012

[3] J. T. C. Tan and T. Inamura, "SIGVerse - a Cloud Computing Architecture Simulation Platform for Social Human-Robot Interaction," in Proc. of The IEEE Int. Conf. on Robotics and Automation (ICRA 2012), Saint Paul, MN, USA, pp. 1310-1315, May 2012

[2] J. T. C. Tan and T. Inamura, "Extending Chatterbot System into Multimodal Interaction Framework with Embodied Contextual Understanding," in Proc. of The 7th Annual ACM/IEEE Int. Conf. on Human-Robot Interaction (HRI 2012), Boston, MA, USA, pp. 251-252, March 2012

[1] J. T. C. Tan and T. Inamura, "What Are Required to Simulate Interaction with Robot? SIGVerse - A Simulation Platform for Human-Robot Interaction," in Proc. of The IEEE Int. Conf. on Robotics and Biomimetics (ROBIO 2011), Phuket, Thailand, pp. 2878-2883, December 2011


Resources

Hardware

Software

RoboCup@Home

RoboCup@Home Simulation


Tutorials

Learn more at http://www.robocupathomeedu.org/


About

Team KameRider

Team KameRider is a collaborative effort; the current members are as follows:

  • Team Leader: Jeffrey Too Chuan Tan (Nankai University, China; OC RoboCup@Home, RoboCup@Home Education, World Robot Summit)
  • Undergraduate and postgraduate students from Nankai University (China)

Participation and Achievements in RoboCup @Home

  • RoboCup Japan Open 2018 Ogaki
    • RoboCup @Home Education [2nd Place]
  • RoboCup China Open 2018 Shaoxing
    • RoboCup @Home Technical Challenge [1st Place]
  • RoboCup Asia-Pacific 2017 Bangkok
    • RoboCup @Home [1st Place]
    • RoboCup @Home Education [1st Place]
  • RoboCup 2017 Nagoya
    • SSPL Overall ranked 4th out of 7 qualified teams
  • RoboCup Japan Open 2017 Nagoya
    • RoboCup @Home Education [1st Place]
    • RoboCup @Home Simulation [2nd Place]
  • RoboCup 2016 Leipzig
    • Overall ranked 7th out of 23 qualified teams
  • RoboCup Japan Open 2016 Aichi
    • RoboCup @Home Education [2nd Place]
    • RoboCup @Home Simulation [1st Place]
  • RoboCup 2015 Hefei
    • Overall ranked 7th out of 17 qualified teams
    • Top 9 teams to enter Stage 2
  • RoboCup Japan Open 2015 Fukui
    • RoboCup @Home SPL Beta [1st Place]
    • RoboCup @Home Simulation [3rd Place]
  • RoboCup Japan Open 2014 Fukuoka
    • JSAI Award [Standard Platform for RoboCup @Home]
    • RoboCup @Home Simulation [2nd Place]
  • RoboCup Japan Open 2013 Tokyo
    • JSAI Award [SIGVerse for RoboCup @Home Simulation]
    • RoboCup @Home Simulation [2nd Place]

Links

