ArmarX KIT armarx.humanoids.kit.edu

The robot development environment (RDE) ArmarX aims to provide an infrastructure for developing customized robot frameworks that enable the realization of distributed robot software components.

Co-Fusion UCL github.com/martinruenz/co-fusion

Co-Fusion is a dense SLAM system that takes a live stream of RGB-D images as input and segments the scene into different non-static objects.

CPR Load Support EPFL github.com/epfl-lasa/cpr_load_support

This ROS package provides a controller for a robotic arm to support a heavy load at an expected height, lower it (when released) to a user-friendly height, and carry it around.

Lifting from the Deep UCL github.com/DenisTome/Lifting-from-the-Deep-release

Convolutional 3D pose estimation from a single image. Created by Denis Tome, Chris Russell, and Lourdes Agapito.

Load Share Estimation EPFL github.com/epfl-lasa/load-share-estimation

This ROS package is used to compute the load share of an object being supported by a robot and a third party (such as a person).
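
As a rough sketch of the underlying computation (not the package's actual interface), the robot's load share can be estimated by comparing the supporting force measured at the robot's force/torque sensor with the total force needed to hold the object; all names and values below are illustrative.

```python
import numpy as np

G = 9.81  # gravitational acceleration [m/s^2]

def robot_load_share(f_sensor_z, object_mass, object_accel_z=0.0):
    """Fraction of the object's weight carried by the robot.

    f_sensor_z: upward force [N] at the robot's F/T sensor after
                tool-weight compensation (hypothetical input signal).
    object_mass: mass of the supported object [kg].
    object_accel_z: vertical object acceleration [m/s^2], used to
                    discount dynamic forces while the object moves.
    """
    # Total force needed to support (and accelerate) the object; whatever
    # the robot does not supply is carried by the third party.
    f_required = object_mass * (G + object_accel_z)
    return float(np.clip(f_sensor_z / f_required, 0.0, 1.0))

# Example: the robot supplies 29.4 N of a ~58.9 N object -> share ~0.5.
print(robot_load_share(f_sensor_z=29.4, object_mass=6.0))
```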

Ridgeback UR5 Controller EPFL github.com/epfl-lasa/ridgeback_ur5_controller/tree/devel

This ROS package provides an admittance controller for mobile-base robots with robotic arms.
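
A minimal sketch of the admittance law such a controller builds on, assuming a single velocity-controlled degree of freedom and illustrative gains; this is not the package's actual API.

```python
class AdmittanceController:
    """1-D admittance law  m*a + d*v = f_ext: an external force measured
    at the wrist is rendered as a velocity command for the platform."""

    def __init__(self, mass=10.0, damping=20.0, dt=0.002):
        self.m, self.d, self.dt = mass, damping, dt
        self.v = 0.0  # commanded velocity state

    def step(self, f_ext):
        a = (f_ext - self.d * self.v) / self.m  # desired acceleration
        self.v += a * self.dt                   # integrate to a velocity
        return self.v                           # command for the robot

ctrl = AdmittanceController()
for _ in range(500):                  # 1 s of a constant 5 N push
    v_cmd = ctrl.step(f_ext=5.0)
print(round(v_cmd, 3))                # approaches f_ext/d = 0.25 m/s
```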

Task Adaptation EPFL github.com/epfl-lasa/task_adaptation

This ROS package provides a task-adaptive behaviour for robotic arms using switching dynamical systems.
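
The toy sketch below illustrates the switching idea under stated assumptions: beliefs over candidate task dynamics are shifted toward the dynamics that best explain the measured (human-perturbed) velocity. The task models and the update rule are illustrative, not the package's implementation.

```python
import numpy as np

# Two hypothetical linear task dynamics with different attractors.
attractors = [np.array([0.5, 0.0]), np.array([0.0, 0.5])]

def f(k, x):                        # DS for task k: x_dot = -(x - target)
    return -(x - attractors[k])

def adapt(beliefs, x, x_dot_measured, rate=2.0, dt=0.01):
    """Shift belief mass toward the task whose predicted velocity best
    matches the measured velocity, then renormalize."""
    sim = np.array([x_dot_measured @ f(k, x) for k in range(len(beliefs))])
    beliefs = np.clip(beliefs + rate * dt * (sim - sim.mean()), 0.0, None)
    return beliefs / beliefs.sum()

beliefs = np.array([0.5, 0.5])
x = np.zeros(2)
for _ in range(200):                # the human pushes toward task 0
    beliefs = adapt(beliefs, x, x_dot_measured=f(0, x))
print(beliefs)                      # belief in task 0 dominates
```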

Lift Help Predictor EPFL github.com/epfl-lasa/lift_help_predictor

This ROS package provides a collection of tools to gather, visualize, and analyze human joint data and to perform online detection of the need for help during object-lifting tasks.

MaskFusion UCL github.com/martinruenz/maskfusion

This repository contains MaskFusion, a real-time, object-aware, semantic and dynamic RGB-D SLAM system that recognizes, segments, and assigns semantic class labels to the different objects in the scene while tracking and reconstructing them, even when they move independently of the camera.

xR-EgoPose UCL github.com/facebookresearch/xR-EgoPose

The xR-EgoPose Dataset is a dataset of ~380 thousand photo-realistic egocentric camera images in a variety of indoor and outdoor spaces. The code contained in this repository is a PyTorch implementation of the data loader with additional evaluation functions for comparison.
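
For orientation, such a loader typically follows the standard PyTorch Dataset/DataLoader pattern; the class below is a hypothetical stand-in with made-up tensor shapes, not the repository's actual loader.

```python
import torch
from torch.utils.data import Dataset, DataLoader

class EgoPoseDataset(Dataset):
    """Stand-in loader pairing an egocentric RGB image with its 3D joint
    annotations; field names and shapes are hypothetical."""

    def __init__(self, samples):
        self.samples = samples          # list of (image, joints) pairs

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, idx):
        image, joints = self.samples[idx]
        return image, joints

# Dummy data: four 3x256x256 images with 16 3D joints each.
data = [(torch.rand(3, 256, 256), torch.rand(16, 3)) for _ in range(4)]
loader = DataLoader(EgoPoseDataset(data), batch_size=2, shuffle=True)
for images, joints in loader:
    print(images.shape, joints.shape)   # [2, 3, 256, 256] and [2, 16, 3]
```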

Expert System for Robot Design KIT https://gitlab.com/h2t/expert_system

The expert system supports the systematic mechatronic design of robot components. Based on user requirements, the program generates design solutions for robot components. Currently, the design of sensor-actuator-controller units for robot joints and robot hands is supported. The ontological knowledge base of the system includes, among others, the knowledge gained during the design of ARMAR-6 in the SecondHands project and will be continuously expanded.

RobotUnit for real-time robot control KIT LINK

The RobotUnit offers real-time control for robotic systems. It features a two-layered controller architecture and supports centralized, synchronous control. The RobotUnit was mainly developed for the control of the humanoid robot ARMAR-6, but is general in concept. Therefore, it is already used to control other non-humanoid robotic systems. Core design principles are low latency, robustness, flexibility and strict separation of the real-time control from non-real-time parts, such as network communication.
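
The separation principle can be illustrated with a small sketch, not taken from the RobotUnit's code: a (simulated) high-rate control loop never blocks on I/O and hands state snapshots to a non-real-time publisher thread through a bounded, non-blocking queue.

```python
import queue
import threading
import time

snapshots = queue.Queue(maxsize=10)   # bounded hand-off buffer

def control_loop():                   # stand-in for the real-time layer
    for tick in range(1000):          # ~1 kHz control cycle
        state = {"tick": tick}        # read sensors, run controllers, ...
        try:
            snapshots.put_nowait(state)  # never block the control cycle
        except queue.Full:
            pass                      # drop a snapshot rather than stall
        time.sleep(0.001)

def publisher_loop():                 # non-real-time layer
    while True:
        state = snapshots.get()       # may block freely here
        print("publish", state["tick"])  # network communication goes here

threading.Thread(target=publisher_loop, daemon=True).start()
control_loop()
```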

ArmarX Setup Tool KIT LINK

The ArmarX setup script allows for diverse configurations of ArmarX, ranging from a minimal version to a configuration that includes all optional dependencies.

Real-Time control drivers KIT LINK

KIT decided to use EtherCAT as the centralized, real-time bus for the robot ARMAR-6. In addition to the EtherCAT support in the sensor-actuator units, the central control software must therefore also support EtherCAT.

Controllers for Robot Human Interaction KIT LINK

KIT has developed several controllers that enable the robot to collaborate with humans in a reactive and safe way. To this end, these controllers operate all robot joints in torque mode, so interaction forces with the human or the environment can be detected in real time and the controller can adapt the motion of the robot's arm appropriately.
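
One common way to realize such detection, sketched here under the assumption of a model-based torque residual (thresholds and signals are illustrative, not KIT's implementation):

```python
import numpy as np

def external_torque(tau_measured, tau_model):
    """Per-joint interaction torque estimate: measured torque minus the
    torque predicted by the robot's dynamic model (gravity, inertia, ...)."""
    return np.asarray(tau_measured) - np.asarray(tau_model)

def contact_detected(tau_ext, threshold=2.0):
    """Flag physical interaction when any joint residual exceeds a
    threshold [Nm]; in practice the thresholds are tuned per joint."""
    return bool(np.any(np.abs(tau_ext) > threshold))

tau_ext = external_torque(tau_measured=[1.0, 12.5, -0.3],
                          tau_model=[0.8, 9.9, -0.2])
print(contact_detected(tau_ext))      # True: joint 2 has a 2.6 Nm residual
```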

KIT Object Database KIT LINK

KIT has created 3D model scans of the SecondHands reference object and tool set which was provided by Ocado. The objects and tools have been scanned at the KIT Object Modelling Center.

Grasping Pipeline for Known and Unknown Objects KIT

The grasping pipeline for known and unknown objects, developed in SecondHands, combines several components of ArmarX, all of which are open source. The pipeline comprises the following software components (sketched after the list):

Point cloud filter

Unknown object segmentation

Grasp candidate generation

Grasp candidate storage

Grasp execution
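
A schematic of how these five stages could hand data to one another; every function below is a placeholder, not an actual ArmarX component interface.

```python
import random

def filter_point_cloud(cloud):
    # 1. Point cloud filter: crop to the workspace, drop outliers.
    return [p for p in cloud if 0.0 < p[2] < 1.5]

def segment_unknown_objects(cloud):
    # 2. Unknown object segmentation: dummy single-cluster result here.
    return [cloud]

def generate_grasp_candidates(segment):
    # 3. Grasp candidate generation: score hypothetical approach poses.
    return [{"pose": p, "score": random.random()} for p in segment]

candidate_store = []                  # 4. Grasp candidate storage.

def execute_grasp(candidate):
    # 5. Grasp execution: hand the best candidate to the motion layer.
    print("executing grasp at", candidate["pose"])

cloud = [(0.4, 0.1, 0.8), (0.5, 0.0, 0.9), (0.2, 0.3, 2.0)]
for segment in segment_unknown_objects(filter_point_cloud(cloud)):
    candidate_store.extend(generate_grasp_candidates(segment))
execute_grasp(max(candidate_store, key=lambda c: c["score"]))
```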

Grasping of Small Objects with Underactuated Hands KIT

KIT developed a system for designing and recording grasping actions that enable the robot to pick up small objects from a flat surface. The grasping strategy builds on feed-forward Cartesian position and velocity control. All parts necessary for this grasp execution are available online (sketched after the list):

Cartesian waypoint controller for position/velocity control

Recorded grasping strategies for side and top grasps

Grasp execution
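
To illustrate the feed-forward idea, the sketch below turns recorded Cartesian waypoints into constant-speed position/velocity setpoints; the function and its interface are hypothetical, not the ArmarX controller's API.

```python
import numpy as np

def waypoint_setpoints(waypoints, speed=0.05, dt=0.01):
    """Yield (position, velocity) feed-forward setpoints that move the
    TCP through the waypoints at constant Cartesian speed."""
    for start, goal in zip(waypoints[:-1], waypoints[1:]):
        start, goal = np.asarray(start, float), np.asarray(goal, float)
        direction = goal - start
        dist = np.linalg.norm(direction)
        direction /= dist
        for i in range(int(dist / (speed * dt))):
            yield start + direction * speed * dt * i, direction * speed

# A recorded top-grasp approach: pre-grasp above the object, then descend.
path = [(0.4, 0.0, 0.30), (0.4, 0.0, 0.12)]
for pos, vel in waypoint_setpoints(path):
    pass  # here: send pos/vel to the robot's Cartesian controller
print(pos, vel)                       # last setpoint just above the object
```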

Janus Recognition Toolkit KIT-ISL LINK

The Janus Recognition Toolkit (JRTk) is a general-purpose speech recognition toolkit useful for both research and application development and is part of the JANUS speech-to-speech translation system. The JRTk provides a flexible Tcl/Tk script-based environment which enables researchers to build state-of-the-art speech recognizers and allows them to develop, implement, and evaluate new methods.

Sequence-to-sequence ASR KIT-ISL LINK

The open-source Python Neural Network (pynn) framework provides a set of utility modules to build and train complete end-to-end speech recognition models.

Bimanual Human Action Dataset KIT LINK

This is an RGB-D dataset of bimanual human actions which features 540 recordings of 6 subjects with a total playtime of approximately 2 h 18 min. The subjects perform a variety of bimanual manipulation tasks, such as preparing cereals or sawing wood, in kitchen and workshop contexts using several objects. The purpose of this dataset is to support learning from human observation, especially of bimanual human actions. The subjects were encouraged to perform the tasks bimanually, as they would naturally do.

Dynamical Systems based Obstacle Avoidance EPFL LINK

The implementation is written in C++ and provides an API to create agents and obstacles in the environment. It allows setting the velocities and poses (position and orientation) of both agents and obstacles to handle dynamic environments. Given a desired velocity for an agent, the algorithm returns a modulated velocity that ensures the avoidance of the obstacles on the agent's path. Obstacles are currently represented as ellipses or aggregations of multiple ellipses. Tools are available to automatically generate these aggregations.
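
The core of such a modulation can be sketched for a single 2-D elliptic obstacle: the desired velocity is decomposed into components normal and tangential to the obstacle's level set and rescaled so that the flow slides around the obstacle. A simplified sketch, not the library's actual API.

```python
import numpy as np

def modulate(x, v, center, axes):
    """Modulated velocity around one elliptic obstacle (valid outside the
    obstacle, i.e. gamma > 1); axes holds the ellipse semi-axis lengths."""
    d = (x - center) / axes
    gamma = d @ d                     # level set, equals 1 on the surface
    normal = (2 * d / axes) / np.linalg.norm(2 * d / axes)
    tangent = np.array([-normal[1], normal[0]])
    E = np.column_stack([normal, tangent])
    D = np.diag([1 - 1 / gamma,       # damp the component into the obstacle
                 1 + 1 / gamma])      # stretch the tangential component
    return E @ D @ np.linalg.inv(E) @ v

x = np.array([-1.5, 0.1])             # agent left of a unit-circle obstacle
v = np.array([1.0, 0.0])              # nominal DS velocity to the right
print(modulate(x, v, center=np.zeros(2), axes=np.array([1.0, 1.0])))
# -> roughly [0.56, 0.06]: deflected up and around the obstacle
```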

Aligner: A pipeline for annotating object poses and labels for RGBD data UCL LINK

Aligner is a tool to annotate the pose of known 3D objects in RGBD sequences. This information is useful to create datasets for evaluation or training purposes in domains such as object pose estimation.

Tensorflow Mask-RCNN Uniroma1 LINK

UniRoma1 developed a Tensorflow version of the Mask-RCNN architecture. This repository contains an implementation of Mask-RCNN based on the Matterport implementation. The network is fully developed in Tensorflow 1.14 using pure Tensorflow API functions, without relying on any high-level API (e.g., Keras).

SecondHands Tools Dataset Uniroma1 LINK

For training the Mask-RCNN network, a labelled dataset of maintenance tools was created using images collected at KIT and labelled by Ocado. The dataset provides more than 1000 fully labelled images and allows the Mask-RCNN model to achieve an accuracy score of ~95%, which results in a reliable and robust recognition model.