Franka Emika Panda in Gazebo with ROS and Docker

In my last post I wrote about creating Gazebo packages from the existing Panda models and mentioned that I'm also working on a Docker image that includes ROS. Well, here it is 🙂 You can check out the GitHub repository (be sure to leave a star if you like it), and – assuming you have a working Docker host – it should be very straightforward to set up.

Since I spent way too much time fixing dependencies and writing configuration files, I would like to share my experience (and of course the code :P) so that the solutions to the problems I encountered along the way are in one place.

High Level Overview

Before diving into the setup of the individual parts, I want to quickly talk about how the various components interact. This overview will make it easier to understand what each system is trying to accomplish, and what it expects.

A high level overview of how Panda is controlled in Gazebo. The simulator replaces the real robot at the level above the controllers (i.e. they use different controllers).

Contrary to my initial guess – that I could simply extend the Gazebo package I created earlier – I found it easier to start this image completely from scratch. The logic here is that the robot will be spawned into Gazebo from ROS, so it makes sense to properly integrate things into the ROS architecture. This means using a .urdf to describe the robot model, instead of the .sdf files I used in the previous post. It also means that I settled for one model – the robot with the gripper – although support for both versions could be added.
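
For reference, spawning the robot from ROS boils down to loading the generated .urdf onto the parameter server and asking Gazebo to spawn it. A minimal sketch of the relevant launch file lines (the package and file names follow the repository layout, but treat them as an assumption):

<!-- Sketch: load the URDF and spawn it into a running Gazebo world. -->
<param name="robot_description" textfile="$(find panda_description)/urdf/panda.urdf"/>
<node name="spawn_panda" pkg="gazebo_ros" type="spawn_model" output="screen"
      args="-param robot_description -urdf -model panda"/>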

The main pipeline goes from MoveIt via ROS Control to Gazebo. MoveIt receives a goal pose and plans a trajectory to it. It then sends position commands to ROS Control, which sets targets for a series of PID controllers. These controllers are what steer the simulated joints in Gazebo. Gazebo then sends the joint encoder readings back into the ROS framework, where they are picked up by the PID controllers to stabilize the robot, by MoveIt to decide the next step of the trajectory, and by RViz to visualize the robot. MoveIt and Gazebo both use the .urdf I generated in the last post, and I got the controller parameters from Erdal Pekel's blog post.

ROS Control and Stabilizing the Robot

In retrospect, getting the controllers to work is quite easy. While configuring them, however, it was quite annoying, because they didn't seem to load properly. The issue was dependencies 🙂 Before I go into the packages needed to connect ROS Control to Gazebo, I want to show you the config file that I used to set the PID gains. I got this file from Erdal Pekel's blog; he spent quite a bit of time tuning those numbers. Amazing job!

# panda: # useful if you use a namespace for the robot
# Publish joint states
joint_state_controller:
  type: joint_state_controller/JointStateController
  publish_rate: 50

panda_arm_controller:
  type: effort_controllers/JointTrajectoryController
  joints:
    - panda_joint1
    - panda_joint2
    - panda_joint3
    - panda_joint4
    - panda_joint5
    - panda_joint6
    - panda_joint7
  gains:
    panda_joint1: { p: 12000, d: 50, i: 0.0, i_clamp: 10000 }
    panda_joint2: { p: 30000, d: 100, i: 0.02, i_clamp: 10000 }
    panda_joint3: { p: 18000, d: 50, i: 0.01, i_clamp: 1 }
    panda_joint4: { p: 18000, d: 70, i: 0.01, i_clamp: 10000 }
    panda_joint5: { p: 12000, d: 70, i: 0.01, i_clamp: 1 }
    panda_joint6: { p: 7000, d: 50, i: 0.01, i_clamp: 1 }
    panda_joint7: { p: 2000, d: 20, i: 0.0, i_clamp: 1 }
  constraints:
    goal_time: 2.0
  state_publish_rate: 25

panda_hand_controller:
  type: effort_controllers/JointTrajectoryController
  joints:
    - panda_finger_joint1
    - panda_finger_joint2
  gains:
    panda_finger_joint1: { p: 5, d: 3.0, i: 0, i_clamp: 1 }
    panda_finger_joint2: { p: 5, d: 1.0, i: 0, i_clamp: 1 }
  state_publish_rate: 25
PID parameters for the low-level joint controllers

To get these controllers to actually control the simulation, we can start them with a single line in the launch file and point them to `panda_controller.yaml`. To communicate with Gazebo, however, they need the `gazebo_ros_control` package, which doesn't seem to ship with `gazebo_ros` nor with `ros_control`. Additionally, the default ROS container doesn't ship with any controllers, so I had to install the `effort_controllers` package as well as the `joint-trajectory-controller` package, which provides the controllers referenced in the file above. I didn't test whether the other effort controllers are necessary, so some of what I install might not actually be needed.
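
For illustration, here is a minimal sketch of those launch file lines, assuming the YAML above is stored as `config/panda_controller.yaml` inside the `panda_gazebo` package (the exact path in the repository may differ):

<!-- Sketch: load the PID gains and spawn the controllers via ros_control. -->
<rosparam file="$(find panda_gazebo)/config/panda_controller.yaml" command="load"/>
<node name="controller_spawner" pkg="controller_manager" type="spawner" respawn="false"
      output="screen" args="joint_state_controller panda_arm_controller panda_hand_controller"/>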

If the low-level controllers are working properly, the arm in the simulation should freeze in a position that is different from the one it would settle into under gravity alone. Usually, the simulation starts and the arm falls; then – a short while later – the controllers finish loading and stabilize the arm in some (slightly awkward) position.

Creating the MoveIt configuration

The MoveIt package in the repository (called `panda_moveit`) is the result of running the MoveIt Setup Assistant on the generated .urdf file. Initially, I tried using the official `panda_moveit_config` package, but failed to get things to work for two key reasons: (1) the official MoveIt config is meant for the physical robot, which uses controllers that Gazebo doesn't support, and this controller mismatch alone breaks the pipeline; (2) I (unknowingly) chose a different naming convention in the .urdf than the official one. The latter could be fixed by renaming things, but by that point I had already discovered reason (1) and figured I could keep my naming, since I had to create my own config anyway.

To create this package, I followed the MoveIt Setup Assistant Tutorial, trying to stick as close to the existing `panda_moveit_config` as possible. For example, I included the ‘ready’ pose that panda uses to give the robot a default state 🙂
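
As a rough sketch, such a named pose ends up as a `group_state` entry in the SRDF that the Setup Assistant generates. The joint values below are the commonly used Panda 'ready' configuration, and the group name is an assumption based on my naming:

<group_state name="ready" group="panda_arm">
  <joint name="panda_joint1" value="0"/>
  <joint name="panda_joint2" value="-0.785"/>
  <joint name="panda_joint3" value="0"/>
  <joint name="panda_joint4" value="-2.356"/>
  <joint name="panda_joint5" value="0"/>
  <joint name="panda_joint6" value="1.571"/>
  <joint name="panda_joint7" value="0.785"/>
</group_state>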

One file that I had to add manually to the generated configuration to get it to work was a config file for the MoveIt controllers. It tells MoveIt which controller groups to use to actuate which joints, and what messages to send to the ROS Control nodes. I then had to replace the contents of `panda_moveit_controller_manager.launch.xml` so that it loads this newly created configuration (a sketch follows the configuration below).

controller_list:
  - name: panda_arm_controller
    action_ns: follow_joint_trajectory
    type: FollowJointTrajectory
    joints:
      - panda_joint1
      - panda_joint2
      - panda_joint3
      - panda_joint4
      - panda_joint5
      - panda_joint6
      - panda_joint7
  - name: panda_hand_controller
    action_ns: follow_joint_trajectory
    type: FollowJointTrajectory
    default: true
    parallel: true
    joints:
      - panda_finger_joint1
      - panda_finger_joint2
The controller configuration for MoveIt
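
For completeness, here is a sketch of what the replacement `panda_moveit_controller_manager.launch.xml` might look like, assuming the configuration above is stored as `config/controllers.yaml` inside the `panda_moveit` package (file name and path are assumptions):

<launch>
  <!-- Sketch: tell MoveIt to drive the ros_control trajectory controllers defined above. -->
  <rosparam file="$(find panda_moveit)/config/controllers.yaml"/>
  <param name="moveit_controller_manager"
         value="moveit_simple_controller_manager/MoveItSimpleControllerManager"/>
</launch>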

To start MoveIt and the planning context together with Gazebo, I added an include for the `move_group.launch` file to the launch file in the `panda_gazebo` package. I didn't touch the other launch files that the MoveIt Setup Assistant generated; hence, they are likely in a broken state.
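
The include itself is a one-liner in the `panda_gazebo` launch file; roughly:

<include file="$(find panda_moveit)/launch/move_group.launch"/>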

Controlling the Simulator from RViz

Finally, as icing on the cake, controlling the Gazebo simulation through RViz provides a good end-to-end test of whether everything works as it should. I added the RViz node to the launch file in the `panda_gazebo` package. After launching the file and waiting for everything to load, I created a layout in RViz, configured it to my liking, and stored the layout as an .rviz config file. I then added this file as an argument to the launch options for the RViz node.
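
In the launch file, the RViz node with a stored layout looks roughly like this (the name of the .rviz file is an assumption):

<node name="rviz" pkg="rviz" type="rviz" output="screen"
      args="-d $(find panda_gazebo)/launch/panda.rviz"/>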

A strange situation I encountered here was that the fixed frame RViz uses as its reference (under the global options) was set to `map` instead of `world`. I'm not 100% sure why the default is `map`, but as long as it is, the interactive markers for Panda's end effector won't show.

With all this in place, I can now use the interactive markers to drag the robot into a desired goal position and click 'plan and execute'.

Putting It All Together in Docker

After all the work of assembling the pieces into a working pipeline, I thought I could take some additional time to alleviate some of the pain others may have when setting this up – especially the pain that comes from missing dependencies. As a result, I decided to create a Docker image that is portable and sets all of this up for you.

The Dockerfile itself is very short:

FROM osrf/ros:kinetic-desktop-full-xenial

RUN apt-get update \
    && apt-get install -y \
        ros-kinetic-gazebo-ros \
        ros-kinetic-gazebo-ros-pkgs \
        ros-kinetic-gazebo-ros-control \
        ros-kinetic-joint-state-controller \
        ros-kinetic-effort-controllers \
        ros-kinetic-position-controllers \
        ros-kinetic-joint-trajectory-controller \
        ros-kinetic-ros-control \
        ros-kinetic-moveit \
    && rm -rf /var/lib/apt/lists/*

COPY assets/catkin_ws/src/panda_gazebo/models /root/.gazebo/models
COPY assets/catkin_ws/ /catkin_ws
COPY assets/ros_entrypoint.sh /ros_entrypoint.sh

CMD roslaunch panda_gazebo panda.launch
Dockerfile used to generate the image

It just installs the necessary ROS packages (it is totally my own stupidity, but I can't stress enough how much time I wasted figuring out which packages I need), adds the workspace created above, and then modifies the image's entrypoint so that the catkin workspace is overlaid on the default ROS workspace.

One interesting thing happens in the first COPY line, where I work around a problem that is peculiar to Gazebo in Docker. Since a container starts 'fresh', Gazebo doesn't have any models downloaded, which means it will download the two models it needs to create `empty.world` on first start. This costs time and, unfortunately, interacts with ROS Control in such a way that the controllers crash. To avoid this, I pre-populate Gazebo's model cache with the models it needs.

The other file I use for building the image is the build script:

#!/bin/bash

# generate the .urdf
docker run --rm -v ${PWD}/assets/panda_xacro:/xacro osrf/ros:kinetic-desktop-full rosrun xacro xacro --inorder /xacro/panda_arm_hand.urdf.xacro > ${PWD}/assets/catkin_ws/src/panda_description/urdf/panda.urdf

# build the ROS packages
docker run --rm -it -v ${PWD}/assets/catkin_ws:/catkin_ws osrf/ros:kinetic-desktop-full-xenial bash -c "apt update && apt install -y python-catkin-tools && cd /catkin_ws && catkin build"

# build the image
docker build -t panda_gazebo_sim:latest ${PWD}
Build script for the docker image

The script is again very short. The first docker run command generates the .urdf file from the .xacro file (a process I used in my post on how to get Panda into Gazebo). The second docker run command is interesting, because it spins up a clean ROS image to build the catkin workspace we created, so that the generated files can then be copied into the final image. Arguably this could be solved more elegantly via multi-stage builds; however, I only learned about that feature after implementing things this way. I will therefore leave it as is until a future refactor, and use multi-stage builds in future images.

That's it! These posts usually take my entire weekend to write. If you think they are good, be sure to like the post and leave a star on the GitHub repository. This way I know that writing these is a worthy use of my time.

Happy coding!


H5-Index of HRI Venues

H5-Index of various venues typically targeted by researchers in HRI

I got bored during a late Saturday evening, so I decided to query Scopus for the last five years of publications at major venues in HRI. After aggregating all this data, I wanted to look at the impact factor and related metrics of these venues. However, I realized only afterwards that I can't calculate the impact factor from the data I gathered. So I did the next best thing: I computed the H5 indices for all the venues and rank-ordered them by this score. Maybe this graph is useful to some 🙂

Thank you for reading, and (although there is no programming here) happy coding!

Franka Emika Panda in Gazebo (without ROS using Docker)

Screenshot of the panda arm with and without gripper attachment.

The background for this post is that I am currently visiting ISIR, and I've started a new project working with the Panda robot by Franka Emika. Unfortunately, we only have a single physical robot, and we are two PhD students using it for different experiments. To minimize downtime, I am therefore setting up a simulator for it, and, to facilitate reusability, I decided to go with a Docker based setup.

The image is available on GitHub and called panda_sim. You can clone it, build it, and see how it works for you. While I tried to keep ROS out of this, the current model still loads the gazebo_ros plugin for control. If I get around to it, I will remove this dependency in a future version to make it completely ROS independent.

I am currently working on a ROS based image that runs a Gazebo server and a ROS controller for the robot with gripper. This way, you can easily spin up a simulator for your robot behavior. You can even do that in parallel for some epic deep reinforcement learning action.

Preparing the Robot Model

To use Panda in the simulator, we need to convert the existing model into .sdf format. While Gazebo can work with .urdf files, this requires a parallel ROS installation, which we try to avoid. Internally, the .urdf is converted to .sdf anyway, so we might as well supply .sdf and save the dependency.

Screenshot of the location of the .xacro files on GitHub.

First, we need to get the model from franka_ros; it is located in the `franka_description` folder. It is a .urdf (xacro) model, so in order to use it in Gazebo, we need to add some additional information, such as joint inertias and the fact that the arm should be attached rigidly to the world frame. Erdal Pekel also has a tutorial on how to bring Panda into Gazebo (using ROS); I used his numbers and suggestions to modify the files.
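
To give an idea of what these additions look like, here is a minimal sketch of the rigid attachment to the world frame (`panda_link0` is the base link of the arm; the inertia values from Erdal Pekel's tutorial are omitted here):

<!-- Sketch: anchor the arm rigidly to the world so it does not fall over in Gazebo. -->
<link name="world"/>
<joint name="world_to_link0" type="fixed">
  <parent link="world"/>
  <child link="panda_link0"/>
</joint>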

Next, since we are ripping the model out of an existing ROS package, we also need to update the paths in the .urdf. In particular, I removed `$(find franka_description)/robots/` in both `panda_arm.urdf.xacro` and `panda_arm_hand.urdf.xacro`, and changed the `robot_name` to panda. I also changed the `description_pkg` value to `panda_arm_hand` or `panda_arm` in `panda_arm.xacro`, depending on the model; this name needs to match the name of the model folder (see below). In `hand.xacro` the value for `description_pkg` is hardcoded, so I introduced the `description_pkg` variable there as well and set it accordingly.

Converting .xacro to .sdf

After making all the necessary modifications, we need to convert the .xacro to a .urdf. Docker can again be incredibly helpful here, as we can spin up a throwaway ROS instance, do the conversion, and save the result on the host:

docker run -v <path/to/modified>/franka_description:/xacro osrf/ros:kinetic-desktop-full rosrun xacro xacro --inorder /xacro/robots/panda_arm_hand.urdf.xacro > panda_arm_hand.urdf

Next, we want to convert the model to a .sdf. This is again solved in an easy one liner using docker:

docker run -v <path/to/generated/urdf>:/urdf gazebo:latest gz sdf -p /urdf/panda.urdf > model.sdf

Assembling the Model

Finally, all that is left is to assemble the pieces into a full Gazebo model of Panda. For this we create a new folder called `panda_arm_hand` (or `panda_arm` for the model without the gripper) and copy the meshes folder and the model.sdf into it. We then create a `model.config` that describes the model to Gazebo as follows:

<?xml version="1.0"?>
<model>
  <name>Panda Robot</name>
  <version>1.0</version>
  <sdf version="1.6">model.sdf</sdf>
  <author>
    <name>YOUR NAME</name>
    <email>YOUR EMAIL</email>
  </author>
  <description>
    A sdf model of the Franka Emika Panda robot adapted from an existing urdf model.
    This model is intended to be used in Gazebo.
  </description>
</model>
The model.config file

Here is the folder structure:

panda_arm_hand
|__meshes
|  |__collision
|  |  |__files from franka_description
|  |__visual
|     |__files from franka_description
|__model.config
|__model.sdf 

Copying this folder into ~/.gazebo/models will make it available to Gazebo. To pack it into a docker image, I wrote a small script in `build.sh` that will construct the above model and then build a docker image with the models already installed. The script will create a tmp folder where it stores the fully constructed models, so you can also run the script and get just the models, if that’s what you need.

Thank you for reading, and happy coding! If you liked this article, and would like to hear more ROS, Gazebo, or Panda related stuff, consider giving this post a like, or leaving me a comment 🙂

How to play custom animations during speech on a NAO (or Pepper)

I’ve been asked multiple times now how to sync animations and speech on a NAO – or Pepper for that matter; especially from Python.

The answer is that there are two options:

  1. The first option is to create the animation in Choregraphe and then export it to a Python script. You then create your usual handle to the text-to-speech module, but instead of calling the say method directly, e.g., `tts.say("Hello")`, you call it through the module's `post` method, e.g., `tts.post.say("Hello")`. This method exists for every function in the API and essentially just makes the call non-blocking. You can then start your animation while the speech is running.
  2. The second option is to create a custom animation in Choregraphe, upload it to the robot, and call it through AnimatedSay or QiChat. Besides being (I think) the cleaner solution, it gives you more fine-grained control over when in the sentence the animation starts and when it should stop. This is what I will describe in more detail below.

Step 1: Create the Animation

Screenshot of a timeline box in Choregraphe.

This step is fairly straightforward, and the same for both solutions. You use Choregraphe to create a new Timeline box in which you create the animation you would like. You then connect the timeline box to the input and output of the behavior and make sure it works as expected when you press the green play button.

Step 2: Configure the Project and Upload it to the Robot

In this step, you configure the new animation to be deployed as an app on the robot.

Screenshot of the project properties in Choregraphe.

Go to the properties of the project.

Screenshot of the configured project properties.

Then make sure to select a minimum NAOqi version (2.1 for NAO, 2.5 for Pepper), the supported models (usually any model of either NAO or Pepper, respectively), and set the ID of the application. We will use this ID when calling the animations, so choose something snappy yet memorable. Finally, it is always nice to add a small description.

Screenshot of the reorganized project structure.

Next, we need to reorganize the app a bit. Create a new folder and name it after your animation; again, we will use this name to call our behavior, so make sure it is descriptive. Then move the behavior that contains your animation – by default called behavior1.xar – into the folder you just created, and rename it to behavior.xar.

Screenshot of the upload buttons in Choregraphe.

Finally, connect to your robot and use the first button in the bottom right corner of Choregraphe to upload the app you just created.

Step 3: Use ALAnimatedSpeech from Python

Note: If you don’t want NAO to use the random gestures it typically uses when speaking in animated speech, consider setting the BodyLanguageMode to disabled. You can still play animations, but it won’t automatically start any.

For existing animations – that come with the robot by default – you call the animation like this

"Hello! ^start(animations/Stand/Gestures/Hey_1) Nice to meet you!"

Now, animations is nothing but an app that is installed on the robot. You can even see it listed in the bottom right corner of Choregraphe. Inside the app, there are folders for the different stable poses of NAO, like Stand or Sit, which are in turn divided into types of animations, e.g., Gestures, which you can see above. Inside these folders there is yet another folder named after the animation (Hey_1), inside of which is a behavior file called behavior.xar.

We have essentially recreated this structure in our own app and installed it right next to the animations app. So, we can call our own animations using the exact same logic:

"Hello! ^start(pacakge_name/animation_name) Nice to meet you!"

It also works with all the other features of the ALAnimatedSpeech module, so ^stop, ^wait, and ^run work just as well. You can also assign tags to your animations and then have the module choose random animations from a tag group.

Finally, please be aware that the robot will return to its last specified pose after finishing an animation. Hence, if you want the robot to wait in a different position after the animation has finished, you will have to do that by creating a custom posture. I have some comments on that here: The hidden potential of NAO and Pepper – Custom Robot Postures in naoqi v2.4

I hope this will be useful to some of you. Please feel free to like, share, or leave a comment below.

Happy coding!