
Hannover Messe 2026 - Conversations from the Floor
Five days at Hannover Messe 2026 with a microphone and a notebook. Recorded conversations with engineers at Agile Robots, Franka Robotics, Aeon Robotics, and Sarcomere Dynamics about what does not work yet, what surprised them, and where the real unsolved problems in robotics actually live.
- Agile Robots - What a Humanoid Actually Looks Like Up Close
- The Generalization Problem - Where University Assumptions Break Down
- Walking and Manipulation - Not Together Yet
- Perception and Sensor Fusion - The Calibration Problem
- The Deployment Problems Nobody Talks About
- Failure Detection and the Architecture Underneath
- The Grasp Policy and What the Assembly Demo Actually Uses
- AgileCore - Higher Level Intelligent Control
- Franka Robotics - How You Actually Teach a Robot
- What Data Collection Actually Looks Like
- The Real Bottlenecks
- Who Collects the Data
- Aeon Robotics - The Hand Is the Hard Part
- Force Sensing Without Tactile Sensors
- What Dexterous Manipulation Actually Requires
- The Demo-to-Deployment Gap
- The ROS Question
- Sarcomere Dynamics - Building the General Purpose Hand
- What Sarcomere Actually Builds
- The Software and Control Stack
- What This Company Represents
- What Conversations at HM2026 Actually Told Me
Most people who go to a trade fair come back with tote bags. I went to Hannover Messe 2026 with a microphone.
The plan was to find the engineers, not the product managers, not the booth guides, the people who actually built the systems on display. Past experience at expos had taught me these conversations are rare. Engineers are usually somewhere behind the demo, not in front of it.
What I found at HM2026 was different.
My first conversation happened before I even had the microphone out, at the Franka Robotics teleoperation demo where you could actually control a dual-arm setup yourself. I tried it, started asking questions, and within two minutes we were talking about data collection pipelines, operator fatigue, and what it actually takes to train a policy that generalizes beyond a controlled lab environment.
That conversation set the tone for the rest of the week.
What followed were recorded conversations and several more unrecorded ones that gave me a clearer picture of the gap between university robotics and industrial deployment than anything I had read or studied before. This is not a recap of Hannover Messe 2026. This is an account of what the engineers said when asked the questions that do not appear in press releases, about what does not work yet, what surprised them, and where the real unsolved problems actually live.
Agile Robots - What a Humanoid Actually Looks Like Up Close
The first thing you see when you walk into the Agile Robots booth is the catwalk. Agile ONE and its smaller sibling Agile ONE S are walking and occasionally dancing on an elevated runway built specifically for the fair. Seeing a humanoid robot walk in person for the first time is genuinely different from watching a video of it. The weight of the machine, the way it corrects balance mid-step, the sound of the actuators: none of that comes through on a screen.
But the catwalk was not what kept me there.
Next to it was an assembly station where two Agile ONE robots were working together to assemble a miniature toy robot. Two humanoids, coordinating on a shared task, manipulating small components with five-finger hands. That was the thing worth watching closely.
I found Christoph, who leads a team called Interactive Humanoid Robotics at Agile. His team covers perception, motion planning, control, SLAM, navigation, and learned behaviors: the entire software stack for humanoid applications.
I asked him what Agile ONE can do today that was impossible two years ago. His answer was more measured than I expected.
"Agile ONE is still very young. It is an early prototype and we will for sure iterate on the hardware. This is not a product today. It is a product for the future."
The target is to have the first robots at external customers by September. The hardware that makes it distinctive, he explained, is the five-finger hands: 21 degrees of freedom, integrated tactile sensing, and a direct lineage from the founders' earlier research on dexterous hands at the DLR institute in Oberpfaffenhofen. The compute stack runs on two PCs in the chest: an NVIDIA edge device for model inference and a second PC dedicated entirely to real-time control.
One thing that comes through clearly at Agile, both in this conversation and from my own time visiting their headquarters, is how output-driven the culture is. The answers I got were direct, specific, and comfortable with uncertainty. That is rarer than it sounds at a trade fair.
The Generalization Problem - Where University Assumptions Break Down
I asked about perception-based manipulation for unseen objects. This is where the conversation got interesting.
In a university lab, manipulation research typically uses standard benchmark objects, the kind that appear in ImageNet, in common vision datasets, in every robotics paper. The assumption is that a model trained on enough diverse data will generalize to new objects.
Industrial parts do not work like that.
"Most of our use cases are industrial and most of the industrial parts used to assemble a robot are so specific that very few models have seen such parts before and they don't really have a name for it. Most of them are somehow shiny metal parts that are somehow cylindrical and then have some gaps, and even we find it hard to name some of the parts."
Standard VLM generalization that works for everyday objects does not transfer to the unnamed, reflective, geometrically similar components that industrial assembly actually involves. The options are prompt engineering, task-specific fine-tuning, or custom vision methods built for specific parts. None of these is the clean generalization story that most research papers suggest is imminent.
Walking and Manipulation - Not Together Yet
I asked whether any demo had combined locomotion and manipulation. The honest answer was no, not publicly.
In the lab, yes. At the fair, no.
The control architecture explains why. All body parts run from the same real-time PC in the chest, in the same process. But the walking controller takes priority. When the robot is walking or balancing it may need the arms for counterbalance, and in those situations the locomotion controller takes over the arms entirely. The current operational mode is either walking, or stopped in a balancing stance with the upper body free for manipulation tasks. Not both simultaneously in a reliable way.
This is not a criticism. It is a description of where a first-generation humanoid prototype actually is. This is the honest state of bipedal manipulation in 2026 from the company that arguably has the most advanced humanoid in Europe right now.
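To make that architecture concrete, here is a minimal sketch of what that kind of mode arbitration could look like: a single decision point that grants the arms either to the locomotion controller or to the manipulation stack. The names and structure are my own illustration, not Agile's code.

```python
from enum import Enum, auto

class Mode(Enum):
    WALKING = auto()    # locomotion controller owns the whole body
    STANDING = auto()   # stable balancing stance, upper body free

def arm_targets(mode, locomotion, manipulation, state):
    """Grant the arms to exactly one controller per real-time cycle.

    Hypothetical sketch: while walking, or whenever balance recovery needs the
    arms for counterbalance, the locomotion controller takes them over entirely;
    only in a stable stance does the manipulation stack get them back.
    """
    if mode is Mode.WALKING or locomotion.needs_counterbalance(state):
        return locomotion.arm_command(state)
    return manipulation.arm_command(state)
```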
Perception and Sensor Fusion - The Calibration Problem
I asked about how Agile ONE handles sensor fusion across its multiple cameras and tactile sensors. The answer revealed something that does not appear in robotics papers.
For most manipulation tasks, the team simply chooses one camera rather than fusing multiple inputs. The reason is calibration. Agile has a dedicated internal team called Agile Sense whose primary responsibility is camera calibration and extrinsic sensor alignment. Without precise calibration between sensors, fusing point clouds from multiple cameras produces artifacts rather than improvements.
"Unless you solve that nicely, ideally if you have a very good calibration you don't need to apply a lot of fusion."
This is something that gets abstracted away in university courses. You learn the math of sensor fusion, implement a Kalman filter, and move on. What Agile's setup reveals is that the real constraint is not the algorithm; it is the physical calibration between sensors, which drifts, degrades, and requires dedicated engineering attention to maintain. A humanoid with cameras on its head, chest, and hands is not one calibration problem. It is a continuously changing one as the robot moves and its joints flex.
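A rough way to see why calibration dominates: take a point one meter in front of one camera and transform it into a second camera's frame using an extrinsic that has drifted by half a degree. This is a generic numpy illustration with made-up numbers, not Agile's pipeline.

```python
import numpy as np

def rot_y(deg):
    """Rotation matrix about the y axis by `deg` degrees."""
    a = np.radians(deg)
    return np.array([[ np.cos(a), 0.0, np.sin(a)],
                     [ 0.0,       1.0, 0.0      ],
                     [-np.sin(a), 0.0, np.cos(a)]])

p_cam2 = np.array([0.0, 0.0, 1.0])           # point 1 m in front of camera 2
t = np.array([0.10, 0.0, 0.0])               # cameras 10 cm apart, nominally same orientation

p_true    = np.eye(3)  @ p_cam2 + t          # with the correct extrinsic rotation
p_drifted = rot_y(0.5) @ p_cam2 + t          # with 0.5 degrees of calibration drift

print(np.linalg.norm(p_true - p_drifted))    # ~0.009 m: a 9 mm ghost in the fused cloud
```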
The Deployment Problems Nobody Talks About
Battery life is not fully characterized yet. There is no docking station. Communication runs over Wi-Fi.
The deeper problem is safety certification.
"If you have an emergency where you have to e-stop it, it means the robot will fall. It is not inherently stable. As long as it needs active balancing control to maintain a stable stand, the emergency mode is really tricky."
Without a safe e-stop solution, human workers still need to be physically separated from these humanoids. A 70 kg machine falling unexpectedly is a serious hazard regardless of the AI layer on top. This is not a software problem; it is a fundamental constraint of legged locomotion that production deployment has not solved.
The question everyone is asking, when will humanoids work alongside humans in factories, is not answered by AI capability. It is answered by safety certification frameworks that do not yet exist for legged mobile robots in Europe.
Failure Detection and the Architecture Underneath
"Most of our tasks we still do in a cascaded way where not everything comes from one magic model. We have an actual state machine layer and only subtasks are solved end-to-end with some policy."
This matters for anyone coming from a university background where end-to-end learned policies are often framed as the goal. In practice a state machine with learned components for specific subtasks is more controllable and easier to deploy reliably. And failure recovery only works if failure cases were explicitly in the training data; a model trained only on successful demonstrations will not know how to recover when something goes wrong.
This mirrors something fundamental in ROS2 and MoveIt2 pipeline design. The most reliable systems I have built are not the ones with the most sophisticated single component; they are the ones where each stage has a clearly defined input, output, and failure condition. A state machine gives you that structure explicitly. An end-to-end model gives you performance on the training distribution and opacity everywhere else. The industry has not abandoned classical methods because they are outdated. It has kept them because they are auditable.
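As a sketch of the cascaded pattern Christoph described, a state machine with a learned component only for one subtask, something like the structure below captures the idea. The function names are illustrative, not Agile's code.

```python
def run_pick_and_place(perceive, grasp_policy, place, max_attempts=3):
    """Cascaded task logic: explicit stages, explicit failure conditions.

    Only the grasp itself is a learned policy; pose estimation and placement
    stay classical and auditable. Recovery behavior is only as good as the
    failure cases that were actually present in the training data.
    """
    for _ in range(max_attempts):
        pose = perceive()              # classical pose estimation of the target part
        if pose is None:
            continue                   # defined failure: nothing detected, try again
        grasp = grasp_policy(pose)     # learned subtask, trained on demonstrations
        if grasp.succeeded:
            return place(grasp)        # scripted placement on known coordinates
    raise RuntimeError("pick failed after retries; hand off to the error branch")
```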
The Grasp Policy and What the Assembly Demo Actually Uses
In conversations with other engineers at the booth I learned more about the assembly demonstration specifically. The grasp policy for the task is trained entirely on data collected by Agile's own team, not from a primitive library, though primitive-based approaches have been explored for some use cases. The demo is trained for a specific task with objects expected roughly in position, after which perception takes over for precise pose estimation. Everything outside the manipulation target is effectively ignored by the perception system during the pick.
This kind of task-specific engineering, where each phase of a manipulation sequence has its own calibrated parameters, is something that rarely appears in academic manipulation papers but is standard practice in industrial deployment.
The Diana series demo nearby made a different point. Two arms assembling a rotor on fixed coordinates with a force-torque sensor and a 2F gripper, no vision. Precise, repeatable, reliable. Sometimes the right engineering answer is not the most sophisticated one.
AgileCore - Higher Level Intelligent Control
AgileCore, Agile's software platform, is designed ultimately for factory-level optimization: not just a single robot station but an entire production environment. Christoph's team does not use it directly, as it is developed by a separate team at Agile, but the architecture he described is built around LLM- and VLM-based agents that make automation workflows easier to configure and deploy without requiring deep programming expertise from the end user. The agent structure uses one main conversational agent that calls specialized sub-agents or tools for specific tasks, rather than multiple top-level agents running in parallel, which Christoph described as quickly becoming confusing. The target is a system where a factory operator can interact with an entire automation environment through a single intelligent interface.
Franka Robotics - How You Actually Teach a Robot
Franka Robotics shares the Agile Robots booth. They are now owned by Agile and also Munich-based, and at HM2026 they were showing two things: a VLA model use case and a teleoperation demo where you could remotely control a dual-arm setup yourself.
I tried the teleoperation before I had the microphone out. That became my first conversation of the week.
Corinna has been at Franka for four and a half years, starting as a software engineer and now also leading a team on the planning and future development side. The system she was demonstrating is the FR3 Duo, two single-armed seven-axis robots combined into a dual-arm setup. The input device is the Franka Geluido, based on an open source project that Franka adapted to better mirror the kinematics of the robot arms. The control runs through FCI, the Franka Control Interface, a real-time loop that maps encoder values from the input device directly to the robot.
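The core of such a leader-follower setup is conceptually small. Below is a generic teleoperation loop that streams smoothed joint targets from the input device to the arm at a fixed rate; it illustrates the concept only, it is not Franka's FCI API, and the function names are placeholders.

```python
import time

def teleop_loop(read_leader_joints, send_follower_joints, rate_hz=1000.0, alpha=0.2):
    """Map input-device encoder values to robot joint targets in a fixed-rate loop.

    `alpha` is a simple exponential low-pass filter so operator jitter is
    softened before it reaches the arm. A real-time control interface would
    also enforce joint, velocity, and torque limits at every cycle.
    """
    dt = 1.0 / rate_hz
    target = list(read_leader_joints())
    while True:
        leader = read_leader_joints()
        target = [(1 - alpha) * t + alpha * q for t, q in zip(target, leader)]
        send_follower_joints(target)
        time.sleep(dt)
```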
What Data Collection Actually Looks Like
The teleoperation setup is not just a demo. It is a data collection tool for training manipulation policies.
I asked how they handle data quality. Every human demonstration has errors, hesitations, suboptimal motions. The answer surprised me.
"Actually we don't filter. Our aim is to collect data even if it is bad ones. We really want to have as much data as possible to have a good base for training AI models."
The pipeline records episodes and marks them as good or bad, but the goal is volume first. Bad demonstrations are kept deliberately, the reasoning being that a model trained only on perfect executions will not generalize to the imperfect conditions of real deployment.
For a basic pick and place task, picking a red cube and placing it in a box, training required around 200 to 300 episodes. And even then, variety matters. Different positions, different approach angles, enough variation in the data that the model develops something closer to generalization rather than memorization of a single trajectory.
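In data terms, that collection loop roughly amounts to this: every demonstration is stored with a quality flag rather than filtered out, and the start conditions are varied between episodes. The sketch below is my own illustration of that structure, not Franka's recording format.

```python
import random
from dataclasses import dataclass, field

@dataclass
class Episode:
    """One teleoperated demonstration, kept even when marked bad."""
    observations: list = field(default_factory=list)  # e.g. camera frames + joint states
    actions: list = field(default_factory=list)       # commanded joint targets
    good: bool = True                                  # operator's quality flag, not a filter

def sample_cube_start(x_range=(0.30, 0.60), y_range=(-0.20, 0.20)):
    """Vary the cube's start position so a few hundred episodes cover the workspace."""
    return random.uniform(*x_range), random.uniform(*y_range)

dataset: list[Episode] = []  # bad demonstrations go in too; volume first, labels second
```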
For more complex tasks in varied environments the number climbs into the thousands. At the fair itself, with different lighting and a different physical environment than the lab, Corinna's team was actively fine-tuning on-site. The model was not performing as expected in the new conditions so they collected additional data at the booth and retrained during the show.
That detail stayed with me. A company showing a manipulation demo at one of the world's largest industrial fairs was simultaneously debugging and retraining its model because the lighting was different. That is not a failure. It is an honest picture of where learned manipulation actually is right now. The gap between lab performance and deployment performance is not solved by shipping the product. It follows you there.
The Real Bottlenecks
I asked what the biggest bottleneck in data collection was: training time, labeling, or operator fatigue.
All three, but not equally.
Policy training time is the largest time cost in the pipeline. Labeling is largely automatic from their system and sometimes skipped or done poorly, which Corinna acknowledged directly.
But the answer that landed hardest was about operator fatigue. Performing the same manipulation task repeatedly for hours through a teleoperation device is exhausting. Corinna tried it herself for a few hours one day. She was too tired after.
This is something that does not appear in papers about imitation learning or learning from demonstration. The human in the loop is a resource that depletes. The ergonomics of data collection, meaning how long an operator can perform a task reliably before fatigue degrades the quality of the demonstrations, is a real engineering and logistical constraint that every company building these systems has to manage.
I have observed that many companies have started hiring people from non-engineering backgrounds just to teleoperate robots all day. For now, robots are creating more jobs than they may eventually take away.
Who Collects the Data
The teleoperation system is not purely internal. Customers receive it to collect task-specific data at their own sites. Fine-tuning at the customer site is part of the deployment model, not an edge case but an expected step in the process.
This means the line between product delivery and ongoing engineering support is blurred in a way that most robotics product descriptions do not make explicit. Deploying a Franka manipulation system is not a one-time integration. It involves on-site data collection, fine-tuning, and iteration at the deployment environment, by the customer, by Franka, or both.
For a student or engineer thinking about what industrial manipulation deployment actually involves, that is a significant reframe. The work does not end at the lab bench, and robots often do not work out of the box.
Aeon Robotics - The Hand Is the Hard Part
Aeon Robotics is based in Braunschweig, Lower Saxony. They are not building a full humanoid. They are building the part that most humanoid companies are quietly struggling with: the hand.
I spoke with Dr. Lars Heim, co-founder and COO, at their booth. The product on display was their dexterous robotic hand, available in configurations with 11, 15, or 19 degrees of freedom depending on the use case, the highest of which includes finger spreading. In the palm: an RGB camera and a time-of-flight depth sensor. Weight: 900 grams. Price: €12,000 per unit as a standalone. It connects to a standard cobot via ISO flange and has onboard edge AI for local inference.
The spec alone is striking. But what Lars explained next is what made the conversation worth recording.
Force Sensing Without Tactile Sensors
Most dexterous hands use tactile sensors to measure contact force. Aeon does not.
"We are measuring the force very precisely without the need of tactile sensors through the motor currents in the drives, in each finger joint, that's in the millinewton area. So very precisely."
Direct drives in every finger joint, rather than tendons, mean that each joint has its own actuator and that the actuator's current draw is a direct proxy for the force being applied. The AI layer then uses this force data alongside visual data to learn how to grip both rigid and sensitive objects with appropriate force, without damaging them.
This is a meaningful architectural choice. Tendon-driven hands are more compact but introduce backlash, require more maintenance, and make per-joint force measurement harder. Direct drives are bulkier but give you cleaner data at every joint. For a hand that needs to be taught manipulation behaviors through demonstration, the quality of that per-joint force signal directly affects what the AI can learn from it.
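The underlying estimate is straightforward in principle, even if making it accurate is not. A generic version of the current-to-force conversion might look like the sketch below; the constants are invented for illustration and are not Aeon's values.

```python
def fingertip_force_from_current(current_a, kt=0.012, gear_ratio=150.0,
                                 efficiency=0.7, moment_arm_m=0.02):
    """Estimate contact force at a finger link from motor current (illustrative).

    Joint torque ~= torque constant * current * gear ratio * drivetrain efficiency;
    dividing by the link's moment arm gives an approximate contact force. A real
    system also models friction and inertial terms before trusting the number.
    """
    joint_torque = kt * current_a * gear_ratio * efficiency  # N*m
    return joint_torque / moment_arm_m                       # N

print(round(fingertip_force_from_current(0.05), 2))  # 50 mA -> ~3.15 N with these made-up constants
```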
In a separate conversation, Dr. Sönke Michalik, the CEO, described how the hand integrates with ROS at the control level: the gripper interfaces through their control stack and can be operated within a ROS environment, with the current version running on ROS1.
What Dexterous Manipulation Actually Requires
I asked Lars what people outside the industry most underestimate about dexterous manipulation compared to a standard gripper.
"The complexity is the most important point. We have up to 19 degrees of freedom in a single gripper, and all of those degrees of freedom have to be controlled and programmed. But to create such high degrees of freedom you have to have very small actuators, very miniaturized technology to make it even possible."
A standard 2F gripper has one degree of freedom in practice: open or close. The control problem is trivial. A 19 DOF hand has 19 coupled control problems running simultaneously, each with its own dynamics, each interacting with the others through contact with an object. The miniaturization constraint means the actuators operating in that space are working at the edge of their thermal and mechanical limits.
His view on why this matters beyond academic interest:
"You can have the best humanoid robot that works perfectly, can run, but if it can't interact with its environment, the hand is the point of action. How would it act within an environment without a hand? It is the core point for such a system."
This is the argument that justifies Aeon's entire product thesis. The locomotion problem for humanoids is largely being solved. The manipulation problem, and specifically the dexterous grasping problem, is not. A company that owns that layer has a defensible position regardless of which humanoid platform becomes dominant.
The Demo-to-Deployment Gap
I asked about the gap between what the demo shows and what the system does in real industrial deployment.
Lars's answer was more confident than I expected. For the cobot-plus-hand configuration, a standard collaborative arm with the Aeon hand as end effector, the technical gap is smaller than people assume. An integrator needs to certify the full system for the specific use case, but the technology is deployable as shown.
Current use cases include diagnostics, handling samples and sample racks in laboratory environments. On the industrial side, Aeon is in active conversations with automotive suppliers for test stands and production line applications.
For full humanoid deployment the picture is different. The regulatory barrier in Europe, particularly around human-robot collaboration, is the real constraint. Not the technology. The same theme that came up at Agile, from a different angle.
The ROS Question
In the conversations with both Dr. Heim and, separately, Dr. Michalik, the topic of ROS came up. Aeon is currently running on ROS1.
"We are about to switch to ROS2 in the near future. Especially because we are working with some universities and for security reasons from the university side they are switching to ROS2, so we have to do that too."
The migration is in planning, not started. Lars did not expect it to be technically complex once begun.
But the more interesting part of this conversation was how the hand actually integrates into a ROS-based system today. Aeon provides a URDF model of the hand so it can be loaded directly into a robot description alongside the arm; the full kinematic chain from base to fingertip is then available for motion planning and visualization in RViz from the start. On top of that they provide an API that abstracts the low-level finger joint control, so a developer working in ROS does not need to manage individual joint commands directly. The API sits between the high-level task logic and the firmware running on the hand hardware.
This matters for integration complexity. Adding a 19 DOF hand to an existing manipulator setup is not trivial, but having a URDF and a control API ready to use means the integration surface is well-defined. The hard problems are calibrating the hand camera relative to the arm's tool frame, tuning the grasp force thresholds for specific objects, and building the perception pipeline that feeds object pose into the grasp planner. The hand itself is not the integration bottleneck; the perception and planning layers around it are.
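The camera calibration problem mentioned above is the standard eye-in-hand chain: an object pose seen by a palm camera is only useful to the planner once it is expressed in the robot's base frame, which requires the calibrated camera-to-tool transform. A generic numpy sketch with placeholder values, not Aeon-specific:

```python
import numpy as np

def transform(rotation, translation):
    """Build a 4x4 homogeneous transform from a rotation matrix and a translation."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T

T_base_tool = transform(np.eye(3), [0.4, 0.0, 0.5])    # from the arm's forward kinematics
T_tool_cam  = transform(np.eye(3), [0.0, 0.05, 0.08])  # hand-eye calibration result: the hard part
T_cam_obj   = transform(np.eye(3), [0.0, 0.0, 0.25])   # object pose from the perception pipeline

# Object pose in the base frame, ready for the grasp planner.
T_base_obj = T_base_tool @ T_tool_cam @ T_cam_obj
print(T_base_obj[:3, 3])  # [0.4, 0.05, 0.83] with these placeholder values
```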
What is interesting about the ROS1 situation is the direction of pressure driving the eventual migration. It is not internal engineering requirements. It is external partners, specifically universities, requiring ROS2 for security compliance. A commercial company's middleware roadmap being shaped by academic collaboration requirements is not something that appears in any robotics curriculum. But it is apparently how some of these decisions actually get made.
Sarcomere Dynamics - Building the General Purpose Hand
Sarcomere Dynamics is not a company most people in European robotics have heard of. They are Canadian, based in British Columbia, and they came to Hannover Messe 2026 as part of Canada's partner country presence at the fair. They have been building near-human dexterous robotic hands for years, winning the Innovation World Cup last year for their ARTUS hand. This year they were showing something new.
The booth had a dual-arm manipulation platform with ARTUS hands as end effectors and a head unit, assembled on an open source robot structure used for rapid hardware bring-up. The demo was not fully functional at the fair. Some parts were still being worked on. I found that more honest and more interesting than a looping pre-programmed demo running perfectly on repeat.
I spoke with Zane, a senior mechanical engineer on the R&D team.
What Sarcomere Actually Builds
The ARTUS Lite has 20 degrees of freedom, real-time force feedback, and a robust high-force grasp, and is designed as a universal end effector for industrial automation, teleoperation, and hazardous environments. It is compatible with all major robotic arms and integrates into existing systems without requiring a complete hardware overhaul. The product thesis is straightforward: rather than a factory investing in multiple task-specific grippers with separate software stacks, a single general purpose hand handles the full range of manipulation tasks across a production environment.
Sarcomere has diversified across the dexterity spectrum, with lower-dexterity options for tighter budgets and higher-dexterity versions aimed at the humanoid market. The bi-manual platform on display was built for two specific customers and is planned for broader release later this year, modular across the ARTUS lineup depending on the use case.
The Software and Control Stack
Sarcomere's background is mechanical. Firmware is written in C. Customer control is handled through a Python API, available via pip, that provides joint control and feedback directly. Purchase the hand, install the package, build your application on top. For teleoperation they have two methods: a data glove sending position commands directly to the hand, and a VR headset that tracks the fingers and arm position simultaneously for full upper-body teleoperation of a humanoid platform. Wireless latency and VR tracking accuracy for fine finger movements are the honest challenges Zane described, problems that hardware quality alone does not solve.
At the joint level the current implementation uses PID position control with actuation-side encoders. For the humanoid platform they have implemented gravity compensation and a form of impedance control that combines position and force data. Zane described it directly:
"It is not a secret that impedance control is the best way to design a collaborative robot. You do not want to hurt people or damage things in the surrounding environment."
Force sensors in the fingers are on the roadmap to enable hardware-level impedance control and future haptic feedback for teleoperation operators.
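For reference, the joint-space version of what Zane described is compact. This is the textbook form of impedance control with gravity compensation, not Sarcomere's implementation:

```python
import numpy as np

def impedance_torque(q, q_dot, q_des, kp, kd, gravity):
    """tau = Kp (q_des - q) - Kd q_dot + g(q): virtual spring-damper plus gravity compensation.

    Instead of holding position at full PID stiffness, the joint behaves like a
    tunable spring, so contact with a person or the environment deflects it
    rather than being fought at maximum force.
    """
    q, q_dot, q_des = map(np.asarray, (q, q_dot, q_des))
    return kp * (q_des - q) - kd * q_dot + gravity(q)
```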
What This Company Represents
Sarcomere is a useful case study in a transition several hardware companies are going through right now. They built a technically differentiated product and sold it as a standalone component. Now they are building the complete system around it because customers need a full solution, not just a part.
That transition from component supplier to system integrator is not primarily a technical challenge. It is an organizational and software challenge for a team whose expertise is mechanical. The control architecture is sound. But building a complete bi-manual manipulation platform that a customer can actually deploy is a different category of work from designing precision actuators.
The dual-arm platform at HM2026 is the physical evidence of that transition in progress. Not finished. Not polished. But real, and being built for paying customers.
What Conversations at HM2026 Actually Told Me
I went to Hannover Messe expecting a technology gap between university lab work and industrial deployment. I found one. But it was not where I expected it to be.
The technology is closer than the headlines suggest. ROS still runs in production at companies. Dexterous hands are deployable today on standard cobots. Manipulation policies trained on real data work in controlled conditions. The tools and methods that universities teach are close to what industry actually uses.
What is different is everything around the technology. How problems are scoped. How systems are tested and certified. How data is collected at scale by human operators who get tired. How a demo that works perfectly in a lab requires on-site fine-tuning when the lighting changes. How a robot that ships to a customer is not the end of the engineering work but the beginning of a new phase of it. How a middleware decision at a company gets shaped by a collaboration partner's requirements. These are not things that appear in papers or lecture slides. They are what deployment actually looks like.
The ROS picture across the fair was telling in its own right. Aeon is still on ROS1 and planning a migration to ROS2. A senior engineer at RobCo told me directly that ROS is too brittle for production and that for focused manipulator deployments it creates complexity that serves no purpose. Meanwhile other companies are building entirely custom stacks. The community narrative that ROS2 is the universal answer for industrial deployment is more contested in practice than it appears from the outside. The right middleware decision follows from the architecture and the task, not the other way around.
The other thing that stayed with me is more uncomfortable. Autonomous dexterous manipulation, legged humanoids, edge AI inference on mobile platforms - all of it is real, all of it is working at some level, and all of it is still at the earliest viable stage. These are not products disrupting industry today. They are credible signals that they will. The engineers building them know this better than anyone. What struck me most was how openly they said so.
That honesty, from people who have spent years building these systems, was the most valuable thing I came back with. Not the tote bags.
Ruchit Bhanushali is an M.Eng Intelligent Robotics student in Germany. This piece was written from recorded conversations and field notes taken at Hannover Messe 2026.