Method for dynamic multi-dimensional spatio-temporal human-machine interaction and feedback
Granted by the USPTO, 2024
📌 Overview
This patent, filed by Siemens and granted on December 3, 2024, describes a system and method enabling bi‑directional communication between industrial machines and humans in automated manufacturing environments. It projects real-time interaction intent and future motion cues using visual, audio, thermal, or haptic carriers to enhance safety and clarity during collaboration (European Publication Server).
Core Innovation
Interaction Reasoning Layer
- Combines perception of human presence with machine task and state information to reason about imminent and near-future interactions.
- Generates two types of “images”: an interaction image for the immediate machine action zone and a foreshadowing image for upcoming actions.
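The reasoning step above can be sketched in a few lines. This is a minimal illustration, not the patent's method: the `Cue` class, the distance threshold, and the intensity values are all assumptions chosen for the example.

```python
from dataclasses import dataclass

@dataclass
class Cue:
    zone: tuple          # (x, y) center of the projected zone
    kind: str            # "interaction" or "foreshadowing"
    intensity: float     # differentiating attribute (e.g., brightness)

def reason_interactions(human_pos, current_zone, next_zone, danger_radius=1.0):
    """Fuse human position with current and planned machine action zones,
    emitting an interaction cue and/or a foreshadowing cue when the human
    is close enough for the action to matter."""
    def dist(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

    cues = []
    if dist(human_pos, current_zone) < danger_radius * 2:
        cues.append(Cue(current_zone, "interaction", 1.0))
    if dist(human_pos, next_zone) < danger_radius * 2:
        # dimmer cue: this action has not started yet
        cues.append(Cue(next_zone, "foreshadowing", 0.5))
    return cues
```

A worker standing near the current action zone triggers only the interaction cue; a worker near the planned next zone triggers only the foreshadowing cue.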
Multi‑Domain Projection System
- Visual, audible, tactile, or temperature-based projections convey real‑time machine intent.
- Projectors or wearable devices display distinct interaction and foreshadowing cues with differentiating attributes (e.g., color, intensity, timing).
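A multi-domain dispatcher could route the same cue through whichever carriers are present, as in this sketch. The device-dictionary interface and the 0.5 attenuation for foreshadowing cues are hypothetical stand-ins, not details from the patent.

```python
def render_cue(cue: dict, devices: dict) -> list:
    """Send one cue to every available output domain, scaling the
    intensity so foreshadowing cues are distinguishable from
    immediate-interaction cues."""
    commands = []
    for domain in ("visual", "audio", "haptic", "thermal"):
        if domain in devices:
            scale = 0.5 if cue["kind"] == "foreshadowing" else 1.0
            commands.append((domain, cue["zone"], cue["intensity"] * scale))
    return commands
```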
Programmable Output Detector
- Captures encoded data embedded in projected images.
- Translates it into machine-readable messages, optionally delivered via networks or wearable devices to guide the human response.
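The embed/detect round trip can be illustrated with a toy payload format. The framing here (JSON body plus a CRC32 checksum) is an assumption for the example only; the patent does not specify this encoding scheme.

```python
import json
import zlib

def embed_payload(message: dict) -> bytes:
    """Encode a machine message into bytes suitable for embedding
    in a projected cue: a 4-byte CRC32 header followed by JSON."""
    body = json.dumps(message, sort_keys=True).encode()
    crc = zlib.crc32(body)
    return crc.to_bytes(4, "big") + body

def detect_payload(blob: bytes) -> dict:
    """Recover and verify the embedded message, as a programmable
    detector would before relaying it to a network or wearable."""
    crc, body = int.from_bytes(blob[:4], "big"), blob[4:]
    if zlib.crc32(body) != crc:
        raise ValueError("corrupted projection payload")
    return json.loads(body)
```

The checksum lets the detector reject cues that were only partially captured before anything is relayed to the worker.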
Technical Highlights
System Components:
- Cyber‑mechanical system with machine, controller, task planner, perception sensors, and projection hardware.
- Interaction reasoner that fuses human proximity, environment, and task context to generate guidance outputs.
Method Flow:
- Define high-level goals and split into tasks.
- Sense environment and human presence.
- Identify imminent interactions.
- Create interaction and foreshadowing projections.
- Detect outputs with programmable devices and relay encoded information to humans via communication channels.
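The five steps above chain naturally into one cycle. This sketch stubs every stage out as an injected function; the stage signatures and data shapes are illustrative assumptions, not the patented method flow itself.

```python
def run_cycle(goal, sense_fn, reason_fn, project_fn, detect_fn, relay_fn):
    """One pass through the method flow: split a goal into tasks,
    sense the scene, identify interactions, project cues, then
    detect and relay the encoded information."""
    tasks = [f"{goal}:step{i}" for i in range(2)]      # split goal into tasks
    events = []
    for task in tasks:
        scene = sense_fn()                             # environment + human presence
        interactions = reason_fn(task, scene)          # imminent interactions
        for cue in project_fn(interactions):           # interaction/foreshadowing cues
            events.append(relay_fn(detect_fn(cue)))    # detect, decode, relay
    return events
```

With trivial stand-ins for each stage:

```python
events = run_cycle(
    "assemble",
    sense_fn=lambda: {"human_near": True},
    reason_fn=lambda task, scene: [task] if scene["human_near"] else [],
    project_fn=lambda xs: [f"cue:{x}" for x in xs],
    detect_fn=lambda cue: cue.upper(),
    relay_fn=lambda msg: msg,
)
# events == ["CUE:ASSEMBLE:STEP0", "CUE:ASSEMBLE:STEP1"]
```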
Customization:
- Visual cues tailored by human identity or skill level.
- Adjustable output modalities for accessibility (e.g., colorblind-friendly alternatives, stronger signals for novices).
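The customization idea can be sketched as attribute selection from a per-worker profile. The profile fields (`colorblind`, `skill`) and the chosen substitutions are assumptions for illustration, not details from the patent.

```python
def tailor_cue(base_cue: dict, profile: dict) -> dict:
    """Adapt one cue's attributes to a worker's identity and skill."""
    cue = dict(base_cue)
    if profile.get("colorblind"):
        cue["color"] = "blue"         # avoid a red/green distinction
        cue["pattern"] = "striped"    # redundant, non-color channel
    if profile.get("skill") == "novice":
        # stronger signal plus a second modality for less experienced workers
        cue["intensity"] = min(1.0, cue.get("intensity", 0.5) * 2)
        cue["audio"] = True
    return cue
```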
Benefits & Use Cases
- Enhanced Safety: Humans receive cues before hazardous motion, reducing reliance solely on machine-side avoidance or proximity detection.
- Improved Collaboration: Transparent machine intent fosters trust and smoother hand‑offs in tasks like shared assembly or material transfer.
- Psychological Comfort: Workers feel informed and anticipatory about machine actions, reducing stress and ambiguity.
- Adaptive Display: Scalable for multi-robot settings, overlapping projections, and personalized human-machine pairing.
TL;DR Summary Table
| Element | Description |
|---|---|
| Goal | Enable machine‑to‑human feedback in industrial HRI |
| Mechanism | Interaction + foreshadowing “images” in visual/audio/haptic/thermal domains |
| Detection & Encoding | Embedded info captured by detectors, delivered via network or wearables |
| Adaptability | Contextual, personalized to user skill and identity |
| Outcome | Safer, more coordinated human‑machine workflows |
BibTeX Citation:
@misc{bank2024method,
  title     = {Method for dynamic multi-dimensional spatio-temporal human machine interaction and feedback},
  author    = {Bank, Hasan Sinan and Little, Michael},
  year      = {2024},
  month     = dec # "~3",
  publisher = {Google Patents},
  note      = {US Patent 12,157,235}
}