Discussion

Discussion of “A Review of Intent Detection, Arbitration, and Communication Aspects of Shared Control for Physical Human–Robot Interaction” (Losey, D. P., McDonald, C. G., Battaglia, E., and O'Malley, M. K., 2018, ASME Appl. Mech. Rev., 70(1), p. 010804)

Author and Article Information
James P. Schmiedeler

Fellow ASME
Department of Aerospace and Mechanical Engineering,
University of Notre Dame,
Notre Dame, IN 46556
e-mail: schmiedeler.4@nd.edu

Patrick M. Wensing

Department of Aerospace and Mechanical Engineering,
University of Notre Dame,
Notre Dame, IN 46556
e-mail: pwensing@nd.edu

Manuscript received January 9, 2018; final manuscript received January 24, 2018; published online February 14, 2018. Editor: Harry Dankowicz.

Appl. Mech. Rev 70(1), 015503 (Feb 14, 2018) (3 pages) Paper No: AMR-18-1004; doi: 10.1115/1.4039146

A unifying description of the shared control architecture within the field of physical human–robot interaction (pHRI) facilitates the education of those being introduced to the field and the framing of new contributions to it. The authors' review of shared control within pHRI proposes such a unifying framework composed of three pillars. First, intent detection addresses the robot's interpretation of human goals, representing one-way communication. Second, arbitration manages the respective roles of the human and robot in the shared control. Third, feedback is the mechanism by which the robot returns information to the human, representing one-way communication in the opposite direction. Interpreting existing contributions through the lens of this framework brings out the importance of mechanical design, modeling, and state-based control.

Physical human–robot interaction (pHRI) has indeed become a foundational aspect of modern robotics research and practice. This is clearly reflected in the prominent recommendations for continued advancement of pHRI methods in the so-called “roadmaps” for robotics research from both the United States [1] and the European Union [2]. A good overview of the pHRI field in general is Haddadin and Croft's chapter in the Springer Handbook of Robotics, which covers at a high level the topics of safety, design, control, motion planning, and interaction planning [3]. The review by O'Malley and colleagues nicely highlights the breadth of the field and then homes in on the specific issue of shared control, proposing a structure to unify the framing of that issue across the broad application domain [4]. Finding such commonality is no small feat given that, for example, the shared control challenges of wearable robots for rehabilitation (e.g., promoting recovery to the point that the robot is unneeded) are quite different from those of teleoperation (e.g., enhancing human dexterity in microsurgery) and co-manipulation (e.g., augmenting human capability). Still, the three pillars of the proposed framework, intent detection, arbitration, and feedback/communication (the latter two terms seemingly used synonymously at times in the review), provide a structure that well captures the implementation of shared control in pHRI even when that control was originally conceived outside the framework's context.

The discussion herein considers the pillars of the framework in terms of three key points brought out by the review: the importance of mechanical design, the utility of models, and the pervasiveness of finite state machines. While the mechanical design of the physical coupling between the human and robot is beyond the scope of the review itself, its importance is a consistent, if not always explicit, theme. The quality of the physical coupling is absolutely essential to the successful implementation of the shared control. Accessing the user's input for intent detection, implementing the desired control action determined in large part through arbitration, and then communicating back to the human via haptic feedback all fundamentally rely on a robust, reliable mechanical connection. The successful examples of experimental implementation provided in the review necessarily represent the effective marriage of both shared control and good interface design, since neither alone is sufficient. In terms of modeling, the review emphasizes that models can be leveraged to facilitate shared control, especially in circumstances of repetitive motions. Intent detection alone is a complex task that seeks to recognize relationships among multiple actions. Probabilistic approaches are common, but need to be both data efficient and robust for practical implementation. Therefore, any benefit that can be gained from modeling must be exploited. Finally, the strategy of switching controllers with a finite state machine is extremely common across applications. User intent is often framed as identifying the desired state, arbitration frequently involves determining the correct state of shared control, and feedback commonly serves to inform the human of the current state assumed by the robot. Clearly, the finite state approach offers a number of advantages that motivate its prevalent use, but it has significant limitations as well.
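This switching pattern can be made concrete with a minimal sketch. The state names and transition rules below are illustrative assumptions, not drawn from any cited system; the point is only the structure in which intent detection proposes a desired state, arbitration accepts or rejects the transition, and feedback reports the state the robot has actually assumed.

```python
class SharedControlFSM:
    """Toy finite state machine for high-level motion states (illustrative)."""

    # Allowed transitions between hypothetical motion states.
    TRANSITIONS = {
        "stand": {"walk", "sit"},
        "walk": {"stand"},
        "sit": {"stand"},
    }

    def __init__(self, initial="stand"):
        self.state = initial

    def request(self, desired):
        """Arbitrate a detected intent: switch only if the transition is legal."""
        if desired in self.TRANSITIONS[self.state]:
            self.state = desired
            return True   # feedback to the user: transition accepted
        return False      # feedback to the user: request rejected, state unchanged


fsm = SharedControlFSM()
fsm.request("walk")             # stand -> walk is allowed
accepted = fsm.request("sit")   # walk -> sit is not allowed in this toy rule set
print(fsm.state, accepted)      # -> walk False
```

The rejected request illustrates the feedback obligation noted above: when arbitration refuses a transition, the user must be told, or the mismatch between intent and robot state becomes disruptive.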

The review discusses a wide range of sensor modalities that contribute to measuring intent, the choices among which amount to critical design decisions for the pHRI system. Successful implementation requires the management of performance trade-offs among the sensors themselves as well as trade-offs associated with overall system cost, complexity, and power requirements. For example, surface EEG provides noninvasive measurement over a larger area of the brain, whereas intracranial EEG offers more precision in localized measurement and less noise. Surface EMG is similarly noninvasive, but subject to noise and limited in terms of the muscles whose activations can be measured. In contrast, intramuscular EMG can provide access to deeper muscles and reduced noise, but the mechanical connections are often less reliable. Force myography and sonomyography are relatively recent technologies likely to find expanded use in applications that leverage their respective advantages. Force/torque sensing and kinematic measurement are particularly common in lower limb exoskeletons and prostheses. One might typically expect that fusing a larger number of sensor inputs would provide superior intent detection. The review's RiceWrist-S [5] case study, however, provides an instructive example of managing trade-offs in sensor selection. In this case, the user's force input can be reliably estimated without a force/torque sensor by using measurements of actuator inputs and joint kinematics together with an accurate dynamic model of the robot.
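The sensorless estimation idea admits a one-degree-of-freedom sketch: with an accurate dynamic model, the human's input torque is the inverse-dynamics residual between what the model says should drive the observed motion and what the actuator actually commanded. The inertia, damping, and numerical values below are invented for illustration and are not parameters of the RiceWrist-S.

```python
def estimate_human_torque(tau_motor, q, dq, ddq, I=0.01, b=0.05, k=0.0):
    """Inverse-dynamics residual for a 1-DOF joint (illustrative parameters).

    Model: I*ddq + b*dq + k*q = tau_motor + tau_human, so the human's
    contribution is whatever the commanded motor torque cannot account for.
    """
    return I * ddq + b * dq + k * q - tau_motor


# If the joint moves as though driven by 0.3 N*m total while the motor
# commands only 0.1 N*m, the residual attributes 0.2 N*m to the human.
tau_h = estimate_human_torque(tau_motor=0.1, q=0.0, dq=1.0, ddq=25.0)
```

The fidelity of this estimate degrades directly with model error, which is one reason the case study's emphasis on an accurate dynamic model matters.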

The utility of the dynamic model in that case study is just one example of the important role that modeling plays in pHRI. The review itself emphasizes that models are most critical when the measured signal is not identical to the required intent information. Taking locomotion with the aid of a lower limb exoskeleton or powered prosthesis as an example application, neither the intended phase nor mode of locomotion is directly measurable. Approaches to identifying the desired locomotion mode from kinematic, force sensor, and/or EMG data have included Gaussian mixture models [6] and dynamic Bayesian networks [7], which are similar to hidden Markov models used in other applications cited in the review. These machine learning approaches require significant training data and typically treat the mapping of sensor data to the user's intent as a black-box pattern recognition problem. This is well motivated for cases such as trajectory selection within teleoperation [8], in which first principles models do not capture the fundamental mechanics of the task. Steady-state locomotion, however, is a primarily periodic activity that is largely constrained by its underlying physics. Accordingly, simple biologically inspired template models like the dual spring-loaded inverted pendulum (dual-SLIP) model reproduce many of the key human walking characteristics [9]. Recent work has developed template-based, bio-inspired locomotion control approaches for humanoid robots [10,11] that can be used to approximate human feedback control even in perturbed, nonsteady-state cases. Ongoing work is examining how these models can likewise improve intent detection in robot-assisted locomotion. Shared control in performing other repetitive actions may similarly benefit from replacing black-box approaches to pHRI intent detection with techniques informed by physics-based models.
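The probabilistic pattern-recognition framing can be sketched in miniature. Each locomotion mode is modeled as a Gaussian over a single scalar feature, and intent is the most likely mode. Real systems such as those in [6] and [7] use multivariate mixtures over many sensor channels; the feature, modes, means, and variances below are invented purely for illustration.

```python
import math

# Hypothetical per-mode Gaussians over one feature (e.g., a shank angular
# velocity sample), with (mean, std) assumed learned from training data.
MODES = {
    "level_walk": (1.0, 0.3),
    "stair_ascent": (2.0, 0.4),
    "sit": (0.1, 0.2),
}


def log_likelihood(x, mean, std):
    """Log of a univariate Gaussian density at x."""
    return -0.5 * ((x - mean) / std) ** 2 - math.log(std * math.sqrt(2.0 * math.pi))


def classify_mode(x):
    """Maximum-likelihood mode selection: the black-box intent estimate."""
    return max(MODES, key=lambda m: log_likelihood(x, *MODES[m]))
```

A physics-informed alternative would replace or augment the learned densities with predictions from a template model such as dual-SLIP, reducing the training data needed to cover perturbed or nonsteady-state conditions.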

Human intent might be most accurately represented as a generally continuous function with some occasional smooth transitions and a few abrupt changes. Such a representation is quite challenging, though, so intent detection is frequently framed as determining the appropriate state from among a finite set of options. The review points out that human intent is often defined as a trigger to initiate a predefined and automated pose, grasp, or other function of an upper limb exoskeleton or powered prosthesis [12]. The same is true of lower limb exoskeletons and powered orthoses for which intent can amount to selection of a motion state such as walking, sitting, or standing. In many commercially available devices, those intent transitions are triggered through simple button presses. Such approaches tend to emphasize robustness at the expense of the fluency of the human–machine interface. The review's example of the omnidirectional-type cane robot [13] combines discrete walking mode detection with a forward dynamic model of the robot through a Kalman filter to refine the intent resolution to include desired heading and speed changes. One alternative approach that manages the trade-off somewhat differently is the use of accentuated physical cues to trigger mode transition, such as between walking and running on a powered prosthesis [14]. At the within-step level, transitions between gait phases are often conveyed by compensatory body movements such as weight shifting [15].
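The cane-robot strategy of layering continuous refinement on top of discrete mode detection can be sketched with a scalar Kalman filter: the detected mode supplies a prior on desired speed, and noisy measurements inferred from the interaction force sharpen that estimate over time. The random-walk model, noise values, and measurement sequence below are illustrative assumptions, not parameters from [13].

```python
def kalman_update(x, P, z, R, Q=0.01):
    """One predict/update cycle for a scalar random-walk intent model.

    x, P: current estimate of desired speed and its variance
    z, R: new measurement and its variance
    Q:    process noise (intent may drift between samples)
    """
    P = P + Q                 # predict: uncertainty grows between samples
    K = P / (P + R)           # Kalman gain
    x = x + K * (z - x)       # correct toward the measurement
    P = (1.0 - K) * P         # updated uncertainty
    return x, P


# Start from the walking mode's nominal speed with a loose prior, then track
# the user's pushes as noisy speed measurements.
x, P = 1.0, 1.0               # m/s prior from the discrete mode detector
for z in [1.2, 1.3, 1.25]:    # hypothetical measurements from interaction force
    x, P = kalman_update(x, P, z, R=0.1)
```

After a few samples the estimate converges near the measured speeds while the variance shrinks, giving a continuous intent signal without abandoning the robustness of the discrete mode layer.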

Even if more fluent interfaces or triggers are employed, switching states still introduces some difficulties. First, the transitions between states must be carefully designed to be smooth enough that inadvertent toggling between them, particularly when intent is detected probabilistically, is not disruptive to the user. As per the feedback/communication portion of the review, though, the user also often requires some indication that the transition has taken place, so design trade-offs are again present here. A larger number of states typically provides for more advanced control over what should ideally be a fluid motion. As the number of states increases, though, the number of parameters that must be tuned to define the control and switching rules correspondingly increases. In some applications, such as rehabilitation, the time required for tuning may exceed the stamina and/or patience of the user. Therefore, future research is warranted into how more continuous approaches can be implemented where possible. As one example, recent work has introduced virtual constraints that continuously parameterize periodic joint patterns as functions of a mechanical phasing variable to create a single controller for a powered lower limb prosthesis over the entire gait cycle [16].
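The virtual-constraint idea admits a compact sketch: a monotonic mechanical signal (here, a normalized hip angle during stance) serves as the phase variable, and the desired knee angle is a continuous function of that phase, so a single controller spans the whole gait cycle with no switching rules to tune. The angle ranges, knee pattern, and gain below are invented stand-ins, not the fitted functions of [16].

```python
import math


def phase_variable(hip_angle, hip_min=-0.3, hip_max=0.4):
    """Map a monotonic hip angle (rad) to a phase s in [0, 1] (illustrative range)."""
    s = (hip_angle - hip_min) / (hip_max - hip_min)
    return min(max(s, 0.0), 1.0)


def desired_knee_angle(s):
    """Invented knee pattern: extended at s = 0 and 1, flexed mid-cycle (rad)."""
    return 0.9 * math.sin(math.pi * s) ** 2


def knee_torque(hip_angle, knee_angle, kp=50.0):
    """Proportional output enforcing the virtual constraint (damping omitted)."""
    s = phase_variable(hip_angle)
    return kp * (desired_knee_angle(s) - knee_angle)
```

Because the phase variable is driven by the user's own motion, speeding up or pausing mid-step automatically advances or holds the desired pattern, which is precisely the fluency that a finite set of switched states struggles to provide.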

To effectively share and modulate control between the robot and human, the physical interface of the pHRI system must be designed with care. In the cases of co-activity and master-slave role allocations, the interface design may be somewhat less challenging since the robot is rarely required both to transmit sizable forces/torques to the human and to appear transparent. In the teacher-student and collaborative role arbitrations, though, these two capabilities are at least desirable and in some cases essential. Transmitting large forces/torques with high bandwidth suggests relatively powerful and rigid actuators, the mass and supporting structure of which create challenges in achieving transparency when the human is to operate on her/his own. Designs of this nature limit the amount of control that can be allocated to the human. On the other hand, cable-drive systems that transmit forces/torques from actuators placed in strategically remote locations have limited capacity and bandwidth due in part to the challenge of reliable mechanical connection to the human. Accordingly, such designs limit the amount of control allocated to the robot. Compliant exosuits are one example where recent work has sought to strike the right balance to ultimately reduce a human's metabolic expenditure during locomotion [17].

The prevalence of finite state machines in shared control influences arbitration just as much as it does intent detection. The review points out that role arbitrations are switched between different states just as intent is detected in terms of a desired state. This is true in teleoperation [8] and lower limb prostheses [6], among other applications. In commercially available lower limb exoskeletons, the user can select from a preset number of control arbitrations in combination with the gait mode selection. Some of the work cited in the review introduces more continuous approaches, such as one that allows the robot to interpolate between leader and follower roles [18]. In separate places, the review also highlights how safety constrains the degree of arbitration. In some cases, the human must retain final authority when the human and robot are in conflict so as to ensure safety. In other cases, though, virtual fixtures are used to attenuate or nullify the human's intent when fulfilling it would be dangerous. In the case of a lower limb exoskeleton, the robot would not typically cede authority to the human's intent when doing so would result in a fall.
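Continuous role arbitration in the spirit of the interpolation scheme cited above can be sketched as a simple convex blend of the two agents' preferred commands, with a weight that varies smoothly rather than toggling between discrete roles. The confidence signal driving the weight is an assumed input here; how it is computed is exactly the modeling problem discussed in the review, and this blend is only one of many possible homotopies.

```python
def blended_command(u_human, u_robot, confidence):
    """Convex blend of human and robot commands (illustrative arbitration).

    alpha = 0 -> robot is a pure follower (human leads)
    alpha = 1 -> robot leads entirely
    Safety-critical systems would additionally bound alpha so the
    appropriate agent retains final authority.
    """
    alpha = min(max(confidence, 0.0), 1.0)   # clamp to a valid blend weight
    return (1.0 - alpha) * u_human + alpha * u_robot


# With moderate robot confidence, the command sits between the two inputs.
u = blended_command(u_human=1.0, u_robot=0.0, confidence=0.25)  # -> 0.75
```

The clamping also hints at how safety constraints enter arbitration: capping alpha below one preserves human authority, while virtual fixtures correspond to forcing alpha toward one when the human's command would be dangerous.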

Modeling plays a key role in arbitration when the desired roles for the human and robot are not explicitly provided. As in the case of intent detection, the approaches are typically probabilistic in nature. For example, the review highlights the data-driven stochastic optimization strategy employed in a haptic reaching task [19] and points out that such approaches are most effective when the same task is repeated for multiple iterations. This is similar to the utility of modeling in shared control of repetitive tasks like walking.

Informing the human of the coupled system's state or the environment's characteristics during interaction is often achieved with haptic feedback. In the case of both kinesthetic feedback and cutaneous or tactile feedback mentioned in the review, the mechanical interface design is critical to ensure that the desired feedback is delivered. Again, the review's RiceWrist-S [5] case study provides a good example. The review explicitly highlights that the design choice to use cable-drive transmissions to reduce mechanical friction and backlash was motivated by the need to more accurately convey the desired virtual environment to the human.

This third pillar of shared control may find utility in co-opting the terms of fluency and legibility that are common in the broader field of HRI, but less typically applied to pHRI. A robot's motion is said to be fluent if it is conveyed in a way that is legible, meaning understandable, to a human collaborator so that she/he can reliably predict the robot's future actions and goals [20]. This has typically been used to describe communication from a robot to a human that is not physically mediated. The same concept, however, is equally applicable to haptic feedback to a human that should likewise be legible and thereby facilitate prediction of future robot actions. One would certainly expect more fluent pHRI performance when the shared control is characterized by feedback/communication from the robot that is legible to the human. Metrics of legibility have been proposed for a variety of applications, including mobile manipulators [21] and robot arms [22]. Future work might adapt these metrics or develop entirely new ones that are appropriate to the feedback in pHRI systems.

O'Malley and colleagues have put together a comprehensive review of shared control in pHRI and introduced a unifying framework with which to describe its implementation across a broad range of applications. The pillars of intent detection, arbitration, and feedback/communication capture the key functions that contribute to successful shared control. The review of the literature in light of this framework highlights the equal importance of good mechanical design of the physical interface, the key role of modeling, and the common reliance on finite state machines in these systems. Opportunities for advancement in all three areas will contribute to more fluent pHRI as the field continues to expand.

Copyright © 2018 by ASME

References

[1] Computing Community Consortium, 2016, “A Roadmap for US Robotics: From Internet to Robotics,” Computing Community Consortium, Washington, DC, accessed Feb. 5, 2018, http://jacobsschool.ucsd.edu/contextualrobotics/docs/rm3-final-rs.pdf
[2] SPARC Robotics, 2016, “Robotics 2020 Multi-Annual Roadmap for Robotics in Europe,” SPARC Robotics, EU-Robotics AISBL, The Hague, The Netherlands, accessed Feb. 5, 2018, https://www.eu-robotics.net/sparc/upload/about/files/H2020-Robotics-Multi-Annual-Roadmap-ICT-2016.pdf
[3] Haddadin, S., and Croft, E., 2016, “Physical Human-Robot Interaction,” Springer Handbook of Robotics, Springer International Publishing, Berlin, pp. 1835–1874.
[4] Losey, D. P., McDonald, C. G., Battaglia, E., and O'Malley, M. K., 2018, “A Review of Intent Detection, Arbitration, and Communication Aspects of Shared Control for Physical Human-Robot Interaction,” ASME Appl. Mech. Rev., 70(1), p. 010804.
[5] Pehlivan, A. U., Losey, D. P., and O'Malley, M. K., 2016, “Minimal Assist-as-Needed Controller for Upper Limb Robotic Rehabilitation,” IEEE Trans. Rob., 32(1), pp. 113–124.
[6] Varol, H. A., Sup, F., and Goldfarb, M., 2010, “Multiclass Real-Time Intent Recognition of a Powered Lower Limb Prosthesis,” IEEE Trans. Biomed. Eng., 57(3), pp. 542–551.
[7] Young, A. J., Simon, A., and Hargrove, L. J., 2013, “An Intent Recognition Strategy for Transfemoral Amputee Ambulation Across Different Locomotion Modes,” International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Osaka, Japan, July 3–7, pp. 1587–1590.
[8] Aarno, D., Ekvall, S., and Kragic, D., 2005, “Adaptive Virtual Fixtures for Machine-Assisted Teleoperation Tasks,” IEEE International Conference on Robotics and Automation (ICRA), Barcelona, Spain, Apr. 18–22, pp. 1139–1144.
[9] Geyer, H., Seyfarth, A., and Blickhan, R., 2006, “Compliant Leg Behaviour Explains Basic Dynamics of Walking and Running,” Proc. R. Soc. B: Biol. Sci., 273(1603), pp. 2861–2867.
[10] Liu, Y., Wensing, P. M., Orin, D. E., and Zheng, Y. F., 2015, “Dynamic Walking in a Humanoid Robot Based on a 3D Actuated Dual-SLIP Model,” IEEE International Conference on Robotics and Automation (ICRA), Seattle, WA, May 26–30, pp. 5710–5717.
[11] Wensing, P. M., and Revzen, S., 2017, “Template Models for Control,” Bioinspired Legged Locomotion, M. Sharbafi and A. Seyfarth, eds., Elsevier, Oxford, UK.
[12] McMullen, D. P., Hotson, G., Katyal, K. D., Wester, B. A., Fifer, M. S., McGee, T. G., Harris, A., Johannes, M. S., Vogelstein, R. J., Ravitz, A. D., Anderson, W. S., Thakor, N. V., and Crone, N. E., 2014, “Demonstration of a Semi-Autonomous Hybrid Brain-Machine Interface Using Human Intracranial EEG, Eye Tracking, and Computer Vision to Control a Robotic Upper Limb Prosthetic,” IEEE Trans. Neural Syst. Rehabil. Eng., 22(4), pp. 784–796.
[13] Wakita, K., Huang, J., Di, P., Sekiyama, K., and Fukuda, T., 2013, “Human-Walking-Intention-Based Motion Control of an Omnidirectional-Type Cane Robot,” IEEE/ASME Trans. Mechatronics, 18(1), pp. 285–296.
[14] Shultz, A. H., Lawson, B. E., and Goldfarb, M., 2015, “Running With a Powered Knee and Ankle Prosthesis,” IEEE Trans. Neural Syst. Rehabil. Eng., 23(3), pp. 403–412.
[15] Zeilig, G., Weingarden, H., Zwecker, M., Dudkiewicz, I., Bloch, A., and Esquenazi, A., 2012, “Safety and Tolerance of the ReWalk Exoskeleton Suit for Ambulation by People With Complete Spinal Cord Injury: A Pilot Study,” J. Spinal Cord Med., 35(2), pp. 96–101.
[16] Quintero, D., Martin, A. E., and Gregg, R. D., 2017, “Toward Unified Control of a Powered Prosthetic Leg: A Simulation Study,” IEEE Trans. Control Syst. Technol., 26(1), pp. 305–312.
[17] Asbeck, A. T., De Rossi, S. M., Galiana, I., Ding, Y., and Walsh, C. J., 2014, “Stronger, Smarter, Softer: Next-Generation Wearable Robots,” IEEE Rob. Autom. Mag., 21(4), pp. 22–33.
[18] Evrard, P., and Kheddar, A., 2009, “Homotopy Switching Model for Dyad Haptic Interaction in Physical Collaborative Tasks,” World Haptics, Third Joint EuroHaptics Conference and Symposium on Haptic Interfaces for Virtual Environment and Teleoperator Systems, Salt Lake City, UT, Mar. 18–20, pp. 45–50.
[19] Medina, J. R., Lorenz, T., and Hirche, S., 2015, “Synthesizing Anticipatory Haptic Assistance Considering Human Behavior Uncertainty,” IEEE Trans. Rob., 31(1), pp. 180–190.
[20] Hoffman, G., and Breazeal, C., 2007, “Cost-Based Anticipatory Action Selection for Human-Robot Fluency,” IEEE Trans. Rob., 23(5), pp. 952–961.
[21] Beetz, M., Stulp, F., Esden-Tempski, P., Fedrizzi, A., Klank, U., Kresse, I., Maldonado, A., and Ruiz, F., 2010, “Generality and Legibility in Mobile Manipulation,” Auton. Rob., 28(1), pp. 21–44.
[22] Dragan, A., Lee, K., and Srinivasa, S., 2013, “Legibility and Predictability of Robot Motion,” Eighth ACM/IEEE International Conference on Human-Robot Interaction (HRI), Tokyo, Japan, Mar. 3–6, pp. 301–308.
