
Chapter 4 - Static Controller Design


This chapter provides a detailed description of the implementation of the static controller. The design goals introduced in the previous section are revisited, and it is shown how each of them was met. The design details are presented in the following order: mechanical, electronics, and software.

 


4.1    Mechanical design
 

A depiction of the concept design, created in SolidWorks CAD, is shown in the following figure alongside the completed prototype in a comparable orientation.


Figure 4.1 The concept design in SolidWorks CAD (left) and the completed prototype (right). The load cells in the CAD are numbered one through four.

​

​

      The bracelet shown on the left side of the photograph of the prototype appears in yellow in the CAD. When the user is wearing the bracelet and brings their arm close to the rest of the fixture, magnets and pegs hold it in place, resting on the device. The user then grabs the angled front bar, with their thumb below the bar and remaining fingers above it.
      Each of the previously stated mechanical design requirements is reiterated below, followed by an explanation of the chosen solution for meeting it:

​

1.    ‘The device must be immovable while being acted upon.’ 

  • A large metal surface area was left on the bottom of the design to clamp it to the workstation’s table during operation. 

2.    The user's arm should be physically held in a manner that is comfortable and adjustable for different arm sizes.

  • The user places their arm inside a removable bracelet with two adjustable straps.

  • The resting position of the arm during restraint was chosen to be a natural resting position.

  • Sliding is stopped by 3D-printed pegs attached to the bracelet, which snap into the clamped side of the device in multiple locations.

  • A system of neodymium magnets and steel pieces is embedded into the bracelet and the fixture to hold the two in place with magnetic forces.


Figure 4.2 The bracelet to be worn by the user during use of the static controller prototype.

​

​

3.    Force sensor placement should allow measurement along independent axes.

  • The force sensors are depicted in the CAD of Figure 4.1 as dark-colored rectangular prisms.

  • Sensor #1 measures horizontal forces of the forearm.

  • Sensor #2 measures vertical forces of the forearm.

  • Sensors #3 and #4 provide equal and opposite readings if the user rotates their wrist either clockwise or counterclockwise, while measuring forces in the same direction if the user flexes or extends their wrist.

4.    As a safety requirement, the user should be able to remove themselves from the device without any mental effort, using a mechanism that does not reduce the overall stiffness of the device (e.g., if the user falls off the chair for any reason, their arm should be released immediately and all at once).

  • The magnets which hold the bracelet to the device release all at once when a separating force of around 6 N is exerted. An exploded view of this system is shown in the figure below:

Figure 4.3 An exploded view of the two sides of the magnetic interface between the bracelet component (top piece) and the rest of the fixture (bottom piece).


4.2    Circuitry and hardware
 

There was no need to use CAD in designing the hardware, as it is relatively simple; however, a schematic can be found in Appendix A. The hardware consists entirely of a microcontroller unit (MCU), four amplifiers, and the load cells, which operate using strain gauges. The entire circuit is powered through the MCU, which is in turn powered through a USB connection to the host computer. As in the previous section, the requirements are reiterated and the corresponding implementation is explained.


1.    Force transducers' analog voltage should be amplified and then digitized for the microcontroller.

  • The load cells are equipped with strain gauges; when a strain gauge is flexed even a very small amount, the resistance across it changes measurably. Each load cell is wired in a Wheatstone bridge configuration, with four wires running to its amplifier. An excitation voltage is applied across the bridge and the resulting signal is read by the amplifier circuit.

  • The amplifiers are designed specifically for this particular load cell and do not require any calibration or adjustments to gain.

2.    The microcontroller input pins should have an update time of less than 100 ms.

  • Each amplifier connects to the MCU through a clock pin and a data pin. With all sensors connected, the MCU reports new readings at 10.2 Hz (a period of roughly 98 ms), which just satisfies the requirement of an update time of less than 100 ms.

 

      The chosen microcontroller unit (MCU) is the Teensy 3.2, an Arduino-compatible board, selected for its compact design, high quantity of I/O pins, and relatively powerful 32-bit processor compared to standard Arduino boards. Each load cell is paired with its own HX711 amplifier [24]. The amplifier is designed specifically for interfacing with the chosen model of load cell and sends a digital signal to the microcontroller, rather than an analog one.
      Accessory electronics include a small vibrating coin motor and a green LED. The vibration motor, controlled by pulse width modulation (PWM) signals from the microcontroller's analog output pins, vibrates during controller operation whenever there is force on Baxter's grippers, providing the user with feedback about whether or not the grippers are holding an object; a sketch of this feedback logic is given below. Lastly, the LED indicates whether or not the device is powered.
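As an illustration of this feedback path only (written in Python rather than the actual C# host code; the force threshold, scaling, and serial protocol are assumptions):

    # Illustrative sketch: map Baxter's reported gripper force to a PWM duty
    # cycle for the coin motor. The real implementation is the C# application;
    # MAX_FORCE, THRESHOLD, and the one-byte serial message are assumptions.
    MAX_FORCE = 30.0    # force (N) at which vibration saturates
    THRESHOLD = 0.5     # below this, the grippers are treated as empty

    def force_to_pwm(gripper_force):
        """Return an 8-bit PWM duty cycle for the vibration motor."""
        if gripper_force < THRESHOLD:
            return 0                                   # no object held: motor off
        fraction = min(gripper_force / MAX_FORCE, 1.0)
        return int(fraction * 255)                     # stronger grip, stronger vibration

    # Example: the host would then send the duty cycle to the MCU over the
    # existing serial link, e.g. serial_port.write(bytes([force_to_pwm(force)])).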


4.3    Software
 

This project's software has two major components: a Python program on Baxter and a C# application on a local computer. The software on the microcontroller is nearly trivial, as it simply reads new data whenever it is available and provides that information over the serial connection upon request. Additionally, two third-party programs, Open Broadcaster Software (OBS) and IP Webcam, were used to reduce the programming associated with visual feedback. OBS allowed simple duplication and transmission of a live video feed to the HMD, while IP Webcam allowed a simple Android phone to operate as the remote camera and transmit the video over the network.

​

4.3.1    C# Application
 

The C# application consists of four classes: one for the connection to the MCU, one for the connection to Baxter, one for the graphical user interface (GUI), and the main class. Three of these classes are described in detail below; the main class is omitted because it simply creates an instance of each of the other classes and runs the GUI in a loop.


1.    MCU Class: In this class a serial connection is instantiated and basic functions for its use are constructed. This class also performs some initial processing on the sensor data: the first reading is stored as an offset and subtracted from all subsequent readings, and a manual sensitivity value, determined through trial and error, scales the result (see the sketch after this list).
2.    Baxter Class: This class establishes a connection with the robot by creating a new process of the command-line program 'plink', similar to the console application 'putty'. Arguments are sent to the robot's system for user name, password, and the name of a bash script to be run after the connection is established. Additional functions within this class read and write from standard output and standard error over the plink connection. Lastly, there is a function that sends a velocity command as a packet of joint and gripper data. Safeties are in place so that unexpectedly large velocity commands are ignored and reported (also shown in the sketch below).
3.    GUI Class: This class uses Windows forms to provide real-time feedback of the sensor data and commands sent to the robot. It also contains logic for keypresses.
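The following is a minimal illustrative sketch of the sensor processing and the velocity safety check described in items 1 and 2. It is written in Python for brevity, although the actual application is in C#, and the sensitivity and velocity-limit values are hypothetical placeholders:

    # Illustrative sketch only; the real code is the C# application.
    SENSITIVITY = 0.001      # manually tuned scale factor (trial and error)
    MAX_VELOCITY = 0.5       # commands larger than this are ignored and reported

    class SensorChannel:
        def __init__(self, first_raw_reading):
            # the initial reading is stored as an offset ...
            self.offset = first_raw_reading

        def process(self, raw_reading):
            # ... subtracted from all future readings, then scaled
            return (raw_reading - self.offset) * SENSITIVITY

    def safe_velocity_command(velocity):
        """Ignore and report unexpectedly large velocity commands."""
        if abs(velocity) > MAX_VELOCITY:
            print("Ignoring unexpectedly large command:", velocity)
            return 0.0
        return velocity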

 

   There is one additional note regarding the post-processing of the force data. Based on difficulties uncovered in an informal pilot study with six participants, a filter was applied to the final velocity command values just before they were sent to Baxter. The filter passes the velocity values through a scaled and slightly modified logistic curve. The function used is shown in Equation 4.1, and its corresponding graphs follow. 'Y' is the output, 'x' is the input, 's' is the speed multiplier, while alpha and beta affect the curve shape. In the graphs below, the x-axis is the original velocity command and the y-axis is the output velocity command. Values of 0.02 and 0.03 were used for alpha and beta respectively.

​

​

 

            Equation (4.1)


Figure 4.4 Graphs depicting the filter used on the velocity commands to reduce smaller movements and exaggerate larger ones. The x-axis is the original velocity while the y-axis is the output velocity. 

​

      Alpha in Equation 4.1 is inversely proportional to the rate of growth and the concavity, while beta controls the same factors as alpha but also the skew of the curve. The addition of the beta value, the only difference between this function and the standard logistic function, is to output slightly smaller values for small inputs. Lastly, there are two speed settings: the graph on the left shows the lower speed of 0.13, while the right graph shows the speed of 0.25. The result of this filter is that small forces are diminished greatly, helping the user maintain control, while large forces are exaggerated with a smooth transition to a maximum value.
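As an illustration only, the sketch below shows one plausible scaled, shifted logistic with the properties described above; the exact functional form of Equation 4.1, the sign handling, and the input scale are assumptions rather than the thesis implementation:

    import math

    def _logistic(u):
        return 1.0 / (1.0 + math.exp(-u))

    def filter_velocity(x, s, alpha=0.02, beta=0.03):
        """Sigmoid-shaped velocity filter: suppresses small commands,
        exaggerates mid-range ones, and saturates smoothly at +/- s.
        One plausible form only, not necessarily Equation 4.1."""
        mag = abs(x)
        baseline = _logistic(-beta / alpha)            # curve value at zero input
        y = s * (_logistic((mag - beta) / alpha) - baseline) / (1.0 - baseline)
        return math.copysign(y, x)                     # preserve command direction

    # Example: filter_velocity(0.0, 0.13) returns 0.0, while large commands
    # approach the speed setting of 0.13 (or 0.25 for the faster setting).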

Lastly, the chosen mapping for this controller is shown in Table 4.1. The mapping was constructed based on pilot test performances and on general rule of thumb to try and provide an intuitive interface.


Table 4.1 Mapping from static controller inputs to Baxter joints. Reference Figure 3.5 to see joint names.

 

 

4.3.2    Baxter Software
 

The initial bash script, started on the robot's computer by the C# application, navigates to a series of Python scripts that initialize the robot and have it wait for velocity commands. Baxter runs on the Robot Operating System (ROS) [23], which operates primarily on a publisher/subscriber node architecture. In order to operate Baxter, a kinematics node is first created, which allows interaction with joint positions, velocities, and torques, and provides force data on the joints. When the Python script receives a command, it also reports the forces exerted on the gripper so that the C# application can use that data to vibrate the coin motor near the palm.
It was observed that if Baxter was not properly calibrated, sending position commands resulted in significant drift due to the internal feedback loops used to reach the desired position. To overcome this, velocity commands were sent instead of position commands. However, many of the safety loops usually in place during positional control are not active in velocity control mode, so extra care was taken to make sure there would be no unintended collisions with the environment or other people.
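As a rough illustration of the robot-side loop (assuming the baxter_interface Python API from the standard Baxter SDK; the node name and joint-velocity values shown are placeholders, not the thesis code):

    import rospy
    import baxter_interface

    rospy.init_node('static_controller_listener')     # placeholder node name
    baxter_interface.RobotEnable().enable()           # enable the robot's motors
    limb = baxter_interface.Limb('right')

    def apply_velocity_command(joint_velocities):
        """joint_velocities: dict mapping joint names (e.g. 'right_s0') to rad/s."""
        limb.set_joint_velocities(joint_velocities)

    def endpoint_force():
        # Estimated force at the end effector; this kind of reading is what
        # gets forwarded to the host so it can drive the coin motor.
        return limb.endpoint_effort()['force']

    # Example: apply_velocity_command({'right_s0': 0.1}) nudges the shoulder joint.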


Chapter 6 - Experimental Methods
 

This section draws upon previous research studies to determine valid metrics of performance for each controller and then details the specific experimental procedure.


6.1    Metrics of controller performance

 

The majority of the metrics used are taken from [25], which discusses various performance metrics for general HMI. Additionally, inspiration is taken from the NASA TLX test [26], a set of questions designed to measure how demanding a task is on the user in terms of mental, physical, and temporal demand, performance, effort, and frustration.
Specific to manipulation, performance can be measured quantitatively by speed, accuracy/precision, and the number of contact errors. Speed is often measured by task completion time, while precision and accuracy can be determined from proximity to a target. Each test in the procedure focuses on one of these three quantitative measures.
    Additionally, there are certain characteristics considered biasing factors that are common to all of the controller types. Examples include user factors, such as difficulty with distance perception through the HMD, and network properties such as latency, bandwidth, and jitter. It is unclear whether these factors affect each control scheme equally, so they may be a source of error in the results. The network delay was reduced to roughly less than a quarter of a second, so the effects should be minimal.
      The questionnaire used is a record of user feedback and allows collection of qualitative metrics. One such qualitative measure is awareness. It is expected that the user's awareness of their immediate surroundings during testing will be diminished as a result of wearing the head-mounted display. This is a favorable effect, as it lowers the chance of distraction. It is expected that the user's awareness of the robot's surroundings will exceed that of their own. It is also favorable to achieve certain levels of user immersion, ease of understanding, ease of use, reliability, and comfort, as these are generally known to increase user performance. These measures are controller dependent and are thus asked about for each individual controller. The exact questionnaire form can be viewed in Appendix B.

 

6.2    Experiment setup

 

The users were equipped with the Oculus Rift DK2 HMD. Additionally, a camera with a wide-angle lens was fixed to Baxter's head. As demonstrated by the TELESAR V and the DaVinci surgical system, use of a wide-angle lens is significant in promoting visual awareness of the robot's environment. The wide-angle camera feed, with a video resolution of 320 x 240, was sent over the network at 92% quality - an intentional degradation to reduce video delay.
      The participant, brought to a different room, was briefed on the experimental procedure and on how to operate each controller. Instructions and the mapping tables (4.1, 5.1, and 5.2) were provided to assist the user in learning to use the controllers. The experimenter demonstrated each controller before use, and the participant was allowed a few minutes to become familiar with each controller both with and without the HMD equipped. During the actual testing, the experimenter was in the same room as the robot, near Baxter's emergency stop button, while the experimenter's assistant helped the participant switch between controllers and answered any questions from the participant.

 

6.3    Detailed procedure

 

The experiment consisted of three tests, each performed with every controller type, and then a final challenge in which any controller could be used. The controllers were given to the user in the consistent order of Xbox, Leap Motion, and then static. After all tests were completed, the participants answered the questionnaire from Appendix B. The details of each test are as follows:
 

Test 1: In one minute, the participant must touch as many targets as possible. There are four targets, each near the edge of the robot's range of motion. A colored overlay on their HMD indicates the current target, the time remaining, and their cumulative score. The next target is always random, but never the same as the one just touched. The change of target is triggered manually by the experimenter, who observes the test from the computer's monitor.
Test 2: In five iterations, the participant must use the robot to place a magnetic pen cap as close as possible to the center of a circular target. The pen cap is held automatically by Baxter and, when it nears the target, it is released automatically. After each placement, the pen’s location is recorded on the target and the pen cap is reset on Baxter’s gripper. There is no time limit for this test.
Test 3: Given one minute, the participant must navigate the robot’s arm around two obstacles in the pattern of an infinity symbol (or horizontal figure 8). They are awarded five points for each completed lap, and deducted two points for each collision, regardless of the magnitude of the collision.

Final assessment: The participant uses a controller of their choice to interact with the experimenter through Baxter. They must take cubes from the experimenter's hand and place them in a nearby box. The experimenter is expected to be as helpful as possible, able to reach and place a block within Baxter's gripper, but not allowed to move their feet, and positioned on the opposite side of the robot, just outside the robot's range of motion. Given one minute, the user is scored on the number of blocks taken from the experimenter and the number of blocks successfully dropped into the box.

​

Chapter 7 - Experiment Results

​

An initial pilot study was performed with six participants to help develop a usable prototype. Because of significant changes to the Leap Motion and static controllers' software between participants' feedback contributions, the pilot study's results are not included. Five participants then took part in the actual study. This section provides the quantitative data from the test results, then presents the user feedback from the questionnaire and personal comments. Lastly, observations are made regarding the results.

​

7.1 Test results


Figure 7.1 Test #1 average number of targets reached in one minute for each controller. Standard deviation among participants is shown by the black bars for each controller.


Figure 7.2 Test #2 average distance from the center of the target for each controller. Standard deviation among participants is shown by the black bars for each controller.


Figure 7.3 Test #2 average distance from the targets for each attempt number with each controller. Dotted lines represent the overall trend of the averages.


Figure 7.4 Test #3 bar graphs depicting the average number of collisions (left), number of laps (middle), and the average score. Standard deviations between participants are shown by the black bars for each controller.

 

​

      Lastly, in the final challenge, given the choice of controller, all participants chose to use the Xbox gamepad. The average number of cubes taken was 5.8, while the average number of successful drops into the box was 4.6.

​

​

7.2 User feedback

 

Data from the questionnaire was on a scale from -7 to 7. These values were mapped linearly onto a 0 to 100 scale, so a value of 50 represents a neutral score, designated with a dotted green line. Additionally, prior to the experiment, target values for each qualitative metric were chosen as goals. These target values are designated only for the static controller and are represented as dotted dark gray lines.
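For reference, this linear mapping takes a raw questionnaire score r in [-7, 7] to (r + 7) / 14 × 100, so that -7 maps to 0, a neutral 0 maps to 50, and +7 maps to 100.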


Figure 7.5 General qualitative metrics of immersion, reliability, comfort for HRI, and ease of understanding for each controller type. Standard deviation among participants is shown by the black bars.


Figure 7.6 The overall task load for each controller type. Standard deviation among participants is shown by the black bars.


Figure 7.7 Individual task load metrics of mental, physical, temporal, performance, effort, and frustration demands. Standard deviation among participants is shown by the black bars.


Figure 7.8 A scatter plot of participants' experience vs. performance per control scheme.

 

Additional comments were obtained verbally after the experiment. Most participants reported that they enjoyed using the static controller but preferred to use the Xbox gamepad. Two participants said the static controller "felt good to use, but it still needs a lot of work." Lastly, one participant suggested that with more practice their performance with the static controller would have been greatly improved.

​

​

7.3 Observations and analysis

 

There are no known similar studies that immobilize the user and use force-based control in a telepresence robotic setting, so there are no benchmarks against which the results can be compared. Thus, general observations are made and possible reasons behind each observation are suggested. The quantitative data is examined first, then the qualitative.

​

7.3.1 Analysis of quantitative data

 

From the graphs of the quantitative test data, the Xbox gamepad had the overall best performance in all categories, with the exception of the static controller in test two. In the first test, the Leap Motion had consistently higher scores than the Xbox and static controllers. The standard deviation of the Xbox average score, designated by the black bar, was much higher than for the other controllers. This is believed to be because some participants did not make use of all of the gamepad's controls, resulting in difficulty reaching some targets. This is counterintuitive given that users reported the Xbox gamepad as the easiest to understand in Figure 7.5.

      In the second test, the static controller had slightly better performance than the other two, with an average distance roughly one centimeter less than the Xbox gamepad's average. There was significant variation for each of the controllers, so this one-centimeter lead by the static controller may not be meaningful. Furthermore, Figure 7.3 shows the average distances in order of attempt. There is a clear negative trend line, showing better performance on later attempts than on the initial ones. This trend suggests that some learning occurred during the placement of the pen cap. Since the static controller was always the last controller used in this test, it is possible that its slightly better overall results are indicative of a learning curve rather than better controller performance. Perhaps if the difference in performance were more drastic, such a claim could be made.

      The third test's overall scores are subjective, depending on how much value one assigns to collisions and to the number of laps completed. To reiterate, the scores calculated here award +5 points per lap and -2 points per collision. Using this scoring system, the Xbox gamepad had a significantly higher average score of 11.2, but also with significant variation. The static controller was consistently the most effective at avoiding collisions, with an average of 1.2 collisions compared to the Xbox gamepad's average of 3.4.

​

7.3.2 Analysis of qualitative data

 

The static controller prototype was reported to be almost as successful as the Xbox gamepad in many aspects, nearly tying on levels of immersion. Qualitative feedback on the Leap Motion was significantly worse than for the other two controllers. This can be seen clearly in the overall task load graph, but also in the generally larger standard deviations and poor scores on most of the tests.

      General user feedback for each controller from the questionnaire is shown in Figure 7.5. Overall, the general feedback results for the static controller were very close to the goal values. Users found the Xbox gamepad the most immersive, reliable, comfortable, and easy to understand, but the static controller was not far behind. The Leap Motion control system was reported to be neither immersive, reliable, nor comfortable, although it was somewhat easy to understand.

      As shown in Figure 7.6, the reported task load for the Xbox gamepad was much lower than for the other two controllers. The Leap Motion was significantly harder to use, while the static controller was just under the neutral mark. Looking at the specific aspects in Figure 7.7, nearly every aspect of the Leap Motion's task load was above the neutral line, with the exception of temporal demand. The static controller was reported as slightly easy to use and fell below the goal value, although difficulties existed in physical demand and overall effort.

      Lastly, Figure 7.8 shows a positive correlation between previous experience with joystick-based controllers and Xbox performance, as well as between experience with camera-based controllers and Leap Motion performance. Performance is determined as a quantitative value based solely on the test scores: each score was normalized relative to the average score for that test, and the normalized scores were then summed to give the performance value (a sketch of this computation follows). The 'other' category in this graph refers to how much experience the user had with any controller that is neither joystick nor camera based, and is not necessarily specific to the static control scheme. However, it still showed a correlation between the number of hours of experience and performance.
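A minimal sketch of that computation is shown below; the variable names are illustrative, and any adjustment for tests where a lower score is better (such as the test 2 distances) is an assumption rather than the thesis code:

    def performance_value(participant_scores, all_scores_by_test):
        """participant_scores: dict test_name -> this participant's score.
        all_scores_by_test: dict test_name -> list of every participant's score."""
        total = 0.0
        for test, score in participant_scores.items():
            scores = all_scores_by_test[test]
            average = sum(scores) / len(scores)
            total += score / average      # normalize relative to the test's average
        return total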

​

7.3.3 General observations

 

Next, some of the observed challenges in using the static controller are addressed. Firstly, it is difficult to maintain a resting position on the device, because there is a tendency to relax the arm by different amounts depending on stress, attention, or other factors. This caused unintentional movement commands and was dealt with during testing either by the participant using visual feedback to rediscover the defined resting position, or by manually resetting the resting position. To solve this, it is suggested that the weight of the user's arm could be measured and saved beforehand for each use, or that the system could benefit from integration with electromyography (EMG) to determine how much muscle force is applied during the initial setting of the offset and during device operation.

      An additional challenge, encountered by about half of the participants, was that moving their wrist in an upward motion resulted in their forearm moving downwards. The significance of this observation is that the force sensor locations were not chosen in a way that allowed forces to be measured fully independently.

​

Chapter 8 - Conclusions and Future Work

 

In this chapter the completed dissertation is summarized and points of interest are recognized. Additionally, conclusions and insights gained from this work are described. Finally, suggestions for future additions to the static control scheme are presented.

​

8.1 Summary

 

This dissertation explored the possibility of a new type of static control interface. After analyzing literature for telepresence robotics, the static interface was designed and a prototype was created. Experiments showed the static controller’s performance to be competitive with existing control schemes on qualitative metrics such as task load, immersion, reliability, etc. and quantitative metrics of speed, precision, and collision avoidance scores.

 

8.2 Conclusions

 

One of the major findings of this study is that, despite being entirely new to the static control scheme, users performed comparably with the familiar Xbox gamepad. It is too soon to suggest that the static controller is better than other control schemes in any particular way, and further testing must be done with a larger set of participants. The significance of this study is that a simple force-sensing design has been shown to provide a possible way to avoid the complex controller dynamics found in traditional haptic feedback devices, while also reducing the computation necessary for control.

      One benefit of this control scheme is that the workspace of interaction is essentially infinitely large. For example, if controlling a rotating motor, the user could use the static control interface to rotate the motor beyond the limits of their own wrist rotation – a task that is more challenging in traditional haptic devices with a limited workspace. This would make a static control system particularly useful in controlling systems that have non-anthropomorphic capabilities such as the mentioned example of continuous wrist rotation.

      Lastly, while traditional haptic controllers use force feedback to reflect forces from the robot's environment, it has been shown that this is not the only useful force feedback implementation. In the static controller, the force reflected back at the user is simply a result of Newton's third law, always equal and opposite to their applied force, directly giving force feedback of the velocity control signal being sent to the robot.

 

8.3 Future explorations

 

The primary motivation for exploring the idea of a static controller was to stimulate the growth of a new branch of technologies in telepresence robotics. The joint-to-joint mapping system used in this dissertation has the potential to be extended to the entire body, allowing for an incredibly large number of degrees of freedom on the input side while maintaining an intuitive interface that relies on natural body movements. A full-body application could be used to control a non-anthropomorphic robot such as a quadruped, snake, or quadcopter. Another exciting possibility for this control scheme is that it serves as a stable physical base for the addition of sensory stimulation, with access to the entire surface area of the limb or limbs used for control. Perhaps sensory modules similar to the ones used in the TELESAR V could be added to simulate heat/cold, texture, and pressure.

      Additionally, it would be interesting to explore the limits of the human control system in this context. A skilled puppeteer can control a large number of degrees of freedom in a non-intuitive manner with ease. This raises the questions: can a single operator control multiple robots? Can multiple users operate different aspects of the same robot cooperatively? Lastly, force sensors were used in this implementation, but perhaps electromyography or a brain-computer interface would be better suited to interpret the intentions of an immobile user. As more immersive telepresence robotic technologies emerge, the proposed static control scheme may find its niche in high-dimensional intuitive interfaces.

​

​

References

​

[1] F. P. Brooks, Jr., M. Ouh-Young, J. J. Batter and P. J. Kilpatrick, "Project GROPE - Haptic Displays for Scientific Visualization," Computer Graphics, vol. 24, no. 4, pp. 177-185, 1990.

[2] M. Minsky, "Telepresence," OMNI Magazine, 1980.

[3] T. Sheridan, "Teleoperation, telerobotics and telepresence: A progress report," Control Engineering Practice, vol. 3, no. 2, pp. 205-214, 1995.

[4] R. J. Stone, "Haptic Feedback: A Potted History, From Telepresence to Virtual Reality," in Haptic Human Computer Interaction, 2001, pp. 1-16.

[5] R. C. Goertz, "Remote-Control Manipulator". United States Patent 133,440, 16 December 1949.

[6] C. R. Carignan and D. L. Akin, "Using Robots for Astronaut Training," IEEE Control Systems Magazine, vol. 03, pp. 46-59, 2003.

[7] D. L. Akin, M. L. Minsky and E. D. Thiel, Space Application of Automation, Robotics and Machine Intelligence Systems (ARAMIS) - Phase II, Cambridge, Massachusetts: NASA Scientific and Technical Information Branch, 1983.

[8] T. Massie and J. K. Salisbury, "The PHANTOM haptic interface: A device for probing virtual objects," in ASME Winter Annual Meeting: Symposium on Haptic Interfaces for Virtual environment and Teleoperator Systems.

[9] T. Sheridan, "Human-Robot Interaction: Status and Challenges," Human Factors, vol. 58, no. 4, pp. 525-532, 2016.

[10] "Robotics Telepresence," Telepresence Options, pp. 50-57, July 2013.

[11] A. Keay, "Robotic Telepresence," Telepresence Options, pp. 30-34, 28 July 2014.

[12] M. Desai, K. M. Tsui, H. A. Yanco and C. Uhlik, "Essential Features of Telepresence Robots," in IEEE International Conference on Technologies for Practical Robot Applications, Woburn, MA, 2011.

[13] A. Kristoffersson, S. Coradeschi and A. Loutfi, "A Review of Mobile Robotic Telepresence," Advances in Human-Computer Interaction, vol. 2013, no. 902316, pp. 1-17, 2013.

[14] G. H. Ballantyne and F. Moll, "The Da Vinci Telerobotic Surgical System: The Virtual Operative Field and Telepresence Surgery," Surgical Clinics of North America, no. 83, pp. 1293-1304, 2003.

[15] C. L. Fernando, M. Furukawa, T. Kurogi, S. Kamuro, K. Sato, K. Minamizawa and S. Tachi, "Design of TELESAR V for Transferring Bodily Consciousness in Telexistence," in International Conference on Intelligent Robots and Systems, Vilamoura, Algarve, Portugal, 2012.

[16] F. Weichert, D. Bachmann, B. Rudak and D. Fisseler, "Analysis of the Accuracy and Robustness of the Leap Motion Controller," Sensors, vol. 13, no. 1424-8220, pp. 6380-6393, 2013.

[17] J. Kofman, X. Wu, T. Luu and S. Verma, "Teleoperation of a Robot Manipulator Using a Vision-Based Human-Robot Interface," IEEE Transactions on Industrial Electronics, vol. 52, no. 5, pp. 1206-1219, 2005.

[18] C. R. Carignan and K. R. Cleary, "Closed-Loop Force Control for Haptic Simulation of Virtual Environments," Haptics-e, vol. 1, no. 2, pp. 1-14, 2000.

[19] SensAble Technologies, "Specifications for the PHANTOM® Desktop™ and PHANTOM Omni® haptic devices," 2009. [Online]. Available: http://www.dentsable.com/documents/documents/STI_Jan2009_DesktopOmniComparison_print.pdf. [Accessed September 2016].

[20] E. A. Caspar, A. Cleeremans and P. Haggard, "The relationship between human agency and embodiment," Consciousness and Cognition, no. 33, pp. 226-236, 2015.

[21] J. W. Moore and S. S. Obhi, "Intentional Binding and the Sense of Agency: A review," in Consciousness & Cognition, London, 2012, pp. 1-38.

[22] Engineered Arts, "RoboThespian Technical Specifications," 11 July 2016. [Online]. Available: http://wiki.engineeredarts.co.uk/RoboThespian_Technical_Spec. [Accessed September 2016].

[23] Rethink Robotics, "Hardware Specifications," [Online]. Available: http://sdk.rethinkrobotics.com/wiki/Hardware_Specifications. [Accessed September 2016].

[24] AVIA Semiconductor, "24-Bit Analog-to-Digital Converter (ADC) for Weigh Scales," [Online]. Available: https://cdn.sparkfun.com/datasheets/Sensors/ForceFlex/hx711_english.pdf.

[25] A. Steinfeld, T. Fong, D. Kaber, M. Lewis, J. Scholtz, A. Schultz and M. Goodrich, "Common Metrics for Human-Robot Interaction," in HRI, New York, 2016.

[26] NASA Ames Research Center, "Task Load Index (NASA-TLX)," Human Performance Research Group, 1986.

​

Appendices

​

Appendix A - Circuit Schematic

Appendix B - Questionnaire


Chapter 2 - Literature review

​

This chapter provides research cases that highlight the engineering and psychological aspects of telepresence robotics. These concepts will be built upon to form the hypothesis in the next section. Then, existing telepresence robotic systems are reviewed to create a broader picture of the current state-of-the-art technologies in the field. The purpose of the chapter is to construct a framework on which the remainder of this dissertation will be based.

​

​

2.1 Characteristics of telepresence robotics

​

Telepresence robotic systems are described by Akin [7] as having two essential requirements.

​

1. The manipulators have the dexterity to allow the operator to perform normal human functions. This refers to the robot’s ability to mechanically match the desired motion of the user within a reasonable amount of time. This is dependent on the number of degrees of freedom of the robot and mechanical limitations, as well as on characteristics of the control interface used.

​

2. The operator receives sensory feedback of sufficient quantity and quality to provide a feeling of actual presence at the worksite. This refers to how embodied the user feels as the robot and may depend on use of various types of feedback; e.g., visual, haptic, or temperature.

​

​

The two above requirements depend on a diverse set of factors. Visual feedback alone depends on camera height, monoscopic vs. stereoscopic vision, field of view, and of course resolution, among many other factors. This multitude of factors presents one of the more significant challenges in telepresence robotics: finding a balance between permissible delay and the amount of information sent between the control system and the remote robot. This is especially relevant because the connection can often be over long distances, producing an inherent delay from the latency of the network, independent of the actual system specifications.

 

Additional complexity may arise in a telerobotic system because it involves the interactions of multiple complex entities simultaneously in real time. [13] suggests three foundational relationships specific to a 'social' telepresence robotic system. Based on their ontological observation, a congruent generalization delineates three primary relationships:

​

1. Human-robot interaction between operator and robot, through the controller

2. Robot-environment interaction between robot and its surroundings

3. Human-environment interaction between operator and the robot’s surroundings

​

As mentioned previously, telepresence robotic systems tend to emphasize either social interaction, remote manipulation, or sensory immersion. Some examples of social systems include an array of commercially available screen-on-wheels systems. The system can either be a standalone product, such as the VGo and Ava 500, or it may require the user to "add a tablet" - known as a BYOD (bring-your-own-device) system [11]. These social robotic systems often rely on a Monitor Mouse Keyboard (MMK) control interface for simplicity. On the other hand, telepresence robotics with a focus on remote manipulator control usually relies on complex haptic devices, using a 'master-slave' hardware architecture. Haptic devices are generally the most reliable and intuitive, justified by their use in surgical operations, because they can provide the operator with detailed force feedback from anthropomorphic robot arms [4] [14]. Lastly, systems such as the TELESAR V [15], which emphasize sensory immersion, tend to use either haptic or camera-based control schemes. In general, camera-based control interfaces are not promoted commercially for telepresence robotics, but they are commonly used in research [16] [17].

 

2.2 Haptic Devices

 

Haptic devices allow both tactile and force sensations to be sent to the user while reading their position and input forces. These devices, often used in training simulators, have been crucial for telepresence robotics for decades. Additionally, they are fascinating from a control system’s perspective. Thus, there is a significant amount of documentation regarding general haptic device design and mechanics.

As explained by the creators of the PHANToM haptic device in [8], there are three criteria in design of haptic devices:

​

1. Free space must feel free.

2. Solid virtual objects must feel stiff.

3. Virtual constraints must not be easily saturated.

​

[18] then suggests that the degree to which these criteria are met has a strong relation to the level of immersion of the operator. In other words, if one of these criteria is not met, then there is a significant loss of immersion. The most prominent factor in achieving these criteria is the controller's physical dynamics. These dynamics include any forces or torques independent of those created by the user's interaction with the device. The PHANToM device, whose specifications [19] are shown in Table 2.1, will provide an example for characterizing these dynamics.

Figure 2.1 The PHANTOM Desktop and PHANTOM Omni devices.

Table 2.1 Selected specifications of two PHANToM devices. [19]

​

​

The dynamics of the PHANToM include inertia, or the apparent weight of the stylus. With respect to the first criterion, it is desirable for the apparent mass at the point of interaction to be as close to zero as possible. If there is noticeable inertia, it degrades performance by making the user apply more force to move the stylus, so free space no longer feels free.

With regard to the second criterion, a haptic device is characterized by how quickly it can provide the sensation of stiffness when the stylus is in motion. Stiffness is expressed in newtons per millimetre, describing the resisting force generated per unit of displacement into the virtual surface. In addition, because the amount of force feedback depends on a measured quantity (i.e., the user's force), there is often nonlinearity in the force feedback, depending on the system's update time between measurements. Lastly, it can be generalized that greater interface stiffness increases bandwidth and system stability.

The third criterion is related to the maximal force which the device can exert. This is defined by the motors’ maximum torques and operating voltages. When the maximal torque is exceeded by the user, the motor gives way and the user experiences the undesirable sensation of moving through a virtual object.

Furthermore, there are two recognized types of haptic feedback devices, impedance controlled and admittance controlled, each with its own benefits in regard to the criteria, power consumption, and overall hardware required. Impedance controllers measure the motion of the operator and regulate the force exerted at the point of contact, whereas admittance controllers measure the force of the operator and control the displacement of the point of contact. In other words, the impedance controller outputs force while the admittance controller outputs position.
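As a toy illustration of the distinction (a minimal sketch with an assumed virtual-wall and virtual-mass model, not taken from any cited device):

    def impedance_step(position, wall_position, stiffness, damping, velocity):
        """Impedance control: read the operator's motion, output a force command."""
        penetration = position - wall_position
        if penetration <= 0.0:
            return 0.0                    # free space must feel free: no force
        return -stiffness * penetration - damping * velocity

    def admittance_step(measured_force, virtual_mass, current_velocity, dt):
        """Admittance control: read the operator's force, output a new velocity."""
        acceleration = measured_force / virtual_mass
        return current_velocity + acceleration * dt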

To conclude, the controller dynamics are minimized to make the interface as transparent as possible, with the worst-case scenario being a controller with highly nonlinear dynamics. There are many challenges involved in meeting these criteria, and haptic devices themselves tend to be relatively expensive compared to other control schemes. Despite this, haptic feedback devices are still credited with the highest performance for remote manipulation when compared to other control schemes [18].

 

 

 

2.3 Rubber hand illusion variation

 

 

This section explores the psychological factors behind telepresence by analyzing a case study on a variation of the famous Rubber Hand Illusion (RHI) performed by [20]. The telepresence phenomenon can be thought of as having two psychological attributes, the first being embodiment into another entity. To elaborate, embodiment involves a sense of personification, in which the human’s sense of self is extended. Second, there is a feeling of agency, or ownership over the robot’s actions – as can be found in use of simple mechanical tools. Agency can be understood as a cause-and-effect in which the user’s intentions are the catalyst for actuation. It is generally accepted that embodiment and agency increase the user’s sensorimotor understanding of the robot, providing the operator with greater dexterity and acuity.

In the original RHI, a person is tricked into believing that a fake hand, which roughly matches theirs visually, is actually their own hand. This is accomplished by stimulating the person’s hand, at the same time as they perceive the visually fake hand being stimulated. The original RHI is a simple example of embodiment. Figure 2.2 depicts the described procedure.

Figure 2.2 Depiction of the conventional rubber hand illusion setup.

 

In the selected case study [20], a variation of the original RHI is performed in which a mobile robotic hand is used in place of an immobile rubber hand. The purpose of the study was to find a relationship between agency and embodiment (which they referred to as 'ownership'). In their setup, when the user moved their own finger, the robotic finger moved in unison but with varying delays of 300, 500, and 700 milliseconds.

The researchers measured the amount of agency felt by the participant using a psychological effect called "intentional binding". The intentional binding effect holds that the time between voluntary actions and their consequences is perceived as shorter than the time between involuntary actions and their consequences. Thus, agency was measured by asking the user to report which of the three durations they perceived as the delay between their finger movement and the robot's movement. If the user chose a time shorter than the actual delay, it was concluded that there was a sense of agency. Furthermore, the study defined three types of robot and user action relationships:

​

Active congruent: the robot's action matches the user's action (i.e., the user moves their index finger and the robot moves its index finger).

Active incongruent: the robot's action is different from the user's action (i.e., the user moves their index finger but the robot moves its little finger).

Passive congruent: neither the user nor the robot moves. This is the same as the original rubber hand illusion.

​

It was found that in the active incongruent case there was significantly less embodiment, but a sense of agency was retained. This was reflected both in their measured results and in their questionnaire. The questionnaire results are shown in the figure below.

Figure 2.3 A graph and its subtext from the variation of the RHI [20], depicting the results of their questionnaire. From the original figure text: “Questionnaires were scored from -3 («strongly disagree») to +3 («strongly agree»). Pink columns represent the mean score of the four items assessing agency, and brown columns represent the mean score for ownership. Error bars refer to standard errors. “

 

In the discussion of their research paper, the authors suggest that agency may depend on the user's mental model of statistical associations between their actions and the consequences. This suggestion was also made by [21], which goes in depth regarding the mechanisms of agency in the context of the intentional binding effect. The level of agency in the context of an active incongruent action relationship between the user and the robot will be used in the formation of this dissertation's hypothesis.

​

​

2.4 Modern telepresence robotics systems

​

​

This section reviews existing state-of-the-art telepresence robotic systems. To begin, the RoboThespian is an advanced social robot designed by the company Engineered Arts. It is designed specifically for realistic social interaction, unlike the relatively simple 'screen on wheels' approach found in MRP systems. The robot itself is made mostly of aluminium and is actuated pneumatically, powered from an external air compressor. Originally, when the project began in January of 2005, it was intended as an animatronic machine, but in 2007 the developers added feedback sensors on all of its movement axes to allow for real-time manual control.

One of the software applications developed for the RoboThespian is a telepresence mode, in which a user can use an MMK interface. The robot's head movements match those of a wearable headset, which is additionally equipped with a microphone and speakers to permit bi-directional verbal communication with humans near the robot. Furthermore, the operator can switch easily between autonomous pre-recorded sequences and manual control using a lightweight web browser interface called Virtual Robot. Currently, Engineered Arts hopes to add the ability for the robot to walk around instead of being operated from an immobile platform. The current price of the RoboThespian is around 55,000 pounds. For more information, a wiki page maintained by the company contains all the technical specifications and details [22].

 

Figure 3.1 The RoboThespian from Engineered Arts [22].

 

The DaVinci system [14], from the company Intuitive Surgical, is the leading telepresence robotic system for remote surgical operations. The robot side, referred to as the 'patient-side cart', includes three or four robotic arms. The system uses multiple levels of safety checks to ensure that the surgeon is in direct control. On the surgeon's side, the interface consists of a stereoscopic wide-angle vision system and a multi-axis haptic interface allowing control of two robotic limbs simultaneously.

Figure 3.2 A close up view of the surgeon-side interface in operation.

 

 The product, costing around two million dollars, includes multiple training applications that teach the skills necessary for performing remote surgical operations. Using an assortment of extensions attached to the robot's end effectors by an on-site nurse, the robot is capable of performing a large array of medical tasks. Although the device was approved by the Food and Drug Administration (FDA) for use in 2005, there are still many legal procedures that must be followed for proper use of the device, including the presence of a representative of the company. Figures depicting the surgeon-side console, the patient-side cart, and some of the attachable wrist instruments are shown below [14].

 


Figure 3.3 Three components of the DaVinci surgical system: surgeon-side interface (left), the patient-side cart (middle), various attachable wrist instruments (right).

 

Lastly, the TELESAR V robot [15], created in Japan, is the result of nearly 40 years of innovation, starting with the TELESAR I in 1988. The researchers use the term 'telexistence' synonymously with the stated definition of telepresence. The TELESAR V is unique in that it simultaneously allows the user to have social interaction, remote manipulation, and sensory immersion.

Notable features of the TELESAR V system include:

  • Robot with 52 degrees of freedom

  • Unique haptic mechanism able to simulate both shearing forces and pressure

  • Vision-based cutaneous finger sensors for texture sensing

  • Wide-angle camera and corresponding wide-angle head mounted display (HMD)

  • Speakers and stereo microphones for bi-directional verbal communication


The TELESAR V is exceptional in that it implements so many features in a single device. Using a camera-based control system, advanced control algorithms, and inverse kinematics for each arm, the system provided adequate levels of control for the remote tasks of writing Japanese calligraphy and playing Japanese chess (shogi) [15].

Figure 3.4, A. The TELESAR V being operated to paint calligraphy. B. The resulting piece.

 

The robot used in this thesis is not as sophisticated as the three systems previously mentioned in this section, but it is designed specifically for safe interaction with humans. A finished product, the Baxter robot from Rethink Robotics [23], will be used so that the focus can be entirely on the HMI. For later reference, Baxter's arm joints are labelled as follows:

Figure 3.5 Baxter’s left arm with the joint names labelled. ‘S’ denotes shoulder, ‘E’ elbow, and ‘W’ wrist. The right arm is identical. [23]

Abstract

     The viability of a new control scheme is explored. Literature pertinent to the characteristics of telepresence robotics is reviewed and analyzed in order to design a static controller - an interface in which the user is physically immobile. A hypothesis is formed based on criteria for traditional haptic controllers and on research into the psychology of agency. After construction of a prototype, the design is tested against pre-existing control schemes using quantitative and qualitative metrics of performance. The static controller's overall performance, found to be comparable to joystick control schemes, suggests that it is a viable approach in telepresence robotics. The success of this new control scheme raises questions of where its particular niche may lie, and shows that there is much more to be explored.

​

Acknowledgements

     Very special thanks to Lola Breaux for assisting in collecting data during all of the experiments. Additional gratitude is in order for Kevin Warwick, who met with me on multiple occasions to provide feedback and inspiration. Lastly, a big thank you to technician Peter Tolson for generously giving a significant amount of his time during the construction of the prototype.         

 

                                                

Chapter 1 - Introduction

​

Telepresence robotics, loosely defined as immersive control of a remote robot, has become significantly more mainstream in the past few years. The idea emerged from early science fiction stories such as Waldo (Robert A. Heinlein, 1942), and it has more recently been explored in the popular film Avatar (James Cameron, 2009) and the sitcom The Big Bang Theory (CBS, 2007-). Leaders in the field of telepresence robotics are now releasing useful products to a diverse customer base. To ensure optimal performance in the long run, it is imperative to explore alternative approaches to the implementation of telepresence robotics, specifically of the interface between the operator and the robot. This thesis explores the characteristics of the Human-Machine Interface (HMI), reviews some state-of-the-art telepresence robotic systems, proposes a new type of control scheme, and finally compares the new scheme's performance relative to that of traditional ones.

​

​

1.1 Telepresence robotics definitions

​

'Teleoperation' is defined as control of remote sensors and actuators. 'Remote' can refer to scale in addition to distance, as in the example of the GROPE IIIb project, which operates at the nanoscale [1]. When the controlled sensors and actuators belong to a semi-autonomous system, the term is 'telerobotics'. Alternatively, the phenomenon of 'telepresence' [2] may occur if the user has the illusion of being within the robot's location and size scale [3]. Thus, the phrase 'telepresence robotics' can be defined as technologies which allow control of a remote robot while providing the user with the illusion of being present in the robot's environment. The concept of telepresence is considered an ideal to be strived for, referred to as the "Holy Grail" of the field [4]. It is currently possible to simulate pressure, temperature, and even texture using haptic devices - but true immersion is practically unattainable. Both research and commercial telepresence robotic systems, with some exceptions, tend to focus on only one of three classes: social interaction, remote manipulation, or sensory immersion.

​

​

1.2 Brief history and modern applications

​

Telepresence has its roots in teleoperation. The point of transition between remote manipulation and telepresence is somewhat ambiguous, as the level of immersion felt by the user is subjective. Therefore, what follows is a brief history of remote manipulators, tracking their stages of progress as they became more integrated with virtual reality technology, so that the reader may decide wherein that distinction lies. 

Around the year 1947, scientists at Argonne National Laboratory created the first remote-controlled mechanical hands, for safer handling of hazardous materials. In 1954, Ray Goertz added electric motors to provide force feedback to the operator. Goertz later filed a patent for the 'master-slave' manipulator and is considered an early pioneer of telepresence robotics [5]. Because of his contributions, the American Nuclear Society presents the 'Ray Goertz Award' to members who have made substantial contributions to remote technologies.

​

​

Figure 1.1 Figure used in Goertz’s patent on a ‘master-slave’ manipulator. [5]

​


     One notable instance of visual feedback in a robotic context was the work of Philco engineer Steve Moulton, the creator of one of the first remote viewing systems. The system used a TV camera which rotated in sync with a worn helmet. After that, there was a nearly 20-year gap with no major telepresence robotics developments, attributed to declining funding and increased costs. [2]

    The Three Mile Island nuclear incident of 1979 in Pennsylvania stimulated development of remote manipulation, as the clean-up required crew members to be exposed to a year’s allowable level of radiation in just a few minutes. Then, at last, the word “telepresence” itself was coined by Marvin Minsky in 1980 [2]. Minsky later served as principal investigator for the NASA telepresence robotics research at the Massachusetts Institute of Technology (MIT) [6] and has been a pioneer in this field. The original application of his research was for space robots, but the published results of MIT’s research provide a foundation for much of the terminology and ideologies used in modern telepresence robotics [7].

     The next couple of decades saw increased development of telepresence robotics for applications such as deep water robots, chemistry and biology, surgery, landmine removal, and nuclear waste removal. Haptic feedback devices such as bilateral master-slave manipulators (MSMs) found use in space, undersea, and land vehicle remote applications. MSMs are systems in which a master control arm is a mechanical reproduction of a remote slave arm. Another common type of haptic device is the servomanipulator, which uses servos to both sense position and provide force feedback to the operator [4]. One notable haptic device which emerged was the PHANToM [8], which uses a parallel linkage on a vertical rotation axis to both sense and provide feedback while remaining relatively transparent to the operator. The PHANToM has been used extensively in surgical simulation, stroke rehabilitation, and dental operations and training, in addition to the design of toys and footwear.

     Since 2006, the IEEE has hosted an annual specialists’ symposium on human-robot interaction (HRI) [9]. In 2013, an annual report on telepresence robotics suggested that the market for the technology must overcome significant difficulties before commercialization [10]. However, the following year’s report stated that telepresence robotics was “at an inflection point”. Dozens of telepresence robots have since entered the market and have been embraced by a remarkably diverse customer base in education, the military, surgery, customer service, tourism, and film, among others [11].

   The general public is exposed to telepresence through the products of companies such as iRobot, RoboDynamics, VGo Communications and Willow Garage [12]. All of these commercial products focus on one specific branch of telepresence robotics called mobile robotic telepresence (MRP). MRP systems are characterized by a video conferencing system mounted on a mobile robotic base and are operated remotely over the internet [13]. These systems typically use joysticks/keyboards, touch screen devices, or motion sensing systems employing machine vision or inertial measurement units. Additionally, Cisco is currently developing robotic telepresence systems targeted at small and medium sized businesses, with the hope that they can save the time and money which would otherwise be spent on long distance business trips. For medical use, there is the da Vinci surgical system, which enables remote surgery with both high precision and reliability [14]. 
 

 

1.3 Aims and objectives

​

This thesis explores the possibility of a new type of HMI designed specifically for intuitive and immersive control. The proposed controller will be tested against existing controllers to form a preliminary assessment of whether or not the design should be developed further. The aims of this thesis project are to:

  • Analyze pertinent literature to develop a hypothesis for a new control scheme

  • Construct and test a usable prototype

  • Implement an HMI for the new control scheme and two existing control schemes

  • Quantify user performance with each scheme through experiments and compile qualitative feedback with a questionnaire

  • Analyze results to assess the new control scheme

  • Suggest future work to further develop the new control scheme

 

 

1.4 Structure of the dissertation

​

The chapters of this dissertation are:

  1. Introduction – definition declarations, history, modern applications, and goals.

  2. Literature Review – recent research related to this project’s exploration.

  3. Literature Analysis – project design requirements and hypothesis.

  4. Static Controller Design – detailed description of the new controller.

  5. Experimental Setup – description of implementation of existing controller types, as well as additional measures taken to create a telepresence system.

  6. Experimental Procedure – methods used for collecting data.

  7. Experimental Results – objective presentation of the data collected.

  8. Conclusion – dissertation summary, discussion of experimental data, and suggestions for the future.

“We are not the avatars we create, we are not the pictures on the filmstock, we are the light that shines through. All else is smoke and mirrors, distracting – but not truly compelling”   ~Jim Carrey

Chapter 3 - Analysis and Hypothesis


To form a hypothesis, observations are made regarding general haptic controller design and the psychological aspects of agency and embodiment. Through analysis, a theory is crafted from the literature. Then, a hypothesis is formed and specific design requirements for a novel control scheme are established.

​


3.1    Formation of hypothesis
 

As mentioned in section 2.2, there are three criteria generally used for haptic controller design, and there are non-trivial challenges in meeting them. If the first criterion of ‘free space feeling free’ is ignored, then satisfying the next two criteria, both related to stiffness, can be achieved simply by physically stopping the user from moving entirely. One potential problem with this approach is that the level of immersion may diminish significantly because of the violation of the first criterion. An additional problem may be that the force feedback loses its meaning, because it will always be a reflection of the user’s own force rather than forces from the remote environment.
      However, in section 2.3 it was shown that a sense of agency can be achieved independently of embodiment in the case of an active incongruent motion. If a user is controlling a telepresence robot, and sees the robotic version of their arm in front of them through an HMD, then the forces they exert on the restraining fixture would constitute an active incongruent motion. As shown in Figure 2.3, during an active incongruent motion the sense of agency is reduced - but only slightly below the sense of agency in the active congruent motion. As that research suggests, so long as there is a reliable causal relationship between the user’s action and the outcome, it should be possible to maintain this agency.
      Therefore, if the user were restrained by a stiff enclosure and the forces exerted by their arm were measured, it should be possible to retain agency with a diminished sense of embodiment. Perhaps in future work, beyond the scope of this dissertation, techniques of sensory substitution may allow compensation for this reduced sense of embodiment, even while the first haptic device design criterion is being violated. If this is possible, then a controller can provide both agency and embodiment while greatly simplifying the design requirements. This new control scheme stemming from traditional haptic devices will be called a ‘static control scheme’. The resulting hypothesis is the following: In the context of telepresence robotics, free space does not need to feel free for a haptic control interface to be effective. This hypothesis will be tested by creating a prototype for a static controller and then by comparing its performance to traditional control schemes.


3.2    Design requirements
 

Based upon the proposed hypothesis, the constructed device must have certain characteristics. Firstly, if a mechanism exists to measure forces exerted by the user, then sensor placement should be chosen in a way that allows an intuitive mapping between the robot’s movements and the user’s forces; i.e., contraction of the bicep should be able to be associated with the robot flexing its elbow. Regarding the choice of limb, as the majority of the human population is right-handed, the operator’s right arm and wrist are chosen as the user’s means of operation.  
     Additionally, it is a nontrivial matter to create a confinement mechanism which both minimizes movement and does not block the user’s blood circulation. Furthermore, a device which locks a person in place can be intimidating, and can easily become dangerous if not designed correctly. For example, if a person’s right arm is locked in place up to the elbow, they become vulnerable to injuring that arm if the rest of their body is moved dramatically by an accident such as falling off the chair. 

      The following design requirements have been chosen for the mechanical, electronic, and software interfaces. Additionally, certain practical features are necessary to ensure the safety of the user during operation.


Mechanical design requirements:


1.   The device must be immovable while being acted upon.
2.   The user’s arm should be able to be physically held in a manner that is comfortable and adjustable for different arm sizes.
3.   Force sensors placement should allow measurement along independent axes.
4.   As a safety requirement, the user should be able to remove themselves from the device without any mental effort - using a mechanism that does not reduce the overall stiffness of the device. (i.e., If the user for whatever reason falls off the chair then their arm should be released immediately and all at once.)


Electronic design requirements:


1.   The force transducers’ analog voltages should be amplified and then digitized by a microcontroller.
2.   The microcontroller input pins should have an update time of less than 100 ms.


Software design requirement:


1.   A communication framework must be implemented to allow force data to be sent to a host application upon request.
2.   A corresponding host application must be created to process the force data and manage the connection to the remote robot, including sending movement commands and performing any safety checks. (A minimal sketch of one possible request/response exchange is given below.)
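
To make the first software requirement concrete, the sketch below shows one way the host application could request a force sample from the microcontroller over its USB serial link. It is a minimal illustration under stated assumptions: the port name, baud rate, request character, and comma-separated reply format are hypothetical stand-ins for the actual firmware protocol.

    // Illustrative only: port name, baud rate, request command, and reply format are assumed.
    using System;
    using System.IO.Ports;

    class ForceLink
    {
        private readonly SerialPort port = new SerialPort("COM3", 115200);

        public void Open() => port.Open();

        // Request one sample; the MCU is assumed to reply with a line such as
        // "12.1,-3.4,0.8,2.0" containing the four amplified load cell readings.
        public double[] ReadForces()
        {
            port.WriteLine("R");                          // hypothetical request command
            string[] fields = port.ReadLine().Split(',');
            var forces = new double[fields.Length];
            for (int i = 0; i < fields.Length; i++)
                forces[i] = double.Parse(fields[i]);
            return forces;
        }
    }

In the actual application, such an exchange would simply be repeated inside a polling loop, keeping the update interval below the 100 ms bound set by the electronic design requirements.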

Chapter 5 - Reference Controller Implementations

 

This chapter outlines the input-to-output mappings for the pre-existing controllers used in the experiment. After a brief introduction of each device, selected inputs are mapped to Baxter’s joint movements. A separate C# application was created for each of these controllers, independent of the static controller’s application; because they share a nearly identical class structure, they will not be explained in detail.


5.1    Xbox Controller
 

5.1.1    Background
 

Microsoft’s Xbox game console uses a gamepad, shown in Figure 5.1, which fits the role of a generic joystick controller.

Figure 5.1 An Xbox game console controller, labelled to depict relevant features. A close-up view of a joystick is shown at the left of the controller to illustrate the definition of the axes for both joysticks.

 

5.1.2    Mapping
 

Designing an intuitive mapping from the Xbox gamepad to Baxter is challenging because the controller has no congruent physical relationship to Baxter’s arm. To counteract this, the mapping is instead kept as simple as possible, organized so that the user can easily build a mental model of the device’s operation. To summarize the information displayed in Table 5.1: the left joystick controls the shoulder, the D-pad controls the elbow, and the right joystick controls the wrist. To allow increased precision, the left bumper reduces Baxter’s movement speed to one third of its normal value; conversely, the right bumper increases the movement speed by a factor of two. These mappings drive joints S1, E1, and W1 of Baxter’s arm. Lastly, the right trigger closes the gripper attached to Baxter’s end effector, which otherwise remains open. To help the user understand when the grip is exerting force on an object, the application reads the force sensors within the Baxter robot and drives the controller’s internal vibration motors proportionally to the force exerted by the grippers. A simplified sketch of this mapping logic is given below.
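
The following sketch illustrates how such a mapping could be expressed in the host application. It is a simplified illustration under assumptions: GamepadState is a hypothetical stand-in for whatever input library supplies the controller readings, and the joint-name strings follow Baxter’s usual right-arm naming convention rather than the exact identifiers used in the project.

    // Simplified sketch of the gamepad-to-joint mapping described above.
    // GamepadState and the joint-name strings are illustrative assumptions.
    using System.Collections.Generic;

    struct GamepadState
    {
        public double LeftStickY, DPadY, RightStickY, RightTrigger;
        public bool LeftBumper, RightBumper;
    }

    static class XboxMapping
    {
        public static Dictionary<string, double> ToJointCommands(GamepadState pad)
        {
            // Speed scaling: left bumper slows to one third, right bumper doubles.
            double scale = 1.0;
            if (pad.LeftBumper) scale /= 3.0;
            if (pad.RightBumper) scale *= 2.0;

            return new Dictionary<string, double>
            {
                ["right_s1"] = scale * pad.LeftStickY,   // left joystick -> shoulder
                ["right_e1"] = scale * pad.DPadY,        // D-pad -> elbow
                ["right_w1"] = scale * pad.RightStickY,  // right joystick -> wrist
                ["gripper"]  = pad.RightTrigger > 0.5 ? 1.0 : 0.0  // trigger closes gripper
            };
        }
    }

Gripper force feedback would run in the opposite direction: the application reads Baxter’s reported grip force and sets the controller’s vibration intensity proportionally.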


Table 5.1 Mapping from Xbox controller inputs to Baxter joints. Reference Figure 3.5 to see joint names. 


5.2    Leap Motion Device
 

5.2.1    Background
The Leap Motion is a commercially available USB device which tracks hand and arm positions and orientations with sufficient performance for gesture recognition. Its operation is similar to that of the Microsoft Kinect, in that it uses an array of infrared sensors. It has a field of view of 150 degrees and can sense position at a resolution of up to 0.2 mm. It collects data at a rate of 200 Hz and processes the information on the device itself, saving time by transmitting only meaningful data regarding hand and arm position [16].


Figure 5.2 A Leap Motion device with the three axes in the positive directions of X, Y, and Z designated by colored arrows. 

 

5.2.2    Mapping
 

Previous approaches such as [17] map the Leap Motion to a remote manipulator using inverse kinematics to match the end effector position to that of the human hand. Because this approach requires a vast number of calculations to be completed in a short amount of time, it proved unusable in the present setup and was not pursued beyond a superficial initial assessment. It was found that even moving the tracked hand relatively slowly produced inverse kinematic trajectory solutions that were circuitous and indirect, resulting in a large number of unintended joint motions. In the future, with more powerful computers, inverse kinematics may become the more viable approach. Instead, a much quicker response was achieved by mapping hand movement along the device’s axes to specific joint movements on Baxter.
      The Leap Motion API provides positional data, so in order to measure movement of the user, a discrete derivative of the position values is obtained. This is accomplished in the typical manner, by taking the difference between the new reading’s position and that of the previous reading. The resulting delta value is then used as the control input.
      The Leap Motion does not immediately offer as many degrees of freedom as the Xbox controller, but its ability to recognize gestures potentially allows for more types of input. Thus, the constructed mapping controls only the joints necessary to achieve basic remote manipulation. To keep the design intuitive, the user’s arm moving up results in the end effector moving up, and likewise for the other directions. Originally, to move the robot arm forwards, multiple joints were mapped to user movement along the Leap Motion’s Z axis; pilot tests showed that this was too difficult for users, and the feature was removed for simplicity. Additionally, the user’s wrist roll and pitch are used to command Baxter’s wrist with the respective motions. Finally, using existing functions from the Leap Motion’s API, the user’s hand grip position is observed and used as input to Baxter’s gripper. Pilot tests also revealed that it was necessary to apply a filter, similar to the one used in the static controller, to reduce the effect of natural arm shaking. A minimal sketch of the delta-and-filter computation is given below.
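
The sketch below shows one way the discrete derivative and smoothing described above could be implemented for a single axis. It is an illustration under assumptions: the Leap Motion SDK types are not used directly, palmY stands in for one positional coordinate read from the API, and the exponential filter and its constant are only one plausible choice for the kind of smoothing mentioned.

    // Illustrative single-axis pipeline: discrete derivative of position,
    // followed by exponential smoothing to suppress natural hand tremor.
    // The smoothing constant and the use of exponential filtering are assumptions.
    class AxisInput
    {
        private double previous;            // last position sample (mm)
        private double filtered;            // smoothed control value
        private const double Alpha = 0.2;   // illustrative smoothing factor (0..1)

        // Called once per Leap Motion frame with the new position reading.
        public double Update(double palmY)
        {
            double delta = palmY - previous;            // discrete derivative
            previous = palmY;
            filtered = Alpha * delta + (1.0 - Alpha) * filtered;
            return filtered;                            // used as the joint command
        }
    }

The same structure would be repeated for each tracked axis, with the filtered value scaled before being sent as a joint command.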

Table 5.2 Mapping from Leap Motion inputs to Baxter joints. Reference Figure 3.5 to see joint names.
