Red pioneered nuclear robots for accident cleanup at Three Mile Island, dismantlement at Argonne, tank cleanup, waste packaging, and site survey, and founded RedZone Robotics. He has developed dozens of robots, breaking new ground in mining, construction, agriculture, and space exploration, and his ground vehicles have driven thousands of autonomous miles. He won DARPA's $2 million Urban Challenge race. Red is a member of the National Academy of Engineering and a recipient of the Columbia Medal, the Joseph Engelberger Award for outstanding achievement in robotics, the Ramos Medal for systems excellence, the Perlis Award for research excellence, and the Feigenbaum Prize for artificial intelligence.
NASA Robotics Development: Where are we going?
Dr. Kimberly Hambuchen is currently the NASA Space Technology Mission Directorate’s (STMD) Principal Technologist for Robotics. As Principal Technologist, she serves as the STMD technical expert and advocate for robotics across all NASA centers for STMD programs. She works with STMD managers and field center leads to maintain and update the directorate’s portfolio of robotics projects across the range of Technology Readiness Levels. She has spent the last 20 years developing software and applications to advance the intelligence, usefulness and operational intuitiveness of robots. As a robotics engineer in the Robotics Systems Technology branch of the Software, Robotics and Simulation division of engineering at NASA Johnson Space Center, Dr. Hambuchen developed expertise in novel methods for remote supervision of space robots over intermediate time delays and has proven the validity of these methods on various NASA robots, including JSC’s Robonaut and Centaur robots. She participated in the development of NASA’s Space Exploration Vehicle (SEV) and bipedal humanoid, Valkyrie (R5), to which she extended her work developing human interfaces for robot operations. Dr. Hambuchen is currently a member of the International Space Exploration Coordination Group’s (ISECG) Telerobotics Gap Assessment team, providing gap analysis in the field of operating space robots for the international space community, and in 2016 was named “One of the 25 Women in Robotics to Know” by RoboHub.
Computational Models and Measures of Human Robot Teaming for Space Exploration and Beyond
Dr. Karen M. Feigh is an Associate Professor at Georgia Tech’s Daniel Guggenheim School of Aerospace Engineering. As the director of the Georgia Tech Cognitive Engineering Center, she leads a research and education program focused on the computational cognitive modeling and design of cognitive work support systems and technologies to improve the performance of socio-technical systems with particular emphasis on aerospace systems. She is responsible for undergraduate and graduate level instruction in the areas of flight dynamics, human reliability analysis methods, human factors, human-automation interaction and cognitive engineering. Feigh has over 10 years of relevant research and design experience in fast-time air traffic simulation, ethnographic studies, airline operation control centers, synthetic vision systems for helicopters, expert systems for air traffic control towers, human extra-vehicular activities in space, and the impact of context on undersea warfighters.
Dr. Feigh has served as both Co-PI and PI on a number of FAA, NIA, ONR, NSF and NASA sponsored projects. As part of her research, Dr. Feigh has published 30 scholarly papers in the field of Cognitive Engineering with primary emphasis on the aviation industry. She serves as an Associate Editor for IEEE Transactions on Human-Machine Systems, the Journal of Cognitive Engineering and Decision Making and the Journal of the American Helicopter Society. She previously served as an associate editor on the AIAA Journal of Aerospace Information Systems. She currently serves as the Chair of the Human Factors and Ergonomics Society's Cognitive Engineering and Decision Making Technical Group, sits on AIAA's Air Transportation Systems Technical Committee, and serves on the National Research Council's Aeronautics and Space Engineering Board (ASEB).
Vision Systems for Planetary Landers: Progress and Challenges
The first onboard vision system used in a lander for planetary exploration was the Descent Image Motion Estimation System (DIMES), developed at JPL for the Mars Exploration Rover (MER) landings in January 2004. DIMES used monocular imagery, radar altimetry, and an IMU to estimate the horizontal velocity of the descent system in the last 2 kilometers of descent. The horizontal velocity estimates were used in retrorocket firing logic to reduce horizontal velocity before the airbag impact on the ground. Research since then has focused on using descent imagery for terrain relative navigation (TRN), as an input to precision landing, as well as on landing hazard detection with lidar and other sensors. The Mars 2020 rover mission plans to use TRN to target landing at locations that are identified as hazard-free by analysis of orbital reconnaissance imagery prior to arrival at Mars. Research is in progress to generalize these capabilities to other planetary bodies besides Mars. Some of these bodies have far different environments, such as dense, hazy atmospheres, which necessitate significantly different technical approaches. This talk will give an overview of the progress to date in this area and challenges for future applications to bodies like Europa, Titan, and Venus.
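The core idea behind DIMES-style velocity estimation can be illustrated with a simplified sketch: pixel displacements of tracked surface features, scaled by radar altitude through a pinhole camera model, yield ground-plane motion between frames. This is an illustrative approximation only, not the flight algorithm; it assumes a nadir-pointing camera, flat terrain, and camera rotation already compensated using IMU attitude, and all function and parameter names are hypothetical.

```python
# Illustrative sketch of descent-image horizontal velocity estimation
# (assumptions: nadir camera, flat terrain, IMU rotation compensation
# already applied; NOT the actual DIMES flight algorithm).
import numpy as np

def horizontal_velocity(px_disp, altitude_m, focal_px, dt_s):
    """Estimate horizontal velocity (m/s) from tracked-feature motion.

    px_disp    : (N, 2) per-feature pixel displacements between two frames
    altitude_m : radar altitude at image time (meters)
    focal_px   : camera focal length expressed in pixels
    dt_s       : time between the two frames (seconds)
    """
    # Pinhole model: one pixel of image motion corresponds to
    # (altitude / focal length) meters of ground motion.
    ground_disp = np.asarray(px_disp, dtype=float) * (altitude_m / focal_px)
    # Median across features gives robustness to bad feature tracks.
    return np.median(ground_disp, axis=0) / dt_s

# Example: three tracked features moving ~10 px between frames,
# at 1500 m altitude, 1400 px focal length, 0.25 s frame spacing.
v = horizontal_velocity([[10.1, -2.0], [9.9, -2.1], [10.0, -1.9]],
                        altitude_m=1500.0, focal_px=1400.0, dt_s=0.25)
# v is the estimated (x, y) horizontal velocity in m/s
```

In the real system, such an estimate would be fused with IMU propagation and fed to the retrorocket firing logic; the median step here stands in for the more careful outlier rejection a flight implementation requires.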