The Center for Computational Robotics (CCR) at the University of South Carolina facilitates research, education, and outreach in robotics. Our mission is to solve complex scientific problems in perception, autonomy, and interaction for robots that operate in unstructured environments. CCR projects span theoretical foundations and fielded applications. The center was established in 1983 as the Center for Machine Intelligence and assumed its current name, reflecting a renewed focus on robotics, in 2015.
Abstract: Our lives are being transformed by large, mobile, "sophisticated robots" with increasingly higher levels of autonomy, intelligence, and interconnectivity. For example, driverless automobiles are likely to become commercially available within a decade. Many people who suffer physical injuries from these robots will seek legal redress for their injuries, and regulatory schemes are likely to impose requirements on the field to reduce the number and severity of injuries.
This talk addresses whether the current liability and regulatory systems provide a fair, efficient method for balancing the concern for physical safety against the need to incentivize the innovation necessary to develop such robots. To provide context for the analysis, the talk reviews innovation and robots' increasing size, mobility, autonomy, intelligence, and interconnection from a safety perspective - particularly physical interaction with humans - and summarizes the current legal framework for addressing personal injuries in terms of doctrine, application, and underlying policies. The talk argues that the legal system's method of addressing physical injury from robotic machines that interact closely with humans strikes an appropriate balance between innovation and liability for personal injury. It critiques claims that the system is flawed and needs fundamental change, and concludes that the legal system will continue to fairly and efficiently foster the innovation of reasonably safe sophisticated robots.
Bio: Professor Hubbard has been a member of the University of South Carolina School of Law faculty since 1973. He retired from full-time teaching in 2015. He currently teaches Legal Theory and Land Use Planning. In recent years, he also taught Torts, Products Liability, Evidence, and Criminal Law. Before joining the faculty, Professor Hubbard was an associate at Mudge, Rose, Guthrie, and Alexander (New York City) and a staff attorney with the Community Legal Services Program (Austin, TX). He graduated Phi Beta Kappa from Davidson College. He received a JD from New York University School of Law and an LLM from Yale Law School. Professor Hubbard has written books on tort law and criminal law and has published dozens of articles and book chapters on criminal law, legal theory, torts, and land use planning. As a legal realist, he actively relates his scholarship to the world outside the law school. For example, his work in land use planning includes serving on a drafting committee for recent amendments to the South Carolina zoning enabling act, chairing the Columbia Planning Commission in the 1990s, currently serving as vice-chair of the Board of Zoning Appeals, working with a taskforce revising the Columbia Zoning Code, and assisting neighborhood organizations in zoning matters. Professor Hubbard has been a visiting professor of law at the University of Southampton, U.K., the University of Birmingham, U.K., and the Florida Coastal School of Law. Professor Hubbard and his wife have been happily married since 1968. They have two sons, both of whom are married, and five grandchildren.
Abstract: This talk will address the deployment of robotic systems for data collection, including task specification, gait learning, and data analysis. As a concrete example, I will discuss the automated analysis of video data, specifically video data collected underwater with an amphibious vehicle (the Aqua 2 hexapod). Automated systems can collect data at prodigious rates, and the timely analysis of this data is a growing challenge, especially when there are bandwidth constraints between the data source and the people who must examine the data. We are specifically interested in the real-time summarization and detection of the most interesting events in a video sequence, for use by humans who will analyze the data either in real time or offline. To do this, we are developing methods that adapt to video data streams in real time to collect salient events, and we are using these methods in the context of a group of vehicles that fly, swim, and float.
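The core idea - flagging the frames of a stream that deviate most from what has been seen so far, using statistics that adapt as the stream arrives - can be illustrated with a toy sketch. This is not the speaker's method; the one-dimensional "feature" per frame, the surprise threshold `k`, and the function name are all illustrative assumptions.

```python
import math

def summarize_stream(frames, k=3.0):
    """Online surprise-based frame selection: flag a frame when its
    feature value deviates from the running mean by more than k
    standard deviations. A toy stand-in for adaptive real-time
    video summarization; 'frames' is one scalar feature per frame."""
    mean, var, n, kept = 0.0, 0.0, 0, []
    for i, x in enumerate(frames):
        if n >= 2 and abs(x - mean) > k * math.sqrt(var / (n - 1)):
            kept.append(i)                  # salient: flag for the analyst
        # Welford's online update of the running mean and variance
        n += 1
        delta = x - mean
        mean += delta / n
        var += delta * (x - mean)
    return kept

# Mostly flat signal with one spike at index 6:
print(summarize_stream([1.0, 1.1, 0.9, 1.0, 1.05, 0.95, 9.0, 1.0]))  # prints [6]
```

Because the mean and variance update online, the detector needs only constant memory per stream - the property that matters when bandwidth and compute on the vehicle are limited.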
Bio: Gregory Dudek is the Director of the School of Computer Science, a James McGill Professor, a member of the McGill Research Centre for Intelligent Machines (CIM), and an Associate Member of the Department of Electrical Engineering at McGill University. He is the former Director of McGill's Research Centre for Intelligent Machines, a 25-year-old inter-faculty research facility. In 2010 he was awarded the Fessenden Professorship in Science Innovation and received the prix J. Armand Bombardier for Technological Innovation in Robotics from ACFAS, the Association francophone pour le savoir (the French learned society). He is also the recipient of the Canadian Image Processing and Pattern Recognition Award for Research Excellence and the award for Service to the Community at the Conference on Computer and Robot Vision. He directs the McGill Mobile Robotics Laboratory.
He has been on the organizing and/or program committees of Robotics: Science and Systems (RSS), the IEEE International Conference on Robotics and Automation (ICRA), the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), the International Joint Conference on Artificial Intelligence (IJCAI), the Conference on Computer and Robot Vision, the IEEE International Conference on Mechatronics, and the International Conference on Hands-on Intelligent Mechatronics and Automation, among other bodies. He is president of CIPPRS, the Canadian Image Processing and Pattern Recognition Society, a national affiliate of the IAPR.
He was on leave in 2000-2001 as a Visiting Associate Professor at the Department of Computer Science at Stanford University and at the Xerox Palo Alto Research Center (PARC). During his sabbatical in 2007-2008 he visited the Massachusetts Institute of Technology and co-founded the company Independent Robotics Inc. He obtained his PhD in computer science (computational vision) from the University of Toronto, his MSc in computer science (systems) from the University of Toronto, and his BSc in computer science and physics from Queen's University.
He has published over 200 research papers on subjects including visual object description and recognition, robotic navigation and map construction, distributed system design and biological perception. This includes a book entitled "Computational Principles of Mobile Robotics" co-authored with Michael Jenkin and published by Cambridge University Press. He has chaired and been otherwise involved in numerous national and international conferences and professional activities concerned with Robotics, Machine Sensing and Computer Vision. His research interests include perception for mobile robotics, navigation and position estimation, environment and shape modelling, computational vision and collaborative filtering.
Abstract: Maintaining one's independence is a primary goal of older adults and a key component of successful aging and aging-in-place. Technology has the potential to help older adults maintain their independence. In this presentation, Dr. Beer will discuss current and future technology aids, such as robotics, home sensors, and smart homes. For assistive technology to be successful, it is important that the older adult user finds the technology simple, user-friendly, and useful - a field of study called user-centered design! We will discuss what makes technology user-friendly, how technology might be integrated into the home or healthcare setting, and where the field is headed.
This workshop, part of the Voyages into the Technology Frontier series, organized by the Center for Teaching Excellence, will explore the current state of robotic technology and its applications for a broad audience. Additional details are available at the CTE site.
Abstract: There are many practical reasons why one might attach a tether to a mobile robot (providing power from off-board sources, high-speed communication to a base station, etc.), but, since the tether constrains the motion of the robot, doing so makes the problem of moving the robot trickier than it would be otherwise. This talk will explore the motion planning problem for a planar robot connected via a cable to a fixed point in R^2. I'll describe how to visualize the configuration space manifold for such a robot, showing that it has regularity which can be used to produce a neat representation. This representation describes the manifold via (1) a discrete structure that characterizes the cable's position, and (2) an element within a single continuous cell. Further, when the tether has a constraint on its curvature, I'll show how Dubins's theory of curves can be combined with work on planning with topological constraints to concisely represent the configuration space manifold, resulting in a data structure that facilitates the search for optimal paths.
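The curvature-constrained setting rests on Dubins's classical result that shortest bounded-curvature paths in the plane are concatenations of circular arcs (L/R) and straight segments (S). As a rough illustration of that machinery - not the speaker's tether representation - the length of one Dubins word, LSL, has a standard closed form that can be computed directly:

```python
import math

def dubins_lsl_length(q0, q1, r):
    """Length of the Left-Straight-Left (LSL) Dubins path between two
    planar configurations q = (x, y, heading), for a vehicle with
    minimum turning radius r. Returns None if the LSL word is
    infeasible for this pair of configurations."""
    (x0, y0, th0), (x1, y1, th1) = q0, q1
    dx, dy = (x1 - x0) / r, (y1 - y0) / r      # work in radius-normalized units
    d = math.hypot(dx, dy)
    theta = math.atan2(dy, dx)
    alpha = (th0 - theta) % (2 * math.pi)
    beta = (th1 - theta) % (2 * math.pi)

    p_sq = (2 + d * d - 2 * math.cos(alpha - beta)
            + 2 * d * (math.sin(alpha) - math.sin(beta)))
    if p_sq < 0:
        return None                             # no straight segment fits
    tmp = math.atan2(math.cos(beta) - math.cos(alpha),
                     d + math.sin(alpha) - math.sin(beta))
    t = (-alpha + tmp) % (2 * math.pi)          # first left arc (radians)
    p = math.sqrt(p_sq)                         # straight segment
    q = (beta - tmp) % (2 * math.pi)            # second left arc (radians)
    return (t + p + q) * r

# Quarter-turn left, straight run of 2, quarter-turn left, radius 1:
print(dubins_lsl_length((0, 0, 0), (0, 4, math.pi), 1.0))  # ≈ 2 + pi ≈ 5.1416
```

A full Dubins planner evaluates all six words (LSL, RSR, LSR, RSL, RLR, LRL) and keeps the shortest feasible one; combining such words with the discrete cable structure is what yields the searchable representation the abstract describes.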
Bio: Dylan Shell is a computer scientist with broad interests. He is an Associate Professor in the Department of Computer Science and Engineering at Texas A&M University, where he runs a laboratory focused on robotics and artificial intelligence. His research group aims to synthesize and analyze complex, intelligent behavior in distributed systems that exploit their physical embedding to interact with the physical world in a variety of ways. He has published papers on multi-robot task allocation, robotics for emergency scenarios, biologically inspired multiple-robot systems, multi-robot routing, estimation of group-level swarm properties, minimalist manipulation, rigid-body simulation and contact models, human-robot interaction, and robotic theatre. His work has been funded by DARPA and the NSF, and he has been the recipient of the Montague Teaching Award, the George Bekey Service Award, and an NSF CAREER award.
Abstract: The Center for Computational Robotics is the outgrowth of two previous research centers, both with the goal of advancing the capabilities of physical and information systems. The activities of the previous centers provide a context and perspectives that set the directions for the new Center. This seminar will describe these directions and outline the disciplines needed: from mechanics to ethics.
The COMP-ROB mailing list is a low-traffic list for announcements and updates related to the center.
Use this link to subscribe.