by Dr Antoine Cully (Department of Computing, Imperial College London, UK)
This seminar is co-organised with the QMUL Institute of Applied Data Science.
When: Thu, Dec 13, 15:00-16:00, coffee/tea reception after the seminar 16:00-16:20
Where: People's Palace, room PP2, Mile End campus (building 16; campus map: https://www.qmul.ac.uk/docs/about/26065.pdf)
Title: Quality-Diversity Optimisation: Learning large sets of diverse and high-performing solutions in robotics
Abstract. The optimisation of functions to find the best solution according to one or several objectives plays a central role in many engineering and research fields. Recently, a new family of optimisation algorithms, named Quality-Diversity optimisation, has been introduced, and it contrasts with classic algorithms: instead of searching for a single solution, Quality-Diversity algorithms search for a large collection of both diverse and high-performing solutions. The role of this collection is to cover the range of possible solution types as much as possible, and to contain the best solution for each type. During this seminar, I will show how these algorithms can enable robots to learn large behavioural repertoires and to adapt to unforeseen situations, such as mechanical damage.
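The collection-based search described in the abstract can be illustrated with a minimal MAP-Elites-style loop. This is a toy sketch on a made-up one-dimensional problem, not Dr Cully's implementation; the fitness function, the behaviour descriptor, and all parameters here are illustrative assumptions.

```python
import random

# Toy Quality-Diversity sketch (MAP-Elites style): instead of one optimum,
# maintain an archive holding the best solution found for each solution
# "type" (behaviour descriptor bin).

N_BINS = 10          # number of solution types (behaviour bins)
ITERATIONS = 5000

def fitness(x):
    # Hypothetical quality measure: prefer solutions near 0.5.
    return 1.0 - abs(x - 0.5)

def descriptor(x):
    # Map a solution in [0, 1] to its "type": one of N_BINS intervals.
    return min(int(x * N_BINS), N_BINS - 1)

def map_elites():
    archive = {}  # bin index -> (solution, fitness)
    for _ in range(ITERATIONS):
        if archive and random.random() < 0.9:
            # Mutate a random elite already in the archive.
            parent, _ = random.choice(list(archive.values()))
            child = min(max(parent + random.gauss(0, 0.1), 0.0), 1.0)
        else:
            child = random.random()  # random initialisation
        b, f = descriptor(child), fitness(child)
        # Keep the child only if its bin is empty or it beats the incumbent.
        if b not in archive or f > archive[b][1]:
            archive[b] = (child, f)
    return archive

archive = map_elites()
print(len(archive), "types covered")
```

The key design choice, as in the abstract, is that selection pressure is local to each type: a mediocre but unusual solution is kept because it covers an otherwise empty region of the behaviour space.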
Bio. Antoine Cully is a Lecturer at Imperial College London (United Kingdom). His research lies at the intersection of artificial intelligence and robotics. He applies machine learning approaches, such as deep learning and evolutionary algorithms, to robots to increase their versatility and adaptation capabilities. He received the M.Sc. and Ph.D. degrees in robotics and artificial intelligence from Sorbonne University, Paris (previously called UPMC), France, in 2012 and 2015, respectively, and the engineering degree from the School of Engineering Polytech’Sorbonne in 2012. His Ph.D. dissertation received three best-thesis awards. He has published in prestigious journals including Nature, IEEE Transactions on Evolutionary Computation, and the International Journal of Robotics Research. His work was featured on the cover of Nature (Cully et al., 2015) and received the "Outstanding Paper of 2015" award from the Society for Artificial Life (2016) and the French "La Recherche" award (2016).
Robotic surgery is one of the most appealing fields of modern robotics. With over three decades of history, more than 3,800 systems installed worldwide, and over 600,000 robot-assisted interventions conducted per year, the field of robotic surgery is considered well-established. Despite these impressive figures and increasing popularity in research labs all over the world, the list of technological advances that made it into the operating room (OR) during the last decade is fairly limited. Long-expected techniques such as 3D reconstruction, motion compensation, virtual guidance, and haptic feedback, although under study in many labs across the planet, have not yet made their appearance on the market.
CRAS seeks to give a clear view of the status and recent trends of assistive surgical robotic technologies. It aims to support and propose concrete measures to accelerate research and innovation in this field. CRAS originates from efforts to foster collaboration among European groups and achieve a critical mass in surgical robotics. As such, the workshop continues discussions started at ERF in Lyon and at ICRA in Karlsruhe, and at previous meetings in Verona and Genoa. In particular, CRAS attempts to identify the steps necessary to stimulate cooperation between research and industry, across national borders and different surgical robotics projects, to take advantage of the growing attention and support for research and exploitation in this interesting and growing field.
Fri, June 22, 2018, 14:00-15:00 QMUL Mile End campus, People's Palace building, room LG01 (basement)
Abstract: This talk will present research on flexible surgical robotics at the KAIST Center for Future Medical Robotics. We believe that surgical robots should be developed considering the benefits to surgeons (easy and intuitive control), to patients (minimal invasiveness and fast recovery), and to hospitals (affordable cost and reduced surgery time). To meet these requirements, firstly, EasyEndo has been developed for solo endoscopic procedures. By attaching a motor pack to a conventional endoscope, EasyEndo allows easy and intuitive endoscopy without assistants. Second, the Portable Endoscopic Tool Handler (PETH) has been developed for more advanced procedures, with additional surgical arms attached to the conventional endoscope. Several ex-vivo experiments have shown the improved performance of the conventional endoscope and the feasibility of PETH. Third, K-FLEX has been developed, which can perform dexterous robotic surgery through a flexible pathway by adding small robot arms to the flexible endoscope. An attractive feature of these robot arms is that they can exert a great deal of force to lift organs and tissues thanks to a specially designed constraint joint mechanism. This endoscopic surgical robot system will provide minimal invasiveness for patients and widen the scope of robotic surgery with greater accessibility. With these robot technologies, we believe that surgeons and endoscopists will be able to conduct challenging surgeries that have not been attempted before. Based on our research experience over the last 20 years, we are planning to commercialise our research outputs. Since the current market for endoscopes is much larger than that for laparoscopic surgical robots, we will pursue the commercialisation of our flexible robot technologies, extending endoscope applications from conventional endoscopy procedures to robotic surgery.
Biography: Dong-Soo Kwon is a Professor in the Department of Mechanical Engineering at the Korea Advanced Institute of Science and Technology (KAIST), Director of the Human-Robot Interaction Research Center, and Director of the Center for Future Medical Robotics. He serves the IEEE Robotics and Automation Society (RAS) as a member of its Administrative Committee (AdCom). In addition, he is the founding CEO of EasyEndo Surgical Inc., Chairman of the board of directors of the Korea Institute of Robot and Convergence (KIRO), and a member of the National Academy of Engineering of Korea (NAEK). His research deals with medical robotics, haptics, and human-robot interaction. He has contributed to the advancement of several robotics venture companies through technology transfer, and has recently established a start-up company based on his medical robotics research. He worked as research staff in the Telerobotics Section at Oak Ridge National Laboratory from 1991 to 1995, was a Graduate Research Assistant in the Flexible Automation Lab at the Georgia Institute of Technology from 1985 to 1991, and was a Section Chief and Manager in the R&D Group of Kanglim Co., Ltd from 1982 to 1985. He received the Ph.D. in Mechanical Engineering from the Georgia Institute of Technology in 1991, the M.S. in Mechanical Engineering from KAIST in 1982, and the B.S. in Mechanical Engineering from Seoul National University, Korea, in 1980.
When: March 21, 14:00-19:00
Where: G.O. Jones building, ground-floor lecture theatre, Queen Mary University of London, Mile End Campus (Stepney Green or Mile End underground stations)
Robotics and assistive technologies can improve mobility for persons with a disability. Such technologies are diverse: powered wheelchairs, prosthetic limbs, wearable exoskeletons, etc. Safe and efficient design of robotic assistance remains a challenging issue. In this workshop, the invited speakers will address such issues by demonstrating recent research progress and successful application cases. This workshop is a part of the QMUL Cybathlon 2020 team activity. The QMUL Cybathlon 2020 student team will present their progress during the workshop.
Registration is required (FREE): Eventbrite link
Contact: Dr Ildar Farkhatdinov, email@example.com
The workshop is supported by the Westfield Fund of Queen Mary University of London.
When: Thursday, 22nd of February, 2018
Where: Arts One: ALT, Queen Mary University of London
Registration: registration website.
In step with the IEEE Robotics and Automation Society, the UK & Ireland RAS Chapter addresses the advancement of the theory and practice of robotics and automation engineering and science, and of the allied arts and sciences, and the maintenance of high professional standards among its members. Today, robotics and automation technologies are widely applied in various areas, such as manufacturing, medical surgery, healthcare, agriculture, urban search and rescue in extreme environments, and surveillance.
The aim of the annual IEEE UK&I RAS conference is to improve communication among its members and other researchers, young students, and industrial professionals interested in the activities of RAS in research, development, and education; to share knowledge and the latest research achievements and technologies in RAS; to explore future trends in RAS; and to promote collaboration and knowledge transfer. At the conference, we will call for the creation of a young students' group.
Conference general chair: Prof John Gray, Chair of IEEE UK & Ireland RAS chapter, University of Manchester
Programme chair: Prof Kaspar Althoefer, Head of the Centre for Advanced Robotics @ Queen Mary (ARQ).
1. RAS on Extreme Environments
2. RAS on Social and Health Care
3. RAS on Manufacturing and Food Automation
4. RAI Beyond 2020
5. Student research poster competition (Landscape, A1 size) - submit your abstracts here.
Contact: Dr Mary He (firstname.lastname@example.org), Dr Jelizaveta Konstantinova (email@example.com)
To mark the completion of the FourByThree project, Queen Mary University of London organised an event entitled “Next Generation Robots for the Factory of the Future”, which attracted an audience of a hundred participants and took place at the Royal Society, London, on the 17th of November.
During this workshop there were presentations of results from the FourByThree partners (KCL, QMUL, DFKI, IFF), as well as invited talks from industry (Shadow Robot, Ocado Technology) and academia (Imperial College, Royal College of Art). The main purpose of the event was to share the outcomes and results of the project, as well as its next steps (exploitation strategy). In addition, as the event targeted both academia and industry, discussions on possible future collaboration were held.
When: Friday, 17th of November, 17:00-21:30
Where: The Royal Society, 6-9 Carlton House Terrace, London, SW1Y 5AG
17:00 Kaspar Althoefer (Head of Advanced Robotics @ Queen Mary (ARQ))
17:15 Marc Ronthaler (Managing Director of Ground Truth Robotics GmbH - A DFKI Spin Off Company)- "Highly customizable robotic solutions for effective and safe human robot collaboration in manufacturing applications"
17:30 Ali Shafti (Brain and Behaviour Lab, Dept. of Computing and Dept. of Bioengineering, Imperial College London)
17:45 Christian Vogel (Fraunhofer IFF) - "Projection and camera-based technology for workspace surveillance in human-robot cooperation scenarios"
18:30 Rich Walker (CEO Shadow Robot Company) - "What are these new robots going to do for us?"
18:45 Yiannis Demiris (Director of Personal Robotics Laboratory, Imperial College London) - “Multi-scale User Modelling for Human Robot Interaction”
19:00 Graham Deacon (OCADO Technology)
by Professor Vincent Hayward (University Pierre et Marie Curie, Paris, France; Centre for Advanced Studies, University of London, UK)
When: Thu, Oct 12, 14:00-15:00
Where: QMUL, Mile End campus, graduate centre, GC101
Title: Tactile perception in and outside our body
Abstract. The mechanics of contact and friction is to touch what sound waves are to audition and what light waves are to vision. The complex physics of contact, and its consequences inside our sensitive tissues, however, differ in fundamental ways from the physics of acoustics and optics. The astonishing variety of phenomena resulting from the contact between fingers and objects is likely to have fashioned our somatosensory system at all levels of its organisation, from early mechanics to cognition. The talk will illustrate this idea through a variety of specific examples showing how surface physics shapes the messages that are sent to the brain, providing completely new opportunities for applications of human-machine interfaces.
Bio. Vincent Hayward is a professor (on leave) at the Université Pierre et Marie Curie (UPMC) in Paris. Before that, he was with the Department of Electrical and Computer Engineering at McGill University, Montréal, Canada, where he became a full Professor in 2006 and was Director of the McGill Centre for Intelligent Machines from 2001 to 2004. Hayward is interested in haptic device design, human perception, and robotics, and is a Fellow of the IEEE. He held a European Research Council grant from 2010 to 2016. Since January 2017, Hayward has been a Professor of Tactile Perception and Technology at the School of Advanced Studies of the University of London, supported by a Leverhulme Trust Fellowship.
Title: How humans communicate through touch
by Dr Atsushi Takagi, Human Robotics group, Bioengineering, Imperial College London
When: Friday, Feb. 3, 2017, 15:00-16:00
Where: Room 4.02, Computer science building (Bancroft Road Teaching Rooms), 4th floor (card free access from Bancroft road entrance)
Abstract: We interact with other humans on a daily basis using several senses. The least studied of these is touch, as when parents assist a child taking their first steps, or during tango dancing. In this talk, I will shed some light on the mechanism of physical coordination through experiments and by simulating human behaviour computationally. First, I highlight the need to control for the cognitive biases that affect the behaviour of interacting pairs. Then, I provide evidence that humans infer a partner's intentions through touch, and show how a robot imbued with this ability could assist patients undertaking physiotherapy.
Bio: Atsushi Takagi received his MSc in Physics in 2011 from Imperial College London, where he also received his PhD in 2016 on the "Mechanism of interpersonal sensorimotor interaction", which examined how pairs, as in tango dancing, coordinate their actions. He uncovered the mechanism that enables physically interacting partners to exchange information through haptics, or touch.
by Professor Etienne Burdet, Imperial College London
When: Tue, Jan 29, 16:00-17:00
Where: Engineering building, Eng 3.24 (Mile End campus)
Abstract: In this talk I will present how we use robotics to investigate human sensorimotor control, and create robots to help humans. In particular, I will describe how we translated human-like adaptation to let robots interact skilfully with the environment in industrial tasks and in robot-assisted neurorehabilitation. I will also describe our recent results on how humans communicate during physical interaction, and how this can lead to versatile and reactive robotic behaviours.
Bio: Dr. Etienne Burdet is Chair of Human Robotics at the Imperial College of Science, Technology and Medicine in the UK. He is also a visiting Professor at Nanyang Technological University in Singapore and at University College London. He holds an MSc in Mathematics (1990), an MSc in Physics (1991), and a PhD in Robotics (1996), all from ETH Zürich. He was a postdoctoral fellow with TE Milner at McGill University, Canada, JE Colgate at Northwestern University, USA, and Mitsuo Kawato at ATR in Japan. Professor Burdet’s group uses an integrative approach of neuroscience and robotics to: i) investigate human motor control, and ii) design efficient systems for training and rehabilitation, which are tested in clinical trials.
by Dr Thiago Boaventura, University of São Paulo, Brazil
When: Wed, April 17, 16:00-17:00
Where: Computer Science building (entrance from Bancroft road), room 3.02, 3rd floor
Abstract: Very often robots have to physically interact with the environment, people, tools, or other objects. To properly control such interactions, it is important to be able to control both the forces applied by the robot as well as its displacement. In this talk, a few perspectives on how to control such physical interactions will be presented for two different robots: a quadruped and an exoskeleton. Relevant aspects such as the stability and certification of these controllers will also be discussed.
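The force/displacement trade-off mentioned in the abstract is commonly handled by impedance control, where the robot is made to behave like a virtual spring-damper around a desired position. The sketch below is a generic textbook illustration, not the speaker's HyQ or exoskeleton controllers; the mass, stiffness, and damping values are arbitrary assumptions.

```python
# Generic 1-DoF impedance-control sketch (illustrative only): the commanded
# force follows a virtual spring-damper law, so interaction forces stay
# bounded instead of being fought with a stiff position loop.

def impedance_force(x, v, x_des, k=100.0, d=20.0):
    """Commanded force from virtual stiffness k [N/m] and damping d [Ns/m]."""
    return k * (x_des - x) + d * (0.0 - v)

# Simulate a 1 kg mass driven toward x_des = 0.1 m from rest at x = 0,
# using semi-implicit Euler integration.
m, dt = 1.0, 0.001
x, v = 0.0, 0.0
for _ in range(5000):  # 5 seconds of simulated time
    f = impedance_force(x, v, 0.1)
    v += (f / m) * dt
    x += v * dt
print(round(x, 3))  # settles near the 0.1 m target
```

With these values the virtual system is critically damped (d = 2·sqrt(k·m)), so the mass approaches the target without overshoot; an obstacle placed along the way would simply see a bounded spring force rather than an ever-growing position-error correction.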
Bio: Thiago Boaventura received the B.Sc. and M.Sc. degrees in mechatronic engineering from the Federal University of Santa Catarina, Florianópolis, Brazil, in 2009. From 2010 to 2013 he worked at the Italian Institute of Technology, Genoa, Italy, on the control of the hydraulic quadruped robot HyQ. In 2013 he received the Ph.D. degree in robotics, cognition, and interaction technologies from the University of Genoa, Italy. He was then a postdoctoral researcher for four years with the Agile and Dexterous Robotics Laboratory, ETH Zurich, Switzerland, involved mainly in the EU FP7 BALANCE project, focusing on collaborative impedance control of exoskeletons. Since 2017 he has been an Assistant Professor at the University of São Paulo. His research interests include impedance and admittance control, model-based control, legged robotics, optimal and learning control, and wearable robotics.
by Philip Norman, Ross Robotics, UK
and Dr Emmanouil Benetos, Queen Mary University of London, UK
When: Fri, Nov 22, 12:30-15:00
Where: ArtsOne, Room 1.28
Interactive Agri-Robotics: Philip Norman will present a case study of a modular robot for poultry farming, with potential implications for other sectors. The talk will focus on interactivity in data collection, analytics, and better-informed decision-making, and on animal-robot interactions to modify individual and group behaviour, both aimed at improved animal welfare and commercial outcomes.
Machine Learning for Machine Listening: Audio analysis (also called machine listening) involves the development of algorithms capable of extracting meaningful information from audio signals such as speech, music, or environmental sounds, typically drawing knowledge from the fields of digital signal processing and artificial intelligence. Machine listening applications are numerous, including smart homes and smart cities, ambient assisted living, biodiversity assessment, security and surveillance, and audio archive management. The talk will outline recent research carried out at QMUL focusing on sound recognition in complex acoustic environments, inspired by and proposing new methods in machine learning. Topics covered will include designing new learning objectives for audio analysis, domain and context adaptation for audio, methods for interpretability in machine listening, and studies on the robustness of machine listening methods.
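The first step in most machine-listening pipelines of the kind described above is a time-frequency representation of the signal. The toy sketch below (a generic illustration, not the QMUL systems discussed in the talk; the sample rate, frame size, and test tone are assumptions) computes the magnitude spectrum of one audio frame with a naive DFT and locates its dominant frequency.

```python
import cmath
import math

# Toy machine-listening front end (illustrative only): magnitude spectrum of
# one audio frame via a naive DFT, the kind of representation that sound
# recognisers build on before any learning takes place.

def dft_magnitudes(frame):
    """Magnitudes of the first len(frame)//2 DFT bins (real input)."""
    n = len(frame)
    return [abs(sum(x * cmath.exp(-2j * math.pi * k * i / n)
                    for i, x in enumerate(frame))) / n
            for k in range(n // 2)]

# A 440 Hz sine sampled at 8 kHz: the spectrum should peak in the bin
# closest to 440 Hz.
sr, n = 8000, 256
frame = [math.sin(2 * math.pi * 440 * i / sr) for i in range(n)]
mags = dft_magnitudes(frame)
peak_bin = max(range(len(mags)), key=lambda k: mags[k])
print(peak_bin * sr / n)  # frequency of the strongest bin, near 440 Hz
```

In practice an FFT over overlapping windowed frames is used instead of this O(n²) loop, and the resulting spectrogram is fed to the learning methods the talk covers.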