What is the Institute for Safe Autonomy, and is it as scary as it sounds?
Just a few yards away from the Piazza on Campus East stands a boxy building with an ominous title in big black letters on its front wall, accompanied by an inscrutable logo. The title ‘Institute for Safe Autonomy’ often mystifies students, conjuring up an image of scientists in white jackets monitoring sentient robots, and raises the slightly concerning question of what unsafe autonomy would look like.
Really, this image isn’t too far off. The institute, which officially opened on 20th February 2024, is a collaborative space for experts and enterprises “to safely explore how robotics and connected autonomous systems can benefit people and the planet.” Promotional material is covered in images of shiny white robots, mechanical arms, drones (both aquatic and airborne) and, of course, the exposed ceiling wiring that devotees of the Spring Lane Building will be so familiar with.
The BBC reported that the building itself cost £45 million, although its own website boasts only of £15 million worth of facilities. These include an 18,000 litre water tank, a six metre high indoor test space, rooftop testing capabilities and eleven specialist labs. It also, slightly disconcertingly, has “automated doors, wide corridors and a high-capacity lift to enable robots and autonomous vehicles to move freely throughout the building.”
The institute claims to “take a safety critical approach”, and defines its aims under four main categories: “design and verification”, “assurance”, “communications”, and “society and ethics”. While much of this is self-explanatory, two of the four stand out as less technologically focussed than one might expect. The institute takes a much more holistic approach to AI than might be assumed, with law and politics experts operating alongside the usual physicists and engineers. They define their assurance focus as “overcoming barriers to regulating and assuring the safety of autonomous systems”, and society and ethics as “examining whether these new technologies are beneficial, fair, and trustworthy.”
The space was designed as a “living lab” which, cutting through quite a few buzzwords, means that it operates as a collaborative “ecosystem”. It draws involvement from four key sectors of society (government, academia, the private sector and citizens), and facilitates testing in the real-life settings that products will eventually be used in. It is also designed to allow gradation of testing from controlled labs to shared indoor office spaces and eventually into the outdoors.
The building is expected to run off the solar farm that is to be constructed on Campus East, and to achieve net zero energy by 2025. The solar arrays will also function as part of the “living lab”, being used to develop and test robots and systems designed to maintain such farms.
The institute is home to the similarly named “Centre for Assuring Autonomy”, which focuses in particular on regulation, and reassuring users that technologies are entirely safe. The centre is a £10 million partnership between the University and Lloyd’s Register Foundation, and is very open about its findings, which are freely available on its website, and “intentionally developed for use in any domain”. It has in particular focused its research on the automotive industry, health and social care, maritime systems, agriculture, manufacturing and aviation.
Overall, the Institute for Safe Autonomy lives up to its imposing title: it is at the forefront of artificial intelligence research and genuinely has cutting-edge robots and mechanical arms dotted around its labs. What is reassuring, however, is its emphasis on responsible and ethical development, and its holistic approach to a growing and often anxiety-inducing technological wave.
While the mysterious building by Piazza is developing potentially life-altering systems and equipment, it is also carefully studying the political and ethical impacts of its work.