When cars drive themselves: the future of digital accessibility
Currently, making a digital system accessible usually means adapting the user interface so it works for people with a disability. We do a similar thing with cars, modifying the controls—steering wheel, accelerator, brake—so that someone with a physical impairment can drive. Over time, such adaptive controls have become increasingly sophisticated.
But what about driverless cars? We will no longer have to use physical controls to get from A to B. The interface will be transparent, operating in the same ways regardless of a user’s physical impairments. “Driving” a car will mean something different in the future.
This is an example where we are seeing a change in the control paradigm, from an “adaptive” mindset (let’s make a car usable for people with disabilities) to a “universal” mindset (let’s make the car usable for everyone).
Similarly, we need to change the way we think about digital accessibility. We should focus less on refining the steering wheels of our online systems and look more to the opportunities offered by new technologies and how they can move us towards universal design.
Here are just some of the “driverless cars” of digital technology that are changing what accessibility means.
Artificial Intelligence has given us tools such as Siri and Google Assistant that help us navigate online information using natural language recognition, as well as advanced pattern recognition and personalisation that help machines predict what we want and need. They are improving all the time, and are already rebalancing reliance away from assistive or adaptive technologies and towards a universal design paradigm.
These more natural ways of communicating mean we spend less time using website menus and mouse clicks to switch between pages as we hunt for the information we need. Instead, we are understood and taken straight to what will help us.
Back in the car, we won’t have to find our own route, enter an address or select a map. The car asks us where we want to go, understands our answer and takes us there.
Grassroots innovation by makers
Adaptive and assistive technology has long been the domain of medical device makers and sophisticated software developers. Recently, however, members of the community have been enabled to become makers themselves, using consumer-level tools to take ideas out of their heads and turn them into reality.
When Reddit user Rhine57, a small business proprietor who does not have the use of his arms and creates assistive aids, decided to enable himself to eat soup with a spoon, he created a clever self-feeding device. It is essentially a 3D printed stand topped with a magnet that holds a metal spoon. The user fills the spoon by holding it in their mouth and dipping it into the soup bowl; the spoon can then be spun on the magnet and the soup consumed.
Similarly, Leo McCarth, a one-handed 12-year-old boy in the US whose father could not afford a hand prosthesis, now has a sophisticated prosthetic hand that he can use for drawing, picking up food and holding a glass. He has it because his father watched maker videos on YouTube, bought a cheap 3D printer, and printed and assembled the components of a prosthetic hand himself.
As the ability to cheaply and easily make and share assistive aids spreads through the community, we will see a growing proliferation of innovative assistive tools created by people driven by personal need, whose solutions can then be replicated by others.
Like something out of science fiction, the ability to control machines through thought alone is coming closer to reality. Basic Brain-Computer Interfaces are emerging from research laboratories into Silicon Valley start-ups, and we are perhaps a decade or two away from connecting silicon and neurons to allow meaningful transfer of commands and feedback between people and machines.
This however remains a “future-tech challenge”. What’s happening right now is perhaps less dramatic but no less transformative.
Take the bionic eye. Right now, and depending on the cause of sight loss, legally blind people can have a visual prosthesis fitted to their retina that allows them to see basic shapes (rendered as pinpoints of light). Although these are not yet sophisticated enough to be equivalent to natural sight, the images are useful for navigation, and become recognisable when enhanced with real-time image processing.
As these and other prosthesis breakthroughs become more commonplace, they will fundamentally change our approach to accessibility. Right now, we connect a disability interface, such as a braille display or screen reader, to the existing user interface. The user interacts with the disability interface to achieve their task.
Bionic eyes, cochlear implants and other technologies let users skip this indirect link and interact more directly with the interface. Of course, they bring new considerations for accessibility designers, who will need to become familiar with these opportunities and constraints to design well for the new wave of users.
Government can lead the response to the future of accessibility
Our current approach to digital accessibility is to adapt interfaces so they are easier to use for someone with an impairment. We need to think beyond that. Evolving technologies can make one interface usable by all. They can create new adaptive solutions that we've never considered. They may even remove the need for an interface altogether. This is the future of digital accessibility, and it's an opportunity for Government to get ahead of the game (and the often sluggish private sector) to create a future-aware inclusive environment for all.