How health-centered design can save lives across the world.
Immersed in daily health discussions thanks to my mother, grandmother, and aunt, who were all nurses, I once thought medicine was not for me. However, as a UX designer, I can’t stand by indifferently when I know that the technology surrounding us has the potential to save numerous lives.
I recently collapsed and temporarily lost my sight for no known reason. I was able to speak, but I couldn’t move, as if something had paralyzed my muscles. I’m not sure how long it lasted, because when something like that happens, you completely lose track of time. Fortunately, I wasn’t the only person at home. After I somehow managed to stand up, I entered my mom’s bedroom and woke her up, saying: “Something is happening to me. I fainted and can’t see well.” She nervously laid me down on her bed and checked my blood pressure. It kept dropping, so she immediately called an ambulance.
I’m writing this story not to scare anyone or walk you through my medical history. My goal is to inspire other designers and decision makers at technology companies to take action, which, having worked in the industry for a while, I’m confident can be taken. I hope my story will be seen by those of you who have the power to make a change. If a feature saves even one life, it is worth all the extra effort. Why? Because that one person matters, and it can be any of us.
Like many other technology enthusiasts and people working in technology, I follow tech news and watch the conferences held by many companies every year. At the beginning of September, I watched the most recent Apple event, which took place four days after my accident and before I started writing this article. I didn’t expect my issue to be resolved just yet, but I decided to hold off on publishing my thoughts until after the conference, hoping that maybe it would be. It wasn’t, but looking at recent trends, we’re slowly getting there.
It was extremely captivating to see Apple kick off with a video featuring seven people saved by its technology: four rescued by the Apple Watch, and three by the iPhone’s Emergency SOS and crash detection, which is also my favorite feature introduced in recent years.
Expectations vs. reality
Yet my experience has revealed a crucial defect and a missed opportunity to save even more lives: the lives of those of us who can’t move and reach our phones when our lives are in danger. It’s not just about people with mobility difficulties, but all of us. Many people, just like me, may be lucky enough to have someone around who can call for help. But many don’t even have that chance. Think of all the single-person households, for example. My grandmother, who lived alone, collapsed in 2020. That’s when I started thinking:
What can I do as a designer to support seniors living alone?
Luckily, she was holding her phone a moment before it happened. Even though she couldn’t move after falling to the floor because of the pain, the person on the other end of the call immediately called an ambulance. Although my grandmother didn’t recover from her illness, she was hospitalized very quickly, received the best care possible, and her doctors managed to extend her life by a few more weeks.
Recent census data show that in 2022, 30% of households in the UK were one-person households, and that number is projected to grow by a further 160,000 per year (source). Among the elderly population, 51% of all people aged 75 and over live alone (source). If we compare the data between countries (source), there are many regions where the share of one-person households is even higher: the largest percentages are in Norway (45.8%), Denmark (44.1%), and Finland (43%).
Because so many people — especially seniors — live alone, many of them don’t have anyone checking on them every day and can’t rely on any extra help in daily situations.
At the same time, with populations ageing around the world, we should discuss how technology can help us deliver emergency care to those in need when accidents happen and no one is around to help.
Can voice assistants save our lives?
In a situation where you fall down and can’t reach your phone, or when you experience a sudden loss of vision, it would be much easier to ask a voice assistant for help. Voice assistants can now be found in countless devices around us, from smartphones to refrigerators. Their microphones have also become extremely sensitive, which could give us a sense of security, knowing that there is a device with a built-in voice assistant in the room where an accident happens.
Nowadays, we’re surrounded by an extremely wide range of electronic devices in our homes. For the purposes of this article, let me focus on Apple, a company famous for setting new industry standards and addressing human health, as we saw at its recent conference. We can activate its voice assistant, Siri, on an iPhone, Mac, Apple Watch, iPad, AirPods, and the HomePod speakers.
Going back to my case, I had all of these devices except a smartwatch. Yet none of them helped me when I collapsed, even though I knew my phone was somewhere close. I wish I could have activated Siri on my iPhone, but I had kept it deactivated since the day I bought the phone, or on a HomePod, where Siri is always on and waiting for requests. Technically, it shouldn’t be an issue to ask a smart speaker such as the HomePod to call an ambulance. Yet Siri only works in 21 languages, and none of them is spoken in the country where I was and where my family lives. That’s also why my mom doesn’t even have a HomePod and has kept Siri turned off on all her Apple devices: she simply doesn’t speak English.
If you’re a native speaker of a language spoken by hundreds of millions of people, you may not even notice that Siri still doesn’t work in numerous languages; it supports only 21. By comparison, Google Assistant supports over 39 languages, and Amazon Alexa understands 9 in total. Considering that there are over 7,000 languages spoken in the world, the number of languages supported by voice assistants isn’t that impressive.
Why building a multilingual voice assistant is troublesome
Full, efficient, and error-free voice recognition is complex to build and test because we all speak differently, with various dialects, tones, intonations, speeds, and grammatical tendencies that often aren’t correct. Moreover, many of us have speech impairments or impediments that make interactions with voice assistants even more difficult. For example, we may stutter, suffer from dysarthria, which makes us unable to produce certain sounds, or have a speech sound disorder that stops us from pronouncing consonants such as /r/ or /s/. It’s estimated that only 5 to 10% of the population have no speech disorder at all.
A few years ago, I helped LG test their voice-controlled remote controls. That’s when I learned that testing voice commands can be quite tricky, because there can be numerous ways of asking a device for a simple action. For example, everyone had to think of ten different ways to say “increase the volume.” It may sound straightforward, but in reality, you run out of ideas after listing six or seven options. Eventually, you start experimenting with words and structures that aren’t necessarily correct. After all, we all struggle to find the right word sometimes, and it happens even more often when you speak several languages and try to find a synonym in your second or third language rather than your mother tongue.
Lack of support in many global languages
Even though my mother tongue, Polish, is spoken by over 50 million people globally, it still isn’t supported by most voice assistants. From an economic perspective, it may make sense that it’s always excluded from software updates: Polish ranks among the ten most difficult languages in various linguistic studies and rankings, has an extremely complex grammar, is full of difficult-to-pronounce combinations of double, triple, and even quadruple consonants, and can be even harder for a voice assistant to understand if the speaker has a speech impairment or isn’t a native speaker.
However, full language support isn’t necessarily needed to save the lives of voice assistant users. In fact, a few key commands should provide enough support to help in case of emergencies.
My strong suggestion, which would make the voice assistants we use more user-centered and more helpful in emergencies, is to introduce a simple set of voice commands, working in most if not all languages, that lets us call emergency services when needed. Moreover, allowing smart devices to dial an ambulance or the police when users can’t reach their phones should become a universal feature working internationally, if we aim to reduce the risk of losing our loved ones and help ageing populations feel more secure.
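To make the idea concrete, such a universal emergency command set could, conceptually, be a small phrase-spotting layer sitting on top of speech recognition: a curated table of a few emergency phrases per language, checked before any full language model is involved. The sketch below is purely illustrative; the phrase table, function name, and language codes are my own assumptions, not any vendor’s actual implementation.

```python
# Hypothetical sketch of a language-agnostic emergency phrase spotter.
# The phrase lists are illustrative samples, not a complete or official set.

EMERGENCY_PHRASES = {
    "en": {"call an ambulance", "help me", "emergency"},
    "pl": {"wezwij karetkę", "pomocy", "ratunku"},
    "no": {"ring en ambulanse", "hjelp"},
}


def detect_emergency(transcript: str) -> bool:
    """Return True if the transcript contains any known emergency phrase,
    regardless of which language the device is configured for."""
    text = transcript.casefold()  # case-insensitive, Unicode-aware matching
    return any(
        phrase in text
        for phrases in EMERGENCY_PHRASES.values()
        for phrase in phrases
    )


# A Polish cry for help is recognized even if the assistant's main
# language model does not support Polish at all:
print(detect_emergency("Siri, wezwij karetkę!"))   # True
print(detect_emergency("play some music please"))  # False
```

The point of the sketch is that maintaining a few dozen vetted phrases per language is a far smaller engineering effort than full conversational support, which is exactly why a safety-critical subset could ship for all languages long before complete ones do.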
We already have technology that can write a whole book, create a painting from scratch mimicking the style of Vincent van Gogh or Pablo Picasso, or show how our faces will change in 20 years. Yet we don’t have technology that would help someone who doesn’t speak a widely spoken language dial an ambulance in their mother tongue using a voice assistant, even though voice assistants can now be found even inside refrigerators.
It’s time for designers to step in and think about what we can collectively do to make technology not just human-centered but particularly health-centered — focused on the health of seniors and everyone else.