AI? Artificial Intelligence? Accessible Intelligence!

What We Can Look Forward To in the New AI World

The Journey

For the uninitiated, the concept of AI may seem new, but I can remember learning the term in the late 1980s when I took a course in LISP. LISP is a programming language developed at MIT more than sixty years ago; it was present at the birth of artificial intelligence (AI) and remains prevalent in AI research today.

The problem back then was that there was no practical AI-based application that we could easily understand and embrace. What is new today is that AI is finally mainstream and getting front-and-center attention because its practical applications are genuinely helping people.

Like anything in life, there is a good side and a bad side, a positive and a negative to everything. There is a lot of negative press in the media about the bad side of AI and how, if not properly managed, it can hurt society. I am not discounting those reports, but for this article I am going to focus on the positive side of AI’s contributions to people’s lives and, more specifically, to those who live with daily challenges, sometimes known as disabilities.

In this post, I will look at:

  • How AI is already helping people with disabilities
  • The future of AI for people living with disabilities and how it can help
  • My wish list for the future: AI-based applications to improve the lives of people who are blind
  • My quick interaction with ChatGPT

How AI is Already Helping People With Disabilities

While artificial intelligence can enhance everyone’s lives, provided we properly manage its progression, it has an especially important role to play in the lives of people with disabilities. AI helps people who are deaf or hard of hearing communicate, and it helps people who are blind or have low vision see, or at least receive visual information in a format they can understand.

For many years AI has been a big player in image description, automatically generating alternative text for images. Apple, Microsoft, and Google all have algorithms that can provide a high-level description of an image with ever-improving accuracy. Microsoft’s Seeing AI app on iOS was a revolutionary tool when it launched in 2017, and to this day it continues to have a positive impact on the lives of many people with vision loss. Its functionality keeps improving thanks to Microsoft’s continued support of the free product.
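
To give a flavor of how approachable this capability has become, here is a minimal sketch of automatic image description using the open-source Hugging Face transformers library and a publicly available captioning model. To be clear, this is just an illustration of the general technique, not the technology behind Seeing AI or any other product mentioned here, and the photo file name is a placeholder.

```python
# A rough sketch of automatic image description ("alt text" generation)
# using the open-source Hugging Face transformers library.
# Illustrative only -- not the technology behind Seeing AI or any
# specific vendor's product.
from transformers import pipeline  # pip install transformers torch pillow

# Load a publicly available image-captioning model.
captioner = pipeline("image-to-text", model="Salesforce/blip-image-captioning-base")

# Generate a high-level description for a local photo (hypothetical file name).
result = captioner("family_photo.jpg")
print(result[0]["generated_text"])  # e.g. "a group of people standing on a beach"
```

Descriptions from models like this are still high-level and sometimes wrong, which is why the thoughtful, human-centered design of apps like Seeing AI matters so much.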

Google’s Live Transcribe tool on the Android platform has been available for several years and can caption a live conversation for the deaf in real time. It is just one of many Google AI-based products positively impacting the Deaf and hard-of-hearing community. In many cases, innovative technology is developed for the disability community years before it reaches the masses and becomes a common topic of discussion in the mainstream media. Audiobooks were developed for the blind in the 1930s, decades before the general public adopted the audiobook as a mainstream way to enjoy reading without looking at a page.

AI is just the next step in the journey of bringing even more technology into our lives. This technology, while it allows us to think less in some ways, also requires us to be smarter about the world around us if we want to keep up with all the change.

AI is a tool. And like any tool, it requires you to learn about it, respect it, and know when and when not to use it. It is no different from social media, the personal computer, or the internet. Each step in this journey has required us to get smarter and use the tools as intended, all the while watching out for bumps in the technological road of life.

The Future of AI-Based Applications to Assist the Blind

If you have been paying attention to the recent media conversation on the topic of AI, you are already aware of how it is being credited with things like:

  • Making people smarter by extending our brain power to collect, organize, and present research in a convenient form (ChatGPT)
  • Helping people do things they couldn’t do previously by putting talents at their fingertips that were never there before
  • Helping to solve some of the world’s biggest problems, like climate change
  • Making learning easier
  • Advancing medical breakthroughs

There are so many different challenges AI can help people with visual disabilities overcome, thereby positively impacting how we live. Below are just some ideas that have practical uses for people like me who live with a vision disability but that can also apply to the sighted community; I am sure there are many more we haven’t even imagined yet.

Patrick’s AI Wish List:

  1. Autonomous Cars: First on my list is to one day have a self-driving car. I haven’t given up on this dream although I do realize it will take time. AI will play a big part in making autonomous driving a reality.
  2. Document Creation: I do a lot of writing for the sighted world, just like this blog post. Many documents I create are highly formatted for their visual appearance, so I normally turn them over to a co-worker for a final inspection with an eye for cosmetic improvements. A well-formatted document that is visually appealing is something AI should be able to help with. AI can learn my style preferences for fonts, colors, headings, images, and tables, and either point out deviations or flag missing formatting. It should automatically correct problems like unexpected font changes, or suggest making something a heading to improve navigation. I bet sighted people would love having a visual format checker working alongside their spell checker (a rough sketch of the idea appears after this list).
  3. Picture This: Many of us aren’t artists, and even simple sketches are beyond some people’s ability, no matter what their level of vision is. While AI-based systems already exist that can draw for you in a specific art style, what we need are drawing and sketching systems that take our written or spoken input and create the drawing. This would allow people who are blind who have an idea to simply talk to their computer in natural language and have a piece of simple art or a sketched idea created for them. It would also allow for better communication when they need to visualize a concept for a sighted person.
  4. Advanced AI-Based Video Analysis: There is so much AI can do to make advanced live video stream processing a reality, both at home and on the go, now that 5G connectivity is beginning to give us the power to move large quantities of data over the cellular network. Some examples include:
    1. Personal trainer: For those of you who have used a personal trainer, you know that among the many things they do for you, they watch your form and how you execute an exercise, and they provide cues on how to improve the overall movement so you get the most benefit from the activity while staying safe and avoiding injuries. This normally involves body positioning and how your body moves through space. An example of this is the squat: proper form dictates the path your body moves in, and if you don’t do it correctly, over time you can hurt yourself. Software can learn to do the same thing the trainer does (see the sketch after this list for one small ingredient of such a system). Some work is already being done in this area by major players in fitness, and I am hopeful it will be inclusive of people who are blind who want advanced training assistance.
    2. Room with a View: Here is a dream that is less work-oriented but still very important for that connection to the outside world that some sighted people take for granted. We have image detectors that can tell us at a high level what a photo looks like, but I want more! I want AI to watch a live video feed and tell me what is going on outside my window. I want to hear what the sky looks like, whether the leaves are turning colors, whether my flowers are blooming, and whether there are birds flying around, and if so, what kind. If the grass is not as green as it should be, or if it needs to be cut, I should know this. AI can do all this once we start using it to help people with day-to-day activities.
    3. The Great Outdoors: Speaking of outdoor environments, AI should also help us be more secure in many ways while we are at home. Live video feeds interfacing with AI-based systems can detect potential danger in our environment. Here are some examples of where AI smart cameras could alert us:
        • People approaching who are not recognized. Camera-based doorbells can already do some of this, but we can do more. Systems can learn who should be near us and who may be an unwelcome guest, and multiple cameras capturing a perimeter view can work together to track movement and alert as needed
        • Smoke or fire approaching your home could trigger an alert even before the smoke detector inside notices it
        • Animals that may cause harm, like bears, mountain lions, snakes, or even unknown dogs, could trigger an alert to the homeowner about a potentially dangerous situation outside their door

      Anytime a watchful eye (or a second set of eyes) is needed, AI-based solutions could help out both blind and sighted consumers.

  5. Getting Personal: Continuing down the advanced image detection path, here is another opportunity for an AI-based product that would help the blind but would also really help everyone. As I mentioned earlier, for many years we have had image detection systems that can analyze a photo and tell you a person’s rough age, facial expression, whether they are wearing glasses, and so on. Current technology can even match a face and tell you whether the person is in your contact list. What it can’t do yet is provide a detailed description of the person, including eye color, skin tone, body type, clothing, posture, jewelry, and more.
  6. Some Assembly Required? This next image detection scenario would be helpful for both the visually challenged and the sighted. I can’t tell you how many times I have purchased something requiring assembly that I have to figure out without the aid of instructions. Normally I can achieve the end result without assistance, but it takes a lot longer and a lot more brain power to go from a bag of parts to the finished product. The goal is to take a picture of a diagram, or scan a QR code, and get spoken instructions on how to assemble it; anyone would appreciate that level of service.
  7. Free Movement: Now for my most practical request: the ability to navigate a new indoor space independently, without assistance from a human and without relying on GPS or Bluetooth beacon technology. Right now I can do this with the Aira or Be My Eyes applications, but I am reliant on a person being available, and I am responsible for any associated costs. Image detection should be able to identify objects, signs, and dangerous situations. It would be smart enough to pull data from different sources, along with advanced vision analysis, to give me the feedback I need to move around independently, find what I need, and learn about any people or obstacles nearby.
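
To make the document checker in item 2 a bit more concrete, here is a toy sketch of the kind of rule-based groundwork such a tool could start from, using the open-source python-docx library. The file name, the preferred font, and the two rules are my own made-up examples; a real AI-based checker would learn these preferences from my documents rather than having them hard-coded.

```python
# A toy sketch of the "visual format checker" idea: scan a Word document for
# formatting that deviates from a preferred style, the way a spell checker
# flags misspellings. Illustrative only; the style rules are made-up examples.
from docx import Document  # pip install python-docx

PREFERRED_FONT = "Calibri"  # assumed personal preference

doc = Document("draft_blog_post.docx")  # hypothetical file name
issues = []

for i, para in enumerate(doc.paragraphs, start=1):
    # Flag runs whose font differs from the preferred body font.
    for run in para.runs:
        if run.font.name and run.font.name != PREFERRED_FONT:
            issues.append(f"Paragraph {i}: unexpected font '{run.font.name}'")
    # Flag short, all-bold lines that were never styled as headings.
    text = para.text.strip()
    if text and len(text) < 60 and para.style.name == "Normal" and \
            all(run.bold for run in para.runs if run.text.strip()):
        issues.append(f"Paragraph {i}: looks like a heading but is styled 'Normal'")

for issue in issues:
    print(issue)
```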
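
And for the personal trainer in item 4, here is a rough sketch of one ingredient such a system might rely on: open-source pose estimation (Google’s MediaPipe library) to measure a joint angle from a photo of a squat. The file name and the single-angle rule of thumb are placeholder assumptions; a real coaching tool would analyze live video, the full movement path, and many more cues.

```python
# A rough sketch of the "AI personal trainer" idea: estimate body landmarks in
# a photo of a squat and check the knee angle at the bottom of the movement.
# Illustrative only -- real coaching tools analyze live video and far more cues.
import math
import cv2                      # pip install opencv-python
import mediapipe as mp          # pip install mediapipe

def angle(a, b, c):
    """Angle at point b (in degrees) formed by points a-b-c."""
    ang = math.degrees(math.atan2(c.y - b.y, c.x - b.x) -
                       math.atan2(a.y - b.y, a.x - b.x))
    return abs(ang) if abs(ang) <= 180 else 360 - abs(ang)

mp_pose = mp.solutions.pose
image = cv2.imread("squat_bottom.jpg")          # hypothetical file name
with mp_pose.Pose(static_image_mode=True) as pose:
    results = pose.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))

if results.pose_landmarks:
    lm = results.pose_landmarks.landmark
    knee_angle = angle(lm[mp_pose.PoseLandmark.LEFT_HIP],
                       lm[mp_pose.PoseLandmark.LEFT_KNEE],
                       lm[mp_pose.PoseLandmark.LEFT_ANKLE])
    # Very rough heuristic: a knee angle near 90 degrees suggests decent depth.
    if knee_angle > 120:
        print(f"Knee angle {knee_angle:.0f} degrees -- try squatting deeper.")
    else:
        print(f"Knee angle {knee_angle:.0f} degrees -- depth looks good.")
else:
    print("No person detected in the image.")
```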

If you look at the list above with an open mind, every idea presented would be appreciated by a person with vision issues and also by someone with perfect vision. I am a firm believer that good digital accessibility helps everyone; you may just not realize it yet.

The ChatGPT Challenge

With all the recent talk in the media about ChatGPT, I will admit that I, too, was curious about what all the hype was about. Until only a week ago I didn’t even have a ChatGPT account, and I have only asked it for two reports, so I am a newcomer to this latest tech. I wondered what ChatGPT could do for me. Could it write an article I could publish, thus saving me all that time and work? Based on the title of this blog post and the topic that motivated me to write it, I asked ChatGPT to create a report. I am including it below, and I will let you be the judge as to which post you enjoyed reading more: my commentary, or the wonderfully crafted, grammar-perfect, factually correct (?) report I received from the AI engine.

Beginning of ChatGPT Interaction

Question Asked: write me a paper on artificial intelligence and digital accessibility inclusion for disabled

ChatGPT Response:

Introduction:

Artificial Intelligence (AI) is revolutionizing the way we interact with technology. With the power of AI, we can automate repetitive tasks, predict outcomes, and improve decision-making. However, the advancements in AI have not been evenly distributed across all populations. People with disabilities have faced significant barriers to access digital technologies due to lack of accessibility features. The purpose of this paper is to explore the role of AI in digital accessibility and how it can be used to promote inclusion for people with disabilities.

Background:

Accessibility in digital technologies refers to the design and development of software, hardware, and websites to make them usable for people with disabilities. The term “disability” covers a wide range of impairments, including visual, auditory, physical, and cognitive. Accessibility features are essential for people with disabilities to participate in society and access information, education, and job opportunities.

However, creating accessible digital technologies can be challenging, and many developers are not aware of the needs of people with disabilities. As a result, people with disabilities are often excluded from using digital technologies or face significant barriers to access them.

Role of AI in Digital Accessibility:

AI has the potential to revolutionize digital accessibility by automating the creation of accessibility features and making them more personalized. Here are some ways AI can promote accessibility:

Automatic Captioning and Audio Description: AI-powered captioning and audio description can make videos and podcasts accessible to people with hearing impairments or visual impairments. The AI algorithms can analyze the audio and video content and create captions and descriptions automatically, without the need for human intervention.

Voice Recognition: AI-powered voice recognition can make it easier for people with physical disabilities to interact with digital technologies. People with motor impairments can use their voice to control their devices, browse the internet, and send messages.

Natural Language Processing (NLP): NLP is a branch of AI that focuses on the interaction between computers and humans using natural language. NLP can be used to create accessible interfaces for people with cognitive impairments. For example, chatbots and virtual assistants can use NLP to understand and respond to users’ questions in a way that is easy to understand.

Personalization: AI can make digital technologies more personalized for people with disabilities. For example, an AI-powered website can adapt its design and functionality to the user’s needs based on their disability. A website can increase the font size, use high-contrast colors, or provide alternative navigation methods for people with visual impairments.

Challenges:

Despite the potential of AI to promote digital accessibility, there are still significant challenges to overcome. Here are some of the challenges:

Data Bias: AI algorithms are only as good as the data they are trained on. If the data is biased, the AI algorithms will also be biased. For example, if an AI-powered captioning system is trained on a dataset that does not include captions for people with regional accents or dialects, the system will not be able to generate accurate captions for those users.

Lack of Awareness: Many developers are not aware of the needs of people with disabilities and the importance of creating accessible digital technologies. This lack of awareness can lead to the exclusion of people with disabilities from using digital technologies.

Cost: Creating accessible digital technologies can be costly, and many organizations may not have the resources to invest in accessibility features.

Conclusion:

AI has the potential to revolutionize digital accessibility and promote inclusion for people with disabilities. However, to achieve this, we need to address the challenges and ensure that AI is used in an ethical and inclusive manner. By creating accessible digital technologies, we can ensure that people with disabilities have equal access to information, education, and job opportunities.

End of ChatGPT Interaction

Some questions to think about:

  • Does the above ChatGPT report provide value? Yes
  • Is the report easy to understand and well organized? Yes
  • Does the report provide relevant information you can use? Yes
  • Could you put this report together after spending an afternoon Googling and reading articles? Yes
  • Does the report provide you with the perspective of another human who has unique experiences, ideas, and aspirations? Probably not, but you be the judge and tell me. I would enjoy your comments and would love to know which post you found more enjoyable, mine or ChatGPT’s.

Final Thoughts

The future of AI is a bright one if you ask me. Digital accessibility sits in the gap between mainstream technology and assistive technology, and AI is helping, and will continue to help, close that gap until there is no friction left between the two and everything is accessible to people with disabilities. That presumes we are not left out of the conversation when it comes to the future development of AI-based systems. AI that presents interfaces that aren’t inclusive of all people’s abilities will cause problems.

AI-based tools that generate code for developers must be taught to create WCAG-compliant code, and their output must still be manually tested to ensure the user experience is good, so that no one is left behind. There is a lot of work to do in this space, even when AI tools are automatically developing systems for use by humans. The real question is whether all of us will be at the AI table providing input so that this next frontier of technology development benefits everyone. Let’s all keep an eye on the latest AI developments, continue learning, and make sure our voices are heard at the AI table as we keep growing in this wonderful space.
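
To put one small piece of that work in concrete terms, here is a toy example (my own illustration, not an existing tool) of the kind of automated check that could run over AI-generated markup, flagging images that are missing alt text. Automated checks like this catch only a fraction of WCAG issues, which is exactly why manual testing and human input still matter.

```python
# A toy example of the kind of automated check that should sit alongside
# AI code generation: scan generated HTML for images with no alt text
# (one small WCAG requirement). Manual testing is still essential;
# automated checks only catch a fraction of accessibility issues.
from bs4 import BeautifulSoup  # pip install beautifulsoup4

generated_html = """
<main>
  <h1>Welcome</h1>
  <img src="hero.jpg">
  <img src="logo.png" alt="Company logo">
</main>
"""

soup = BeautifulSoup(generated_html, "html.parser")
for img in soup.find_all("img"):
    if not img.get("alt"):
        print(f"Missing alt text: {img.get('src')}")
```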