AI chatbots and virtual assistants are offering advanced capabilities

Alex Delaney-Gesing
Over the last decade, artificial intelligence (AI) has transformed the technology landscape for digital device users across the globe.
From ChatGPT to virtual visual assistants, the capabilities of these innovations continue to impact the daily lives of all—and perhaps most significantly, those with visual impairments.
A recent survey by Pew Research found that a majority of Americans (55%) say they regularly use AI, versus 44% who say they do not.
In fact, a whopping 80% of retail executives expect their companies to adopt AI-powered intelligent automation by 2027.
The development of such technology has been a game-changer for the estimated 12 million people (aged 40+) with vision impairment in the U.S.—not to mention the 43 million living with blindness and 295 million with moderate-to-severe visual impairment across the globe.
What they may need: AI-enabled navigational and assistive technology that enables them to take control of their lives and perform daily activities independently.
We’d be remiss if we didn’t first mention Google Assistant (launched in 2016), a virtual assistant app used primarily on mobile and home devices, as well as its recent upgrade to Assistant with Bard, which combines the Assistant’s abilities with generative AI on a smartphone.
Then there’s Google Gemini, which launched in February 2024.
What it is: The company’s latest large language model (LLM), which replaced Google’s Bard (an interactive AI chatbot launched in March 2023) to offer chatbot and virtual assistant capabilities in one.
What it does: Functions as a multimodal tool on Android or iOS that can answer basic questions, summarize text, create images, and plug into other Google services.
Allow us to introduce you to Project Astra, still in its early stages of development: an AI chatbot with multimodal functionality for text, images, video, and audio, marketed as a “universal AI agent that is helpful in everyday life.”
How it works: Operating as an AI assistant, the chatbot is designed to process multimodal information, understand context and take actions, work in real-time, and remember previous conversations.
Since Microsoft-backed OpenAI launched ChatGPT in November 2022, it has quickly risen to lead the generative AI landscape amid a growing number of competitors.
Some background: ChatGPT is a free-to-use generative AI chatbot built on an LLM and designed to mimic human speech patterns in response to prompts via natural language processing and machine learning.
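For the developers among our readers, here is a minimal sketch of that prompt-and-response pattern using OpenAI’s Python SDK. The model name and prompt are illustrative assumptions for demonstration, not details from this article:

```python
# Minimal prompt/response sketch using OpenAI's Python SDK.
# Assumes `pip install openai` and an OPENAI_API_KEY environment variable;
# the model name and prompt are illustrative examples only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model name for illustration
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain what an eye chart is used for."},
    ],
)

# The model's conversational reply to the prompt
print(response.choices[0].message.content)
```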
OpenAI has also partnered with Be My Eyes to develop the first-ever digital visual assistant designed for use by visually impaired people, operated using OpenAI’s GPT-4 language model.
Quick refresh: Launched in 2015, Be My Eyes is a Denmark-based mobile app that connects people who are blind or have low vision with sighted volunteers or company representatives.
Also: The latest version of OpenAI’s GPT-4 model, GPT-4o, is described as the company’s “fastest and most affordable flagship model” compared with previous versions.
Dubbed “Be My AI” (formerly known as “Virtual Volunteer”), the feature integrates a new image-to-text generator, courtesy of OpenAI’s GPT-4, into the existing app.
Users send images through the app to an AI-supported Virtual Volunteer, which answers questions about that specific image and provides conversational information.
Per the companies, the differentiator is context: Be My AI is designed to have a deeper contextual understanding and conversational skills not currently available in other virtual assistants.
A user may send a photo of a refrigerator and its contents. From there, the tool should not only correctly identify what’s in the space, but also “extrapolate and analyze what can be prepared with those ingredients.”
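For readers curious what this kind of image-to-text exchange looks like under the hood, here is a minimal sketch using OpenAI’s Python SDK. The model name, file name, and prompt are illustrative assumptions; Be My Eyes’ actual integration may differ:

```python
# Illustrative image-to-text sketch in the style of Be My AI, using
# OpenAI's Python SDK. The model name, file name, and prompt are
# assumptions for demonstration; the real Be My Eyes integration may differ.
import base64

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Load a local photo (e.g., of a refrigerator's contents) as base64
with open("fridge.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

response = client.chat.completions.create(
    model="gpt-4o",  # assumed vision-capable model for illustration
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": "Describe what's in this refrigerator and "
                            "suggest a meal I could prepare with it.",
                },
                {
                    "type": "image_url",
                    "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"},
                },
            ],
        }
    ],
)

# A conversational description of the image, plus meal suggestions
print(response.choices[0].message.content)
```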
Apparently, it’s designed to automatically give users the option to connect with a sighted volunteer via the app for additional assistance.
While Be My AI originally launched in March 2023, it remains in the beta phase of development.
The tool is continuing to undergo testing among a small group of users, with plans to make it more widely available later in 2024.