Image: doctor using computer at desk (Credit: Jose Luis Pelaez Inc/Getty Images)

Aging services organizations already are using artificial intelligence, whether or not they realize it, a panel of experts noted Thursday.

It’s important for long-term care providers to be informed about the tools that are available and how they could affect care and operations — for better or worse, added the panelists, representatives of providers and industry partners, during the McKnight’s Meeting of the Minds virtual event.

Many providers don’t know that AI, even if just simple AI, has been used in senior care for a while, pointed out Scott Code, vice president of the LeadingAge Center for Aging Services Technologies. It’s embedded in electronic health records systems, and it’s built into safety technology that offers fall detection or prevention and gait analysis, he explained. Even marketing teams are using it, he added.

“A lot of providers don’t really understand or grasp the idea of AI, because it’s such a huge, broad topic,” Code said. “On the other side, there is intensive marketing around AI — everything is AI-enabled.”

AI models, he said, can bring together data on what residents are doing — dining, social engagement, safety, clinical records — opening the door to tools that let staff members offer residents a concierge-style service.

“When you infuse advanced language models into advanced products, seniors will be able to communicate with them seamlessly, and then we will have context for operators,” said panelist Brian Geyser, APRN-BC, MSN, vice president of health and wellness at Maplewood Senior Living and Inspir Senior Living. “It will understand the company, the culture, the people, and it will be able to communicate with staff and seniors. And it’s coming very soon.”

LeadingAge CAST is working on resources to pique the interest of providers, said Code, who added that he is developing use cases based on various roles within aging services organizations.

One way to expose providers to large language models — deep learning algorithms that can recognize, summarize, translate, predict and generate content using very large datasets — is to start to explore how they can be used in ways that have a “transformative impact” on individual daily tasks, he said. It is general-purpose technology, Code said, so there is no manual — people learn by using it and determining how it can or cannot benefit an organization. AI is good for some purposes but not for others, he said.

“A lot is happening in a lot of different areas,” Code said. “It’s not necessarily new, but the evolution of large language models has pushed everyone to pay more attention to this space.”

AI in senior living

Geyser said that his organization recognizes that developers of frontier models are building increasingly capable systems with the goal of achieving artificial general intelligence, or AGI. Such systems, he said, understand and apply knowledge across a wide range of tasks at a human level or beyond. With progress toward AGI accelerating rapidly, Geyser said, companies need to think about what it means for them.

“One of the things we want to do is get to a place where our organization is super intelligent,” Geyser said, adding that most organizations are smart with systems serving up insights, but such systems still require humans to generate and interpret reports. Infusing AI into the mix, he added, could “supercharge” an organization. 

A foundational element, he said, is a completely unified data ecosystem: extracting data from all of the systems used by departments across an organization — marketing, human resources, accounting, etc. — consolidating that data in a centralized infrastructure, and layering AI on top so that humans can interact with those systems much more intelligently.

“At the highest level, we’re trying to achieve a really super intelligent organization that combines enterprise intelligence, AI and human intelligence to get to a fully integrated, very intelligent organization of the future,” Geyser said, adding that his company is figuring out use cases for AI in every department — and that can be frightening. “There is a level of ignorance about what AI is and what it’s going to do in the future. There are legit fears about that.”

Juniper Communities Chief Operating Officer Donald F. Breneman said that his company saw those fears firsthand when it introduced service robots into its dining rooms. Residents cried, upset over the potential for robots to replace staff members. With education and reassurance, he added, residents came to understand that the robots gave staff members more time to engage and interact with them right at the tableside, while also reducing wear and tear on workers.

Their presence also can confer a competitive advantage, Breneman said. Juniper differentiates itself in the field by adopting a high-tech, high-touch approach, he said, adding that the organization is interested in expanding use of the technology through its Catalyst program — a membership-based health and well-being program — by offering an AI-supported universal assessment to create a hyper-personalized experience for residents. 

“We’re very invested in the resident experience,” Breneman said, adding that the company uses data to create a “lifestyle prescription” for residents. “AI is part of that, and it has allowed us to start to capture what I consider important revenue for services that may have been missed but also to enhance the entire resident experience.”

Where to begin

Geyser said it is important for companies to develop a governance structure, including building standards, security measures, best practices and access controls to protect data. Organizations, he added, must balance having a culture of giving staff members the power to explore, experiment and be curious while protecting proprietary or sensitive information. 

When long-term care providers select potential technology partners, it’s important for those vendors to be transparent about what data they are using and how they are using them, said Blue Purpose founder and CEO Don Glidewel. He suggested that firms conduct annual third-party audits.

Human conscience still vital

Code said that it’s important to understand that AI models generate content shaped by the cultural biases inherent in their training data, and that they aim to produce responses that please the user. The problem, however, is that a pleasing response might not actually be the truth. The unpredictable nature of AI means that individuals have to be the “human in the loop” and act responsibly and ethically, he said.

Most pushes for technology adoption start at the top of an organization, Code said, but large language models aren’t typical technology. Those models are freely available, he added, and more people are starting to use them on their own, so it’s important for companies to set guidelines and guardrails for when and how to use them appropriately. 

Geyser recommended that every organization have a responsible use policy for AI systems. Generative AI tools are available to anyone, he noted, and it would be easy for an employee to upload proprietary or protected company documents, images and spreadsheets into them, putting the organization at risk.

AI can be helpful at generating ideas, he added, but there must be guidelines, with specific examples, to let people know what’s appropriate and what’s not.