
Trust, human-centered AI and collaboration are the focus of the inaugural RAISE Health Symposium

AI experts discuss how to integrate robust AI into healthcare, why interdisciplinary collaboration is critical, and the potential of generative AI in research.
Fei-Fei Li and Lloyd Minor gave opening remarks at the inaugural RAISE Health Symposium at Stanford University School of Medicine on May 14. Steve Fisch
Most people who have been captivated by artificial intelligence can recall an “aha” moment that opened their minds to a world of possibilities. At the inaugural RAISE Health Symposium on May 14, Lloyd Minor, MD, dean of the Stanford University School of Medicine and vice president for medical affairs at Stanford University, shared his.
Asked by a curious teenager to summarize his findings about the inner ear, Minor turned to generative artificial intelligence. “I asked, ‘What is superior canal dehiscence syndrome?’” Minor told the nearly 4,000 symposium attendees. In a matter of seconds, several paragraphs appeared.
“They were good, really good,” he said. “That this information was compiled into a concise, generally accurate and clearly prioritized description of the disease is quite remarkable.”
Many shared Minor’s excitement at the half-day event, an outgrowth of the RAISE Health initiative, a project launched by Stanford University School of Medicine and the Stanford Institute for Human-Centered Artificial Intelligence (HAI) to guide the responsible use of artificial intelligence in biomedical research, education and patient care. Speakers examined what it means to implement artificial intelligence in medicine in a way that is not only useful for doctors and scientists, but also transparent, fair and equitable for patients.
“We believe this is a technology that enhances human capabilities,” said Fei-Fei Li, professor of computer science at the Stanford School of Engineering, co-leader of the RAISE Health initiative with Minor and co-director of HAI. From generating novel antibiotic molecular sequences to mapping biodiversity and revealing hidden parts of fundamental biology, AI is accelerating scientific discovery, she said. But not all of its effects are beneficial. “All of these applications can have unintended consequences, and we need computer scientists who develop and implement [artificial intelligence] responsibly, working with a variety of stakeholders, from doctors and ethicists… to security experts and beyond,” she said. “Initiatives like RAISE Health demonstrate our commitment to this.”
The consolidation of the three divisions of Stanford Medicine, the School of Medicine, Stanford Health Care and Stanford Medicine Children’s Health, and its connections to other parts of Stanford University position it to lead as experts grapple with the development, governance and integration of artificial intelligence in health care and medicine, Minor said.
“We are well positioned to be a pioneer in the development and responsible implementation of artificial intelligence, from fundamental biological discoveries to improving drug development and making clinical trial processes more efficient, right through to the actual delivery of health care and the way the health care system is set up,” he said.
Several speakers emphasized a simple concept: Focus on the user, in this case the patient or physician, and everything else will follow. “That means putting the patient at the center of everything we do,” said Lisa Lehmann, MD, director of bioethics at Brigham and Women’s Hospital. “We need to consider their needs and priorities.”
From left: Mohana Ravindranath of STAT News, Jessica Mega, Peter Lee of Microsoft Research and Sylvia Plevritis, professor of biomedical data science, discuss the role of artificial intelligence in medical research. Steve Fisch
Panelists, who included Lehmann, Stanford biomedical ethicist Mildred Cho, PhD, and Google chief clinical officer Michael Howell, MD, noted the complexity of hospital systems, emphasizing the need to understand a system’s purpose before intervening in it and to ensure that any system developed is inclusive of, and listens to, the people it is designed to help.
One key is transparency: making it clear where the data used to train an algorithm comes from, what the algorithm’s original purpose is, and whether future patient data will continue to help the algorithm learn, among other factors.
“Trying to anticipate ethical problems before they become serious [means] finding the sweet spot where you know enough about the technology to have some confidence in it, but can still address [the problem] early, before it spreads further,” said Danton Char, MD, associate professor of pediatric anesthesiology, perioperative and pain medicine. One key step, he said, is identifying all the stakeholders who might be affected by the technology and determining how they themselves would want those questions answered.
Jesse Ehrenfeld, MD, president of the American Medical Association, discussed the four factors that drive adoption of any digital health tool, including those powered by artificial intelligence: Does it work? Will it work in my institution? Who pays for it? Who is liable?
Michael Pfeffer, MD, chief information officer of Stanford Health Care, cited a recent example in which many of those questions were put to the test: Stanford hospitals are piloting a large language model that drafts initial responses to incoming patient messages for clinicians. Though the project is imperfect, the clinicians who helped develop the technology report that the model eases their workload.
“We always think about three important things: safety, efficacy and equity. We are physicians. We take an oath to ‘do no harm,’” said Nina Vasan, MD, clinical assistant professor of psychiatry and behavioral sciences, who joined Char and Pfeffer on the panel. “These should be the first ways we evaluate these tools.”
Nigam Shah, MBBS, PhD, professor of medicine and of biomedical data science, opened the discussion with a startling statistic, though not without fair warning for the audience. “I speak in generalizations and numbers, and sometimes they tend to be very blunt,” he said.
According to Shah, AI’s success depends on our ability to scale it. “Doing the proper science to vet a model takes about 10 years. If each of the 123 fellowship and residency programs wanted to test and deploy a model with that level of rigor, doing correct science the way we currently organize our efforts, it would cost $138 billion to verify that every one of our sites works correctly,” Shah said. “We can’t afford that. So we need to find a way to scale, and to scale while doing good science. The rigor skills are in one place and the scaling skills are in another, so we’re going to need that kind of partnership.”
Associate Dean Euan Ashley and Mildred Cho (right) attended the RAISE Health Symposium. Steve Fisch
Some speakers at the symposium said this could be achieved through public-private partnerships, such as those envisioned by the recent White House executive order on the safe, secure and trustworthy development and use of artificial intelligence and by the Coalition for Health AI (CHAI).
“The public-private partnership with the greatest potential is one among academia, the private sector and the public sector,” said Laura Adams, senior adviser at the National Academy of Medicine. She noted that government can ensure public trust, academic medical centers can provide legitimacy, and the private sector can provide technical expertise and computing power. “All of us are better than any one of us, and we recognize that… we have no prayer of realizing the potential of [artificial intelligence] unless we understand how to interact with each other.”
Several speakers said AI is also having an impact on research, whether scientists use it to explore biological dogma, predict new sequences and structures of synthetic molecules to support new treatments, or even help them summarize or write scientific papers.
“This is an opportunity to see the unknown,” said Jessica Mega, MD, a cardiologist at Stanford University School of Medicine and co-founder of Alphabet’s Verily. Mega mentioned hyperspectral imaging, which captures image features invisible to the human eye. The idea is to use artificial intelligence to detect disease-indicating patterns in pathology slides that humans cannot see. “I encourage people to embrace the unknown. I think everyone here knows someone with some kind of medical condition who needs something beyond what we can provide today,” Mega said.
The panelists also agreed that artificial intelligence systems will provide new ways to identify and combat biased decision-making, whether by humans or by AI, provided the source of the bias can be identified.
“Health is more than just health care,” several panelists agreed. Speakers emphasized that researchers often overlook social determinants of health, such as socioeconomic status, zip code, education level, and race and ethnicity, when collecting inclusive data and recruiting participants for studies. “AI is only as effective as the data the model is trained on,” said Michelle Williams, professor of epidemiology at Harvard University and adjunct professor of epidemiology and population health at Stanford University School of Medicine. “If we are to do what we aspire to do, improving health outcomes and eliminating disparities, we must ensure we collect high-quality data on human behavior and the social and natural environment.”
Natalie Pageler, MD, clinical professor of pediatrics and of medicine, noted that aggregated cancer data often excludes pregnant women, creating inevitable biases in the models and exacerbating existing disparities in care.
David Magnus, PhD, professor of pediatrics and of medicine, said that artificial intelligence, like any new technology, can either make things better or make them worse. The risk, Magnus said, is that artificial intelligence systems will learn about inequitable health outcomes driven by social determinants of health and reinforce those outcomes through their output. “Artificial intelligence is a mirror that reflects the society we live in,” he said. “I hope that every time we have the opportunity to shine a light on a problem, to hold a mirror up to ourselves, it will serve as motivation to improve the situation.”
If you were unable to attend the RAISE Health Symposium, a recording of the sessions can be found here.
Stanford Medicine is an integrated academic health system comprising the Stanford School of Medicine and adult and pediatric health care delivery systems. Together, they harness the full potential of biomedicine through collaborative research, education and clinical care for patients. For more information, please visit med.stanford.edu.

