Federated learning could solve medicine’s biggest AI obstacle

Medicine has always made sense as a logical field for AI disruption. The ability to read images more accurately, process data faster, track and incorporate new research and practices from scholarly journals as they’re published… if you think about what AI can do and how it might best help humanity, medicine was always a strong candidate for where some of the most good could be done, as well as where it could be done first. But as with the adoption of any new technology, there were a number of obstacles to its widespread and successful adoption. With federated learning, Google may have cleared the biggest one.

Medicine’s obstacle to AI

One of the biggest obstacles to implementing meaningful AI in the medical field is privacy. For algorithms and machine learning systems to learn well, they require a lot of data; in fact, the more data they have, the more effective the algorithm and its resulting conclusions will be. But acquiring enough medical data to be relevant and useful has proven difficult (as one can imagine) for the tech companies with the AI resources to make use of it. To wit:

“In order to meet that threshold, as well as have it represent enough diversity, hospitals and research institutions need to pool their data reserves. But, especially in the US and UK, the idea of centralizing reams of sensitive medical information into the hands of tech companies has repeatedly—and unsurprisingly—proven intensely unpopular.

“As a result, research studies on AI’s ability to detect and diagnose conditions have stayed narrow in their scope and applicability. You can’t deploy a breast cancer detection model around the world when it’s only been trained on a few thousand patients from the same hospital.”

So even though the computational capacity, imagination and resources may be there to apply novel AI approaches to some of medicine’s biggest questions, and the data required may well exist, it’s hard to centralize that data in a safe and trusted way so the computers can learn from it.

Enter federated learning.

Federated learning

Federated learning is a term Google scientists coined while testing and developing decentralized learning models. In 2017, Google published a blog post about it. What they ended up terming federated learning essentially allowed a neural network to learn from a series of distributed data sources instead of requiring all the data to be centralized. For Google, the invention allowed the company to “train its predictive text model on all of the messages typed by Android users—without actually ever reading or removing them from their phones.”

At the time, it didn’t gain much traction with the AI research community, but it has come roaring back in a big way for how it could help the medical field.

 

Why federated learning is right for medicine

According to the MIT Technology Review, “‘There is a false dichotomy between the privacy of patient data and the utility of their data to society,’ says Ramesh Raskar, an MIT associate professor of computer science whose research focuses on AI in health. ‘People don’t realize the sand is shifting under their feet and that we can now in fact achieve privacy and utility at the same time.’”

We want and need more health data to make our algorithms better — that’s the utility. And if we can achieve that without threatening privacy, then we’re in the full win-win-win zone.

How will this actually work? Well, to quote MIT TR again, “The technique can train a model using data stored across multiple different hospitals without that data ever leaving their premises or touching a tech company’s servers. It does this by first training separate models at each hospital with the local data available, then sending those models to a central server to be combined together into a master. As each hospital acquires more data over time, it can download the latest master model, update it with new data, and send it back to the central server. Throughout the process, raw data is never exchanged—only the models, which cannot be reverse engineered to reveal that data.”
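The loop MIT TR describes (train locally at each site, average the local models into a master, repeat as new data arrives) can be sketched in a few lines of plain Python. This is a minimal, illustrative federated-averaging sketch, not a production system: the one-feature linear model, the toy "hospital" datasets, and the hyperparameters are all assumptions made for demonstration.

```python
# Minimal federated averaging sketch. Each "hospital" trains on its own
# data; only model parameters (a short list of floats) ever travel to
# the server. All names and numbers here are illustrative assumptions.

def local_update(weights, local_data, lr=0.01, epochs=5):
    """Train a one-feature linear model y ~ w*x + b on local data only."""
    w, b = weights
    for _ in range(epochs):
        for x, y in local_data:
            err = (w * x + b) - y
            w -= lr * err * x   # gradient step; raw data never leaves this site
            b -= lr * err
    return [w, b]

def federated_round(master, hospital_datasets):
    """One round: every site trains from the current master, server averages."""
    local_models = [local_update(list(master), data) for data in hospital_datasets]
    n = len(local_models)
    return [sum(m[k] for m in local_models) / n for k in range(len(master))]

# Toy data held at three hypothetical hospitals; all roughly follow y = 2x + 1.
hospitals = [
    [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0)],
    [(0.5, 2.0), (1.5, 4.0)],
    [(3.0, 7.0), (4.0, 9.0)],
]

master = [0.0, 0.0]
for _ in range(200):            # repeated rounds refine the shared model
    master = federated_round(master, hospitals)
```

After a few hundred rounds the averaged master model approaches the underlying relationship, even though no hospital's raw records were ever pooled; only the two model parameters crossed the network each round.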

Medicine really could be the first frontier of artificial intelligence drastically improving our lives, and federated learning might just be the key that unlocks that potential.




Jeff Francis

Jeff Francis is a veteran entrepreneur and founder of Dallas-based digital product studio ENO8. Jeff founded ENO8 to empower companies of all sizes to design, develop and deliver innovative, impactful digital products. With more than 18 years working with early-stage startups, Jeff has a passion for creating and growing new businesses from the ground up, and has honed a unique ability to assist companies with aligning their technology product initiatives with real business outcomes.
