Google’s DeepMind is an artificial intelligence technology designed to learn and think the way humans do, but at a far larger scale and, ideally, to better effect in solving problems facing human beings. Tech kingpin Google has been turning its artificial eyes toward healthcare in an attempt to help clinicians solve vexing health problems.
Google has partnered with Moorfields Eye Hospital in London, for instance, to study 1 million eye scans in an effort to train DeepMind to hunt for possible sight problems. Another example: The U.K.’s National Health Service has granted Google’s DeepMind access to the anonymized records of 1.6 million patients at three hospitals run by London’s Royal Free NHS Foundation Trust. The goal of this effort is to create an app, dubbed Streams, that would seek out patients at risk of acute kidney injury and notify physicians accordingly.
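Reports at the time said Streams builds on the NHS’s national algorithm for detecting acute kidney injury, which flags a patient when a new serum creatinine result rises sharply against that patient’s own baseline. Purely as an illustration of that kind of rule, and not the logic Streams actually ships, here is a minimal sketch using simplified, KDIGO-style ratio thresholds; the thresholds, baseline definition and function names are all assumptions.

```python
# Minimal sketch of a KDIGO-style acute kidney injury (AKI) alert rule.
# NOTE: thresholds, units and baseline logic are simplified assumptions
# for illustration; this is not the actual Streams/NHS algorithm.

def aki_stage(current_creatinine: float, baseline_creatinine: float) -> int:
    """Return an AKI stage (0 means no alert) from a creatinine ratio.

    Values are serum creatinine in micromol/L; the baseline is assumed
    to be the patient's lowest result over a recent reference window.
    """
    ratio = current_creatinine / baseline_creatinine
    if ratio >= 3.0:
        return 3  # severe rise over baseline: most urgent alert
    if ratio >= 2.0:
        return 2
    if ratio >= 1.5:
        return 1  # modest but clinically significant rise
    return 0


# Example: a jump from a baseline of 80 to 170 micromol/L flags stage 2.
if __name__ == "__main__":
    stage = aki_stage(current_creatinine=170.0, baseline_creatinine=80.0)
    if stage > 0:
        print(f"AKI stage {stage}: notify the care team")
```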
While some physicians are keen to see technology put to uses like these, which ultimately can help patients and healthcare organizations, others are concerned about such large-scale sharing of patient data. Making matters worse, the BBC reports that in some cases data shared with DeepMind has not been anonymized, a revelation with the potential to rankle even consumers with the most laid-back attitudes toward health data privacy.
“DeepMind in healthcare is a very interesting initiative – from clinical, quality and big data perspectives, this seems like a great way to gather and analyze key information that can be used not only to aid in diagnosis but also look at outcomes,” said Barry Caplin, vice president and chief information security officer at Fairview Health Services. “But there are critical privacy and security questions that need to be asked.”
Organizations that engage in the kind of research being conducted by Google’s DeepMind typically follow protocols such as having an institutional review board that reviews and approves research plans and methods, Caplin explained.
“This includes understanding how information will be gathered, used and destroyed, and how participants can opt in or opt out,” he added. “There also are questions about inference. We’ve seen instances where personal information can be inferred from even de-identified information. These are key issues that need to be addressed before healthcare organizations could buy in.”
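The inference problem Caplin raises is usually demonstrated with linkage attacks: even after names and record numbers are stripped, quasi-identifiers such as birth year, sex and partial postcode can single a person out when joined against an outside dataset. The toy sketch below counts how many records share each quasi-identifier combination (the “k” in k-anonymity); all data and field names are invented for illustration.

```python
# Toy linkage-risk check: count how many records share each
# quasi-identifier combination. A group of size 1 is effectively
# re-identifiable by anyone holding a matching outside dataset.
# All records and field choices here are invented for illustration.
from collections import Counter

deidentified_records = [
    # (birth_year, sex, postcode_prefix) with direct identifiers removed
    (1947, "F", "NW3"),
    (1947, "F", "NW3"),
    (1986, "M", "E1"),
    (1986, "M", "E1"),
    (1990, "M", "SE5"),  # unique combination: k == 1
]

group_sizes = Counter(deidentified_records)
for quasi_id, k in group_sizes.items():
    risk = "re-identifiable" if k == 1 else f"hidden among {k} records"
    print(quasi_id, "->", risk)
```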
Aggregating personal health information with the search data, location data and mobile data Google already collects could lead to unintended consequences, Caplin added.
But there are ways an organization can run artificial intelligence algorithms so that the developers of those algorithms never see the underlying data, said Mohit Tiwari, assistant professor of electrical and computer engineering at the University of Texas at Austin and chief technology officer at data security vendor Privasera.
“Google already claims to do so for Gmail ads, for example,” Tiwari said. “So, in the case of DeepMind in healthcare, if diagnoses stay private to doctors and patients, I don’t see why we should stop healthcare advances due to privacy concerns. That said, health data can hardly be un-breached – so the pressure on getting security right is really high.”
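One common pattern behind claims like Tiwari’s is to keep raw records inside a trusted boundary and hand analysts only aggregate answers with calibrated noise added, the core idea of differential privacy. The sketch below shows that idea on a simple count query; it is a generic illustration under assumed parameters (the epsilon value, record fields and helper names are all hypothetical), not a description of anything Google has disclosed about its own systems.

```python
# Generic differential-privacy sketch: analysts ask aggregate questions
# and get noisy answers; they never see individual patient records.
# This illustrates the technique only; it is not Google's or DeepMind's
# actual mechanism, and the epsilon value is an arbitrary choice.
import math
import random

def laplace_noise(scale: float) -> float:
    """Sample Laplace(0, scale) noise via inverse-CDF sampling."""
    u = random.random() - 0.5
    magnitude = max(1.0 - 2.0 * abs(u), 1e-12)  # guard against log(0)
    return -scale * math.copysign(1.0, u) * math.log(magnitude)

def private_count(records, predicate, epsilon: float = 1.0) -> float:
    """Epsilon-differentially-private count of records matching predicate.

    A count query has sensitivity 1 (adding or removing one patient
    changes it by at most 1), so Laplace noise with scale 1/epsilon
    suffices for epsilon-DP.
    """
    true_count = sum(1 for record in records if predicate(record))
    return true_count + laplace_noise(1.0 / epsilon)

# Hypothetical usage: count at-risk patients without exposing records.
patients = [{"creatinine_ratio": 1.8}, {"creatinine_ratio": 1.1}]
print(private_count(patients, lambda p: p["creatinine_ratio"] >= 1.5))
```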
When all is said and done, however, artificial intelligence cannot yet directly aid patient care, DeepMind co-founder Mustafa Suleyman told the BBC.
“Finding a fit between an algorithm and training data is difficult. People expect the algorithm to do too much,” Suleyman said. “The system is crying out for more innovation and hopefully that is something we can pioneer. We looked at nano-materials, synthetic biology, renewable energy, transport, trying to figure out how tech could make a difference, and I realized that with healthcare – if we could get it right – the margin for beneficial impact was enormous.”
[Source: Healthcare IT]