According to a new book by Dr. Isaac Kohane, a doctor and computer scientist at Harvard University, the latest AI language model, GPT-4, can diagnose medical conditions with a stunning success rate. The book explores the intersection of medicine and AI, showing how GPT-4 outperforms the earlier models behind ChatGPT at correctly answering US medical licensing exam questions.
According to Kohane, GPT-4 will be available to paying subscribers in March 2023 and will be capable of diagnosing medical conditions with greater than 90% accuracy. It can also translate discharge instructions for non-English-speaking patients and quickly summarise lengthy reports or studies.
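To make the summarisation use case concrete, here is a minimal sketch of how a tool might ask GPT-4 to turn a discharge note into plain language. It assumes the OpenAI Python SDK as it existed in early 2023 and a hypothetical `discharge_note` string; the prompt wording is illustrative, not taken from the book.

```python
# Minimal sketch: asking GPT-4 to summarise a discharge note in plain language.
# Assumes the OpenAI Python SDK (pre-1.0) and an API key in the OPENAI_API_KEY
# environment variable; `discharge_note` is a hypothetical input.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

discharge_note = "..."  # the clinician's discharge summary goes here

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        {"role": "system",
         "content": "You summarise hospital discharge notes in plain language "
                    "a patient can understand. Do not add information."},
        {"role": "user", "content": discharge_note},
    ],
    temperature=0,  # keep the output as repeatable as possible
)

print(response["choices"][0]["message"]["content"])
```

Setting the temperature to 0 keeps the output as repeatable as possible, which matters more in a clinical setting than creative variety.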
Despite its remarkable abilities, GPT-4 has limitations. It works by recognising patterns in data rather than through true comprehension or intentionality. While it can convincingly mimic how doctors diagnose conditions, it is imperfect and can make mistakes.
How GPT-4 Diagnoses Like a Doctor
In his book, Dr. Kohane describes a clinical thought experiment based on a real-life case, a newborn he had treated several years earlier. Given just a few key details about the baby, GPT-4 diagnosed congenital adrenal hyperplasia, a roughly 1-in-100,000 condition, just as he would have with all his years of study and experience. Dr. Kohane was both impressed and horrified by the machine's capabilities: millions of families would soon have access to this level of medical expertise, yet how to ensure or certify its safety and effectiveness remains an open challenge.
GPT-4 Isn’t Always Right
The book also emphasises GPT-4's limitations. The system makes simple clerical and arithmetic errors, and it occasionally "hallucinates", confidently making up answers or ignoring instructions. While GPT-4 can help free up valuable time and resources in the clinic, the authors warn of a future in which machines outperform humans and stress the importance of carefully considering the ethical implications.
GPT-4 is an incredible tool for the medical industry, but it still requires further development to ensure that it is reliable and effective for healthcare providers and patients.
Ensuring Safe and Effective Use of GPT-4
As AI language models such as GPT-4 advance and become more widely used in healthcare, it is critical to establish regulations and standards to ensure their safety and effectiveness. The book suggests several measures to reduce the risk of errors, such as starting a new session with GPT-4 and asking it to review its own work with a "fresh set of eyes", as well as having humans verify the machine's output to catch any errors, as sketched below.
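One way to read the "fresh set of eyes" suggestion is as a two-pass workflow: one GPT-4 session answers, and a second, stateless session reviews that answer before a human makes the final call. The sketch below is one possible rendering of that idea, again assuming the pre-1.0 OpenAI Python SDK; the prompts and the `ask_gpt4` helper are illustrative assumptions, not the book's own procedure.

```python
# Minimal sketch of the "fresh set of eyes" pattern: a second, independent
# GPT-4 session reviews the first session's answer.
# Assumes the OpenAI Python SDK (pre-1.0); prompts are illustrative only.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

def ask_gpt4(system_prompt: str, user_prompt: str) -> str:
    """Run a single, stateless GPT-4 call: each call is its own session."""
    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_prompt},
        ],
        temperature=0,
    )
    return response["choices"][0]["message"]["content"]

question = "..."  # the clinical question under review (hypothetical input)

# Pass 1: produce an answer.
answer = ask_gpt4("You are a careful clinical assistant.", question)

# Pass 2: a fresh session, with no memory of pass 1, critiques the answer.
review = ask_gpt4(
    "You are reviewing another assistant's answer for factual or "
    "arithmetic errors. List any problems you find.",
    f"Question:\n{question}\n\nProposed answer:\n{answer}",
)

print(review)  # a human still makes the final call on both outputs
```

Because the second call shares no conversation history with the first, it cannot simply defer to its earlier reasoning, which is the point of the fresh-session check.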
Furthermore, healthcare providers should receive adequate training to use GPT-4 and other AI tools effectively. They must understand how to apply the machine's output and recommendations while maintaining clinical judgement and ethical responsibility. GPT-4 is a powerful tool, but it should not replace the human touch and personal connection that are so important in the patient-doctor relationship.
Conclusion
GPT-4 is a remarkable AI language model with the potential to transform the medical industry. It can diagnose medical conditions, translate languages, and summarise reports in seconds. However, as with any new technology, there are numerous ethical concerns and considerations that must be addressed. By establishing clear standards and regulations, providing adequate training to healthcare providers, and ensuring transparency and accountability, we can use GPT-4 and other AI tools safely and effectively to improve patient outcomes.