Avoid uploading your medical records to artificial intelligence chatbots. The Italian Privacy Guarantor is warning users about the risks of sharing personal health data with AI platforms.
The practice has become increasingly common, prompting the authority to raise the alarm, not least because of the consequences of relying on AI-generated answers without the necessary opinion of a doctor.
Medical records on artificial intelligence chatbots
The habit of searching Google and other engines for symptoms and ailments, matching them to diseases to obtain a do-it-yourself diagnosis, was already widespread before the advent of artificial intelligence.
This strongly discouraged practice has become even more insidious with ChatGPT and similar chatbots, which deliver quick and easy answers along with the semblance of greater accuracy.
The Privacy Guarantor has described the practice as "alarming", both because health data are disseminated without control, left at the mercy of generative AI systems and their service providers, and because of the risk of receiving wrong advice from a tool that is unreliable from a medical standpoint, with potential consequences for the user's health.
The Privacy Guarantor's warning
For this reason, the authority warns against uploading clinical test results, X-rays and, more generally, medical records to these platforms without first reading and understanding the privacy notices that artificial intelligence developers are required to publish.
This caution is needed to understand whether one's health data, entered into the chats in order to receive advice and answers about ailments, are deleted after being processed by the AI or whether they are used to train generative models.
The Guarantor stresses the importance of this precaution, given that in most cases the platforms allow users to decide how the documents they upload online are processed and retained, in keeping with the right to confidentiality.
Health risks
Added to this is the need to discern and interpret the health information the AI provides.
In its statement, the Authority underlines that "human supervision" is essential to avoid direct risks to the health of anyone consulting an AI platform, and that it should be guaranteed at every phase of the algorithm's life cycle, from development and training through testing and validation, before the system enters the market and throughout its use.
This protection is also invoked by the Superior Health Council in its AI guidance document, "Artificial intelligence systems as a diagnostic support tool".