An AI tool in the healthcare system generated a number of false diagnoses for a patient

The use of AI in healthcare has the potential to save time, money, and lives. But when technology known to occasionally lie is introduced into patient care, it also brings serious risks.

A London-based patient recently experienced just how serious those risks can be after receiving a letter inviting him to a diabetic eye screening, a standard annual check-up for people with diabetes in the UK. The problem: he had never been diagnosed with diabetes, nor had he ever shown signs of the disease.

The patient, a healthy man in his mid-20s, opened the appointment letter late one evening and told Fortune he had briefly feared he had been unknowingly diagnosed with the disease, before concluding the letter must simply be an administrative error. The next day, a nurse queried the diagnosis during a scheduled routine blood test, and when the patient confirmed he was not diabetic, the pair checked his medical history.

“He showed me the notes on the system, and they were AI-generated summaries. It was at that point I realized something strange was going on,” the patient told Fortune.

After requesting and reviewing his medical records, the patient noticed that the entry that had introduced the diabetes diagnosis was listed as a summary “generated by Annie AI.” The record appeared around the same time he had attended the hospital for a severe case of tonsillitis. However, the record in question made no mention of tonsillitis. Instead, it stated he had presented with chest pain and shortness of breath, attributed to a “likely angina due to coronary artery disease.” In reality, he had none of those symptoms.

The records, reviewed by Fortune, also noted that the patient had been diagnosed with type 2 diabetes late last year and was currently on a number of medications, and they included dosage and administration details for those medications. However, according to the patient and several other medical documents reviewed by Fortune, none of these details were accurate.

‘Health Hospital’ in ‘Health City’

Stranger still, the medical record listed the document’s address as a fictional “Health Hospital,” located at “456 Care Road” in “Health City.” The address also included an invented postcode.

An NHS spokesperson, Dr. Matthew Noble, told Fortune that the GP practice responsible for the oversight makes “limited use of supervised AI” and that the error was a “one-off case of human error.” He said a medical summariser had initially spotted the mistake in the patient’s record but was distracted and “inadvertently saved the original version rather than the updated version.”

However, the fictitious AI-generated record did have real-world consequences: the patient’s invitation to attend a diabetic eye-screening appointment was presumably based on the erroneous summary.

While most AI tools used in healthcare operate under strict human oversight, another NHS employee told Fortune that the leap from the original symptoms (tonsillitis) to what was recorded (chest pain and suspected angina due to coronary artery disease) raised alarm bells.

“These human-error mistakes are fairly inevitable if you have an AI system producing completely inaccurate summaries,” the NHS employee said. “Many older or less educated patients may not even know there was an issue.”

Anima Health, the company behind the technology, did not respond to Fortune’s questions on the matter. However, Dr. Noble said: “Anima is an NHS-approved document management system that assists practice staff in processing incoming documents and actioning any necessary tasks.”

“No documents are ever processed by AI; Anima only suggests codes and a summary to a human reviewer in order to improve safety and efficiency. Every document requires review by a human before it is actioned and filed,” he added.

AI’s uneasy rollout in the health sector

The incident is somewhat emblematic of the growing pains around AI’s rollout in healthcare. As hospitals and GP practices rush to adopt automation tools that promise to ease workloads and reduce costs, they must also grapple with the challenge of integrating still-maturing technologies into high-stakes environments.

The pressure to innovate, and potentially save lives, with the technology is high, but so is the need for rigorous oversight, especially as tools once considered merely “assistive” begin to influence real patient care.

Anima Health promises that healthcare professionals can “save hours per day through automation.” The company offers services including the automation of “patient communications, clinical notes, admin requests, and the paperwork that doctors deal with every day.”

Annie, Anima’s AI tool, is registered with the Medicines and Healthcare products Regulatory Agency (MHRA) as a Class I medical device. This places it in the same low-risk category as examination lights or bandages: products intended to assist clinicians rather than automate medical decisions.

AI tools in this category require that outputs be reviewed by a clinician before any action is taken or anything is entered into the patient’s file. In the case of the misdiagnosed patient, however, the practice appears not to have adequately reviewed the AI-generated notes before they were added to his records.

The incident comes amid increased scrutiny within the UK’s health service of the use and categorization of AI technology. Last month, GP and hospital bosses were warned that some current uses of AI software could breach data protection rules and put patients at risk.

In an email first reported by Sky News and confirmed by Fortune, NHS England warned that unapproved AI software failing to meet minimum standards could put patients at risk of harm. The letter specifically addressed the use of ambient voice technology, or “AVT,” by some doctors.

The main problem with AI transcribing or summarizing information, however, is its manipulation of the original text, Delaney told Fortune.

“Rather than just passively recording, it now has a medical device purpose,” Delaney said. The NHS’s latest guidance, however, has left some companies and practices playing regulatory catch-up.

“Most of the devices that are now in common use have a Class I (categorization),” Delaney said. “I know of at least one, but probably many others, that are now scrambling to get their Class 2a, because they ought to have it.”

Whether a device should be classified as a Class 2a medical device essentially depends on its intended purpose and the level of clinical risk. Under UK medical device rules, if a tool’s output is relied upon to inform care decisions, it may require reclassification as a Class 2a medical device, a category subject to stricter regulatory controls.

Anima Health, along with other UK-based health tech companies, is currently pursuing Class 2a certification.

The UK’s AI for health push

The British government is embracing the possibilities of AI in healthcare, hoping it can help improve the country’s strained national health service.

In its recent “10-Year Health Plan,” the British government said it aims to make the NHS the most AI-enabled care system in the world, using technology to ease the administrative burden, support preventive care, and empower patients.

Introducing this technology in a way that complies with current rules within the organization is complex, however. Even the UK’s Health Secretary appeared to suggest earlier this year that some doctors may be pushing the boundaries when it comes to integrating AI technology into patient care.

“I’ve heard anecdotally down the pub, genuinely down the pub, that some clinicians are getting ahead of the game and are already using ambient AI to record notes and things, even where their practice or trust haven’t yet caught up with them,” Health Secretary Wes Streeting said, in comments reported by Sky News.

“Now, lots of issues there, not encouraging it, but it does tell me that, contrary to the narrative of ‘Oh, people don’t want to change, staff are very happy and they are really resistant to change,’ it’s the opposite. People are crying out for this stuff,” he added.

AI tech certainly holds enormous potential to dramatically improve the speed, accuracy, and accessibility of care, especially in areas like diagnostics, medical record-keeping, and reaching patients in low-resource or remote settings. But walking the line between the technology’s potential and its risks is difficult in sectors like healthcare, which handle sensitive data and where errors can cause real harm.

Reflecting on his experience, the patient told Fortune: “Overall, I think we should be using AI tools to support the NHS. They have massive potential to save money and time. However, LLMs are still very experimental, so they should be used with stringent oversight. I would hate for this to be used as an excuse not to pursue innovation; instead, it should be used to highlight where caution and oversight are needed.”


