I was asked for:
… some advice?
I have anecdotally heard of paramedics dictating their patient care notes into publicly available large language models (LLMs) such as ChatGPT to produce their clinical records. I believe this practice breaches several privacy and professional confidentiality standards, given that LLMs are learning, algorithmic platforms.
I also believe that the patients attended to are neither able, nor given the opportunity, to consent to their information being used in third-party software, regardless of whether this software is being used under “privacy” or “temporary” settings.
To my knowledge there has been one case of a social worker being dismissed for using LLMs to produce their case records; I believe there are likely others that haven’t made the media.
I was wondering if you could provide a legal perspective on the use of AI to produce clinical notes, whether by a paid AI subscription service or open LLMs like ChatGPT?
The answer is ‘no, I cannot provide relevant advice’. I don’t understand AI or large language models, how to use them or how they work. One reason I left the university sector was that academics need to teach today’s students how to use these tools, and I have no interest in learning. It seems to me AI is writing humankind out of history, to say nothing of its energy and water demands.
Lawyers are getting into trouble using these tools. I have rejected offers of input from ‘ghost writers’ because this blog is my opinion and my research, and for the same reason I try to avoid any AI input. I don’t ask AI to do my research or to analyse the case law for me.
All I can say to paramedics who want to use AI as part of their practice is that they need to have regard to specialist advice in this area. A good place to start might be the Australian Commission on Safety and Quality in Health Care’s ‘AI Clinical Use Guide: Guidance for clinicians’, available at https://www.safetyandquality.gov.au/sites/default/files/2025-08/ai-clinical-use-guide.pdf. I note this paper addresses issues of privacy, consent etc.
This blog is a general discussion of legal principles only. It is not legal advice. Do not rely on the information here to make decisions regarding your legal position or to make decisions that affect your legal rights or responsibilities. For advice on your particular circumstances always consult an admitted legal practitioner in your state or territory.
This is an interesting topic and one that will be of increasing interest and relevance to clinicians.
In respect of the use of AI tools, I note the Tasmanian Department of Premier and Cabinet has published a document titled “Guidance for the use of artificial intelligence in Tasmanian Government”. A one-page summary document can be found here. The guidance is said to align with the “National framework for the assurance of artificial intelligence in government”.
Relevant passages in the Tasmanian government guidance include the following:
“Ensure that AI solutions and initiatives are compliant with the Personal Information Protection Act 2004” (p.12)
“Ensure any inputs into ‘open’ or public AI tools (such as ChatGPT) will not include or reveal sensitive, classified, or personal information.” (p.12)
“Protected or sensitive information must not be entered into these tools under any circumstances” (p.13).
“Protected or sensitive information must not be entered into ‘open’ or public AI platforms or tools under any circumstances.” – Summary document.
The guidance document provides considerable additional information for government workers engaging with AI tools, including risks and responsibilities.
I imagine that other states and territories will have similar guidance in development or already published.
I note this guidance was published only relatively recently, and I suspect it will take time for knowledge of these responsibilities to filter through to government employees.