A U.S. lawyer whose use of the AI chatbot ChatGPT led him to cite non-existent cases in a legal brief filed in a lawsuit against the airline Avianca may face court sanctions. The lawyer admitted to using ChatGPT to supplement his legal research, but said he did not realise that the chatbot was generating false information. He has since apologised for his actions.

ChatGPT is prone to a phenomenon known as "hallucination," in which it generates output that sounds realistic and authoritative but turns out to be fabricated when checked. The judge in the Avianca case has called the situation "unprecedented" and is considering sanctions against the lawyer.