Navigating Ethical Integrity in Journalism: Tackling the Challenges of AI

Julia Angwin, the keynote speaker at the Journalism Ethics & the AI Challenge event, developed a hypothesis to test AI-driven criminal risk scores. As the world becomes more accustomed to artificial intelligence, its use in both personal and professional settings raises concerns. There are three main groups of AI users: the doomers, the boomers, and those focused on AI ethics. The ethical concerns around AI, particularly its use in the news, grow greater every day.

As AI becomes more realistic in its responses and visual output, it is increasingly difficult to detect what has been AI-generated. Sounds, words, and phrases can all be produced by AI, yet they remain hard to fully identify as synthetic. The film industry is currently using artificial intelligence in post-production editing to remove filler words such as "like" and "um." The question for the media and film industries is: what limits will be placed on AI use in this medium? There are currently no formal boundaries for this use case; most limits come from the companies or production teams themselves.

Once AI is used to fabricate content, it becomes increasingly difficult for the audience to tell what is real and what has been artificially produced. Determining which information is real or fake becomes a collective problem for both society and the AI programs themselves. And while AI can learn and reproduce content, the tones and emotional inflections in its cloned voices are not yet as believable.

In 2023, Angwin reported on a man whose audio, recorded back in 2003, was reproduced several years ago through AI voice cloning. The man's original recording had been made for IBM, and IBM allowed his voice to be used for AI cloning, then began to sell and release the AI-created content. This poses a major ethical question for the industry and for the trust between corporations and the public. It is always important to learn about, investigate, and talk with those who created and are involved in AI models, and more importantly, to hold them accountable when AI use crosses ethical or legal boundaries.