Last Updated on 05/14/2023 by てんしょく飯
AI-authored books are being published on the e-commerce site Amazon. The explosion in popularity of ChatGPT has made it easier than ever for anyone to produce books and articles, but a number of problems have already emerged.
AI churns out books similar to ones that took humans a year to write.
Will AI take away human jobs? The question has become a major theme since the runaway popularity of ChatGPT, developed by OpenAI in the US, and AI-authored books are already plentiful on Amazon.
Recently, for example, a publisher called inKstall was suspected of listing a large number of AI-authored books on Amazon. The Washington Post reported in May that one of the publisher's books covered exactly the same topic as a how-to book written by a software developer in Oregon, and showed signs of having been produced with ChatGPT or other generative AI.
The author was listed as Marie Karpos, but no reference to such a person can be found online. The Mumbai-based education company that published the book had sold dozens of similar technology books on Amazon under different author names, yet all carried five-star ratings from multiple reviewers in India. Chris Cowell, the Oregon developer who wrote the original book, effectively saw more than a year's worth of writing work stolen.
Amazon removed the book in question, along with other books sold by the publisher, after The Washington Post requested comment. However, the use of AI itself does not violate Amazon’s terms and conditions, and some books are sold with the explicit statement that ChatGPT was used.
AI-based misinformation is spreading online.
Cowell’s case is just the tip of the iceberg. AI-generated content threatens to spread across every corner of the web, not just books.
According to NewsGuard, a company that measures the reliability of online news sources, in April alone there were 49 websites offering content that appeared to have been largely or entirely created by AI.
None of these sites acknowledged using AI on their pages. When NewsGuard reached out, most could not be contacted and only two admitted to using AI. The AI-written articles also contained a great deal of misinformation.
Major media outlets can also fall into the trap of spreading AI-generated misinformation: technology news site CNET came under fire when it was found to be publishing AI-generated articles without disclosure. Following the reports, CNET investigated and found that AI had been used in 77 articles, many of which contained errors. On the other hand, some outlets, such as BuzzFeed in the US, have explicitly stated that they use AI to create content.
In February, Google revealed that it would allow AI-generated content to appear in its search results. The company said its algorithm focuses on “the quality of the content, not how it is produced”.
The online world is already rife with misinformation, and many experts warn that AI will allow false information to spread at an unprecedented scale. Meanwhile, Francesco Nucci, applications research director at the Italy-based Engineering Group, told Horizon Europe, a European Commission programme: “AI is fraught with many logical problems. But sometimes it can also be a solution. AI can be used in unethical ways, for example to create and spread fake news, but it can also be used to do good, for example to counter misinformation.”
The emergence of life-changing technologies such as television and the smartphone has brought benefits as well as drawbacks. How do we ensure that humans themselves do not decline even as technology advances? The debate over these big questions continues.