The spread of websites designed to look like news sites, but which trade in low-quality or false information, could further damage media trust, some experts say.
The media watchdog NewsGuard has seen an increase in websites filled with what it says is AI-generated content.
"These are websites that are using artificial intelligence to sometimes generate content in mass [amounts], and it does not appear as content and it does not appear as though they have human editorial oversight," said Jack Brewster, NewsGuard's enterprise editor. "They are masquerading as news sites and act as advertising click bait."
NewsGuard defines the sites as Unreliable AI-Generated News Websites, or UAINS.
When it started identifying them in May 2023, it found 49. By February 2024, its AI Tracker listed more than 700.
The Tracker looks for evidence that a site publishes mostly AI-generated content with little to no human oversight, is designed to look as though journalists or writers produced the content, and does not clearly disclose that the material is AI-produced.
Most sites operate under names such as "Ireland Top News" and "Daily Times Update." And in at least one instance, the domain name of a former established outlet — Hong Kong's Apple Daily — has been taken over and filled with AI-generated content.
A Serbian businessman bought the domain after the media outlet shut down in 2021, when authorities charged its publisher, Jimmy Lai, under the national security law, detained staff and froze its assets.
Now, instead of news, Apple Daily is filled with "SEO-bait" headlines, according to digital news site Wired.
Some media experts say these sites damage legitimate journalism and hurt its credibility.
"I think in recent years we've already seen a strong decline in the trust in the media and local news sources," said McKenzie Sadeghi, who is the news verification editor at NewsGuard.
"When we see the proliferation of these AI-generated websites that are presenting themselves as the average local trusted publication with human journalists, that creates an issue because it further reduces trust," added Sadeghi.
The issue spans languages, too: NewsGuard has identified AI-generated websites in more than a dozen languages, including English, Arabic, Chinese and Turkish.
One of the main goals, says NewsGuard, is to generate ad revenue.
NewsGuard found that Google is behind 90 percent of the ads on these sites. The watchdog says it believes the content violates Google's policies.
Google policy communications manager Michael Aciman told VOA via email that its policies do not allow the placement of advertisements "alongside harmful or spammy content."
But, Aciman said, "NewsGuard has not shared the full list of sites in question, so we cannot review them nor confirm whether or not they violate our policies."
Some experts say they fear the spread of AI-generated fake news sites could affect elections. About 40 countries are scheduled to hold significant elections this year, and Brewster says disinformation looms over them.
"I don't think people realize that right now I can set up a website that is completely automated from start to finish and set up targets for certain key words like stolen election or the vaccine is dangerous," said Brewster.
"Think of any false claim you want to, and this web scraper would search the entire internet, and find the content they think will be the most viral and produce it automatically," he added.
Most of the sites the researchers identified are populated with content carrying headlines such as "Who Was the Most Decorated Soldier in the Third Reich" and "Is Oatmeal Healthy or Not?"
But NewsGuard also found one site pushing a false claim about U.S. presidential hopeful Nikki Haley and Palestinian refugees. Another site, named Celebritydeaths, falsely claimed President Joe Biden died and Vice President Kamala Harris had taken over his duties.
UCLA researcher Shazeda Ahmed says inaccurate information is potentially harmful if people act upon it.
"I also worry about what this will mean for media literacy," said Ahmed, a UCLA Chancellor's Postdoctoral Fellow who focuses on AI safety.
"When a person reads one of these articles that is AI generated, do they realize it? And if not, there's no recourse and there's not really anyone that's being held accountable for that," she said.
Part of the accountability problem is the difficulty of tracing the owners or producers of the sites.
NewsGuard says many owners use privacy services to hide their identities.
VOA attempted to contact the owners of four of the sites NewsGuard identified, using the sites' online contact forms, but received only one response.
That response referred VOA to a company named Byohosting.com. When contacted, Byohosting did not address the request for an interview or comment; instead, it offered its content services.