Wikipedia’s co-founder assails ChatGPT and competitors, Google’s new weather forecasting model is apparently 90 per cent better than traditional systems, and TikTok and Twitter come under scrutiny for the litany of hateful content pertaining to the Israel-Hamas war on their respective platforms.
These and more top tech stories on Hashtag Trending
I’m your host, James Roy
Jimmy Wales, the co-founder of Wikipedia, said at the Web Summit that ChatGPT and competitors are “actually pretty bad” and “still a long way from being a reliable source.”
He says that AI services should deliver citations alongside claimed facts in order to go beyond “plausible-sounding nonsense,” adding that humans have an edge on AI for at least the next 20 or 30 years.
He also complained that large language models have not been very useful for improving Wikipedia, for example in brainstorming ways to fill gaps on the site.
Wales also bashed Elon Musk-owned X, which he qualified as “not a great source of truth.”
You must have come across a Wikipedia banner asking readers to stop scrolling and donate to the Wikimedia Foundation. Wales made the same request to Big Tech, welcoming them to chip in. He expressed his determination to avoid being “beholden to five tech companies” through data licensing deals.
Weather forecasters, however, likely won’t have 20 or 30 years before AI catches up with them.
Google’s DeepMind announced a new weather forecasting model, called GraphCast, that reportedly beats traditional systems more than 90 per cent of the time.
Unlike traditional physics-based forecast models, GraphCast is trained on historical weather data. To make a forecast, it starts with the current state of Earth’s weather and the state from six hours earlier, then predicts what the weather will look like six hours from now.
GraphCast then feeds those predictions back into the model, performs the same calculation, and spits out longer-term forecasts.
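That feedback loop is a standard autoregressive rollout. Here is a minimal toy sketch of the idea, not GraphCast’s actual implementation: the `toy_model` function below is a hypothetical stand-in for the learned model, and real inputs are global weather grids rather than single numbers.

```python
# Toy sketch of an autoregressive forecasting rollout.
# NOT GraphCast's real code: `toy_model` is a hypothetical stand-in
# that just extrapolates the recent trend linearly.

def toy_model(state_now, state_6h_ago):
    """Predict the state six hours ahead from the two most recent states."""
    return state_now + (state_now - state_6h_ago)

def rollout(state_6h_ago, state_now, steps):
    """Feed each 6-hour prediction back in as input to reach longer horizons."""
    forecasts = []
    prev, curr = state_6h_ago, state_now
    for _ in range(steps):
        nxt = toy_model(curr, prev)
        forecasts.append(nxt)
        prev, curr = curr, nxt  # predictions become the next inputs
    return forecasts

# Four 6-hour steps = a 24-hour forecast from two observed states.
print(rollout(10.0, 12.0, 4))  # -> [14.0, 16.0, 18.0, 20.0]
```

The design point is that only one short-range model is needed; longer forecasts come from chaining its own outputs, at the cost of compounding any per-step error.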
Apparently, GraphCast also had great success predicting extreme weather events like tropical cyclones and freak temperature changes, even though it wasn’t specifically trained to handle them.
Google researchers said that GraphCast’s potential marks a turning point in weather forecasting, but urged people to refrain from seeing it as a replacement for traditional weather forecasters.
With generative AI, it’s become far too easy to create realistic-looking videos showing scenarios that never happened or people saying fictional things.
Hence, starting next year, YouTube will force creators to disclose their use of generative AI.
Users can also request that YouTube take down “content that simulates an identifiable individual, including their face or voice.” However, not all requests will be honoured, especially for satirical or parody videos.
Music artists will have their own separate process to request the removal of content that mimics their singing voice.
YouTube will also do its due diligence and disclose its own use of generative AI to viewers.
Intriguingly, it seems disclosure relies on creators checking a box when their video employs generative AI, but YouTube maintains that disclosing is not optional. Failing to do so could lead to content removal and other penalties.
Nepal is the latest country to ban TikTok, alleging that its content “was detrimental to social harmony.”
Nepal joins India, one of the few countries to have implemented a nationwide ban on the Chinese-owned app, over fears that it spreads malicious content.
The country’s government said that “the ban would come into effect immediately and telecom authorities have been directed to implement the decision”.
American politicians are also determined to ban TikTok, claiming that the app is pushing pro-Palestine content over pro-Israel content, amid the ongoing Israel-Hamas war.
TikTok has denied these claims, while Senator Josh Hawley says the app is propagandizing Americans.
Hawley cited a thread by former Tinder executive Jeff Morris Jr. who compared the total views for pro-Palestinian hashtags on TikTok to those of pro-Israel hashtags. He concluded that Israel is losing the TikTok war and that users are incentivized to post “anti-Israel content” in order to gain engagement and increase their following.
TikTok said that this analysis is “unsound”, and that the platform does not promote one side of an issue over another.
The company also said that users in regions like the Middle East and Southeast Asia account for a “significant portion” of views on pro-Palestine hashtags, and that young people were sympathetic toward Palestine long before TikTok existed.
TikTok added, “It’s critical to understand that hashtags on the platform are created and added to videos by content creators, not TikTok … It’s easy to cherry pick hashtags to support a false narrative about the platform.”
Source: Tech Crunch
X also came under fire for failing to crack down on anti-Semitic content amid the Israel-Hamas war, but yesterday, it published some figures and updates on how it’s been coping.
X Safety said it has “actioned” more than 325,000 pieces of content that violate the company’s rules on violent speech and hateful conduct. “Actioned” means taking down a post, suspending the account or restricting the reach of a post.
The company also said that 3,000 accounts have been removed, including accounts connected to Hamas and that it has been working to “automatically remediate against antisemitic content” and “provide its agents worldwide with a refresher course on antisemitism.”
The company detailed many more figures and updates of what it did to speed up the content moderation process.
However, these are X’s figures and there’s no way to verify their veracity.
At the same time, the Center for Countering Digital Hate (the CCDH) released a new report on Tuesday suggesting that X (formerly Twitter) is failing to remove posts containing misinformation, antisemitism, Islamophobia, and other hate speech.
Researchers in the CCDH study reported 200 “hateful” posts about the Israel-Hamas war that breached platform rules, but 98 per cent of the posts remained live seven days after the reports were filed.
When made aware of the CCDH’s report, X directed users to its own figures and updates.
A UK resident took to Reddit after ordering Apple’s latest flagship iPhone from the Apple Store and receiving an Android device in disguise.
Opening the box upon delivery raised some eyebrows. The phone was covered in a screen protector that wasn’t the right one for the handset.
Then when it was turned on, the black areas of the screen, which appear as perfect blacks on an OLED display like the one on the iPhone 15 Pro, were lit up in a way that suggested this was an LCD. It also had a thicker bezel on the bottom of the device that was unlike Apple’s handset.
A shoddy setup process and telltale Android pop-ups confirmed it: this was an Android device in an iPhone skin.
Facebook, YouTube, and TikTok were already installed on the handset, the OS was glitchy, the camera would crash, and the battery settings showed the device had been used before.
The question remains: how did the buyer receive this device when it was ordered directly from Apple? The most likely explanation is that the phone was intercepted in transit and swapped for a fake on its way to the buyer.
The Redditor contacted Apple support and will likely get his iPhone 15 Pro Max. Apple will also likely try to block the stolen device from being used, though the thief will probably have sold it by then.
Source: Tech Spot
And that’s the top tech news for today.
Hashtag Trending goes to air 5 days a week with a special weekend interview show we call “the Weekend Edition.”
You can get us anywhere you get audio podcasts and there is a copy of the show notes at itworldcanada.com/podcasts
For those interested in cybersecurity, you can also check out our hit cybersecurity podcast featuring Howard Solomon, called Cybersecurity Today. It’s rated as one of North America’s top 10 tech podcasts.
I’m your host, James Roy – have a Wonderful Wednesday!