
Hashtag Trending Apr. 13 – Reddit moderators face an army of ChatGPT-powered bots; scientists determine what knocked down Starlink satellites; can regulation kill open source?

Reddit braces for an AI-generated spam attack, the northern lights explain why Starlink satellites were knocked out, and will regulation kill open source?

 

These stories and more on Hashtag Trending for Thursday, April 13th.

I’m your host Jim Love, CIO of IT World Canada and TechNewsDay in the US – here are today’s top tech news stories.

One of the greatest barriers to using open-source software is – awareness. Often, you’re not aware that there are open-source alternatives in the first place. And even if you are aware, you often have a lot of questions.  What are the alternatives? Which one is best? Are there known vulnerabilities in any of these choices? 

California-based Endor Labs has introduced a new AI tool to help companies choose the best open-source software. They call it DroidGPT.

DroidGPT combines the capabilities of OpenAI’s ChatGPT with Endor Labs’ proprietary risk data to help users research open-source software packages – and now they can do it in a conversational manner, simply by asking questions.

Some of the questions you can ask DroidGPT include:

“What are the best logging packages for Java?” “What packages in Go have a similar function to log4j?” “Which Go packages have the least known vulnerabilities?”

The new service is currently in beta, but Endor Labs has a demo video showing the results generated by the bot, overlaid with risk scores detailing the quality, popularity, trustworthiness and security of each package.
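Endor Labs hasn’t published DroidGPT’s API, but if you’re curious what this kind of conversational package research looks like in code, here’s a minimal sketch using OpenAI’s own Python client. The model name and prompts here are my own assumptions, purely for illustration – DroidGPT’s real answers are overlaid with Endor Labs’ proprietary risk data, which a raw language model can’t provide.

# A minimal sketch of conversational package research using OpenAI's
# Python client. This is NOT DroidGPT's actual API; the model name and
# prompts are assumptions for illustration only.
# Assumes the OPENAI_API_KEY environment variable is set.
import openai

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system",
         "content": "You recommend open-source packages. For each "
                    "suggestion, note its known strengths and risks."},
        {"role": "user",
         "content": "Which Go packages have the least known "
                    "vulnerabilities for structured logging?"},
    ],
)

print(response.choices[0].message.content)

# A production tool like DroidGPT would overlay answers like this with
# independently gathered risk scores (quality, popularity,
# trustworthiness, security) rather than trusting the model's
# unverified claims.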

Source: Endor Labs

Last month, sky observers marveled at the northern lights, even at latitudes further south where you might not normally see them. The sky “danced electric.”

That phenomenon is caused by solar activity. The sun ejects a beam of plasma at 690 kilometres (430 miles) per second. That beam smashes into the Earth’s magnetosphere, creating a geomagnetic storm and producing what we call the northern (or even southern) lights.

These storms are particularly vivid and far-reaching right now because the sun is at a very active part of the solar cycle – what solar physicists call the solar maximum. These geomagnetic storms set up large circulating electric currents between the upper atmosphere and the surface of the earth.

The last time it was this active, it played havoc with sensitive equipment and even knocked out power to parts of North America.

And high above the planet, they create something known as space weather – a warming of a layer called the thermosphere.  What’s the impact of that?

Starlink found out the hard way in February 2022, after it launched 49 of its satellites to an altitude of only about 200 kilometres (130 miles) above Earth’s surface. The satellites experienced atmospheric drag in the warmer thermosphere, deorbited and burnt up on the way down.

But most satellite companies do pay attention to warnings from solar physicists. If they are warned early enough, operators can take action: astronauts can take shelter, and power companies can prepare for a strong storm.

Starlink, however, launched its satellites even though the space weather community had warned about the effects of a geomagnetic storm. The solar maximum is a well-known phenomenon.

Ignoring these conditions cost Starlink millions in lost equipment.

Source: Inverse

Reddit moderators are facing a new spam attack, this time from AI-powered bots.

One moderator said the problem is “pretty bad” right now – several hundred accounts have been removed from the site and more are discovered daily.  

Sarah Gilbert, a moderator of the AskHistorians forum and a postdoctoral associate at Cornell University, said, “They are pretty easy to spot, they’re not in-depth, they’re not comprehensive, and they often contain false information.”

For instance, the two-million-strong AskHistorians forum faced a slew of ChatGPT-generated posts after the tool launched. Gilbert says the frequency has since tapered off, possibly as a result of how rigorously moderators dealt with AI-produced content.

But in February, AskHistorians and several other subreddits were hit by a coordinated bot attack using ChatGPT. The bots were feeding questions into the tool and spitting out its responses at a fast pace through an army of shill accounts.

At the height of the attack, the forum was banning 75 accounts per day.

Unfortunately, removals often have to be done manually because Reddit doesn’t have an automated way to deal with AI-generated spam accounts.

The purpose of the attack remains unclear. Some say it is aimed at testing the mods to see what users can get away with. Others argue it is part of astroturfing and spamming campaigns, or “karma farming,” where accounts are set up to accumulate upvotes over time. 

Some moderators even noticed advertisements sneaked into some posts.

An r/askphilosophy moderator told Vice, “It’s only a matter of time before someone else tries it, and presumably they’re going to get better at evading our quality control efforts. Either that, or they’re getting better at fooling us.”

A study produced before the current generative AI hype found humans struggled to reliably spot AI-produced text. The study said people should check whether the content makes sense rather than look for surface errors or misspellings. ChatGPT and generative AI have closed that quality gap and can produce text that is extremely hard, or even impossible, to detect as machine-generated.

But while Reddit scrambles to come up with an AI detection tool, the job falls mostly on moderators, all of whom are volunteers.

An r/cybersecurity moderator said, “I think a lot of claims about ‘GPT will revolutionize [whatever]’ are bullshit, but I’d bet the farm that traditional social media has a finite lifespan, largely because inauthentic content is becoming so realistic and cheap to make that we’re going to struggle to find who’s real and who’s a bot.”

Source: Vice

A security researcher devised a way to prompt ChatGPT to create a new piece of malware that would escape detection by anti-malware tools.

ChatGPT is not supposed to allow that sort of work. How did the researcher get past those safeguards?

He broke the larger task into several discrete functions and then assembled the code snippets together, creating a piece of data-stealing malware that can go undetected on PCs.

It took Forcepoint researcher Aaron Mulgrew only a few hours of work, and he didn’t do any of the coding himself.

Here’s how Mulgrew’s malware works:

The software lands on a computer via a screen saver app, and the file auto-executes after a brief pause to avoid detection. The malware then finds images, PDFs and Word documents it can steal, breaks them down into smaller chunks, and hides the data inside images via steganography. Finally, the images containing the data make their way to a Google Drive folder, which also helps it evade detection.
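Mulgrew’s actual code was never published, but the steganography step he describes – hiding data in the pixels of an image – is a textbook technique. Here’s a minimal, benign sketch of least-significant-bit steganography in Python using the Pillow library; the file names and payload are placeholders.

# A benign sketch of least-significant-bit (LSB) steganography with
# Pillow. This is NOT Mulgrew's code, which was never published.
from PIL import Image

def hide_bytes(cover_path: str, payload: bytes, out_path: str) -> None:
    """Embed payload bits in the low-order bit of each red channel value."""
    img = Image.open(cover_path).convert("RGB")
    pixels = list(img.getdata())
    # Prefix a 4-byte length header so the payload can be recovered later.
    data = len(payload).to_bytes(4, "big") + payload
    bits = [(byte >> i) & 1 for byte in data for i in range(7, -1, -1)]
    if len(bits) > len(pixels):
        raise ValueError("cover image too small for payload")
    for i, bit in enumerate(bits):
        r, g, b = pixels[i]
        pixels[i] = ((r & ~1) | bit, g, b)  # overwrite the red LSB
    img.putdata(pixels)
    img.save(out_path, "PNG")  # PNG is lossless, so the hidden bits survive

def recover_bytes(stego_path: str) -> bytes:
    """Read the 4-byte length header, then the payload, from the red LSBs."""
    pixels = list(Image.open(stego_path).convert("RGB").getdata())
    bits = [r & 1 for r, _, _ in pixels]
    def as_bytes(bs):
        return bytes(
            sum(b << (7 - j) for j, b in enumerate(bs[i:i + 8]))
            for i in range(0, len(bs), 8)
        )
    length = int.from_bytes(as_bytes(bits[:32]), "big")
    return as_bytes(bits[32:32 + 8 * length])

# Usage (placeholder file names):
#   hide_bytes("cover.png", b"secret report", "stego.png")
#   recover_bytes("stego.png")  # -> b"secret report"

The resulting image looks identical to the human eye, which is exactly why exfiltrating data this way is so hard for conventional tools to flag.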

Mulgrew said in a blog post, “This kind of end to end very advanced attack has previously been reserved for nation state attackers using many resources to develop each part of the overall malware. This is a concerning development, where the current toolset could be embarrassed by the wealth of malware we could see emerge as a result of ChatGPT.”

It’s also why companies like Microsoft are doubling down on using AI to detect and prevent these types of attacks. But in an AI arms race, there are going to be rounds won and lost, and unfortunately, as the old saying goes, the good guys have to win every time; the bad guys only have to win once.

Source: BGR

Last year, European lawmakers introduced two pieces of legislation with the admirable aim of addressing software security, quality and liability. Those who make and market software should be responsible for the damage caused by sloppiness or errors. Penalties would make software developers more careful and ensure that they don’t sacrifice quality for profits.

But what about those who don’t make software to make profits?

The Python Software Foundation (PSF) is voicing its concerns that open-source organizations and even individuals might be held unfairly liable for distributing incorrect code.

The PSF said in a statement on Tuesday: “The existing language makes no differentiation between independent authors who have never been paid for the supply of software and corporate tech behemoths selling products in exchange for payments from end-users.”

The Foundation and other organizations are also urging EU lawmakers to clarify the broad language in the proposed legislation so that open-source organizations and developers are not held responsible for flaws in commercial products that use their code.

The maximum fines against software authors under the law can reach €15 million or 2.5 percent of annual turnover, whichever is greater. For a company with €1 billion in annual turnover, for example, 2.5 percent works out to €25 million, so the percentage figure would apply.

These penalties could discourage developers from contributing to open source and potentially shut down a movement that has given us a vast amount of software, including Linux, Apache servers and two of the largest content management platforms, which between them support the vast majority of websites – and that’s just off the top of my head.

Open-source software powers the internet and a great deal of the worldwide software infrastructure. Once again, politicians who don’t understand software are making rules that could have disastrous effects. As the legendary Joni Mitchell once sang, “you don’t know what you’ve got til it’s gone.”

Source: The Register

OpenAI has a new program that will offer rewards to users who report vulnerabilities in its artificial intelligence systems.

The move comes after ChatGPT faced a ban in Italy over alleged privacy breaches. As we discussed in yesterday’s episode, France, Canada, the U.S., Spain and China are also investigating the chatbot.

The new program, called the OpenAI Bug Bounty Program, will reward users based on the severity of the vulnerability detected – starting at $200 and going up to $20,000.

However, the program does not include incorrect or malicious content produced by OpenAI systems.

Source: Reuters

Tech is a whirlwind: sometimes exciting, sometimes distressing, sometimes depressing. But we can always come back to our furry friends to regain some stability and enjoy pure moments of happiness.

Yes, we are celebrating National Pet Day, and ITWC, IT World Canada, has an exciting contest to celebrate our little companions.

Over the past week, IT professionals have sent us the best photos and videos of their furry heroes to win the IT Pet of the Year award.

We have selected the six grand finalists – from Bartholomew batting at cursors on a monitor, to Maya keeping one reader’s feet warm during cold winter days, to Jessie conversing in her deep husky voice.

They all sound wonderful, but the winner will receive a special grand prize worth $500 – a TCL 30 5G smartphone with 128GB of internal storage, a 5010mAh long-lasting battery and a powerful MediaTek Dimensity 700 octa-core chipset.

Head on over to our website and vote for your favorite. You can find the URL in the text version of this podcast on ITWC, or just go to itworldcanada.com and search for pets.

That’s the top tech news for today. Hashtag Trending goes to air five days a week with the daily tech news, and we have a special weekend edition featuring an in-depth interview with an expert on a tech development that is making the news.

Follow us on Apple, Google, Spotify or wherever you get your podcasts. Links to all the stories we’ve covered can be found in the text edition of this podcast at itworldcanada.com/podcasts.

We love your comments – good or bad. You can find me on LinkedIn, Twitter, or as @therealjimlove on our Mastodon site, technews.social. Or just leave a comment under the text version at itworldcanada.com/podcasts.

I’m your host, Jim Love, pay special attention to your pet, and have a Thrilling Thursday!
