Hashtag Trending Nov. 30 – Amazon’s new generative AI assistant; Cyber pros less likely to get fired post-incident; AI can acquire skills through social learning?

Amazon’s new generative AI assistant, Amazon Q, sounds a lot like OpenAI’s rumoured Q*. Are cyber professionals less likely to get fired after a major cybersecurity incident? And AI threatens to kill clergy jobs.


These and more top tech stories on Hashtag Trending.

I’m your host James Roy.

AWS is not lagging behind; it’s just ‘re:Invent’-ing a bunch of things that many other tech giants released, and re-released, a long time ago.

That reinventing, however, mostly consisted of drawing comparisons with, and taking jabs at, OpenAI and Microsoft.

The comparison could not be more stark than when AWS introduced Amazon Q – its newest generative AI assistant. The name rings a bell, though: Q* started trending out of the blue a week ago, rumoured to be OpenAI’s latest model, one supposedly capable of achieving artificial general intelligence. Did Amazon take inspiration from OpenAI, maybe?

AWS also announced that Agents for Amazon Bedrock are now generally available to customers, which looks very similar to OpenAI’s custom GPTs. And it introduced its first in-house LLMs, while Microsoft announced an in-house-built open-source model called Phi-2.

And AWS finally released an image generator, joining the ranks of OpenAI, Microsoft, and Google.

To be fair, the goliaths are all doing pretty much the same thing in the ongoing AI race, so guess how AWS sought to differentiate itself? Responsible AI.

It detailed its advocacy for responsible AI and announced guardrails for Amazon Bedrock. Will that be enough for AWS to actually set itself apart and pick up the pace?

Source: Analytics India Magazine, TechCrunch

Cyber professionals have long feared getting fired after a cyber incident. But that might be changing.

Trellix’s “The Mind of the CISO” report revealed that only 13 per cent of the 500 chief information security officers surveyed said their company fired people in the first year following a major cybersecurity incident.

Instead, companies are likely to increase cybersecurity budgets in the immediate aftermath of an event.

Forty-six per cent of CISOs said their companies increased budgets for new tools and technologies, 38 per cent created new jobs, and 44 per cent added new contracted services to their cybersecurity program post-incident.

The report, however, notes that post-incident job losses are still happening. Not immediately, but eventually, after a company understands what happened.

Thirty-one per cent of CISOs said their companies fired people more than three years after the event.

The report says, “Perhaps impacts to the team aren’t an immediate change following an incident but occur as time passes, when the dust has settled, and CISOs look to restructure or make team overhauls.”

Source: Axios

Okta has admitted that the scale of its October data breach is potentially much larger than first reported.

The company’s chief security officer, David Bradbury, originally said that the files of just 134 Okta customers, or less than one per cent, were accessed by attackers.

But an update published this morning revealed that data related to every single Okta customer support system user was accessed.

For 99.6 per cent of customers, the data accessed was their full name and email address.

Bradbury said, “While we do not have direct knowledge or evidence that this information is being actively exploited, there is a possibility that the threat actor may use this information to target Okta customers via phishing or social engineering attacks.”

Okta advised all its customers to employ multi-factor authentication and consider the use of phishing-resistant authenticators to further enhance their security.

Source: The Register

Researchers at Google DeepMind claim they have been able to demonstrate that AI can acquire capabilities through social learning.

Social learning is how one individual or animal acquires skills from another by copying.

In a simulated physical task space called GoalCycle3D — a sort of computer-animated playground with footpaths and obstacles — they found AI agents could learn from both human and AI experts across a number of navigational problems, even though the agents had never seen a human or had any idea what one was.

The study said, “Our agents succeed at real-time imitation of a human in novel contexts without using any pre-collected human data. We identify a surprisingly simple set of ingredients sufficient for generating cultural transmission and develop an evaluation methodology for rigorously assessing it. This paves the way for cultural evolution to play an algorithmic role in the development of artificial general intelligence.”

Source: The Register 

The fate of the Apple Card and the Apple Savings Account is up in the air after Apple and Goldman Sachs ended their partnership.

This news comes after rumours swirled that Goldman Sachs was looking for a way out of its deal with Apple. Regulatory filings show that the bank has lost a lot of money on the Apple Card partnership so far.

In a statement to CNBC, Apple said that it will “continue to innovate” for Apple Card customers.

Apple has also been working to bring as many of its financial products in-house as possible. 

Reportedly, it is developing its own payment processing technology and infrastructure, called “Project Breakout,” that would make it less reliant on partners such as Goldman Sachs.

Source: 9TO5Mac

And here’s another one on the nail-biting relationship between AI and religion, which gets more ridiculous by the day.

The UK’s Department for Education found that the country’s clergy, of all things, is among the professions most at risk from AI.

A number of jobs ranked above the clergy as more likely to be taken over by AI. They include telephone salespersons at the top, further education teaching professionals, market and street traders and assistants, legal professionals, and more.

(Psychologists also feature in that list, really.)

But the researchers admitted that the report’s usefulness is limited. They emphasize that “the estimates of which jobs are more exposed to AI are based on a number of uncertain assumptions, so the results should be interpreted with caution.”

Plus, just because you can automate the writing of a sermon does not mean you should, or that doing so would completely erode the quintessentially human experience of religion.

But, some other cases continue to raise eyebrows. Like when 300 churchgoers attended a service led by OpenAI’s LLM in Germany. Or as we reported in a previous episode, when tech entrepreneur Anthony Levandowski decided to bring back his idea of an AI church, and make AI the ultimate God to pray to.

Source: The Register

And that’s the top tech news for today.

Hashtag Trending goes to air 5 days a week with a special weekend interview show we call “the Weekend Edition.”

You can get us anywhere you get audio podcasts, and there is a copy of the show notes at itworldcanada.com/podcasts.

And while we cover cybersecurity stories that we think are of general interest, you can keep really up to date on cybersecurity with our podcast featuring security journalist Howard Solomon. It’s called Cybersecurity Today. It’s rated as one of North America’s top 10 tech podcasts.

I’m your host James Roy.  Have a Thrilling Thursday!
