
Canadian privacy commissioner to probe ChatGPT


ChatGPT is being investigated by Canada’s Privacy Commissioner for possibly using personal information without permission.

“AI technology and its effects on privacy is a priority for my Office,” Privacy Commissioner Philippe Dufresne said in a statement. “We need to keep up with – and stay ahead of – fast-moving technological advances, and that is one of my key focus areas as Commissioner.”

The investigation into OpenAI, the operator of ChatGPT, was launched in response to a complaint alleging the collection, use, and disclosure of personal information without consent.

The statement didn’t say who or what organization filed the complaint. Because the investigation is now ongoing, the privacy commissioner’s office won’t answer questions.

It comes after Italy’s data protection authority, known as Garante, temporarily banned use of the online version of the chatbot, accused Microsoft Corp-backed OpenAI of failing to check the age of ChatGPT users, and cited the “absence of any legal basis that justifies the massive collection and storage of personal data” to “train” the chatbot. In response, OpenAI geofenced access to ChatGPT from users in Italy.

(It may or may not be a coincidence, but Atlas VPN reports that right after access was blocked in Italy, downloads of virtual private network (VPN) apps jumped 400 per cent, suggesting some people hope a VPN can bypass the ban.)
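OpenAI hasn’t described how its block is implemented, but IP-based geofencing is typically a request filter along these lines. This is a minimal sketch, assuming a Flask service; `country_for_ip` is a stand-in for a real GeoIP database lookup, and the toy IP rule is invented for the demo:

```python
# Minimal sketch of IP-based geofencing -- not OpenAI's actual code.
from flask import Flask, abort, request

app = Flask(__name__)

BLOCKED_COUNTRIES = {"IT"}  # ISO 3166-1 country codes to deny


def country_for_ip(ip: str) -> str:
    """Stand-in for a real GeoIP lookup (e.g. a MaxMind database query)."""
    return "IT" if ip.startswith("151.") else "CA"  # toy rule for the demo


@app.before_request
def geofence():
    # Runs before every request; reject clients resolving to a blocked country.
    country = country_for_ip(request.remote_addr or "")
    if country in BLOCKED_COUNTRIES:
        abort(451)  # 451 Unavailable For Legal Reasons


@app.route("/")
def index():
    return "service available"
```

A filter like this only sees the connecting IP address, which is why the VPN spike is plausible: traffic routed through a VPN arrives from the exit server’s address, outside the blocked region.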

According to Reuters, privacy regulators in France and Ireland have reached out to counterparts in Italy to find out more about the basis of the ban. Reuters also quoted the German commissioner for data protection telling the Handelsblatt newspaper that Germany could follow Italy by blocking ChatGPT over data security concerns.

ChatGPT falls into a category of artificial intelligence systems called generative AI. These systems can return answers to queries in natural language, with the ability to compose paragraphs, feature articles and even graphics — and help threat actors compose better phishing messages and malicious code.
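For readers who haven’t tried it, querying such a system programmatically amounts to sending a natural-language prompt and getting generated text back. A minimal sketch using OpenAI’s Python client as it existed at the time of writing (the v0.x ChatCompletion interface; the prompt and placeholder key are invented for the example):

```python
# Minimal sketch of querying a generative AI model through OpenAI's
# Python client (v0.x interface, current as of early 2023).
import openai

openai.api_key = "sk-..."  # replace with a real API key

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user",
         "content": "Draft a two-sentence summary of GDPR's lawful bases."}
    ],
)

# The reply comes back as ordinary text in the first choice.
print(response.choices[0].message.content)
```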

Related content: Microsoft leverages ChatGPT in Security Copilot

Researchers have been working on generative AI systems for some time, but when OpenAI released ChatGPT last November people were stunned at its capabilities.

So stunned that last week, big names in technology including Elon Musk and Steve Wozniak signed an open letter calling for a six-month pause on developing advanced generative AI systems until questions have been answered about their ability to create disinformation and take away jobs.

In a recent panel discussion, two Gartner analysts expressed doubt that smart AI systems will replace jobs on a huge scale.

For more on the pros and cons of a pause, hear Beauceron Security chief executive officer (CEO) David Shipley’s thoughts on the most recent episode of ITWC’s Cyber Security Today Week in Review podcast.

Last month, OpenAI had to take ChatGPT offline after reports of a bug in an open-source library that allowed some users to see titles from another active user’s chat history. The company also acknowledged that the first message of a newly-created conversation may have been visible in someone else’s chat history if both users were active around the same time.
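OpenAI hasn’t published the mechanics, but the class of bug described — one user receiving another’s data — typically arises when a shared connection or cache consumes replies out of step with requests. A minimal sketch of that failure mode, with all names invented for illustration (this is not the library’s or OpenAI’s actual code):

```python
# Illustrative sketch of a request/reply mismatch on a shared
# connection -- invented for illustration, not the actual bug's code.
from collections import deque


class SharedConnection:
    """Replies queue up in order and are read first-in, first-out."""

    def __init__(self) -> None:
        self._pending_replies: deque[str] = deque()

    def send(self, user: str) -> None:
        # The server answers every request it receives.
        self._pending_replies.append(f"chat titles for {user}")

    def read_reply(self) -> str:
        return self._pending_replies.popleft()


conn = SharedConnection()

conn.send("alice")
# alice's request is cancelled before her reply is read, leaving
# a stale reply sitting on the shared connection ...

conn.send("bob")
print(conn.read_reply())  # -> "chat titles for alice": bob sees alice's data
```

The usual fix for this class of bug is to invalidate or drain a connection when a request is cancelled, rather than returning it to the pool with a reply still pending.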

There’s no shortage of views on the possible dangers of generative AI. Brad Fisher, CEO of Lumenova AI, a platform to help developers manage AI risks, this week penned a blog looking at five potential problems. The news site VentureBeat has a blog by Fero Labs CEO Berk Birand on avoiding the problems of generative AI.

For its part, OpenAI felt in February it had to address what it says are misconceptions about its product.

“We want as many users as possible to find our AI systems useful to them ‘out of the box’ and to feel that our technology understands and respects their values,” OpenAI’s blog says in part.

“Towards that end, we are investing in research and engineering to reduce both glaring and subtle biases in how ChatGPT responds to different inputs. In some cases ChatGPT currently refuses outputs that it shouldn’t, and in some cases, it doesn’t refuse when it should. We believe that improvement in both respects is possible.

“Additionally, we have room for improvement in other dimensions of system behavior such as the system ‘making things up.’ Feedback from users is invaluable for making these improvements.”

Stephen Almond, executive director for regulatory risk at the U.K. Information Commissioner’s Office, didn’t call for a temporary ban. But in a blog published Monday, he did say it is important for AI developers “to take a step back and reflect on how personal data is being used.”

“While the technology is novel, the principles of data protection law remain the same – and there is a clear roadmap for organizations to innovate in a way that respects people’s privacy,” he wrote.

“Organizations developing or using generative AI should be considering their data protection obligations from the outset, taking a data-protection-by-design-and-by-default approach. This isn’t optional – if you’re processing personal data, it’s the law,” he added. U.K. privacy law is based on the European General Data Protection Regulation (GDPR).

If you’re developing or using generative AI that processes personal data, he said, you need to ask the following questions:

  1. What is your lawful basis for processing personal data? If you are processing personal data you must identify an appropriate lawful basis, such as consent or legitimate interests.
  2. Are you a controller, joint controller or a processor? If you are developing generative AI using personal data, you have obligations as the data controller. If you are using or adapting models developed by others, you may be a controller, joint controller or a processor.
  3. Have you prepared a Data Protection Impact Assessment (DPIA)? You must assess and mitigate any data protection risks via the DPIA process before you start processing personal data. Your DPIA should be kept up to date as the processing and its impacts evolve.
  4. How will you ensure transparency? You must make information about the processing publicly accessible unless an exemption applies. If it does not take disproportionate effort, you must communicate this information directly to the individuals the data relates to.
  5. How will you mitigate security risks? In addition to personal data leakage risks, you should consider and mitigate risks of model inversion and membership inference, data poisoning and other forms of adversarial attacks.
  6. How will you limit unnecessary processing? You must collect only the data that is adequate to fulfil your stated purpose. The data should be relevant and limited to what is necessary (see the sketch after this list).
  7. How will you comply with individual rights requests? You must be able to respond to people’s requests for access, rectification, erasure or other information rights.
  8. Will you use generative AI to make solely automated decisions? If so – and these have legal or similarly significant effects (e.g. major healthcare diagnoses) – individuals have further rights under Article 22 of U.K. GDPR.
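To make the data-minimization point (question 6) concrete, here is a minimal sketch of stripping obvious personal identifiers from text before it is retained or used for training. The patterns are simplified assumptions for the example; production systems use dedicated PII-detection tooling rather than two regexes:

```python
# Illustrative sketch of data minimization: redact obvious personal
# identifiers before text is stored or reused. Simplified patterns,
# invented for the example.
import re

PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}


def minimize(text: str) -> str:
    """Replace matched identifiers with typed placeholders."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text


print(minimize("Reach Jane at jane.doe@example.com or +1 416-555-0199."))
# -> "Reach Jane at [EMAIL] or [PHONE]."
```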

In Canada, the government has proposed a new Artificial Intelligence and Data Act, which would impose obligations on “high-impact” AI applications. However, that legislation is still early in the parliamentary process, and even after it passes, the government will have to develop regulations before it comes into effect.
