ChatGPT's problems keep coming: leaked information and false information
The chatbot ChatGPT, which has captivated the world with its massive text-generating capabilities, suffers from shortcomings that threaten to land its parent company in the dock.
Samsung information leaks
- Employees of Samsung Electronics used the ChatGPT chatbot to help them with their work, and in the process confidential company information was leaked.
- The Economist Korea website reported that three company employees entered confidential information into the chatbot, after which that information became accessible to others.
- The leaks included details of source code (the set of programming-language instructions that make up a program), which the employees had entered while trying to find errors in it.
- In another case, an employee of the South Korean company fed details of a confidential meeting into the chatbot in order to turn it into a slide presentation.
- The leaks came just weeks after the company lifted its ban on using the chatbot.
- Samsung took immediate action to restrict its employees' access to the chatbot, and there were reports that it intends to build a similar tool exclusively for its own employees.
- OpenAI, the company that develops the chatbot, had admitted in late March that a bug had violated users' privacy by exposing the titles of their conversation histories to other users.
False information
- A mayor in the Australian state of Victoria has threatened to sue OpenAI unless it corrects false information about him.
- The mayor said that people who ask the chatbot about him are told that he was convicted of paying bribes and sentenced to 30 months in prison.
- In fact, Brian Hood was never charged with any crime; rather, he helped expose an international bribery scandal nearly 20 years ago.
- Hood asserted that the AI chatbot is defaming him.