
OpenAI, the well-known AI firm, suffered a security breach in 2023 but chose not to inform the FBI or any other law enforcement agency, or the public, The New York Times reported on July 4. Executives disclosed the breach to employees at an April 2023 all-hands meeting but did not announce it publicly, reasoning that the attacker had not accessed any information about the company's customers or partners.

According to sources familiar with the matter cited by the Times, the attacker gained access to OpenAI's internal messaging systems and lifted details about the firm's AI technology from employee discussions on an internal online forum. The attacker did not, however, reach the systems where OpenAI develops and houses its artificial intelligence, and no code was accessed.

OpenAI executives did not view the incident as a national security threat, believing the attacker to be a private individual with no known ties to a foreign government, and on that basis declined to report the breach to the FBI or other law enforcement agencies. Leopold Aschenbrenner, a former OpenAI researcher, raised concerns about the incident and urged the company to take stronger measures to prevent the theft of its secrets by China and other foreign states.

Responding to Aschenbrenner’s claims, OpenAI spokesperson Liz Bourgeois said the company disagreed with many of his assertions about its security, including his characterization of the breach in question, which she said OpenAI had addressed and shared with its board before Aschenbrenner joined the company. Aschenbrenner has alleged that his termination, officially for leaking information, was politically motivated; Bourgeois denied that his security concerns led to his departure.

Matt Knight, OpenAI’s head of security, pointed to the company’s ongoing investment in security measures and acknowledged that AI development carries inherent risks that must be managed. The Times also noted its own legal dispute with OpenAI and Microsoft over alleged copyright infringement, a case OpenAI maintains is without merit.

Despite these challenges, OpenAI says it continues to prioritize security in its operations and remains committed to the safe development of artificial general intelligence (AGI). The incident underscores the complexity of the cybersecurity landscape in the AI industry and the importance of vigilance in protecting sensitive information from unauthorized access.