
Biden discusses the risks of AI with the CEOs of Microsoft and Google


On Thursday, President Joe Biden met with the CEOs of some of the largest AI firms, including Microsoft and Alphabet's Google, and emphasised that they must ensure their technologies are secure before they are put to use.

The popularity of apps like ChatGPT has made the term "generative artificial intelligence" a household phrase this year, spurring a rush among businesses to release comparable products they hope will alter the nature of work.

Millions of users have started experimenting with these tools, which proponents claim can make medical diagnoses, write screenplays, create legal briefs, and debug software. As a result, there is growing worry that the technology could lead to privacy violations, skew employment decisions, and power scams and misinformation campaigns.

Biden, who has himself used and experimented with ChatGPT, discussed the hazards that AI poses to people, society, and national security, according to the White House.


The White House noted that the meeting included a "frank and constructive discussion" about the necessity for businesses to be more open with lawmakers about their AI systems, the value of assessing the safety of such products, and the requirement to defend them against malicious attacks.

Sundar Pichai of Google, Satya Nadella of Microsoft Corp., Sam Altman of OpenAI, and Dario Amodei of Anthropic attended the two-hour meeting on Thursday, along with Vice President Kamala Harris and other administration officials, including White House chief of staff Jeff Zients, national security adviser Jake Sullivan, National Economic Council director Lael Brainard, and Commerce Secretary Gina Raimondo.

In a statement, Harris said that while the technology has the potential to improve lives, it also raises concerns about safety, privacy, and civil rights. The administration is open to adopting new regulations and supporting new legislation on artificial intelligence, she told the chief executives, adding that they have a "legal responsibility" to ensure the safety and security of their artificial intelligence products.


