Apple has agreed to follow a set of voluntary safety and security standards for artificial intelligence (AI) put forward by President Biden's administration, according to a new report from Bloomberg.
The administration announced Friday that the technology giant is joining the ranks of OpenAI Inc., Amazon.com Inc., Alphabet Inc., Meta Platforms Inc., Microsoft Corp. and others in committing to test their AI systems for any discriminatory tendencies, security flaws or national security risks.
Here are some of the standards outlined in Biden's October 2023 executive order on AI:
New Standards for AI Safety and Security:
● Require that developers of the most powerful AI systems share their safety test results and other critical information with the U.S. government.
● Develop standards, tools, and tests to help ensure that AI systems are safe, secure, and trustworthy.
● Protect against the risks of using AI to engineer dangerous biological materials.
● Protect Americans from AI-enabled fraud and deception by establishing standards and best practices for detecting AI-generated content and authenticating official content.
● Establish an advanced cybersecurity program to develop AI tools to find and fix vulnerabilities in critical software.
While Apple has introduced numerous AI features, branded Apple Intelligence, for iOS 18, recent reports suggest the most impactful ones won't arrive until a software update next year. Please download the iClarified app or follow iClarified on Twitter, Facebook, YouTube, and RSS for updates on Apple AI.