IBM, Meta launch AI Alliance with over 50 tech members to advance ‘open, safe, responsible’ AI

Hayo News
December 6th, 2023

IBM and Meta launched the new AI Alliance today, in collaboration with over 50 founding members and collaborators around the world, including AMD, Anyscale, CERN, Cerebras, Cleveland Clinic, Cornell University, Dartmouth, Dell Technologies, EPFL, ETH, Hugging Face, Imperial College London, Intel, INSAIT, Linux Foundation, MLCommons, MOC Alliance operated by Boston University and Harvard University, NASA, NSF, Oracle, Partnership on AI, Red Hat, Roadzen, ServiceNow, Sony Group, Stability AI, University of California Berkeley, University of Illinois, University of Notre Dame, The University of Tokyo, and Yale University.

According to the group, these leading organizations across industry, startups, academia, research and government have come together to “support open innovation and open science in AI.” The AI Alliance, it says, “is focused on fostering an open community and enabling developers and researchers to accelerate responsible innovation in AI while ensuring scientific rigor, trust, safety, security, diversity and economic competitiveness. By bringing together leading developers, scientists, academic institutions, companies, and other innovators, we will pool resources and knowledge to address safety concerns while providing a platform for sharing and developing solutions that fit the needs of researchers, developers, and adopters around the world.”

In an interview with VentureBeat, Sriram Raghavan, vice president of IBM Research, insisted that the timing of the announcement — coming so soon after the drama at OpenAI and as the EU AI Act enters final negotiations — was coincidental. “It is time for a more nuanced and a richer discussion around AI,” he said. While he wouldn’t deny the timing was prescient, he noted that discussions began over the summer, prompted by a shift over the past year toward more closed, proprietary AI development, as well as debates about AI risk that may stifle innovation.

“This wasn’t shaped by the last two weeks,” he said. “This was shaped by a sense that this is an important direction. This is a voice that needs to be heard…this was a belief that a focus on open innovation was needed and was missing.”

The AI narrative, he explained, had to become bigger than “just which models are risky and what bad people will do with that,” adding that “we’re not minimizing in any way the reality of the risks associated with models being indiscriminately out there with access to anybody, but we don’t believe the approach to address that is to simply turn this into a proprietary, small group of institutions building models and then over-regulating.” Instead, he said it is important to develop safe, responsible AI and “actually to do it in the open, to be able to come together to define benchmarks to qualify what it means for models to behave safely.”

In a quote provided by Meta, Nick Clegg, its president of global affairs said: “We believe it’s better when AI is developed openly – more people can access the benefits, build innovative products and work on safety. The AI Alliance brings together researchers, developers and companies to share tools and knowledge that can help us all make progress whether models are shared openly or not. We’re looking forward to working with partners to advance the state-of-the-art in AI and help everyone build responsibly.”

Raghavan emphasized the project-based, flexible approach of the AI Alliance, with six general areas of focus:

- Develop and deploy benchmarks and evaluation standards, tools, and other resources that enable the responsible development and use of AI systems at global scale, including the creation of a catalog of vetted safety, security and trust tools. Support the advocacy and enablement of these tools with the developer community for model and application development.
- Responsibly advance the ecosystem of open foundation models with diverse modalities, including highly capable multilingual, multi-modal, and science models that can help address society-wide challenges in climate, education, and beyond.
- Foster a vibrant AI hardware accelerator ecosystem by boosting contributions and adoption of essential enabling software technology.
- Support global AI skills building and exploratory research. Engage the academic community to support researchers and students to learn and contribute to essential AI model and tool research projects.
- Develop educational content and resources to inform the public discourse and policymakers on benefits, risks, solutions and precision regulation for AI.
- Launch initiatives that encourage open development of AI in safe and beneficial ways, and host events to explore AI use cases and showcase how Alliance members are using open technology in AI responsibly and for good.

Reprinted from Sharon Goldman.
