The backstory: For over a year now, the world has seen the capabilities of generative artificial intelligence (AI). It can be genuinely beneficial to society – for example, tracking deforestation at NASA or helping find cancerous tumors in medical imaging – but the technology also carries risks, like spreading misinformation and creating deepfakes.
There’s also the issue of AI tech firms keeping their models under lock and key, meaning not all models are open source or used to the extent they could be. OpenAI, Google and Anthropic, for example, have kept their models closed. Making a model open source means allowing researchers to fully access and download it, including the algorithms and technology that make the model’s functioning unique. While open-source AI would let more people and organizations use these models to solve major problems and benefit different communities, there’s a downside: there’s no surefire way to keep open-source technology out of the hands of bad actors.
More recently: The problem is that tech companies seem split on whether AI technology needs more guardrails and what government regulation of the industry should look like. And lawmakers often don’t know enough about AI to understand where the problem areas are or how to deal with them. Governments have recently been hosting AI conferences to better understand the technology and secure safety commitments from tech leaders. US President Biden hosted a White House AI conference earlier this year, and China, the US and the EU also agreed to work together at the AI Safety Summit at the beginning of November.
The development: On Tuesday, IBM and Meta announced that they’re teaming up with over 50 other organizations to form the AI Alliance, a group dedicated to open-source AI that aims to make the technology more accessible while focusing on safety. Members include Intel, the Linux Foundation, Sony Group, Stability AI, NASA, the European Organization for Nuclear Research (CERN), Cornell University, Dartmouth College, the University of Tokyo, Yale University and others.
The AI Alliance plans to develop safety and security tools for AI while also releasing more open-source AI models, making the technology more freely available. The group’s goals include: responsibly developing benchmarks and evaluation standards for AI; making a diverse set of open foundation models widely available to address societal challenges like climate change; creating a “vibrant AI hardware accelerator ecosystem” by encouraging contributions and adopting the right technology; backing AI skills building and research on a global scale; building “educational content and resources” to teach the public and policymakers about AI; and creating programs for the open development of AI in “safe and beneficial ways,” while showing how Alliance members use open-source AI responsibly.
One thing to keep in mind, though: the Alliance includes some members who think we need stricter AI regulations and others who oppose more rules in the sector. The group is also still in its early stages, so it doesn’t yet have a full governing board or technical oversight committee.
"Open and transparent innovation is essential to empower a broad spectrum of AI researchers, builders, and adopters with the information and tools needed to harness these advancements in ways that prioritize safety, diversity, economic opportunity and benefits to all," the Alliance said in its statement.
“The AI Alliance is focused on fostering an open community and enabling developers and researchers to accelerate responsible innovation in AI while ensuring scientific rigor, trust, safety, security, diversity and economic competitiveness. By bringing together leading developers, scientists, academic institutions, companies, and other innovators, we will pool resources and knowledge to address safety concerns while providing a platform for sharing and developing solutions that fit the needs of researchers, developers, and adopters around the world,” the statement continued.
“To some degree, but unfortunately, to a large degree, the last year of conversation and dialogue around AI has been focused on a very small number of institutions,” Darío Gil, a senior VP at IBM and head of its research lab, told Fortune. “The reality is that this field is much, much larger than that.”