At a celebratory meetup last Friday evening on Manhattan’s West Side, two of the leading young artificial intelligence software firms, Stability.ai and Lightning.ai, rallied software developers and startup owners to the cause of keeping AI open-source.
The hashtag for the evening: #keepAIopensource.
Also: The best AI chatbots: ChatGPT and other noteworthy alternatives
About 700 guests, including developers, representatives of large tech firms, and startup owners, gathered for hors d’oeuvres and drinks as sunset streamed through the windows of the sixth-floor event space of Glass House by the Hudson River. The lively crowd huddled around laptops on high-top tables to discuss demos of programs making use of AI algorithms in one way or another.
Banners on the stage showed a colorful illustration of what appeared to be coders cheerfully working together, with the slogan “Keep AI open source.”
Shortly before 8 p.m., Emad Mostaque, founder and CEO of Stability.ai, took the stage alongside Lightning.ai CEO William Falcon. The two took turns presenting their pitch for why AI should remain an open-source affair.
Also: Why open source is essential to allaying AI fears, according to Stability.ai founder
Stability.ai is best known for Stable Diffusion, a service that lets you type a phrase and have it converted into an image in any number of styles.
Lightning.ai is best known for a library of functions that plug into PyTorch to smooth the task of training programs, including the large language models that underlie OpenAI’s ChatGPT.
The evening’s event was clearly a response to the sudden tilt by prominent companies toward keeping secret the technical details and source code of AI programs. In March, OpenAI broke with precedent in the field by declining to disclose technical details of its latest large language model, GPT-4.
Google emulated OpenAI this month when it declined to disclose technical details of its new PaLM 2 language model.
Both gestures break with decades of disclosure by AI scientists, and luminaries of the field have warned that such secrecy can have a chilling effect on AI research. Mostaque has warned that closed-source AI programs are incompatible with the role of AI going forward, and he has pledged to be “the leader of open even as everyone else does closed.”
Also: Google follows OpenAI in saying almost nothing about its new PaLM 2 AI program
Falcon presented a mash-up of media messages and lessons from the history of open-source software. He noted that Lightning’s software had built upon the success of Torch, developed 20 years ago, which was later integrated into Meta’s PyTorch library. “Stuff that’s possible today wouldn’t have been possible without the Torch people,” said Falcon.
Falcon played the original Macintosh computer commercial of 1984, evoking Big Brother, on the stage’s projection screen, and also called up images from Apple’s iconic “Think Different” ad campaign when the late Steve Jobs returned to the company.
Also: ChatGPT’s success could prompt a damaging swing to secrecy in AI, says AI pioneer Bengio
“At the end of the day, a lot of things are happening,” said Falcon. “It [the AI movement] could go fully closed, or fully open.”
Alluding to remarks by OpenAI CEO Sam Altman that code needs to be kept secret to protect the world from bad actors, Falcon remarked, “Any tech we create, someone will find a bad use for it.”
Falcon ended with a photograph of Jobs giving a defiant gesture to IBM, to applause and laughter from the audience.
Taking his turn at the microphone, Mostaque told the audience that the battle for open-source software is about a tool, AI, that allows people to tell stories. “You should be able to tell your stories,” he told the audience.
Also: With GPT-4, OpenAI opts for secrecy versus disclosure
“We want to bring Stability to every country in the way that the people in that country will be able to control their own destiny,” said Mostaque. “Because that is the story of your life, not a panopticon that is controlled by a few companies.”
The large language models are “so important they will rule our lives,” and therefore, “should not be closed,” said Mostaque. “They should not be black boxes because who will be making the decisions?”
Mostaque predicted that the use of such programs will spread globally, and emphasized that populations in each country will need to use them to tell their own stories.
“Why should an entire nation not be able to create?” said Mostaque.
Also: 3 ways OpenAI says we should start to tackle AI regulation
“What happens when we give one of these models to every child on earth?” He urged: “Think about it: Does anyone think that every child on earth will not have their own AI?”
“That is the mission,” said Mostaque, “to activate humanity’s potential.”