Nearly half of AI researchers recently surveyed say there's a 10% chance that AI will end humanity. At the same time, AI is estimated to add $7 trillion to annual global GDP over the next 10 years. Not since the invention of nuclear energy has anything had such a fearsome duality of power, and AI is improving at a far faster pace.
Prominent AI researchers have signed an open letter calling for a six-month pause on the development of AI systems more powerful than GPT-4. And while an industry-wide research stoppage is implausible and impractical with so many companies chasing market share, there is precedent for how a more careful and thoughtful movement can rise up amid hyper-fast innovation: the Slow Food movement.
Founded in 1986 “in Italy after a demonstration on the intended site of a McDonald’s at the Spanish Steps in Rome,” Slow Food is a global organization that acts as a counterbalance to the increasing speed and impact of the industrial food system. I have no professional affiliation with the Slow Food organization, but I support its manifesto, which calls for a food system that’s good, clean, and fair, defined as:
GOOD: quality, flavorsome and healthy food
CLEAN: production that does not harm the environment
FAIR: accessible prices for consumers and fair conditions and pay for producers
It’s a simple declaration of the fact that food needs to nourish people and care for the planet and the people who produced the food. How might we apply the Slow Food philosophy to AI development? Can it help inspire AI researchers to advocate for responsible innovation? What would it mean for AI to be good, clean, and fair?
Good, Clean, and Fair AI
For starters, Slow Food calls for good food to hit the trifecta of being high quality, flavorful, and healthful. For AI, this means that generative AI creates content that is as truthful and free from hallucinations as possible. Companies have already made significant improvements to the quality of chatbot output in the past 6 months, and the work to optimize them will continue indefinitely. There’s still a lot to be done to reduce alignment issues and to rein in unintended AI behaviors, but the industry is aware of these problems and is working on them. Ideally, researchers will soon have a better handle on the interpretability problem: the lack of understanding of how chatbots actually decide what to say. A lot of the work to make generative AI tools more “good” is already underway, and it is arguably the more straightforward, yet still challenging, objective compared to making “clean” and “fair” AI.
Good AI could also call for healthy relationships between humans and AI, which can mitigate the extent to which AI becomes an addictive tool of distraction like social media has become. There are a lot of positive, productive things that generative AI can bring us, but society must take care not to offset those gains with use cases that degrade human potential. The internet, email, and social media all promised to make the planet more connected and productive. And while they delivered in many areas, they also turned many of us into doom-scrolling zombies who can easily waste hours going down TikTok or YouTube rabbit holes with nothing to show for it afterward.
“At the end of the road of the pursuit of technological sophistication appears to lie a playhouse in which humankind regresses to nursery school,” says Jaron Lanier in You Are Not A Gadget. Social networks like Facebook aimed to bring people closer together through connection, yet they also made us more divided as their algorithms prioritized content that appealed to our most primitive instincts like sex and conflict. Like hyper-processed junk food, using AI generated or curated content to numb and distract us can make us feel like we’re doing something but ultimately leave us empty inside.
Slow Food defines clean food as being produced in a way that doesn’t harm the environment. In the literal sense of the planetary environment, generative AI model training is energy intensive, which does need to be managed. By one estimate, “training a single model can gobble up more electricity than 100 US homes use in an entire year,” a reference to the arrays of GPUs that crunch training datasets for days or months to create a sophisticated model like ChatGPT.
But generative AI can also be used to harm our societal environment, for example by creating misinformation farms that pollute the internet with fake and incendiary news. The upcoming 2024 American presidential election will be an interesting test of whether and how AI-generated deepfakes are used to sway public opinion. The ability to create content at the click of a button cuts both ways, and we need mechanisms, such as watermarking, that can detect and flag AI-generated propaganda and misinformation.
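One proposed family of watermarking schemes has the generator quietly favor a pseudorandom “green list” of tokens, so a detector can later check whether a text contains suspiciously many of them. The toy sketch below shows only the detection side of that idea; the hashing rule and thresholds are illustrative assumptions, not any production scheme:

```python
import hashlib

def is_green(prev_token: str, token: str) -> bool:
    # Toy rule: hash the (previous token, token) pair and treat
    # half of all possible pairings as the "green list".
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return digest[0] % 2 == 0

def green_fraction(tokens: list) -> float:
    # Fraction of tokens that are "green" given their predecessor.
    hits = sum(is_green(a, b) for a, b in zip(tokens, tokens[1:]))
    return hits / max(len(tokens) - 1, 1)

def looks_watermarked(tokens: list, threshold: float = 0.75) -> bool:
    # Ordinary human text should hover near 0.5 by chance; a generator
    # that prefers green tokens pushes the fraction well above that.
    return green_fraction(tokens) >= threshold
```

Human-written text lands near a 50% green fraction by chance, while watermarked output drifts well above it; that statistical gap is the signal a detector keys on, without needing access to the model itself.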
Fairness is the third element of the Slow Food manifesto, which calls for food to be sold at fair and accessible prices to consumers, with equally fair and equitable conditions and pay for food producers. For AI, this can be applied to ensure that AI business models are free from content creator exploitation and manipulative ad models that use conversation history to personalize advertiser messages.
There is a brewing movement of content creators calling for better acknowledgement and compensation when their work is included in the datasets used to train generative AIs. The secret sauce of any AI is the training data used to “teach” it to write or make images. Many researchers claim to use only training data that is freely available to the public, but it’s difficult to ensure that no copyrighted content is sitting inside enormous training datasets that can contain billions of pieces of content. Even if you could pinpoint all the copyrighted content within a training dataset, there is no established way yet to measure out compensation to the owners of that intellectual property. Currently, the makers of AIs like ChatGPT derive all the benefits from their chatbots without compensating those whose content is contained in the training data. There’s no good solution yet, but there needs to be one.
In food, there’s a similar economic dynamic whereby food brands and middlemen extract much of a product’s retail value, while growers and others farther up the supply chain are rewarded with a disproportionately small percentage of the retail price. Like Big Food building corporate empires on the backs of farmers, Google, Facebook, and OpenAI are creating AI empires on the backs of content creators who aren’t compensated enough for their contributions to training an AI to write or draw.
To avoid heavy concentrations of power like we see in the food industry, we need to find ways to create equitable and fair supply chains for generative AI that give all stakeholders a shot at the rewards from AI. Creating a “nutrition facts” label for AI training datasets would be a good first step so at least users can be aware of what’s going into their models and would be a foundation to building more transparency into AI systems. It’s good to know where your food comes from and how it was made, and having that same provenance information for data can be just as beneficial for AI.
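As a sketch, such a label could be as simple as a structured record attached to every published dataset. The field names below are hypothetical, chosen only to illustrate the kinds of provenance facts a “nutrition facts” label for training data might disclose:

```python
from dataclasses import dataclass, field, asdict

@dataclass
class DatasetLabel:
    """A hypothetical 'nutrition facts' label for an AI training dataset."""
    name: str
    sources: list              # where the content came from (web crawl, licensed corpus, ...)
    size_documents: int        # how much content is inside
    license_status: str        # e.g. "public domain", "mixed", "unverified"
    creator_compensation: str  # how (or whether) content creators are paid
    known_risks: list = field(default_factory=list)

label = DatasetLabel(
    name="example-recipes-corpus",
    sources=["public-domain cookbooks", "volunteer submissions"],
    size_documents=1_000_000,
    license_status="public domain",
    creator_compensation="none required (public domain)",
    known_risks=["possible duplicated recipes"],
)
print(asdict(label)["license_status"])  # -> public domain
```

Just as a food label doesn’t guarantee a healthy diet but lets shoppers make informed choices, a machine-readable record like this wouldn’t solve compensation on its own, but it would make a dataset’s provenance auditable.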
We are also in a tender phase of development where tech giants are trying to figure out how to make money from generative AI. It’s undetermined which direction the various players will ultimately head, but we need to learn lessons from the social media age, where maximizing user attention by nearly any means necessary was the revenue model. The incentive to win clicks and attention encouraged clickbait and sensationalism to reign over nuanced and substantive content. This dynamic has left society’s minds frazzled and polarized, as it’s easier to lull yourself into a self-affirming echo chamber than to be challenged by an opposing view. AI can certainly add to this kind of damaging content, and it can also be used to manipulate us into buying things far more effectively than any well-coordinated Instagram ad campaign ever could. It’d be a shame if one of the most profound technological inventions of the 21st century was mainly used as a tool to mindlessly fuel rampant consumerism.
The blueprint is there for what a good, clean, and fair industry looks like, and Slow Food principles are a good parallel for what responsible generative AI looks like. But even as the Slow Food movement exists, big, fast food also exists and dominates the global market. The Slow Food mentality isn’t the mainstream ideology of the food industry, yet it is there for eaters who want something more thoughtful in food. Right now, a handful of Silicon Valley giants are setting the agenda for where generative AI goes, but this doesn’t need to be the case forever.
Just as thousands of food startups created healthier, more sustainable products to chip away at Big Food’s dominance over the grocery store, open source AI tools can be used to create alternative choices to Big AI. Even today, with some basic programming knowledge and moderate effort, anyone can download software to create their own AI chatbot or image generator.
“You, as a user, should be able to write up a few pages of ‘here’s what I want; here are my values; here’s how I want the AI to behave’ and it reads it and thinks about it and acts exactly how you want because it should be your AI.”
Sam Altman
CEO, OpenAI
For instance, one could download a base chatbot model that already understands the basics of writing in a general sense, then have it read through a million recipes to gain specialized cooking knowledge. The initial training of a model can cost thousands if not millions of dollars in computing power, but pre-trained open source models are freely available and can be customized by anyone with additional training data that is also easily found online free of charge. Like an undergraduate college student who chooses to attend graduate school for additional training, we can take pre-trained chatbot models (the undergraduates) and further shape their development by providing graduate training in the form of specialized datasets. The food system doesn’t thrive in a monoculture, and neither will the AI industry, so AIs representing all kinds of views should exist.
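The undergraduate-to-graduate idea can be illustrated with a deliberately tiny stand-in for a real language model. The bigram “model” below is a toy, not a pre-trained network, but the two-phase pattern (broad general training first, a specialized corpus layered on afterward) mirrors what happens when an open source chatbot is fine-tuned on recipe text:

```python
from collections import defaultdict

class BigramModel:
    """A trivial stand-in for a pre-trained language model:
    it just counts which word tends to follow which."""
    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))

    def train(self, text: str):
        words = text.lower().split()
        for a, b in zip(words, words[1:]):
            self.counts[a][b] += 1

    def next_word(self, word: str) -> str:
        followers = self.counts.get(word.lower())
        if not followers:
            return "<unknown>"
        return max(followers, key=followers.get)

# "Undergraduate" phase: broad, general-purpose text.
model = BigramModel()
model.train("the cat sat on the mat and the dog sat on the rug")
print(model.next_word("whisk"))  # -> <unknown>  (no cooking knowledge yet)

# "Graduate" phase: keep the same model, layer on specialized recipe text.
model.train("whisk the eggs then fold the eggs into the batter")
model.train("fold the eggs gently fold the eggs slowly")
print(model.next_word("whisk"))  # -> the  (specialized knowledge acquired)
```

Real fine-tuning adjusts millions of neural network weights rather than word counts, but the economics are the same: the expensive general training is done once and shared, while the cheap specialization step is within reach of individuals.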
This process will only get easier as more user-friendly AI tools emerge and as chatbots get better at assisting us in coding new AI models. The knowledge barrier to programming is falling, and soon anyone will be able to use plain language to ask an AI chatbot to help create a new chatbot that shares their worldview and desired knowledge base. Like Big Food, Google, Facebook, and OpenAI have an outsized influence on their industry. But empowering people to create viable alternatives to the mainstream tools is an important check on the power these firms have over the AI movement.
Putting the power of AI development in the hands of everyone creates a cyber-diversity of sorts, where a range of varied and specialized generative AIs can exist to better serve the needs of niche communities. There are already custom chatbots designed to specialize in agriculture, law, recipes, or even fast food drive-through ordering, and more will come. The better-for-you food movement of the last 20 years enabled many sub-genres of food to emerge, like Keto, gluten-free, Paleo, and more. The same can happen in AI, where people are empowered to create things beyond the mainstream that better fit their needs and values.
Big Tech is going to do what it takes to scale up its AIs and create a massive revenue pipeline. Aside from enacting draconian regulations to slow Big Tech down or mounting consumer boycotts to express our displeasure, it’s going to be difficult to convince the entire industry to slow AI development. Even in the unlikely event that Google stopped all work on AI, OpenAI and Facebook would simply fill the void.
But with the increasingly accessible power to create our own alternatives to mainstream generative AI, individuals arguably have a bigger say in the direction of AI than they do in food. Slow Food exists and thrives against a backdrop of ever-faster food devoid of nutrition, flavor, or environmental benefit. It’s an oasis for individuals who opt out of the prevailing fast food culture. Slow AI can be that oasis for the tech world, ensuring that individuals always have access to AI that’s good, clean, and fair.
Footnotes
3 Highlights from my current Generative AI reading list
What Keeps a Leading AI Scientist Up At Night by Benjamin Hart - New York Magazine
The Andy Warhol Copyright Case That Could Transform Generative AI by Madeline Ashby - WIRED
Hype grows over “autonomous” AI agents that loop GPT-4 outputs by Benj Edwards - Ars Technica
My email is mike@thefuturemarket.com for questions, comments, consulting, or speaking inquiries.