Fundraising impact: AI backlash soon?
Brett here:
AI — amiright?
If you're like me, you're equally fascinated by what AI can do today and horrified by what it might do sooner rather than later.
It seems to me there are 3 main viewpoints people have on AI right now:
- Meh. It's mostly hype.
- This is awesome, it's the future, I'm obsessed.
- This is bad, maybe historically bad, I'm obsessed.
If you use or think about AI at all, you are "early" according to a May 2024 survey published by the Reuters Institute and the University of Oxford and shared by Wharton professor Ethan Mollick.
I've been following AI religiously since ChatGPT first blew my mind when it arrived in November 2022. I hosted a webinar on AI in fundraising in January 2023. I've curated a list of 54 accounts on Twitter/X — top AI leaders, skeptics, optimists, and pundits — and I track it daily. I read their hot takes, announcements, and newsletters. (Zvi Mowshowitz has the most incisive, comprehensive AI newsletter.) I use AI (mostly GPT-4o and Claude 3 Opus) almost every day for researching, brainstorming, and summarizing.
As I acquaint myself with this society-transforming tech, I wonder and I ponder and I hypothesize and I prepare for what looks to be coming down the pike. Also, I strive for detachment to avoid burnout or undue worry.
Still, yeah, I worry.
Today's leading voices in AI debate whether the new intelligences they're creating will save us, disempower us, or destroy us. Yet their progress shows no sign of slowing. The acceleration continues, fueled by ever more compute, ever more training data, algorithmic innovation, and global race dynamics.
I've steeped myself in things AI long enough that I have some takeaways I think you might find helpful.
Fundraising impact: AI backlash soon?
I anticipate a widespread AI backlash.
Soon.
Within a year.
Here's how I'll lay out my thinking for you:
- Current AI limitations
- Future AI capabilities
- Recent AI controversies
- Some possible near futures
- The backlash I anticipate
- How a backlash might affect your nonprofit
- Questions you might want to keep in mind
Current AI limitations
Today's best AIs are superhuman in speed of processing, tirelessness, and knowledge.
They are, however, limited in their working and long-term memories, in some subject areas (e.g., spatial relationships), in some capabilities (e.g., writing from a singular viewpoint), and in agency (the ability to take multi-step actions in the real world).
They also hallucinate, rather like people do. You can't always trust them.
Future AI capabilities
The limitations mentioned above are projected by many, if not most, AI experts to be temporary. The weaknesses will be replaced by strengths. This won't take long.
AI frontier labs such as OpenAI, Anthropic, Google, and xAI are betting billions that they'll hit the AI jackpot of human-level (AGI) and then superhuman-level (ASI) systems by building ginormous compute clusters with state-of-the-art chips, in some cases powered by nuclear plants.
The next models now in training use more compute (chips), more data (including synthetic and multimodal data: video, audio, text, code, etc.), and longer training runs (time the AIs are given to crunch the data and learn).
Advances in AI chip design plus plummeting costs promise better AI models every year or so.
The trajectory we are on seems to be slowly-then-all-at-once: progress that feels gradual for a while, then arrives in sudden, dramatic leaps.
Most of us are used to slower progress. The iPhone 15 is marginally better than the iPhone 14, and so on.
This is different.
AI is progressing wildly faster. The transformer architecture (the "T" in GPT) behind ChatGPT and nearly all of the other current top models was only invented in 2017. Google did the research and made it public; other companies like OpenAI then built on it.
For example, AI-generated images have improved dramatically in just the past two years.
Similar progress is being made in AI-generated text, audio, coding, video, and robotics.
AI capabilities grow ever more powerful. And, like nuclear tech, AI is dual use: it can be leveraged for creation (power generation for nuclear; scientific research for AI) or destruction (bombs for nuclear; bioweapons engineering for AI).
Perhaps it should be no surprise that AI controversies have been in the news lately.
Recent AI controversies
You've probably heard of most or all of these:
- OpenAI apparently trained an AI voice to sound like Scarlett Johansson after asking for and not receiving her permission to train on her voice.
- Visual artists are organizing against AIs training on their work without permission.
- So are music artists.
- The New York Times is suing OpenAI over this.
Hollywood has inked AI protection clauses into labor agreements and is bracing for disruption.
- OpenAI apparently pressured exiting employees to sign potentially illegal nondisclosure and nondisparagement agreements or else risk losing lots of money in earned equity.
- Even people helping create the most powerful AI systems think their creations might soon take their jobs.
I could go on.
These controversies, I believe, are only the beginning of a backlash that will mount as more people use the latest AIs and as the newest models are released.
At some point, we might hear a collective gasp from across the land once the stakes have become clearer.
Some possible near futures
Imagine, if you will, it's late November 2024...
- The US presidential election is over.
- OpenAI decides it's now safe to release its next AI model, GPT-5, as the risk of political deepfakes influencing the election has passed.
The intelligence and capabilities of the new model shock the average person. OpenAI claims GPT-5 is 5-10 times smarter than GPT-4o.
- People who try it out for themselves fear it's too good.
- They tell their friends and family.
- They question what this all means for the workforce, for their job, for the economy, for geopolitical risk.
The widespread backlash might start in earnest then.
Or the backlash might come after a popular celebrity (think Tom Hanks and Rita Wilson announcing they had Covid just before the pandemic was declared) falls prey to an AI-based personal attack (a bogus video, perhaps).
Or the backlash could happen after an AI is used by an individual, an organization, or a government to conduct a scam, launch a cyberattack, or worse.
Once enough people sense they are or soon may be negatively impacted by AI, the backlash, I anticipate, will swiftly spread and become a chaotic force in the world.
How a backlash might affect your nonprofit
Right now, there are plenty of reasons to harbor serious ethical/moral reservations about many aspects of AI.
This will likely snowball and become more obvious and pressing as the public at large grasps what AI can do — good and bad — and how it might affect them personally.
Suddenly, AI could go from ignored, tolerated, or niche to a third rail you don't want to touch.
Potentially, your donors then start asking you,
"What's your AI policy?"
(Well, do you have one?)
Questions you might want to keep in mind
Granted, there are reasons techno-optimists give for tolerating all of these risks. (Many of them rally around the cry, "Accelerate!")
They say it's our only hope to avoid the meta-crisis caused by the interplay of climate change, falling birth rates, the wars in Europe, the fragility of the supply chain, microplastics in the ocean, and so on.
Even if you believe this to be the case, I think it's only human nature that many people will likely be quite upset about a small number of Silicon Valley-types creating a technology that displaces us as the smartest "species."
If and when a backlash comes, will you be as ready as you can be?
I think it's worth considering and discussing the following questions related to any potential AI backlash:
- If donors ask us about our AI policy, how will we respond?
- Which, if any, aspects of AI do we find to be unacceptably ethically/morally problematic?
- Will we use AI images? When? How?
- Will we use AI copy? When? How?
- What about AI tools integrated into programs such as Canva or Copilot in Microsoft Word?
- If some AI companies are more problematic than others, what does this mean for us?
- What are the risks to allowing AIs to access our donor data and other proprietary information?
- Do we need legal review for any of this?
- If a backlash comes tomorrow, how might what we do today appear to others?
- What's the right thing to do?
This is all very thorny.
Much of the modern world is problematic.
Some of what's problematic is unavoidable.
We are in uncharted territory.
Time to make our own maps.
Note: sorry for the heavy read. We are normally really fun, I swear! :)