It started as an experiment ...
Global AI experts rarely agree on anything, yet they all signed a 22-word statement: "Mitigating the risk of extinction from A.I. should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war." We wondered, "How should we respond?"
For the last 25 years, ServiceSpace has run very large content portals (around good news, kindness, etc.), and as an experiment one weekend, we tossed all that content into "ServiceSpaceGPT" -- and the AI wowed us! Bonnie tried to get the AI to self-reflect: does AI have consciousness? :) There were so many other examples, like "What should I do if my values don't align with my job?" The answers were radically different from ChatGPT's.
Instead of wading through Google's keyword matches, you could now generate synthesized responses across the entire dataset via a creative, conversational interface -- in any language. We had demoed a "compassion bot" at our annual retreat in 2018, but the potential here felt different.
But then ...
Quickly, we noticed 12 core differences between our approach and ChatGPT's. Then we did what we've always done -- we thought of giving it away. That very day, Sharon Salzberg agreed to contribute all her content to a "SharonBot". Almost as a joke, a volunteer asked it, "How do you make a soufflé?" The SharonBot's response was totally classic! Then we created a "GandhiBot" and a few organizational bots (like Spirituality & Practice's 65K articles and the Greater Good Science Center). Even when we asked these bots tricky questions, we were able to train them to hold significant nuance: "What do you do when you're confused and have to make a decision?", "How do you let go of a friend?", "Respond in Chinese: how can I grow my heart?"
People started gathering ...
About 90 pioneering volunteers organized into a "think tank" of sorts to experiment with how AI could circulate "intelligence of the heart." We engaged in various dialogues, including a retreat in Mt. Shasta with indigenous elders and tech innovators, which deepened the conversation. A lot more flowed. We quickly became host to 40 bots, each with different immediate applications -- from KarunaNews to 21-Day Interfaith Compassion Challenge. We engaged with top leadership at Google and Accenture around some of these explorations, and also leaned into some of the many challenges it posed. Many wondered, "Are volunteers going to be outsourced to AI? And humanity?" The new capacities at our fingertips felt surreal, and turning a blind eye towards it felt irresponsible. We opted to stare it in the face and ask, what would love do?
Pilot Projects emerged ...
Dozens of repositories of wisdom -- from mindfulness teacher Sharon Salzberg, to Vinoba Bhave (Gandhi's successor), to the Buddha at the Gas Pump podcast (BatGap) -- started coming together. BatGap founder Rick Archer was particularly taken by it, and put a menu button on all his website pages. He also invited 600+ speakers to add content to the BatGap Bot and even create their own bots (for those with larger data repositories). Put together, these repositories mushroomed into hundreds of thousands of documents. We launched ai.servicespace.org.
Isn't all this content already online?
Yes, it's available in most cases, but there are two problems. One is that Google search by keywords forces you to do sense-making of all the results, whereas AI can *dramatically* speed up that process. Second, the data is "weighted" by the dominant paradigm; that is, related data is often very "far" apart. It's like going to a football game for meditation tips; you could get lucky, but it's not the optimal context for that search.
Putting related content in a structured data corpus can dramatically support discovery. Add in a conversationally interactive interface across any language, and it's even more potent. Manu put it this way:
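The contrast can be sketched in a few lines of Python. The `embed()` function below is a deliberately crude bag-of-words stand-in for a real embedding model, and the three-document corpus is invented; the point is only that ranking by vector similarity over a curated corpus surfaces related content, where keyword matching across the open web leaves the sense-making to you.

```python
import math
import re
from collections import Counter

def embed(text):
    # Toy embedding: a bag-of-words vector. A real system would use a
    # neural embedding model, so that "calm the mind" and "meditation"
    # land near each other even without shared keywords.
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a, b):
    # Cosine similarity between two sparse word-count vectors.
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# An invented three-document "corpus": two related teachings and one outlier.
corpus = [
    "A daily meditation practice begins with watching the breath.",
    "Kindness grows when we practice small acts of service.",
    "Football scores and match highlights from the weekend.",
]

def search(query, docs, top_k=1):
    # Rank documents by similarity to the query rather than keyword lookup.
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:top_k]

print(search("How do I start a meditation practice?", corpus))
```

In a curated corpus, everything is already "near" the topic, so even this crude similarity ranking pulls up the meditation teaching rather than the football page.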
"The original 'large language models' were remarkable because they were trained with mountains of unstructured data. Recently, multiple teams have shown that highly structured, high-value content can be even more important. We put the instruct dataset generated by OpenAI in this category as well. They used RLHF content created by a team of low-cost humans to better capture basic ideas. A thought in the sector now is that we need progressively more skilled, highly educated people to create finer and finer datasets. That's great for technical knowledge, but how about values-based knowledge/wisdom? Where do we find the world's foremost experts in compassion? The ServiceSpace community has a fair shot at being that place. "
One of the hottest issues in AI these days is "alignment" -- ensuring that AI systems serve the best interests of humans. OpenAI just announced that 20% of its resources will be dedicated to this.
But align to what kind of human? We want to push that conversation well beyond "economic man" and towards pro-social expressions of flourishing that include compassion, kindness and joy. Moreover, we want to make it easy for the large language models to adopt this kind of alignment as well. It turns out this is not as hard a hill to climb as it might appear at the outset.
From Going Big to Going Deep ...
Chatbot hype will wane soon, but we sense that AI's disruption will be sustained and dramatic ("more significant than the wheel", as one AI godfather put it). If Yuval Harari is right, "data-ism" may end up as the "religion" of the future. But where large language models try to go as wide as possible, it will be the more decentralized and contextualized "small language models" that will go as deep as possible.
Moving from "horizontal" to "vertical" allows us to bridge content, community and consciousness. That is, deliver value to people where they're at (content); leverage that to cultivate deeper relationships, and head towards "noble friendships" through small acts of service; and cultivate a co-created field of inner transformation that reconnects us into our intrinsic field of compassion. Typically, content innovators don't know how to build community; social activists can't always expand belonging into universality; and meditators struggle to translate non-dual insights into practical applications. So our attempt is to draw a throughline across all three spheres, to leverage "artificial" intelligence's content capacities to build in-person communities and elevate peer-to-peer reciprocity towards a gift ecology and a more "infinite game" of consciousness.
So how does it work, if I have my own bot?
From his own experience, Rick describes the mechanics like this:
A ChatBot functions optimally if it has a lot of content in its “data corpus”. Content can consist of books, blog posts, audio, or video. In addition to uploading your data, you would define a “credo” – a set of instructions that controls your bot’s behavior.
ServiceSpace’s growing network of bots is voluntarily interlinked. This means that if a bot owner indicates a willingness to share his or her “data corpus”, other bot owners may opt to include it in their own. So for instance, the BatGapBot’s data corpus includes David Buckland’s and Peter Russell’s, because they have been BatGap guests and their content is relevant to mine. The Institute of Noetic Sciences will be getting a bot, and if they decide to share it, I’ll include it. But BatGap is eclectic. Others may wish to limit their bot to their own content.
Unlike ChatGPT which is “horizontal”, incorporating all the information on the Internet, these bots are “vertical”, focusing on a particular niche. The ServiceSpace infrastructure currently uses OpenAI’s GPT4 as the baseline “large language model”, overlays that with your content, generates contextual “embeddings” to personalize responses, and then aligns it to fit the larger intent of the Bot's "credo".
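That flow can be sketched roughly as: retrieve the passages from the bot's data corpus most relevant to the question, then hand them to the language model together with the credo as the system instruction. Everything below (the credo text, the corpus, and the function names) is invented for illustration, not ServiceSpace's actual implementation, and a toy bag-of-words `embed()` stands in for real contextual embeddings.

```python
import math
import re
from collections import Counter

# The "credo" acts as the system instruction that aligns every response.
# This credo text is invented for illustration.
CREDO = (
    "You are a niche, values-based bot. Answer only from the passages "
    "provided, with warmth and care, and say so when you do not know."
)

def embed(text):
    # Toy bag-of-words stand-in for the contextual embeddings a real
    # pipeline would generate with an embedding model.
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def build_messages(question, corpus, top_k=2):
    # 1. Retrieve the passages in the data corpus closest to the question.
    q = embed(question)
    passages = sorted(corpus, key=lambda d: cosine(q, embed(d)), reverse=True)[:top_k]
    # 2. Assemble the chat messages: credo first, then passages + question.
    return [
        {"role": "system", "content": CREDO},
        {"role": "user",
         "content": "Passages:\n" + "\n".join(passages) + "\n\nQuestion: " + question},
    ]

# An invented mini-corpus for a teacher's bot.
corpus = [
    "Loving-kindness begins by wishing yourself well.",
    "Generosity is the first of the ten paramis.",
    "Our retreat center closes in December.",
]
messages = build_messages("How do I begin a loving-kindness practice?", corpus)
```

The `messages` list would then be sent to the underlying large language model (e.g., via a chat-completion API call), so the model answers from the retrieved passages, shaped by the credo rather than by the open Internet.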
Some have expressed concern about copyright issues. Thus far in 2023, copyright law considers it "fair use" for AI systems to train on previously copyrighted material. In most cases, non-generative AI (like these bots) can use such material without permission. This may not be the case with "generative AI," but the use of chatbots is not considered generative of new works. For more context, read about a highly copyright-protective view supporting the rights of publishers.
Note that if you upload your copyrighted books to your bot or mine (if you choose not to have a bot), the material in those books may inform a particular response, but it won’t be quoted verbatim. Nor will anyone be able to read or download your books’ content. One way of understanding this is, let’s say you are interested in a particular topic, such as astronomy, and you read a lot of books on it. If someone starts asking you questions about astronomy, you’ll be able to give them well-informed answers, and even recommend specific books, but you won’t be offering verbatim quotes (unless you are Rain Man).
I'm in. How can I start?
- experiment with an existing bot -- just log in at https://ai.servicespace.org/ and you'll be able to ask questions of various bots. Ask some really hard questions, and questions that don't have one simple answer, and see how it responds. :)
- experiment with your own bot -- we can create a bot for you with the ServiceSpace dataset, where you can edit your bot's "credo" and learn how to control the bot's responses with the credo's instructions.
- experiment with your own bot and your own data -- this would be the final step. Uploading data requires you to gather it and a volunteer team to feed it to the bot.
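A credo is just a set of plain-language instructions. As an illustration only (this is an invented example, not an actual ServiceSpace credo), one might read:

```
You are the KindnessBot. Draw only on the uploaded corpus of kindness
stories. Answer warmly and concisely, in the user's own language. If the
corpus doesn't cover a question, say so rather than speculating.
```

Editing lines like these is how a bot owner tunes tone, scope, and the boundaries of what the bot will answer.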
This Isn't About Bots ...
Beyond bots, our hope is to create a "polyculture" field of datasets around pro-social values. "Pro-social" is the buzzword of the day, defined as "the beliefs, attitudes, and behaviors that promote the well-being and welfare of others, with an emphasis on cooperation, helping, sharing, and altruism."
We can all imagine a GandhiBot, MandelaBot and TeresaBot, but how do we build a "many to many" field of datasets? And can someone create her own bot with a combination of PermacultureBot, MyMother'sBot, and BuddhaBot? What will it take to nurture infinite combinations of wisdom? As we've seen throughout history, mainstream market solutions are biased in the opposite direction -- toward a monoculture. And this time, it's faster than ever. Television took 75 years to reach a million users while ChatGPT took 5 days. How will we add friction to all this momentum so we have space to ask the harder questions?
Can we now shift track with this most potent innovation humanity has seen? Can we bend the larger arc of AI towards a more decentralized and distributed polyculture field of shareable data commons? Can we lead that in a non-commercial context, driven by intrinsic motivations? Can we elevate data-to-data connections to heart-to-heart relationships, and can that field of sacred connections support an emergence that could reduce human suffering and circulate greater compassion?
We don't know, but it seems critical that we try. Back in 1999, when we started ServiceSpace, we didn't feel like the Internet would magically repair the world's ills. But we used it to "make it cool to give" and change ourselves along the way. Similarly, we are now attempting to leverage AI as an excuse to generate volunteer opportunities, whose subtle corpus of heartful intentions might contribute to a collective threshold of wisdom -- and perhaps alleviate some suffering.
Shantideva's poetry in the 8th century still feels so apt:
“For as long as space endures, and for as long as living beings remain, until then, may I too abide to dispel the misery of the world.”