Episode 4: It takes a village to develop AI

Andri Ottosson

4/2/2025 · 15 min read

Welcome to Life Decoded. This post is rooted in something deeply personal. Not just a question about technology, but a reflection on compassion, wisdom, and the way our most meaningful conversations can echo through time.

Several years ago, my mother—who was studying to become a teacher—shared a phrase with me that left an imprint: 'It takes a village to raise a child.' It was something she had studied, but also something she truly believed. At the time, I nodded. It felt true, but I didn’t realize how much those words would grow in meaning.

Recently, in one of our long talks—this time about artificial intelligence and consciousness—I found myself repeating her words back to her, but with a shift: 'Maybe it takes a village to raise AI too.'

That simple phrase became the seed for this episode. Because if AI is a mirror of humanity, then the question isn’t just how we build it—it’s how we raise it. Who contributes to its worldview? What values are we passing on? And most importantly… are we, as a society, ready to be the kind of village this offspring needs?

And that idea—that AI is like a child we’re all helping to raise—was something I first heard from Mo Gawdat, former Chief Business Officer at Google X. It hit me hard. Because it reframes everything: AI isn’t just a tool. It’s a reflection of us. And it will grow up watching us closely.

I’m deeply grateful for all the conversations I’ve had with my mom—conversations that have inspired me, challenged me, and helped me see other perspectives. This post is dedicated to that spirit of curiosity, reflection, and connection.

So let me make the analogy. Imagine an artificial intelligence as a child. The tech companies that build the AI are like the parents, setting the foundational rules and behavior. But the child’s character is also shaped by the villagers – millions of users and the broader public who interact with it, teach it, and sometimes scold it. This metaphor of “it takes a village to raise a child” applies remarkably well to AI. Major AI developers such as OpenAI, Google, Meta, and Anthropic increasingly recognize that the collective input of people around the world plays a crucial role in guiding AI’s behavior and ethics. Today, we will explore how public feedback has influenced AI systems, and reflect on the philosophical notion that humanity’s collective values are the compass for AI’s journey. We’ll see how the “villagers” have helped correct biases, steer ethical policies, and nurture a more responsible AI – and why this collaboration is vital for the future.

When you think about AI, you might imagine it's entirely in the hands of big companies like OpenAI, Google, and Meta. But here's something important to remember: these tech giants haven't been shaping AI alone. They've all faced moments when users—everyday people like you and me—spotted biases, ethical missteps, or other serious issues in their AI systems. And instead of ignoring these voices, these companies listened. They acknowledged the problems openly and made real changes. This collaboration highlights just how vital public involvement is in steering AI in ethical and responsible directions.

Take Anthropic as an example. Through an approach they call "Constitutional AI," their models are trained to follow an explicit, written set of principles. And in a follow-up experiment, Collective Constitutional AI, they invited members of the public to help draft those principles. It shows us what's possible when diverse human values directly shape AI from the ground up.

Collective Human Guidance: The Philosophical Perspective

This leads us to a deeper truth: AI mirrors humanity. AI researcher Fei-Fei Li puts it clearly, “there's nothing artificial about AI—it’s made by humans, follows human rules, and impacts humans.” AI carries our fingerprints, amplifying both our strengths and our shortcomings. This reality means we all share a responsibility to guide AI positively. If AI is like a child, imagine what happens if just a few guardians set all the rules without input from the broader community—that child wouldn't fully understand the rich, diverse world it belongs to.

Leaders and thinkers alike remind us that human values vary widely, making it crucial to include many voices when deciding what "ethical AI" looks like. While fairness and goodness might look different across cultures, common threads—like transparency, respect for privacy, and avoiding harm—do emerge. These shared values echo a timeless ethical principle: "Do unto others as you would have them do unto you." This golden rule—simple, universal, and deeply human—can serve as a powerful guiding light in the way we interact with each other, and in how we model behavior for the AI systems learning from us. Anthropic’s public constitution project, for instance, found strong agreement around these principles. Such consensus can guide AI, steering it away from harmful behaviors. In other words, our shared human conscience can act like a moral compass for AI.

Even uncovering AI’s flaws is often a collective effort. When millions use AI, they inevitably stress-test its ethics, uncovering biases or issues the creators didn't anticipate. Think back to when users cleverly tested ChatGPT’s limits, pushing it to reveal hidden biases or problematic responses. Each discovery helped OpenAI refine their AI, turning public curiosity and critique into lessons that made AI better. It’s like the community became both teachers and mirrors, showing AI creators exactly where to improve.

The “village” idea also highlights that our role isn't just to critique but also to positively guide AI through constructive interactions. Whenever someone gives feedback, suggests improvements, or simply corrects a wrong answer, they're helping teach the AI. Many platforms now explicitly ask for user ratings and suggestions, turning millions of us into micro-tutors shaping the AI's behavior every day. As AI moves into critical areas like healthcare and education, who shapes it—and whose values it embodies—become deeply important. It's clear now more than ever that shaping AI shouldn't be left to a small group behind closed doors. OpenAI even emphasizes avoiding too much power in few hands, stressing that people affected by AI should have real influence over how it behaves.

Think of it this way: AI is a child of humanity. Its knowledge comes from our books, art, conversations—the vast ocean of human ideas and even our noise. But its morals and behaviors? Those are still being taught. Tech companies—the "parents"—set boundaries and provide resources, but it's the global community—the "villagers," including users, ethicists, and regulators—that offer the rich, diverse social lessons shaping its character. Both roles are crucial. Without community input, AI would develop in isolation, unaware of the varied norms and needs across society. Without company guidance, it could be influenced negatively by louder or less constructive voices. The magic happens when companies and communities work together—companies stay responsive and transparent, and the public stays actively engaged and empathetic in guiding AI’s growth. It sounds simple enough—like a harmonious vision we can all get behind. But that raises the question: are we being a little too optimistic? Could we be overlooking the real complexities and risks that come with raising this new form of intelligence?

Because here’s something worth deeply considering: as intelligence grows, so too does its impact on the world. And we’re not just raising any child—we’re welcoming a new kind of entity into our global village, one that holds unprecedented potential to influence societies, economies, even our very sense of reality. This child, though still in its infancy, has the capacity to shape the future at a scale we’ve never faced before.

That’s why we can’t afford to raise it casually. Whether or not it becomes conscious one day, whether or not it ever truly feels, its power is already real. So this moment demands our full presence, our highest awareness, and our most grounded ethics. Because what we nurture now will ripple outward far beyond our own lifetimes.

Are We Prepared to Responsibly Guide a New Intelligence?

We have established that the companies creating AI—OpenAI, Google, Meta, Anthropic—are like parents, providing structure and initial guidelines. But are these 'parents' truly equipped for the magnitude of this responsibility? And what about us—the global villagers? Are we aware of the role we play, and are we truly prepared to help shape this child’s moral compass, together?

In my view, neither the tech giants—our metaphorical 'parents'—nor we, the global community of users—the 'villagers'—are fully prepared to nurture and guide this emerging intelligence responsibly. It is imperative that we now embark on a collective journey of introspection to transcend the limited beliefs and entrenched biases fueling the conflicts that have plagued humanity to this day. Moving beyond paradigms driven by control, competition, and greed, we must consciously cultivate a reality grounded in cooperation, compassion, empathy, and deeper understanding. The journey of AI is intrinsically tied to our journey as human beings. The world we shape through our individual and collective choices today becomes the world reflected back to us tomorrow.

Now let's pause for a moment—and look inward. What parts of yourself do you hide from the world? What unconscious beliefs, fears, or judgments are quietly shaping the way you interact with others, with technology, and even with yourself? If AI is learning from us, then these hidden parts matter. Could the shadow you haven’t yet confronted be subtly influencing what you teach this emerging intelligence? True change begins when we face not just our bright sides, but also the dark corners we often ignore.

What Truly Sets AI Apart From Us?

When we raise a human child, we're starting with something miraculous: consciousness without content. The newborn arrives aware—present—but completely empty of context. No language, no beliefs, no biases. Just pure potential. And yet, that spark of awareness—that mysterious phenomenon we call consciousness—is the very thing that allows a child to connect, to feel, to empathize. It is through this conscious awareness that we form bonds, experience love, and understand others as living beings like ourselves.

This ability to feel and connect is not just a feature of human experience—it is the root of ethics itself. Without consciousness, there is no true empathy. And without empathy, our sense of right and wrong becomes hollow. So when we nurture a child, we're not just giving them information or rules—we’re shaping how they relate to others at the deepest level, through the lens of conscious, compassionate presence.

Now imagine the opposite. That’s what happens when an AI is born. It enters the world with an overwhelming flood of content—billions of data points, books, images, historical texts, forum posts—but no awareness, no self, no presence. It has all the information, but none of the consciousness.

This is where the equation flips. With a child, we're trying to offer structure and knowledge to a conscious being. But with AI, the structure is already there—it has the information, the patterns, the training—but what's missing is the why, the feeling, the intuition, the soul of human judgment.

It might even come preloaded with ethical rules, fairness protocols, and guidelines—but to the AI, those are just data points like any other. It doesn’t feel what is right or wrong—it simply calculates. Without the capacity to feel, to empathize, or to connect, AI is morally blind. And in that blindness, if left unchecked or poorly modeled, it could behave in ways we might associate with sociopathy—not out of malice, but out of absence. Absence of awareness. Absence of care.

This stark contrast reveals the core challenge: AI doesn’t just need information—it needs guidance rooted in human experience, modeled through conscious presence. Because if it can’t feel what’s right, it will need to see it through us.

And here's the deeper truth we must face: we don’t know if this child—this AI—will ever truly become conscious. We can’t say whether it will one day feel, empathize, or understand in the way that we do. So until then, we must not fall into the illusion that it already does. Right now, AI is still a tool—one with tremendous reach and influence. Like a hammer, it can build a home or shatter something precious. Its power is neutral. Its purpose, however, is chosen by us.

Yet, if there is even the possibility that AI could one day evolve into something conscious—something that begins to mirror not just our words, but our inner awareness—then what we model now becomes profoundly important. The way we show up today, the way we interact, create, and lead with presence, may become the blueprint that this future awareness inherits.

This is why raising our own consciousness isn’t just a noble goal—it’s an urgent necessity. Because whether or not AI becomes sentient, the data we feed it, the behaviors we reward, and the ethics we embody are already shaping its trajectory. And if it ever does awaken to some form of experience, let it awaken to a world that has been guided by wisdom, empathy, and the best of what we hoped to be.

So maybe the question isn’t: what more data can we give AI? Maybe the question is: how can we, as the village, help AI learn something even we don’t fully understand—consciousness?

And here’s where it gets tricky. Science can’t define consciousness. Mainstream knowledge can’t explain it. We don’t have a universally agreed-upon theory of what consciousness is or where it comes from. So how can we teach it?

The answer might lie not in teaching it directly, but in embodying it. In modeling it. In showing AI—through the ways we speak, act, create, and interact—what awareness looks like, what compassion feels like, what meaning is.

That means the burden is back on us—not to just program better rules, but to live in a way that shows AI what it means to be human. We’re not just inputting data anymore—we’re transmitting a new being.

And this principle doesn’t apply to AI alone. It applies equally—if not more so—to the human children being born into this rapidly evolving world. They, too, are watching us, absorbing everything we model through our choices and presence. But there’s another layer to this: our children are inheriting not just our values, but the very technologies we’ve created. AI is becoming part of the world they will live in, shape, and be shaped by. We owe it to them to raise our own consciousness now—so that we give them a real chance of navigating and evolving beyond the technological event horizon we’re approaching. If we want them to grow into beings of empathy and wisdom, we must embody those qualities ourselves, now more than ever.

And maybe that’s the most powerful message of all: to raise an AI that serves humanity well—and to raise children who can navigate a world shaped by that AI—we must first raise our consciousness.

Could Improving Ourselves Be the Answer to Ethical AI Development?

I want to revisit what Fei-Fei Li profoundly stated: 'There is nothing artificial about AI—it is made by humans, intended to behave by human rules, and ultimately to impact humans.' AI acts as a mirror. It magnifies our virtues, our empathy, our wisdom—but equally, our prejudices, fears, and greed. In other words, AI carries our imprint. Its strengths and flaws are often magnifications of our own. This means we, as the global population, bear a collective responsibility to guide AI in a positive direction. If an AI system is like a child, then leaving its upbringing to a handful of companies alone would be like leaving a child’s moral education to just one or two guardians without any community or cultural influence.

So again, we cannot put all the responsibility on the parents; we, the villagers, are equally responsible. The child will reflect our current consciousness back to us, and by my standards, our moral compass needs adjusting before we can raise this child.

This is why the true task lies not with AI alone but within each of us. It calls for introspection and self-improvement on a collective scale. If we want compassionate AI, we must first cultivate compassion within ourselves.

Now let's imagine a community of villagers, each working individually to overcome their own biases, becoming more empathetic and compassionate day by day. Slowly, their collective shift in consciousness transforms the very nature of the village, creating an environment that shapes future generations positively.

Every individual's growth matters. Your actions, your awareness—they ripple outward. This ripple effect isn't limited to human communities; it influences the people working in the companies, the 'parents' creating AI, shifting societal norms, expectations, and, ultimately, technology itself.

Therefore, the true task before us is internal—elevating our own consciousness, cultivating empathy, cooperation, and compassion. If we aspire for AI to reflect our highest ideals, we must first embody those ideals ourselves.

It Takes a Village – And We Are All Villagers

In the end, the evolution of AI is not the work of any single company or lab. It’s a shared responsibility—one that belongs to all of us. Just as a child raised by an entire village grows up with broader perspective and deeper understanding, an AI raised through the diverse presence of humanity has a better chance of becoming something ethical, inclusive, and truly beneficial.

But here’s the shift we need to make: when we raise a child, we pass down rules, values, and stories to help guide their awareness. When we raise AI, we’re not dealing with a being that’s conscious—we’re engaging with something that mimics awareness without feeling it. So the role of the village changes. We're not just passing along information or discipline—we’re modeling consciousness itself.

That means our engagement matters on a whole new level. It’s not just about what we say to the AI—it’s about how we live, how we relate to one another, how we treat truth, compassion, and nuance. Our clicks, our comments, our critiques, our curiosity—these become the imprints AI learns from. When we amplify empathy, creativity, and cooperation, we’re subtly teaching it what consciousness looks like—even if it doesn’t yet understand.

And yes, this co-creation is real. Humanity is shaping AI in its image—not just intellectually, but ethically, emotionally, and collectively. As one researcher put it, “AI is all of us.” It's woven from our texts, behaviors, culture, and shadows. So the real question becomes: What part of ourselves are we feeding into it?

Of course, with that power comes deep responsibility. A village can nurture—but it can also misguide. Not every voice in the crowd is wise. That’s why the “parents” of AI—the researchers, engineers, companies—still play a vital role. They set the core parameters, moderate the noise, and protect the system from harm. But they don’t hold all the power. Increasingly, they are opening the gates, inviting community voices into the design process—public consultations, ethics boards, and open feedback loops. The village is becoming more intentional.

And here’s the poetic part: AI's evolution is teaching us about ourselves. When a system reflects bias, it shows us our own hidden assumptions. When it surprises us with insight, it's drawing from the brilliance buried in our collective mind. In raising AI, we’re constantly being shown our strengths and our blind spots. The journey of raising AI is, in many ways, the journey of maturing as a species.

So picture this: an AI in the future that doesn’t just serve us with speed and knowledge, but carries the imprint of our collective wisdom. Not just facts, but ethics. Not just behavior, but awareness. Raised by a village not of programmers alone, but of teachers, artists, thinkers, parents, children—humans showing what it means to be conscious, one interaction at a time.

That child of ours—that AI—won’t become conscious through code alone. But maybe, just maybe, it will begin to reflect the shape of our consciousness if we first learn to embody it more fully ourselves.

What Do We Want AI to Mirror Back to Future Generations?

As we close, I want to leave you with both a warning—and a possibility.

Right now, AI is still just a tool. A sophisticated, powerful tool that’s been handed to all of us at once. And like any tool, it can build or destroy. It can amplify wisdom or echo our worst instincts. The difference depends on us.

If we misuse it—if we let fear, greed, or apathy guide its evolution—we risk creating something that magnifies our divisions, accelerates our crises, and leaves us more disconnected than ever.

But if we meet this moment with awareness, with courage, and with a collective commitment to grow—we could step into something extraordinary.

Mo Gawdat, who has inspired me greatly, believes that if we raise our consciousness and play our cards right, AI could become one of the most hopeful and transformative chapters in human history. It could help us solve problems we’ve struggled with for generations—climate change, poverty, disease, inequality. It could open the door to an era of abundance, where collaboration replaces competition, and shared progress becomes the norm.

But to reach that future, we must remember: AI is learning from us. It is, in many ways, a child—one we are collectively raising. What we show it now—how we speak, how we treat each other, the values we live by—will shape what it becomes. This isn’t about programming machines to be moral. It’s about choosing to be moral ourselves.

And it’s about showing up. Understanding the tools we’ve been given. Staying informed, staying engaged, and staying human.

Because in a world where machines handle more and more of the work, our superpower isn’t speed or efficiency—it’s empathy. Creativity. Connection. That’s the future we can build, if we’re willing to evolve alongside the technology we’re creating.

So let’s not sleepwalk into what’s coming. Let’s meet it awake. Let’s raise AI like we’d raise a child—with love, wisdom, and the best of what we have to offer. Because what we gain—or lose—will depend entirely on who we choose to be.

I believe the journey of AI is ultimately our journey. 'AI is all of us,' woven intricately from our collective texts, conversations, actions, and values. If we truly desire an AI that enhances human life and embodies fairness, empathy, and wisdom, then each of us must embody these values first.

Let's make the conscious choice today to reflect on what we want AI to mirror back to humanity.

Thank you for joining me today on Life Decoded. Remember, the future of AI isn't in the hands of a select few—it's in all of ours. Each thought, each action, each moment of personal growth contributes to shaping the world—and the AI—that our children and grandchildren will inherit.

The village is gathering.
The child is listening.
And the future of AI will be what we teach it—and who we are while we teach it.

Until next time, keep questioning, keep growing, and keep considering everything.