We Use AI. Here We Explain Why And How
Exploring AEGIS’s Approach to Responsible AI Practices and Advocacy for Equitable Policies
The Deal With AI
With artificial intelligence, we find ourselves at a crossroads where crucial innovation intersects with fundamental ethics. AI may be a marvel of human ingenuity, but it also casts a long shadow over our societal and environmental fabric. We often feel so empowered by AI, and so optimistic about it, that we have to remind ourselves of something: just because we try to use and steer the development and distribution of these tools in ways rooted in ethics does not mean that the majority of actors do the same. And beyond the attractive narratives of progress embedded in AI discourse lies a reality that demands our urgent attention. So before we delve into how to best use AI today, and how we at AEGIS perceive this new terrain through our own practices, we must start the story where it needs to start: with AI’s creation. The fact is that AI has been built on a problematic system, a system that we must now steer in fairer directions.
Beyond the familiar problem of research-transfer norms, which allow publicly or charitably funded work to pass from academia or nonprofit grants into the startup world, and beyond the familiar lack of inclusion of women and marginalized demographics in the foundational development of tools, which leaves those tools skewed and biased at their root, AI also confronts us with a general operational structure that is morally lacking. The human labor that powers these AI behemoths is often invisible and exploitative. While machine learning, design and coding take the spotlight, it is mostly underpaid workers from countries where labor is valued cheaply who label and train these models at scale, often with dramatic consequences in terms of trauma caused by exposure to horrific content. This exploitation, a stark contradiction of the spirit of equitable technological advancement, is a pressing situation to reckon with.
Furthermore, AI, in its embryonic state, mirrors and magnifies the biases of its creators, on top of those already present across the world wide web. These biases, woven into the very algorithms we herald as beacons of neutrality, perpetuate existing societal inequalities, shaping decisions and destinies in ways both overt and subtle, skewing and automating our world’s flawed perception of reality through the refracted lens of Western civilizations and their status quo.
But there is an even more insidious aspect, one that is often overlooked: the environmental impact. The immense compute power required to train and run sophisticated AI models translates into a significant environmental footprint. The raw materials that must be mined to build the systems powering AI are entangled with some of our world’s most horrendous issues, tied to the lives and deaths of millions of people in the Global South. This devastating human reality, together with the sheer energy consumption involved, not only depletes our planet’s resources and traps entire nations in neocolonial shackles, but also drives the pollution and CO2 emissions pushing our climate crisis through the roof. It is a selfishly justified paradox in which our pursuit of futuristic technology is tethered to the depletion and devastation of the very planet we inhabit.
Beyond these technical realities, can we even find the time to talk about the mayhem AI risks bringing to our shared understanding of reality? We are only getting a preview of the kinds of doubt, chaos and misinformation that AI tools cast over our future. For people like us, who see so much magic and potential embedded within powerful technologies, these facts are hard to keep in view. Yet we do need to remember them, so that we do not lose track of what we must get right if this AI situation is ever going to work out for us.
At our organization, we confront these realities head-on, with a deep sense of shared responsibility, as should everyone who benefits from AI. Our condemnation of the unethical facets of AI, from labor exploitation and ingrained biases to data misuse and environmental impact, is unequivocal. For many in the Global North, this may simply be how business has been carried out until now, but fortunately, our collective moral standards are reaching toward kinder belief systems. As exchange and communication with people around the world accelerate, so do the unifying values and principles that place ethics, the planet and all life at the heart of our shared ethical consensus. The path forward must be navigated with a compass calibrated not just to innovation but to the principles of sustainability, equity and moral integrity.
We may not control the release of these technologies, and we may not have the power to tell the whole world how to use them. Yet we also know that if we, the NGOs, the designers, the philanthropists, the activists, the futurists, the humanists, step out of this conversation, or decide to ignore the real potential for both good and harm in these newly released forces of automation, big companies around the world will do no such thing. And if we ever did step aside, we can be sure that altruism and the greater good would not be prioritized over corporate growth goals pursued at the expense of our people and planet. This is precisely why we feel the need to participate in this quest: to understand how we humans will adapt to this technology. And as we at AEGIS venture further into the realm of AI, our commitment is to tread this path thoughtfully, ensuring that every step experiments with an altruistic way forward.
So how do we actually democratize AI toward the greater good? We do indeed need a world where the most powerful technologies, and the strategic power that surrounds them, are not optimized for the objectives of the wealthiest corporations but are instead shared freely with those on the front lines of societal change. This should not be a distant dream but a pressing necessity in our current landscape, where AI’s transformative potential remains largely untapped by nonprofits and grassroots movements.
There lies an unmistakable irony in the current state of AI. On one hand, it is a beacon of innovation, driving unprecedented advancements. On the other, it remains an elusive tool for those striving to mend the very fabric of our society. Nonprofits, often operating on shoestring budgets and hardly ever invited into the rooms that decide the future of innovation while it is being designed, are left navigating a sector where AI is spoken of in timid tones as something simply out there in the world, because very few NGOs have had the time to prepare for the arrival of this tool or are actively involved in the decision-making surrounding its evolution. This is not just a technological oversight but a profound social issue. It is a reflection of our values: how we agree to guide the evolution of our world, and based on whose principles; who we choose to empower, and who we sacrifice or leave behind. In the narrative surrounding AI there is a missed connection, and the dots between technology and social good are yet to be fully connected.
As it stands, our organization seeks to edit this narrative. Our mission is clear: bring AI into the hands of those who can wield it for the greater good. It is about transforming AI from a tool that conserves the capitalist status quo into a catalyst for planetary regeneration and systemic evolution. We envision a future where AI is not just central to corporate boardrooms but a practical, world-building, future-shaping assistant in the hands of altruists, creatives, activists, scientists, environmentalists, polymaths, ethicists and community leaders. It is about time we reorient AI toward a future where technology is a conscientiously shared and spreading resource, a common good that uplifts, empowers, humanizes and unites.
Because technology is an integral aspect of human civilization. Cooperation and technology enabled our world and our societies to become what they are. However, as we transitioned from manual and mechanical technologies to exponential, infinitely replicable and automated digital technologies, history veered in a flawed and tangled direction. Instead of enhancing the human condition, we prioritized the economic potential of civilization, sacrificing fundamental aspects of our humanity and flexible evolutionary norms along the way. The tech ecosystem is not designed to make us better, healthier or happier. Globally, beyond the tech creation ecosystem, we are bound to technology in order to keep the wheels of society turning. We humans have become the mechanics behind the vast machine of our civilization, powered by tech, labor markets and institutional status quo. This state of existence is not working. It is not just cruel but also illogical, unscientific and archaic.
AI, at its best, approaches optimal design by mimicking patterns found in the physics and biology of the natural world. This insight should guide our understanding of what great design is about, across the entire spectrum of creation, from software to civilizational design to nature. The most effective way for our world to evolve is to think from first principles and biomimetics, not to adhere to historically biased and ideologically flawed power structures. We must seek ways for technologies, and the institutions powering them, to return to the essence of what tech was always meant for in the context of the human condition: to improve life around us and support the naturally evolving nature of human potential, both individually and in our interdependence, within our circles, communities, cities, lands, and the ecosystem on which our long-term survival and potential depend.
As such, as we delve into the discourse around AI, it becomes crucial to broaden our perspective beyond the often-discussed impacts on the arts and authorship. There are indeed ethical problems in creative fields right now, and the basic rights of creatives are indeed being infringed upon. However, AI’s reach extends far beyond the creative industries, profoundly affecting a broadening spectrum of professions, many of which are typically unseen or undervalued in public debates about technology. In the shadows of the AI revolution, countless professions find themselves at a crossroads. Translators and language educators, for instance, have seen their work subtly co-opted by machine translation for years now, altering the landscape of their industry. Most content available for free online falls into this category of non-consensual data scraping for the purpose of automation. All of this, from where we stand, leads to broader questions about the state of our world and about the equity and cooperative norms that need to be put in place. The labor of all the professional fields that have contributed to the vast datasets powering AI often goes unrecognized and unaccounted for. Beyond being a question of AI training and datasets, it is a question of business models at scale. The labor coming from all fields of life, with a particularly unfair disregard for the value of work carried out by women and marginalized demographics around the world, involves mountains of invisible labor and disregarded workers who turn the wheels of our economies in ways that go unrecognized and that need to be protected from the realities of automating capitalist norms. These norms of exponential growth, which benefit investors above all people, need to change. And we need to find new business models and scalable frameworks to replace them.
By focusing on this angle, we by no means imply that we do not need to regulate the business surrounding AI (we do), but we insist that this inequity in AI’s impact is a reflection of much broader societal disparities. It raises crucial questions about who benefits from AI and who is left behind, along with who benefits from our world’s systems and who is destined to spin its wheels. We have long since entered a co-optation race, in which the institutional norms around us accelerate around their perceived roles as aggregators and synthesizers (of data, resources, frameworks, tools, strategies and more), and this is one of the biggest tangles we need to undo, by creating the kinds of policies, norms and frameworks that can include individual humans in humanity’s understanding of equity.
This is the main reason why the conversation around AI and ethics cannot be confined to intellectual property rights or artistic attribution alone. It must encompass a wider scope, acknowledging the contributions and rights of all those who have unwittingly become part of the AI ecosystem, alongside the conversation about institutional automation and equity at large. How do we get to a world where we can stop fearing the co-optation, destruction and extraction of everything we value and cherish as humans? This is the real question we need to tackle, and the real task toward which we must now orient our future.
On our side, our commitment is to help guide ways through which AI serves as a tool for empowerment and equity, not just for a select few but for the majority of people who want to make use of it, especially those working tirelessly in sectors crucial for our collective future. This broader understanding of equity in AI forms the backbone of our approach. It is about reshaping the narrative. And we can do this by steering the conversation toward a future where AI is harnessed for the greater good, where its benefits are equitably distributed, and where every contribution to its development is acknowledged and valued. It is a vision of AI that transcends profit and power, aiming instead for inclusivity, fairness, sustainability and the upliftment of all sectors of society.
Furthermore, to dig deeper into our perspective, you may already know that our approach to AI is deeply rooted in the principles of biomimetics and first-principles thinking. We draw inspiration from the natural world, observing patterns and systems that have evolved over millennia. In nature, solutions are not just effective; they are inherently sustainable, self-regulating and harmonious. This methodology is about understanding and implementing the underlying principles that make these systems so effective and adaptable. It is, at its core, this first-principles philosophy that guides our use of AI, because ultimately we cannot help but acknowledge how AI itself is powered by biomimetic thinking: its very design attempts to mimic aspects of the human brain, and its core knowledge base is representative of the patterns of our collectively expressed hive mind. And although this biomimetic reality is not yet universally unbiased, we cannot deny that it has the potential to become universally fair one day if we make it so. If our experimentation can make a dent in helping us get there, we consider that a worthy attempt.
Overall, our use of AI reflects this philosophy, and our utilization of tools like ChatGPT is probably distinctive because of it. We don’t use these tools as mere data retrieval systems or language output models for content. Instead, they serve as pattern aggregators and partners in a dialogue, helping us structure our long-term research and uncover and rank patterns in environmental analyses, ideologies, and societal behaviors or needs. These “conversations” with AI are extensive and iterative, often spanning weeks or months per topic and resulting in dialogue documents that, once compiled into our archive, can run to hundreds of pages per topic. It is from this very elaborate groundwork and training that we can then distill any piece of content or writing, be it an article, a report or a blurb. This process ensures that each piece of content we create is not only informed by AI’s power for pattern analysis but is also imbued with the unique research, writing, insights and perspectives derived from our own work and experiences over the past ten years.
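To make the idea of a months-long, archived dialogue a little more concrete, here is a minimal sketch in Python of what such a workflow could look like if it were formalized in code. It is purely illustrative and not AEGIS’s actual tooling: the topic name, file layout and helper functions are hypothetical, and in practice these exchanges may simply live in a chat interface rather than a script. The point is only that each turn of a research dialogue can be timestamped and accumulated per topic, so that the groundwork behind any published piece remains traceable.

```python
# Minimal, hypothetical sketch of archiving a long-running research dialogue
# with an AI assistant: one JSON file per topic, appended to over weeks or months.
import json
from datetime import datetime, timezone
from pathlib import Path

ARCHIVE_DIR = Path("dialogue_archive")  # hypothetical local archive folder

def load_dialogue(topic: str) -> list[dict]:
    """Load all previous turns for a topic, or start a fresh dialogue."""
    path = ARCHIVE_DIR / f"{topic}.json"
    return json.loads(path.read_text()) if path.exists() else []

def append_turn(topic: str, role: str, content: str) -> None:
    """Append one turn (ours or the assistant's) to the topic's archive."""
    ARCHIVE_DIR.mkdir(exist_ok=True)
    turns = load_dialogue(topic)
    turns.append({
        "role": role,  # "researcher" or "assistant"
        "content": content,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    })
    (ARCHIVE_DIR / f"{topic}.json").write_text(json.dumps(turns, indent=2))

# Example usage: record a researcher prompt and the assistant's reply,
# obtained however the team actually works (chat interface, API, etc.).
append_turn("regenerative-funding", "researcher",
            "Which recurring patterns appear across the 2023 grassroots surveys?")
append_turn("regenerative-funding", "assistant",
            "<assistant reply pasted or retrieved here>")
```

The design choice this sketch gestures at is simple: keeping the raw dialogue separate from any distilled output means the final article, report or blurb can always be traced back to the groundwork it came from.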
This distinctive approach to AI aligns seamlessly with our biomimetic principles. Just as nature finds efficient ways to solve complex problems, we use AI to sift through vast amounts of information, identify patterns, movements and signals, and propose solutions that are innovative, effective and pragmatic. Our focus is on using AI to uncover new ways of thinking, new connections, and unexplored paths that can lead to groundbreaking solutions in the nonprofit sector and in altruistic worldviews at large.
In our operations, AI becomes more than just a tool; it’s a partner in our quest to understand and improve the world. Whether it’s structuring long-term research, identifying societal aspirations, writing, creating or aiding in strategic planning, AI is integrated into our workflow in a manner that is thoughtful, purposeful, and aligned with these core values of ours. We aspire to transcend the conventional use of AI to unlock its full potential as a catalyst for meaningful change.
Through such approaches, AI becomes a tool for deep exploration and understanding of patterns, going beyond the typical use of AI for surface-level tasks and reflecting our commitment to a thoughtful and profound application of technology toward pro-bono, altruistic and nonprofit ends. It is remarkable how, through such philosophies, AI became for us a catalyst for innovation, driven not by our emotions or goals but by a deep respect for the principles that govern our natural and social ecosystems.
So that is how we use AI to explore AEGIS’s work. But AEGIS is more than our core activities around philanthropic innovation. We are, after all, an organization that serves as a representative and alliance for a growing number of causes and grassroots initiatives. And in the narrative of AI’s evolution, a crucial chapter remains underwritten: one where its profound capabilities are harnessed to amplify the voices of the nonprofit sector and the grassroots horizon as a whole. The people we represent administratively and help incubate are often individuals working on the fringes, addressing the world’s most pressing issues with limited resources and recognition. AI, in this context, emerges not just as a technological tool, but as a medium of empowerment, a means to bring their untold stories and invaluable work to the forefront.
The reality for many nonprofit workers and changemakers is a daily grind of managing scarce resources and overwhelmed volunteers, navigating bureaucratic challenges, and striving to make a tangible impact. Here, AI can play a transformative role. From streamlining organizational tasks and brainstorming options to enhancing communication strategies and extending outreach, AI can be the wind beneath their wings. Yet this potential remains largely untapped. The irony is palpable: the very tool that could elevate these causes is often out of reach, due to a lack of communication targeting nonprofit actors, resource constraints, and a lack of technical expertise. We would like to attempt to bridge this gap, but we are going to need help to do it (and this might be a good place to mention that if you would like to see this happen, please reach out to join forces). We envision a world where AI is not a luxury of the well-funded or profit-driven but a staple for every nonprofit or changemaker striving to make a difference. It is about democratizing technology, ensuring that those who work tirelessly for social change have access to the same powerful tools as large corporations.
Of course, this vision extends beyond mere access to technology; it is about equipping nonprofit workers with the knowledge and skills to harness AI effectively. It is a plea for companies that develop AI tools to commit to programs that allow those tools to be distributed freely to people able to wield them toward selfless goals. And beyond that, to not just hand over technology but to foster a collaborative environment where knowledge is shared, skills are developed, and AI becomes a true ally in our shared, noble endeavors as humans in charge of preserving our planet’s magic. Our journey with AI in the nonprofit sector is just beginning, but our path is clear: a path where technology empowers, elevates and enlightens, bringing the voices of changemakers to the forefront of societal transformation.
Beyond that, we cannot stress enough the importance of testing every new AI tool in the context of nonprofit work, to understand exactly how it can be used to make our world better. New AI tools emerge with promises of greater efficiency, deeper insights and broader capabilities. But these tools are not just advancements; they are new opportunities to explore, to learn, and to adapt our strategies in ways that align with our most altruistic missions. Testing new AI tools is akin to planting a variety of seeds in our garden of ideologies and technology, watching closely to see which ones thrive, which ones adapt, and which ones need more nurturing. This process is more than a technical evaluation; it is a creative exploration. We dive into each new tool with a sense of curiosity, asking not just what it can do, but how it can be molded to serve our goals of altruistic impact and individual empowerment.
Our approach to these tools is methodical yet open-minded. We start with small-scale experiments, gauging a tool’s effectiveness, user-friendliness, and alignment with our ethical standards. This careful scrutiny ensures that we only integrate tools that truly enhance our capabilities without compromising our values. But our experimentation goes beyond mere utility. We engage with these tools creatively, exploring their potential to open new avenues for storytelling, to provide novel insights into complex social issues, and to forge stronger connections within our community. Each new AI tool is a step into the unknown, a chance to push the boundaries of what is possible in the realm of humanity’s evolving story. It is a journey filled with learning, unlearning, growth, and the occasional surprise, all shaping this crazy quest for an equitable world.
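As an illustration of what this kind of small-scale scrutiny could look like when made explicit, here is a short, hypothetical scoring sketch in Python. The criteria, weights and numbers are placeholders rather than our actual evaluation grid; the point is simply that effectiveness, usability, ethical alignment and accessibility can each be weighed deliberately rather than by gut feeling alone.

```python
# Illustrative sketch (not AEGIS's internal tooling) of a weighted rubric
# for pilots of new AI tools. All criteria, weights and scores are hypothetical.
CRITERIA_WEIGHTS = {
    "effectiveness": 0.35,       # does it genuinely improve the work?
    "usability": 0.20,           # can overworked, non-technical teams use it?
    "ethical_alignment": 0.30,   # data practices, bias, labor and energy footprint
    "cost_accessibility": 0.15,  # affordable or free for nonprofits?
}

def evaluate_tool(name: str, scores: dict[str, float]) -> float:
    """Return a weighted score between 0 and 1 for a candidate tool pilot."""
    total = sum(CRITERIA_WEIGHTS[c] * scores[c] for c in CRITERIA_WEIGHTS)
    print(f"{name}: {total:.2f}")
    return total

# Example: two hypothetical candidate tools scored after a small pilot
evaluate_tool("tool_a", {"effectiveness": 0.8, "usability": 0.6,
                         "ethical_alignment": 0.9, "cost_accessibility": 0.7})
evaluate_tool("tool_b", {"effectiveness": 0.9, "usability": 0.8,
                         "ethical_alignment": 0.4, "cost_accessibility": 0.9})
```

A rubric like this is only a starting point; in practice the qualitative questions (how does this tool reshape a story, a workflow, a community relationship?) carry as much weight as any number.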
And that brings us to where it all ties together: the ways through which we can help co-create the future of our world beyond localized, small-scale experiments. Through these experiments and the well-informed opinions they produce, we can help shape fairer AI policies. Because it is indeed terrifying how often policymakers have no practical understanding of how new technological tools can be wielded toward the development of a better and fairer world. More often than not, when we are in rooms that attempt to make sense of the future, we are puzzled by the age and traditional backgrounds of the people holding this power. And of course, we would urge the world, on the stage of national and international politics, to assign decision-making power to people who have in-depth, practical understanding of how AI tools shape the future, corporate understanding, yes, but more importantly societal and civilizational understanding. Because even though we are a small organization, it is undeniable how much of a difference our presence in these rooms makes. We bring a unique perspective to the table, one that is often overlooked in discussions dominated by tech giants and commercial interests. Our participation in boards, assemblies, and projects at national, European, and international levels is not just a commitment; it is a necessity, to ensure that the voices of AI-savvy members of civil society, working toward unbiased objectives that serve life on Earth above all else, are heard and empowered.
It is shocking that in these policy-making arenas, we are often among the few, or perhaps the only, representatives from the nonprofit sector, without ties to major institutional donors. This unique position allows us to offer insights based on real-world experiences, emphasizing the need for AI policies that prioritize logic, innovative cooperative norms, fairness, transparency, and social good above profit and efficiency. Our advocacy centers on creating regulatory frameworks that not only mitigate the risks associated with AI but also maximize its potential to benefit society at large.
Our stance is clear: the future of AI should not be shaped solely by those who stand to profit from it. Instead, it should be co-created by a diverse coalition, including, if not centered around, those who use AI to tackle some of the most pressing challenges of our time. We stress the importance of inclusive, equitable AI policies that consider the broader societal impact, ensuring that AI serves as a tool for positive change and not as an instrument of inequality.
Ensuring that policies are grounded in real-world applications and in the needs of those working toward a better world is crucial. This proactive involvement is a cornerstone of our philosophy at AEGIS: to be at the forefront of innovation, not just as users of technology, but as architects of a future where AI is a force for good.
Additionally, to any reader thinking that we should wait for policy to guide us, we would point out that we are also able to decide independently on our own ethical principles and frameworks regarding AI and exponential tech at large. At AEGIS, we firmly believe that waiting for external policies and laws to dictate our ethical boundaries with AI is not sufficient. Instead, we take a proactive stance, implementing self-regulation and internal best practices as our guiding principles. This approach is not just about compliance; it is a manifestation of our desire to do right by humanity and our planet, even in uncharted technological territories. Our internal framework for equity is anchored by the ‘Fair Share Model’, a moral compass and equity-distribution tool that we developed in-house. It is this framework that directs every decision we make regarding AI, as well as the decisions we make regarding the growth and recognition of any sense of legacy within our institution. This model is not just a set of guidelines for us; it is a living, breathing ethos that permeates our organizational culture. It ensures that our practices align not only with our mission but also with the broader values of equity, justice and social responsibility.
In every new venture or experimental direction we take with AI, our first step when deciding whether to move a practice out of the realm of experimentation and into the space of norms is always to ask: How does this align with our quest for philanthropic innovation and our ethical standards grounded in first-principles thinking? How can we ensure that our actions reflect our core beliefs and contribute positively to the world? We don’t shy away from these tough questions. Instead, we embrace them as essential checkpoints, guiding us to work ethically and responsibly.
Our approach to self-regulation involves continuous learning and adaptation. As AI evolves, so do our methods, tactics and frameworks. We stay informed about the latest developments in AI tools and AI ethics, participate in community discussions, and seek input from diverse stakeholders. This practice allows us not only to keep pace with technological advancements but to do so in a way that upholds our principles and furthers our mission for a more equitable and hopeful world. In essence, our approach to AI reflects a deeper philosophy at AEGIS: that technology should be a force for good, guided by human values and ethical considerations. It is a commitment not just to use AI, but to use it wisely, compassionately, and with a vision of a future where technology and humanity coexist in harmony and in progressively increasing symbiosis with our natural world.
So as we stand at the intersection of technology and ethics, AEGIS’s journey with AI is about helping to figure out how this could possibly work out for us all. We have navigated the complexities of AI, faced its horrors, and come to the realization that it is out there, and that we might be better for it if we put in the effort to make it so. We have championed its ethical use and advocated for policies that align with our collective values. But the journey doesn’t end here; it is an ongoing process, an ever-evolving landscape that demands continuous engagement, learning and adaptation. We invite you, volunteers, creators, partners, thinkers and doers, to join us in shaping the future of AI in civil society. Your expertise, your perspectives and your voices are crucial in this endeavor. Whether it is contributing to our understanding and experimentation with AI, helping to develop new tools and strategies, or advocating for fair and equitable policies, there is a role for everyone in this mission.
This is not just a call for assistance. It is an invitation to be part of a movement, a movement that sees technology not as an end in itself but as a means to enhance our collective capacity for good. If we neglect this and let consumerism guide us, we are doomed. We need to radically rethink the ways we can apply AI toward the good of our people and planet. Because ultimately, when we hear that “AI could solve all the world’s problems”, that is a flawed statement. The truth is that it is us, the people wielding AI and guiding its applications, who will determine whether we make the world better or worse with it. For now, let’s be honest, it is not looking great. Profit and acceleration are leading the conversation, consequences be damned. More of us, altruists, philanthropists, scientists, activists, futurists, designers, humanitarians and creators, need to step up to shape where we go from here. Together, we could ensure that AI becomes a catalyst for positive change and a testament to our shared commitment to a more equitable and hopeful world. But without us, the current state of affairs gives more grounds for worry than hope. And wishful thinking without proactive measures will not take us far. We need to unite. We need to co-create. We need to experiment. We need to play. And we need to shape the results of all that into frameworks and policies that scale, in order to guide the future of our existence.
So at AEGIS, we are going to keep doing just that as we continue to explore the potential of AI, always guided by our principles, always striving for a future where technology serves humanity and our planet in our most noble pursuits. We would like to harness the power of AI to create a legacy of hope and logic, one that future generations can look back on as a turning point toward a more just, compassionate and equitable world. If you would like to do that too, we should be allies and make it happen. And starting now would be good.
Note: This article is a testament to our method, uniting collaborative work between our team of lively humans and AI, in this case ChatGPT-4. It was written on the basis of hundreds of pages of proprietary internal documents compiled as training material for this topic and roughly a hundred pages of brainstorming dialogue between our team members and the AI, along with the study of initial texts, the writing of a draft for the article, and a description of the intent behind each section of this final piece (all initial ideas and arguments having been gathered and written by a human). The text was then rewritten by the AI to offer a more structured and neutral narrative voice, one that our contributors could not easily have produced without extensive training in journalistic and professional writing styles or a professional writer or journalist on the team (which we don’t have at the moment). The rewritten output from the AI was then edited into this final form by people from the team and published along with an image generated with Midjourney. That image was created through prompts we have experimented with for the past year and a half to refine and define a visual style that will carry AEGIS’s symbolism and vision through time. Our experimentation with Midjourney, along with other generative art tools, spans thousands of attempts at creative expression, from which we only ever distill and refine a handful.
This is our process for an article such as this one. It reflects the lengthy, iterative nature of co-creating with the help of AI, which helps us not only to handle the overwhelming amounts of information to gather, enabling us to move much faster technically (or to keep going at all) while remaining authentic in our ideas, observations and ideological evolution, but also to go deeper in ways that our small teams and often overworked collaborators would have struggled to reach otherwise. So, as you can see, a lot of work goes into working with AI. It is far from a black-and-white picture, and we strive to help build scalable frameworks that take this nuanced complexity into consideration. We hope that sharing our approach can help those less familiar with these tools understand how it is possible to express authentic and carefully curated information and narratives as an enhanced, deepened and accelerated version of what an overworked human alone could have accomplished.
AEGIS | Philanthropic Group for the Future of our Planet and People