Our mission
Founding’s mission is to maximize human flourishing, which we define as the aggregate, subjective well-being of all humans and of the natural environment they inhabit.
Our strategy for accomplishing this is to help the helpers, or, more specifically, to leverage the transformative power of AI to aid social organizations, such as nonprofits and government agencies, in helping the most vulnerable among us.
By centering the disenfranchised in our theory of change, we aim to ensure that AI systems exhibit the best of what it is to be human: our care for one another and the world we share.
Our priors
We believe that humanity, however ill-advisedly, will push rapidly toward AGI and, ultimately, superintelligence because of the extreme incentives involved.
Given this, we believe that work to align such systems with the needs of life broadly, and humanity specifically, is a vital challenge of our time.
We believe that “human-compatible” AI can only emerge from a process that values life intrinsically and, as a corollary of that belief, that the process for determining what constitutes “human values” must be holistically inclusive.
We believe that such an inclusive process is not currently possible because of the massive inequities that exist in our world, and that before we can determine what humanity values, we must meaningfully consult with the disenfranchised.
Our thesis
At the core of our approach are three central requirements that we believe are necessary for realizing the positive potential of future AI systems while minimizing the chances of harm.
Optimal system designs must come from AI, not humans.
Humans are self-interested…
Our biological nature creates profound problems of objectivity rooted in self-interest, and it makes us poorly suited to administering pluralistic societies in ways that are fair to all of their members.
…and struggle with the complexity of the problem…
Even if our leaders were absolutely selfless and wise, the complexity of our existing social landscape puts it beyond the reach of the human mind, and that complexity will only grow as AI agents become more prevalent and begin interacting with one another and with humans.
…making AI far better suited to the task…
AI systems, if appropriately constructed and oriented, can address the problem objectively and scale to arbitrary levels of complexity. As such, they are our only viable path to wresting positive outcomes from this profound transition.
AI outputs must be tested in simulation before real-world application.
Using simulations reduces the chance of harm…
If an approach fails in simulation, the real world is unaffected, and we can still extract potentially useful information from the failure.
…and increases the chance of benefit…
Simulated environments allow the process to be parallelized, and thus an arbitrary number of counterfactuals to be tested, yielding better approaches to try in the real world.
…while offering more paths to inclusion.
Simulation also allows us to incorporate simulacra of underrepresented groups, giving the disenfranchised a voice where one was previously impossible.
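To make the idea of parallel counterfactual testing concrete, here is a minimal sketch in Python. Everything in it (the Approach class, the simulate stub, the scoring) is an illustrative assumption rather than a description of an existing Founding system; a real simulator would model beneficiaries, resources, and constraints in far more detail.

```python
# Illustrative sketch only: none of these names come from Founding's systems.
# The idea: run many counterfactual approaches in parallel, entirely inside a
# sandboxed simulation, and rank them before anything touches the real world.
from concurrent.futures import ProcessPoolExecutor
from dataclasses import dataclass

@dataclass
class Approach:
    """A candidate intervention, described by a name and its parameters."""
    name: str
    parameters: dict

def simulate(approach: Approach) -> float:
    """Placeholder simulator: return a well-being score for one counterfactual.
    A real simulator would model beneficiaries, resources, and constraints,
    including simulacra of underrepresented groups."""
    return sum(approach.parameters.values())  # toy stand-in for a real model

def evaluate_counterfactuals(approaches):
    """Test an arbitrary number of counterfactuals in parallel. A failed or
    low-scoring run stays inside the sandbox and still yields information."""
    with ProcessPoolExecutor() as pool:
        scores = list(pool.map(simulate, approaches))
    return sorted(zip(approaches, scores), key=lambda pair: pair[1], reverse=True)
```

In practice, the ranking produced in simulation would only ever generate candidates for further review, never changes applied directly to the real world.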
Humans must be (meaningfully) in the loop.
Simulation has limits…
AI will never have access to lived human experience, and so it must periodically check in with humans to ensure alignment both with our collective desires and with the subjective realities we are individually experiencing.
…so stepwise change is appropriate.
Periodic human input, together with limits on how much change an AI can implement between such inputs, is necessary to improve alignment and to reduce the potentially severe risks of emergent misalignment.
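As a rough illustration of what bounded, human-gated stepping could look like, the sketch below caps the size of any single policy change and requires explicit human approval before it is applied. The names (MAX_STEP, human_approves, and so on) are hypothetical; a real check-in would be a structured review with affected stakeholders, not a console prompt.

```python
# Hypothetical sketch: the AI may propose a policy update, but each parameter can
# move by at most MAX_STEP per check-in cycle, and a human must approve the
# bounded update before it replaces the status quo. All names are illustrative.

MAX_STEP = 0.1  # assumed cap on per-parameter change between human check-ins

def clamp_step(current: dict, proposed: dict) -> dict:
    """Limit how far each parameter may move from its current value."""
    bounded = dict(current)
    for key, target in proposed.items():
        delta = target - current.get(key, 0.0)
        delta = max(-MAX_STEP, min(MAX_STEP, delta))
        bounded[key] = current.get(key, 0.0) + delta
    return bounded

def human_approves(update: dict) -> bool:
    """Placeholder for the periodic human check-in."""
    return input(f"Apply update {update}? [y/N] ").strip().lower() == "y"

def step(current: dict, proposed: dict) -> dict:
    """Apply one bounded, human-approved step; otherwise keep the status quo."""
    bounded = clamp_step(current, proposed)
    return bounded if human_approves(bounded) else current
```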
Our approach
Out of our thesis has grown a particular research and implementation agenda. We call the approach “Social Environment Design” (SED): an automated feedback-generation and policymaking solution that elucidates the values shared by human participants and finds the most promising strategies for optimizing for those values, while minimizing the chance that the process yields harmful outputs in the real world.
We are engaged in research to expand SED’s capabilities, as well as work to offer this approach as a service that helps social organizations (e.g., nonprofits and government agencies) better serve their beneficiaries.
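To show how these pieces might fit together in a single SED-style step, here is a minimal, hypothetical sketch: participants (including simulated proxies for underrepresented groups) supply value weightings, those weightings are aggregated into a shared objective, and candidate policies’ simulated outcomes are scored against that objective. None of the names or numbers below describe Founding’s actual implementation.

```python
# Hypothetical sketch of one SED-style step: elicit value weightings from human
# participants, aggregate them into a shared objective, and use that objective
# to compare the simulated outcomes of candidate policies.
from statistics import mean

def aggregate_values(participant_weights: list) -> dict:
    """Average each value dimension across participants so that every voice,
    including simulated proxies for the disenfranchised, counts equally."""
    dimensions = {dim for weights in participant_weights for dim in weights}
    return {dim: mean(w.get(dim, 0.0) for w in participant_weights) for dim in dimensions}

def score_policy(predicted_outcomes: dict, shared_values: dict) -> float:
    """Score a policy's simulated outcomes against the aggregated values."""
    return sum(shared_values.get(dim, 0.0) * outcome
               for dim, outcome in predicted_outcomes.items())

# Example: two participants weight "health" and "income" differently; the
# aggregated objective is then used to compare two simulated policy outcomes.
values = aggregate_values([
    {"health": 0.8, "income": 0.2},
    {"health": 0.4, "income": 0.6},
])
policy_a = {"health": 1.0, "income": 0.2}   # simulated outcomes of policy A
policy_b = {"health": 0.3, "income": 1.0}   # simulated outcomes of policy B
best = max((policy_a, policy_b), key=lambda outcomes: score_policy(outcomes, values))
```

A full SED loop would then route the best-scoring candidate through the bounded, human-approved stepping sketched earlier, so that nothing changes in the real world without a meaningful human check-in.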
Team
Founding's core team is committed to helping make sure AI serves the most vulnerable. We have 35 years of combined experience in the social impact and AI/ML sectors.
Benjamin Smith
INTERN, TECHNICAL
Ben is a data science student looking to better the world however he can. He is thrilled to have the opportunity to work with Founding to use AI to improve human lives.
Vincent Zhu
INTERN, TECHNICAL
Vincent is extremely interested in the intersection of AI and positive impact. He is completing his undergraduate Computer Science degree at the University of California, Santa Barbara.
Austin Milt
LEAD ENGINEER
Austin brings a PhD in optimal decision-making for biodiversity conservation, along with more than five years of experience as a professional software engineer, to drive product development at Founding.
David Bell
FOUNDER, EXECUTIVE
David combines economic, social, and environmental valuation systems, including Life Cycle Analysis and innovative valuation frameworks developed through his nonprofit work.