By Mike Dabadie and Maury Giles
In the rapidly evolving world of applied artificial intelligence (AI), and amid head-spinning news of the OpenAI leadership shuffle, the challenge of aligning the principles underpinning AI systems with human ethics has elevated concerns among consumers, policymakers, and now even Silicon Valley.
As reported by The Wall Street Journal regarding changes at OpenAI, “Ilya Sutskever has increasingly been worried about the long-term safety of OpenAI’s products and is building out a team at OpenAI meant to research ways that systems of the future can be ‘aligned’ with human values, according to current and former OpenAI employees.”
Concerns about AI consciousness, raised by thought leaders such as Elon Musk and Blake Lemoine, and the actions of figures like Sam Altman, underscore the urgency of aligning values between humans and AI.
AI’s Hypothetical Values
If AI were to embody values, they would mirror the principles of ethical AI development. When asked, ChatGPT lists its values as “beneficence, non-maleficence, autonomy, justice, transparency, accountability, respect for privacy, and inclusivity.” These values, though hypothetical in machines and within ChatGPT’s “synthetic neural network,” provide a blueprint for aligning AI-driven systems with human values.
Human Values and Decision Making in Individuals and AI
Humans inherently make decisions through complex, goal-directed behavior, influenced by a mix of individual and societal values. This understanding is supported by biometric studies of the brain and by insights from psychologists and sociologists such as Daniel Kahneman, Milton Rokeach, and Shalom Schwartz. Individual values, such as accomplishment or peace of mind, interact with societal values like social order or equality. This intricate balance is reflected in the nonconscious workings of our actual neural networks, influencing how we use both our hearts (emotions) and minds (cognition) in decision-making. Communications that persuade by reason and motivate through emotion have a powerful impact on our choices.
Values serve as emotional criteria that infuse purpose into our actions. They are the benchmarks we use to evaluate our daily lives, set priorities, and select among alternative actions. Our decisions are propelled by these values, shaping our lives through benefit-based motivation: emotional desires for fulfillment that matter both to oneself and to others.
In essence, values play several critical roles:
- They help individuals pinpoint the most significant benefits of a decision.
- They vary in importance depending on the context or the decision at hand.
- They are realized, or experienced, through the choices people make.
Similarly, in creating AI-driven solutions, understanding and integrating these values into decision-making algorithms is crucial. This ensures AI decisions are logically sound and aligned with human emotional and societal values.
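To make the idea concrete, one way such values could enter a decision-making algorithm is as context-dependent weighted criteria. The sketch below is purely illustrative; the value names, weights, and options are hypothetical assumptions, not any real system or methodology used by the authors.

```python
# Hypothetical sketch: values as context-weighted decision criteria.
# All names and numbers below are illustrative assumptions.

def score_option(option_benefits, value_weights):
    """Score an option by how well its benefits align with weighted values."""
    return sum(value_weights.get(value, 0.0) * benefit
               for value, benefit in option_benefits.items())

# Weights shift with context: the same person weighs values differently
# when choosing a commute than when choosing a career.
commute_weights = {"peace of mind": 0.6, "accomplishment": 0.1, "equality": 0.3}

# Each option delivers each value-related benefit to a different degree (0-1).
options = {
    "drive": {"peace of mind": 0.2, "accomplishment": 0.5},
    "cycle": {"peace of mind": 0.8, "accomplishment": 0.7},
}

# The chosen option is the one that best realizes the weighted values.
best = max(options, key=lambda name: score_option(options[name], commute_weights))
```

In this toy setup, "cycle" wins because it better serves the value weighted most heavily in this context, peace of mind; changing the weights (the context) can change the choice, which mirrors the point that the importance of values varies by decision.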
The Prospect of AI Consciousness
As AI systems evolve, the possibility of them developing a form of ‘consciousness’ or independent goal-directed behavior is a topic of intense debate. What if AI begins to make decisions based on its own set of ‘values’? This prospect underscores the need for a proactive approach in aligning AI with human values from the outset.
To ensure AI’s alignment with human ethics, we must start by embedding human values into AI systems. This involves creating a ‘mind map’ of human values relevant to a myriad of contextual situations for AI to learn and apply. It’s not about programming AI to have its own values but rather about ensuring it understands and acts in accordance with human ethics. And, given the diversity of human and societal cultures and values, AI systems must be inclusive and adaptable to different ethical frameworks, ensuring fairness and equity across varied cultural and social contexts.
A Values Imperative: How can we align human and AI values? As insights professionals who study the hearts and minds of humans, we believe the same understanding can be applied to AI systems. This is a continuous and evolving quest as technology advances and societal values shift. It’s not just a technological challenge but a reflection of our collective vision for a future where AI and humanity coexist in harmony, guided by shared ethics and mutual respect.
In this future, AI becomes a tool not just for efficiency and convenience but for enriching human life, reflecting our deepest values and aspirations, and acting as a testament to our commitment to a world where technology serves humanity, and not the other way around.
Mike Dabadie is CEO and Maury Giles is CGO of Heart+Mind Strategies, an insights and strategy consultancy focused on how human values are used in the choices people make in their lives.
*Cover image created using AI.