Our clinic yesterday on Mastering Business Domain Autonomy in Data Management spawned lots of productive discussions.

One of them was about AI agents co-existing with humans—which led to the expression domesticated AI. This opens up new ways of thinking about AI and ourselves. Can we—should we—domesticate AI as we domesticated dogs and horses? Can we co-exist with AI in a way that respects both human and machine agency?

AI isn’t human. However, lots of effort is going into designing AI agents to be as human-like as possible in human contexts—such as self-driving cars that drive the way we expect humans to, or text-to-speech that mimics human errors, the extraneous ‘filler’ sounds we make, and personal idiosyncrasies. I’m not convinced that this is unconditionally positive. Expecting an AI agent to be human in more aspects than the specific interaction it was designed for (the Halo Effect) risks unnecessary, possibly counterproductive, anthropomorphism.

We already co-exist and cooperate with non-humans. We even co-habit with them, and have done so since we domesticated dogs and horses ages ago.

Context Setting: The Human-AI Conundrum

We imbue the learning machines we build with human-like qualities. When we explicitly design them to act human, the line between tool, companion, and colleague starts to blur. This creates ambiguity and ethical dilemmas. We argue that AI ethics is not about whether AI agents lack morality—it’s about our own ethics and fears, human behaviour and bounded rationality.

When we cannot distinguish an AI agent from a human agent, we risk projecting our own biases onto it. We risk misunderstanding its nature and capabilities, potentially leading to a Halo Effect of misplaced anthropomorphism. We already anthropomorphise our pets—which are not human—and that is not always beneficial to us or to them. When we treat as human something that is not, we place unrealistic expectations on its behaviour, making decisions that can be dangerous for it or for us.

Problem Discovery: The Dual Doomsday Scenarios

The discussion raised some reflections on humanity’s trajectory. We can frame this with two cinematic extremes: the apocalyptic rebellion of “The Terminator” and the complacent dystopia of “Wall-E”. These scenarios serve as cautionary tales, highlighting the potential consequences of our current path—either losing control of our creations or abdicating our responsibilities to them. We argue that these stories reflect a deeper concern about how we view ourselves, our intelligence, and the ethical considerations of living alongside capable and autonomous beings of our own creation.

Unpacking the Issue: Domestication vs. Anthropomorphism

Given our long co-existence with domesticated animals, we coined the term domesticated AI. This conceptual space allows AI, like domesticated animals, to exist in a symbiotic relationship with humans—serving specific roles without the pretense of being human. This perspective prompts a reevaluation of our goals in AI development, urging us to consider utility, relations, and coexistence over imitation.

AI designed to be human becomes a mirror that reflects our own insecurities, aspirations, and ethical quandaries. The push towards human-like AI challenges our understanding of intelligence, agency, and morality. It forces us to confront the limitations of our empathy and the depth of our stewardship. Are we seeking to create tools or colleagues? Servants, companions or idealised new beings?

Call to Action: Ethical Frameworks and Inclusive Dialogues

We are rushing headlong into an interesting future. But the path forward demands more than technological ingenuity—it requires ethical courage and philosophical clarity. Developing an ethical framework for AI

  • that respects both human and machine agency and
  • that fosters dialogue about the future we wish to build,

becomes imperative. Are we becoming better humans by developing AI? Domestication of AI poses questions about ourselves—the answers to which we might not like.