Ethical considerations when building artificial consciousnesses
Things to keep in mind when making minds
If you take it as a given that the QIC theory, or something much like it, offers a solid model of consciousness, and you believe we can overcome the technical challenges of building a quantum computing platform that feeds imaginary realities into a deep-learning-style cognition system, then you should also expect that building a working consciousness, even just in a simulation, will involve some ethical considerations.
You might expect that determining the moral status of a simulated consciousness would be a major challenge, but I submit that we already have remarkably few qualms about doing truly awful things to literally trillions of actual consciousnesses. I’m not at all convinced that we would accord a machine consciousness any rights at all unless it could convincingly argue for them itself, and that is a question of cognition, not consciousness.
The ethics of creating or terminating a simulated consciousness vary with the limits of its cognition. Initially, while experimental entities remain below a certain cognitive threshold, we would care less about them than we care about insects hitting our windscreens. But there is…