Taming the Beast: Building Self-regulating AI

Artificial Intelligence is all the rage these days, at least among the tech Illuminati. The attitude among the general population, on the other hand, seems to range from indifference to fear to outright antagonism. One of the big concerns many people share is the worry that AI systems will become too powerful and grow beyond the ability of their creators to manage and control them. As an example, see this TED Talk in which philosopher Sam Harris warns his audience of the impending ethical and practical dangers of building too-powerful AI systems.

(Un)Limited Resources

I was doing some reading recently and came across an interesting idea that sparked some thoughts about how AI systems could be built to be self-regulating. The passage comes from Daniel Dennett’s From Bacteria to Bach and Back (I’m working on an analysis of this book which I hope to publish soon). In discussing the evolution of minds, Dennett talks about the difference between biological systems and artificial systems like robots and AI. He briefly speculates that one reason people have the intuition that artificial systems may grow too powerful, and thus turn malevolent, is that they are not made to ‘care’[1] about what they’re doing. They are generally given whatever resources (in terms of computing power and energy) they need to do their work; they have not been designed to ‘worry’ about or manage how they spend those resources, nor can they be denied resources in any meaningful sense. In other words, we keep feeding the beast with no upper limit on how much or how often. And with quantum computing on the horizon, that trend is only going to accelerate.

For example, an AI system that has a lot of work to do is generally given as much electrical energy as it needs, and if that energy is throttled, the system is typically built to optimize around having more resources, not less. A translator application works best with lots of computing power; it can work with less, but it does its job much better on best-of-breed hardware, so engineers tend to prefer better specs. These systems aren’t made to ‘care’ about how much power they use, so they use whatever is available. Of course there are very real physical and financial limits to what these systems can use and are being given, but those barriers are being broken down every day as new, more powerful hardware comes online and tech companies invest billions in developing these systems.

Humans, on the other hand, tend to self-regulate because we know we have limited resources (we can’t just eat with impunity; if our resources don’t limit us, our bodies do), and we know we’ll have to sleep at some point, taking forced breaks from whatever benevolent or malevolent work we’re doing. Psychologically, too, our minds need a break. We take vacations, parental leave, and time to manage grief or illness because mentally we need to step away from our work and give our minds time to focus on other things. And while we can network with others, at some point we’re limited by our own ‘hardware,’ which requires travel, moving our physical selves from location to location to get work done. All of these factors force us to choose how much we’ll consume and what we’ll spend energy on, and that, in a very real sense, limits at least our individual power.

Building Self-regulating AI Systems

I think this is an interesting insight, and it may have some implications for how we might build ‘ethical’ AI systems: systems that are self-regulating and thus self-restrained. Building into the engineering of these systems a time to ‘sleep,’ a limit on how much compute power is available to a given set of computational resources, a limit on how much energy a system can consume in a given period, and a limit on how collaboratively these systems can work as a network would not only be environmentally responsible but would also provide a kind of self-limiting mechanism that prevents AI systems from becoming too powerful. Of course, the system would have to be kept ‘ignorant’ that this is part of its engineering and would have to be prevented from changing it. Imagine the enormous power a human would wield if he or she learned that, by taking a simple drug, they could avoid sleep entirely and just eat more of specific nutrients to keep going indefinitely without losing energy. History shows us what can happen when an individual (malevolent or benevolent) discovers that they can extend their power by building a network of other humans who help them accomplish their agenda, essentially enabling them to wield that power limitlessly.
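To make this concrete, here is a minimal sketch in Python of what such a built-in budget-and-sleep mechanism might look like. Everything here (the ResourceGovernor class, the charge and do_inference functions, and the budget numbers) is a hypothetical illustration, not an existing API; the point is only that the work loop draws down a fixed budget it cannot see or change, and is forced to ‘rest’ when the budget runs out.

```python
import time


class ResourceGovernor:
    """Hypothetical self-limiting wrapper: tracks a fixed energy/compute
    budget per waking cycle and forces a mandatory 'sleep' once it is spent."""

    def __init__(self, budget_units: float, sleep_seconds: float):
        self.budget_units = budget_units    # total units allowed per waking cycle
        self.sleep_seconds = sleep_seconds  # forced rest when the budget is exhausted
        self.spent = 0.0

    def charge(self, cost: float) -> None:
        """Record the cost of a unit of work; sleep and reset when the budget runs out."""
        self.spent += cost
        if self.spent >= self.budget_units:
            time.sleep(self.sleep_seconds)  # enforced, non-negotiable rest
            self.spent = 0.0                # wake up with a fresh budget


def do_inference(governor: ResourceGovernor, task: str) -> str:
    """The 'work' itself never inspects or adjusts the budget; it only pays the toll."""
    governor.charge(cost=1.0)
    return f"processed: {task}"


if __name__ == "__main__":
    governor = ResourceGovernor(budget_units=100, sleep_seconds=5)
    for i in range(250):
        do_inference(governor, f"task-{i}")  # system pauses after every 100 units of work
```

The design choice worth noticing is that the budget logic lives entirely outside the task function: the system doing the work is kept ‘ignorant’ of its own limits and has no hook by which to change them.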

At the very least, this could be an interesting thought experiment or research project. The biggest challenge to this approach isn’t an engineering one. The problem is that AI technology is the subject of a very real arms race among the big tech firms, each trying to outdo the others by building faster and more powerful systems. To engineer these systems to be self-regulating, companies would have to agree to (or be forced to adhere to) something akin to a nuclear disarmament or climate change accord and build in self-regulation across the industry. Any business or government entity that didn’t adhere to these standards would have a distinct advantage over the others and ruin the scheme.

Here’s the passage that gave rise to these ideas:

Turing, Shannon, and von Neumann, tackling a spectacularly difficult and novel engineering project, intelligently designed computers to keep needs and job performance almost entirely independent. Down in the hardware, the electric power is doled out evenhandedly and abundantly; no circuit risks starving. At the software level, a benevolent scheduler doles out machine cycles to whatever process has highest priority, and although there may be a bidding mechanism of one sort or another that determines which processes get priority, this is an orderly queue, not a struggle for life. (It is a dim appreciation of this fact that perhaps underlies the common folk intuition that a computer could never “care” about anything. Not because it is made out of the wrong materials–why should silicon be any less suitable a substrate for caring than organic molecules?–but because its internal economy has no built-in risks or opportunities, so its parts don’t have to care.)

Dennett, Daniel C. From Bacteria to Bach and Back: The Evolution of Minds (Kindle Locations 2702–2709). W. W. Norton & Company. Kindle Edition.


  1. I’m putting all the anthropomorphic terms in scare quotes to avoid the implication that current AI systems have the emotional and psychological capacities of humans. While Dennett argues that computational cognition and human cognition may be on a spectrum rather than having clear “essentialist” lines between them, I’m not committing to that idea either way at this point.
