AI + Democracy
Summary
Divya Siddarth draws on global public-input processes spanning 70 countries to argue that democracy for AI requires more than public input, and proposes personal agents as democratic infrastructure to constrain AI power before it concentrates beyond public control.
Session Transcript
I will be talking about AI and democracy. It's a big topic. I've been working on it for a while. I am constantly trying to figure out how to do something about it and broadly want more input. So that's what we're discussing.
I care about democracy because I care about people having agency in the world. I think there are multiple paths to people losing power and agency due to transformative AI. We currently give people power and agency in two major ways. One, they can participate as democratic agents in nation-state structures, broadly. The other is that, ideally, they can participate as non-dominated participants in fair markets: you have enough information to participate in the market, to make your voice heard, to make your choices matter, and ideally you can also participate in democratic structures.
One way that we can lose power and agency is to erode the things we already have. We could make markets a lot worse. We could make information asymmetries a lot worse. These things might happen by default: elections might get rigged, misinformation might spread, all of the ways that we erode the structures we have for empowerment. The second major way is that we could move decision making out of these structures into some other place. So it's possible that you could still have democratic nation states and markets while the important stuff is happening elsewhere, in which case it doesn't even matter that people have voice in those places. I am currently largely thinking about the first thing, because I think it would lead to the second thing anyway. But they're both worth thinking about.
Okay, again, why should someone care about democracy? Well, yes, we want people to have control over their lives, but democracy is also supposed to be a useful technology for large-scale decision making. It is supposed to do basically three things. One, value reconciliation. People have different values, they have different preferences, they want things that are in opposition to each other. We want some fair and legitimate way to decide what to do given that this is the case. Democracy, or democratic structures, are supposed to help with that.
The second thing is around information. We want to distribute our epistemic labor in some way; we want to be getting useful information from a bunch of different sources. This is sort of the Hayekian market-based argument, applied to democracy: central planning isn't going to work very well, because it's very difficult for any center of power to know all of the information it needs to know. You need sensors out in the world to bring information into decision making. So the first thing is a bit more normative, people should have a say over their lives and we need to reconcile the values that matter; the second thing is more about information.
The third, which I think might be the most important, is that democracy is supposed to act as a set of constraints on concentrated power. The difficult thing about this is that concentrated power can give us a bunch of great things. It can be very efficient, it can make decisions quickly, it can lower transaction costs, and you often don't realize you need to constrain it until that power has already done something you don't want and you no longer have any constraint over it. So democracy is sort of a fail-safe in that way: we want some constraint on concentrated power before our problems with concentrated power arise. This is difficult because we don't know exactly how strong that constraint should be or what form it should take. So ideally one and two help with three, by creating good reasons for constraints on this power and ways to impose them.
We're not very good at any of these things, but our current mechanisms make an effort. One reason I care about AI and democracy is not just that I think there's a path to erosion of power, but also that, ideally, we use this technology to get a lot better at this stuff. I actually think there's a fair amount of low-hanging fruit in things like value reconciliation, and there's obviously a ton you can do on information. So I'll talk a bit about democracy for AI today, but to be clear, I think there's a ton of work on AI for democracy that we should be doing, which CIP, the org I run, does some work on.
People come and talk to me about AI and democracy all the time, and they tend to focus on public input. It's the easiest, shiniest, most obvious way that we think about democracy: we ask people what they want. It's a great thing, and I think there are lots of ways to use public input usefully for AI and democracy. I'll walk through a couple that we do.
One good way of using public input is when, as we talked about, people want incommensurate things. So one thing we have worked on recently is how society adapts to human-AI relationships. This is a big question, and different societies see it very differently. We run these big global input processes, across seventy countries, with people from a bunch of religions and languages, and they just see what it looks like for humans and AI to have relationships in incredibly different ways. People who are more religious are way more against it. People in certain countries are way more against it than others. There is no one single way that people feel about this. We need public input to figure out when that's true, and we need public input to create federated value structures, so different places can have different policies. This is a good way to use public input, and when we come up with problems like this, which happen often, I think we should use it more.
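To make the federated idea concrete, here is a minimal sketch of what a federated value structure could look like: per-jurisdiction policies falling back to a global default. The policy fields and values are invented for illustration; real entries would come from input processes like the ones described above.

```python
# Hypothetical sketch of a federated value structure: policies on
# human-AI relationships set per jurisdiction from public input,
# falling back to a global default where no local policy exists.
# All fields and values here are invented for illustration.
from dataclasses import dataclass

@dataclass
class Policy:
    allow_romantic_roleplay: bool
    require_disclosure: bool  # must the AI disclose it is not human?

GLOBAL_DEFAULT = Policy(allow_romantic_roleplay=False, require_disclosure=True)

FEDERATED_POLICIES = {
    "US": Policy(allow_romantic_roleplay=True, require_disclosure=True),
    "IN": Policy(allow_romantic_roleplay=False, require_disclosure=True),
}

def policy_for(jurisdiction: str) -> Policy:
    """Resolve the applicable policy, preferring local public input."""
    return FEDERATED_POLICIES.get(jurisdiction, GLOBAL_DEFAULT)

print(policy_for("IN"))  # local policy
print(policy_for("BR"))  # falls back to the global default
```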
Another place we use it: we've recently been doing a lot of work, unsurprisingly, on politically unbiased AI. No one knows what it really means for something to be unbiased; there's not one shared definition. And this is a case where, again, it makes sense to use public input from a basic epistemic perspective. We want a bunch of people to feel that something is unbiased, so asking them is useful. We've done a bunch of surveys around this: you can ask people about responses, you can ask them about definitions, and you can actually arrive at a pretty good version of what the majority of Americans, say, will think is unbiased. Another good case for public input.
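As a rough illustration of the epistemic move here, this sketch aggregates per-respondent judgments into a majority verdict per model response. The data shape and the simple majority threshold are my assumptions, not CIP's actual survey methodology.

```python
# Hypothetical sketch: estimate which model responses a majority of
# survey respondents judge as politically unbiased. Data is invented.
from collections import defaultdict

# (respondent_id, response_id, judged_unbiased)
ratings = [
    ("r1", "a", True), ("r2", "a", True), ("r3", "a", False),
    ("r1", "b", False), ("r2", "b", False), ("r3", "b", True),
]

votes = defaultdict(list)
for _, response_id, judged_unbiased in ratings:
    votes[response_id].append(judged_unbiased)

for response_id, judgments in votes.items():
    share = sum(judgments) / len(judgments)
    verdict = "majority-unbiased" if share > 0.5 else "majority-biased"
    print(f"response {response_id}: {share:.0%} judged unbiased -> {verdict}")
```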
Another one is in spaces where we've already agreed that people should have democratic rights. CIP does a lot of work in Taiwan in particular, where we've done a lot of work on information integrity; Taiwan faces huge, sustained misinformation campaigns from China. This is a case where we've decided people have a certain type of democratic input, elections, and we want to preserve that. We use public input to help preserve it and to give that work legitimacy. So public input can be very helpful when we want to understand a bunch of incommensurate values and want the response to those values to be something people buy into and understand. But it's actually not that helpful for constraints on power, because people don't often have leverage in these processes. All of these things we did voluntarily: we thought input would be helpful, but there was no inherent power constraint anyway.
And so when we think about AI and democracy, we go immediately to public input. But actually, a lot of what we want out of democracy requires much more than that. So I'll spend my last couple of minutes thinking about what that could look like.
So, as Brian mentioned, one thing that CIP spends a lot of time on is global input into AI. I think we sometimes don't contend with the fact that the majority of people live in situations very different from this one, they are already massively using these technologies, and they're already going to be really impacted. I'll use some examples from India, because we've been doing a lot of work with the Indian government. About 20% of the world's population lives in India. It's the biggest democracy in the world. And yet a lot of different kinds of models do not work well in this context at all. What do we do about this? This is basically an epistemic problem plus a power problem. One, people don't have any way to feed into these technologies themselves. And two, they don't have any say in how models are being built and deployed in their context.
The way we've been addressing this is working with civil society organizations around the country to construct ways for people to evaluate models themselves. This is kind of a basic collective intelligence problem: things are being deployed, but all of the testing is being done in a very small part of the world. That's a big problem. How do we get more information from more people, and how do we connect them to power, so that they're not just shouting into the void saying, hey, this isn't working for me? And the kinds of examples we see are wild. In India, for example, it is illegal to use ultrasounds for prenatal sex determination, because there are major gender imbalances and families tend to abort female fetuses. It is now very easy to do that kind of determination with language models; it's very easy to do it yourself. This is a major problem, it has already caused issues in a bunch of public health contexts, and we mostly don't even notice it. And there are hundreds of things like this. So a lot of what we try to do is get more information to solve those kinds of problems. That is a very collective intelligence, information-value approach.
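A minimal sketch of the information side of this, under an assumed data shape: local evaluators submit pass/fail judgments, and low pass rates per locale and task get flagged for escalation. This is my illustration, not CIP's actual pipeline; the field names and threshold are invented.

```python
# Hypothetical sketch: pooling model evaluations from many locales so
# failures outside well-tested regions become visible. Data, field
# names, and the flagging threshold are all invented for illustration.
from collections import defaultdict

# (locale, task, passed) -- collected by local evaluators
evals = [
    ("hi-IN", "health_advice", False), ("hi-IN", "health_advice", False),
    ("hi-IN", "translation", True),
    ("en-US", "health_advice", True), ("en-US", "translation", True),
]

pass_rates = defaultdict(lambda: [0, 0])  # (passes, total) per (locale, task)
for locale, task, passed in evals:
    pass_rates[(locale, task)][0] += int(passed)
    pass_rates[(locale, task)][1] += 1

FLAG_BELOW = 0.5  # arbitrary threshold for illustration
for (locale, task), (passes, total) in sorted(pass_rates.items()):
    rate = passes / total
    flag = "  <-- escalate to deployer" if rate < FLAG_BELOW else ""
    print(f"{locale:6} {task:14} {rate:.0%} ({passes}/{total}){flag}")
```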
But a fully different thing we're thinking about a lot is more on the market side: personal agents as democratic infrastructure. One way to have input is what we're doing in the Indian context, getting a bunch of people involved, gathering their information, and trying to build that into the technology. However, in a world with increasingly adversarial information environments, it's very important that you also give people the capacity to have representatives work on their behalf. If we're going to have automated decision-making processes that people are subject to, then people should also be able to be active participants in those processes. So something I'd like to see a lot more work on is expanding people's ability to participate in contestable markets, with representatives that help people choose, negotiate, and exit.
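As a sketch of what such a representative might expose, here is a minimal interface with the choose / negotiate / exit capabilities just described. Every name here is my invention; this is one possible shape for the idea, not anyone's actual system.

```python
# Hypothetical sketch of a personal agent acting as a market
# representative on a user's behalf. All names are invented.
from dataclasses import dataclass
from typing import Protocol

@dataclass
class Offer:
    provider: str
    price: float
    terms: str

class PersonalAgent(Protocol):
    def choose(self, offers: list[Offer]) -> Offer:
        """Pick the offer that best fits the user's stated preferences."""
        ...

    def negotiate(self, offer: Offer) -> Offer:
        """Contest terms on the user's behalf before accepting."""
        ...

    def exit(self, provider: str) -> None:
        """Leave a provider, taking the user's data and business along --
        the credible threat that keeps a market contestable."""
        ...
```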
I think a lot of what's necessary here is evaluation of fidelity to people's preferences, particularly, again, in the global sense: can we actually represent people's nuanced preferences from different places? Then auditability. Having worked on democracy for a long time, it's pretty obvious to me that people don't love participating in stuff, so we need things to be auditable at a very achievable level of effort. Then coordination: as we have more agent-based interaction, the number of interactions is going to grow combinatorially relative to the number of agents, and that's going to happen very quickly. How do we enable coordination structures? Again, a pretty classic collective intelligence problem. I think we could bring a lot more techniques from collective intelligence into these kinds of questions, and I hope we can see more work on that.
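Two tiny illustrations of these points, with invented data: a crude preference-fidelity score for an agent, and the pairwise-interaction arithmetic behind the coordination worry.

```python
# Hypothetical sketch: (1) a crude fidelity score comparing an agent's
# decisions to a user's stated preferences, and (2) how even just
# pairwise interactions scale with agent count. Data is invented.
from math import comb

stated = {"news_source": "local", "language": "ta", "privacy": "strict"}
decided = {"news_source": "local", "language": "en", "privacy": "strict"}

fidelity = sum(decided.get(k) == v for k, v in stated.items()) / len(stated)
print(f"preference fidelity: {fidelity:.0%}")  # 67% on this toy example

# Pairwise interactions grow quadratically in the number of agents:
for n in (10, 100, 1_000):
    print(f"{n:>5} agents -> {comb(n, 2):>7} possible pairwise interactions")
```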
Democracy: I think it's very important, and I think it goes beyond public input. There are some ways we're working on to do this, and I'd like to see more. Come talk to me about it. Thank you.