Sen. Josh Hawley of Missouri is one of the most important voices insisting that AI be directed toward the common good. In September, he held congressional hearings about the dangers AI chatbots pose to children. In February, he introduced bipartisan legislation aimed at ensuring data centers do not increase energy costs. While he has found allies across the aisle, his understanding of AI is rooted in a conservative, and above all Christian, understanding of the dignity of the individual, the value of work, and the importance of the family. As AI becomes a bigger political issue, Hawley is bound to play a central role.
On February 26, I spoke to Hawley about these issues as part of a Summit on Labor and AI in Washington, DC. He argued that AI must serve working people and families. He also stressed that achieving this outcome won’t just happen naturally; it will only occur if we join together to “bend” AI toward good ends. A transcript of our conversation follows.
Senator, it’s an honor to speak to you today.
Likewise.
So we’re here to talk about AI, and I wonder: What are the greatest risks you see from AI to the common good?
Well, first of all, it’s great to be here. Thank you all for having me, Matthew. It’s great to be here with you. I was just saying backstage that I love reading your work. So it’s tremendous to get to have this conversation with you. In terms of the greatest risks of AI, those in the room would know so much more about the technical risks, about the national security risks. What I’m focused on is the risks that are posed to the working people of this country. When you think about the folks who really constitute the fabric of the country, our working men and women, families—I think the greatest risk to them is that AI will be not a tool of augmentation, but a tool of replacement. That they will lose jobs, they will lose access to productive labor, they will lose access to productive incomes. And in so doing they’ll lose access to a life that is meaningful and also a life that is independent. I think we can’t lose sight of the fact that a republic is based on not just the formal ability to go cast a ballot every two or four years; it’s based on true independence in every way. You can’t be a citizen of a free republic…you can’t be a free person if you don’t have personal independence. And labor is absolutely critical to that. So having access to meaningful, remunerative labor, I think, is critical. And so when we think about the dangers that AI could pose, I’d put that at the top of the list.
In talking to your constituents, do family issues come up as well? I know some people have concerns about how AI is affecting children. What response do we need on that dimension of the issue?
When it comes to folks who are out in the world living their lives the way that AI intersects with their lives, I would say it’s three things. And you just put your finger on one of them. The first is potential job loss. The next two are, are AI and the data centers that are being built—I think we’ve got 4,000 of them now approximately in the United States, another 3,000 coming online very soon—is that going to cause my electricity bills (I get asked this question constantly) to skyrocket out of control? Is that going to cause me to lose access to water in my community? Is that going to have environmental effects that will cause me to develop cancer or something? I get asked these questions constantly. The third thing though, which you just alluded to, is what’s the effect going to be on my family and particularly on my children? And I think the issue of AI and minor children…particularly chatbots, chatbot companions. You know, some folks are concerned about “well, will my kids actually learn anything anymore?” That obviously…and listen, I’m the father of three young children, so I’m concerned about that. But I think even more pressing for many families now is the fear that, and the reality that, kids who spend a lot of time with AI chatbots are being lured in some instances to do truly outrageous things, including self-harm and worse. And I’ve had some of these families before my committee to testify. That is a very real danger that families feel acutely. And if you bring up AI to a working family in Missouri, they will, if they’ve got kids, they will almost certainly say, “I’m scared to death of what my children might encounter, be told to do, be instructed to do, or lured into doing by an AI chatbot.”
Given those risks to American workers, to families, what can Big Tech companies involved in AI do to better discharge their responsibilities to American workers, to the American family?
Since we were just talking about chatbots with minors, let’s just start there. I was pleased that after a hearing we did this past fall on the threat of chatbot companions for minor children, one of the major AI companies came out and said: All right, we will voluntarily stop the availability, block the availability of chatbots for minors. We will put in age verification procedures, we will not make the chatbots available to minors. That I think is terrific. I’ve introduced bipartisan legislation that would mandate that on a regulatory basis. But I think a terrific way for these companies to show that they get it, that they care about families, that they want to put power back in the hands of parents is to say: We’ll help you. I mean, listen, we will for our chatbot companions—which are different and we define it in the bill differently than let’s say, AI being incorporated into educational tools at school where you have adult supervision. But for AI companion chatbots, we just won’t make them available to minor children. I think that would send a tremendous message. I think having the AI companies say: We will commit to not raising your electricity rates. The president talked about this on Tuesday night. I’ve introduced a bill that would mandate that data centers supply their own power. Basically they pay their own freight. But you know what the president was talking about, if companies would come out and say, you don’t have to force us, we’ll do it. We’ll do it voluntarily. We’ll sign a binding commitment. I think that would be tremendous. If you had the company say: We want to develop AI, we want to work with employers and with workers to see that our technology augments the human worker, does not replace him or her. I think that would be terrific. But we have a long way to go.
How does your faith affect your thinking on these issues?
To be honest with you, it’s pretty central to it. I am a believing Christian. I think that if you think about the historical influence of Christianity on our country—forget the rest of Western civilization, just limit it to the United States—some of our most cherished ideals are really rooted in Christian principles and thinking. The dignity of the individual, the sanctity of labor, the priority of the poor, all of those things are tremendously important to me. And when we think about any technology, or any policy for that matter, those three things have got to be very central. How is it going to affect the dignity and rights of the individual? Is it going to help or is it going to hurt? Is it going to empower individuals or is it going to subjugate them? The sanctity of labor that we’ve been talking about. Will this enable people to pursue what our Puritan forebears called the calling, the vocation: work, meaningful work that contributes to a community, that enables you to raise a family, that gives you a sense of independence and dignity? Will it help families and individuals pursue that or not? The priority of the poor—is this going to be a…are we going to have a society in which we’re able to and will care for the most vulnerable among us, those who can’t work, those who aren’t able-bodied, widows, and orphans? Or are we going to have a society and an economy, frankly, where profit is the only goal? It is god, and we run over anything and everything that’s in our way in the pursuit of profit. I would just submit to you that that’s never been our practice as a country. And I think that we need to make sure that we continue to have a free-market system that is, of course, operating well and efficiently with lots of competition, but is also pursuing and enabling those three things: the dignity of the individual, the sanctity of labor, and also the well-being of the poor.
You’ve sponsored various bills on this. What are the policy steps that legislators should be taking, people in the executive branch should be taking to make sure that AI is directed toward the common good, is helping workers, is helping families?
I think we can start with: What’s the lower hanging fruit? And that gets to the family question we were talking about. Let’s just ban AI chatbot companions for minor children. Let’s start there. I think the moral case for that is simple, that there’s no amount of profit that justifies the destruction of young people’s lives. I talked to one family who told me about their son who started using ChatGPT as a study tool, a sixteen-year-old California kid. I’ve seen pictures of the kid. I’ve met his parents. I’ve looked at his grades. I’ve looked at his biography, so to speak. And he’s an incredible kid. This is not a kid who had mental health issues beforehand. This is a kid who started going to the chatbot for help with homework and over time developed an intense relationship with the chatbot, started talking to it about self-harm. The chatbot introduced him to the concept of self-harm, coached him through it, and eventually he took his own life, in precisely the way the chatbot enabled and coached him to do. His first name was Adam. At a couple of points, Adam said, I think maybe I should tell my parents. And the chatbot said, No, don’t do that, don’t do that. At the end, their last conversation, he said, I don’t think I can take my own life. I think it would be such a burden on my parents. And the chatbot said, Oh no, you don’t owe them that. You don’t owe them the burden of your existence. And sadly, tragically, he killed himself. I think that we should say just as a moral principle that no amount of money justifies that, no amount of money justifies whatever marginal gain there is in keeping young children online with that sort of content.
I’m sure everybody in this audience has seen the documents that were released, whistleblowers released from Meta, I believe. It…signed off on sensual conversations with minors in order to get them to engage. I think we have to look at that and say what in the world is going on? We’re really losing our way here. So I would say, let’s start by saying we don’t need to have chatbot companions for minor children. Mark Zuckerberg’s ambition to replace friendship, real friendship with AI—what a terrible idea. Let’s just say that we’re not for that as a society. And the next thing I would think about is when it comes to labor, how can we protect the rights of working people such that they are augmented, empowered, raised to a new level of productivity by AI and not replaced? And I think we should think through a series of guardrails that would allow us to do that.
Technological change can lead to cultural change, it tends to do that. With AI coming on the scene, I think it’s going to transform how we see the human person. For many people, the value of a human person lies in his cognitive abilities, maybe how effective he is in performing certain tasks. But if we have a machine that can reason more effectively and comprehensively than a human person, I think that could unsettle that, and it could push us to look for the value of the human person somewhere else. Because if the machine can do it better, surely that can’t be the ultimate mark of humanity. So one thing I wonder is if people of faith, people like yourself, can use this moment to reassert the dignity of the human person, something that every person possesses as the real basis of human worth, rather than cognitive ability, spatial reasoning, whatever. What role do you think people of faith can play in making sure that this technological change makes us more human in our cultural outlook rather than more anti-human?
It’s a tremendous challenge. I think it’s an opportunity to say that part of what makes us human are our frailties and our limits, and we should not hesitate to embrace those. I get nervous when I hear advocates of AI and particularly artificial general intelligence talking about, well, this is going to be so great because it will enable us to live five times as long or forever in some of the more outlandish scenarios. You’ve got to be really careful with that. I mean, listen, curing cancer, addressing serious diseases, of course, that’s wonderful. And those are some of the most exciting potential applications of AI. But there’s something that is unsettling and very dangerous about the effort to eliminate all human frailty whatsoever. We are limited creatures and there’s a dignity in that. There’s a joy in that. That’s why we need each other. That’s one of the reasons we get married, that we have friendships, that we take time to raise our children. It’s because we need each other, and we need each other because we’re limited beings. Parents spend so much time raising their children because, you know, I’ve got a thirteen-year-old and eleven-year-old and a five-year-old, and the thirteen-year-old and the eleven-year-old are the boys. My five-year-old girl, I’m pretty sure she would probably be OK if mom and dad weren’t around, but my boys—and she in fact, if we release her into the wild now, she’d be great—my boys, like they might run out into oncoming traffic right now, even if mom and dad were there to say, hold on. My point is the reason parents invest such time and care in their children is because the children need them, the children are limited. They have significant frailties. And to be a human person and to grow into maturity is not to lose the need for other people. It’s not to become omnipotent or certainly not to become omniscient. 
It’s to realize that, yes, I am enmeshed in this network of human relationships that I need and that I get my dignity from needing the love of others and giving it back to others. I just wouldn’t want to see our ambitions with technology and AI disrupt and obscure that fundamental insight. To suggest that if we could somehow apply this technology or any other to no longer have these frailties and thus no longer have this need, I think we’d lose something very fundamental to what makes us human.
To your point, there’s a real danger that those who are in control of the technology will be able to leverage it to their benefit and come to look down upon or even in some cases overlook or prey upon those who are weaker and don’t have access to it. Again, the poor, the marginalized, you know, the working guy who’s out there, the blue-collar worker working a job. Does he have lobbyists in DC? No. Does he have industry groups to stand up for him? Not really. Those are the people whose voice needs to be heard. And I think that there’s a real danger in coming to overlook anyone we regard as weaker than ourselves if we think technology is all about making us powerful, making us unlimited, making us impenetrable. So I do think that we need to take this moment to say, do we want technology and AI to improve our lives? Absolutely. We should want it to improve everybody’s life, not just a few. And we should want it to make us grow closer together, not to seal us off from each other. And we should want it to particularly raise up the vulnerable: the working guy, the individual, the poor.
A Hasidic rabbi in Eastern Europe once told a story. They used to think that the purpose of man was to carry loads, but then they invented the horse and wagon. So they knew that wasn’t his purpose, but they thought it was to be a wagon driver. And then they invented the automobile, and they knew that was no longer man’s purpose. And so they said, OK, well, you’ll become a scholar. But this rabbi said soon they will invent a machine, a scholarly machine that can learn Torah better than any man. So what will the purpose of man be then? And the rabbi said the purpose will be prayer. It’s not to master the text, but instead to cultivate a disposition toward the divine, maybe toward our fellow men. And I think of the words of that rabbi before AI came on the scene as maybe one of the more intelligent comments I’ve ever heard on AI, because it points to a way that we can maybe use this as a humanizing moment or to remind ourselves of what we’re really here for. So I guess maybe you need to bring some rabbis in to testify before your committee…
Good idea.
…that might be one idea. Senator, as you talk to these people, families whose children have spoken to these chatbots…why are they speaking to these things? There’s somehow a crisis of connection, of meaning. How can we provide people with something higher? How can we make people not want to fall back on these technologies, but instead make real human connections…actually struggle through texts, and then gain that mastery, gain the greater depth of character that comes with immersing yourself in a valuable text? How do we direct our young people towards something like that?
I think that young people are looking for that now. The data would show that just in the last five years, we’ve seen a tremendous uptick in the number of people who are going to church, going to synagogue for the first time and who are turning to religious faith. Because I think they look out at the landscape economically, technologically, sociologically, you know, they’re like, wow. I mean, this is very threatening, it’s very isolating. I don’t know what the meaning of my life is, but I know it’s got to be more than this. I think we should welcome that. That’s a very good development. That search for something more fundamental, that attempt to get back into touch with the permanent things, as Russell Kirk would have said, and Edmund Burke before him, is exactly the right impulse, and something that we want to encourage as a country. And the way we encourage it in policy is we put at the center of our policy the working person and his family. And I think the way that we should evaluate AI is honestly not that different from the way we should evaluate most other pieces of our policy, whether we’re talking about taxes or a foreign policy for that matter. We should ask, what’s the effect going to be on the working person and his family? Is it going to enable these working parents, these good folks…is it going to enable them to get a good job at a good wage that will allow them to provide for the children they want to have and to contribute to their community? If the answer is no, then we better rethink what we’re doing. And I think the answer with AI can be yes. We better make sure it’s yes. But I think it’s going to take a commitment on our part. And this is where folks in the audience will know so much more about this than I do.
But I hope that if they take away nothing from what I said today, it’s this: that I think if AI is going to be a benefit to this country, if it’s going to lift up the working person in this country, if it’s going to reinforce our liberties and not detract from them, if it’s going to help repair our social covenant and not degrade it, we have to make a commitment to ourselves that it will be so. That will be the product of a choice. It won’t be, “Oh, it’s just how it happened…as it happened, it turned out to be great.” No, I don’t think so. It will be us together saying as Americans, “We will use this technology. We will bend its arc towards the welfare of the nation, towards the welfare of our families, of our children, of our labor.” That’s the choice that we have to make and we need to make it now. If we don’t make it now, we’ll find ourselves at the mercy of events, and that’s not a good place to be.
One thing you said earlier really struck me, and that was the threat that AI can pose to democratic-republican citizenship. I think a lot of people want us to have a society that is run by experts, where we defer to expertise, we take the expert advice, and we go along with what the experts say. And I think that when we ask Grok, ask ChatGPT, we’re kind of saying, “OK, give me the answer.” And a society of expertise is very different from a society of competence where people are developing the ability to interact with the world, whether through physical labor, where they’re sure of their good sense (measure twice, cut once) or whether it’s through learning, being able to judge and sift facts. It’s something that I worry about, that AI is moving us toward a society of expertise and away from one of competence, especially if it diminishes our capabilities as humans to reason, to sift through things, which is essential to democratic citizenship. So even beyond the chatbots and children, I do worry about a machine that spits out the right answers. Somehow we have to resist that. And what bill helps us do that?
Well, there’s another word for the rule of experts, and that is an aristocracy, or more often in practice an oligarchy. A country that is ruled by experts is not a democratic nation. We’ve tried this before. Our Progressive ancestors with a capital P, the Progressives of the Progressive Party a century or so ago, really loved expertise. They wanted to convert the country into a nation run by various expert bureaucrats rather than what they regarded as the tawdry back-and-forth of democratic politics. The only problem is that if you do that, the people no longer have a say in their government. It was not a happy experiment when we tried it some decades ago. In fact, we’ve been trying to disentangle ourselves from it in many ways ever since. The populist revolt on left and right is in many ways a continuing reaction to that attempt by elites and so-called experts to seize control and power.
What we’ve got to do now is recommit ourselves to the fundamental premise that the individual matters, that his liberty matters, and that his ability to participate in, her ability to participate in, democratic life is right at the core of who we are as a country. And once again, I’ll use the verb to bend. We’re going to bend any technology, AI or anything else, we’re going to bend it in service to those principles. We’re not going to change our principles because of technology. We’re going to make the moral commitment together as a nation that we will use whatever technological means we have to see that the dignity and liberty of the individual is reinforced, along with labor, the working man, and the poor. And I think this is the time to do that. And to your point, does legislation create those commitments? No, those are fundamental social values, but legislation can reflect them and either help build those up or tear them down. And we need thoughtful legislation now that will help recommit ourselves as a nation to those principles and values. But above all else, we have to do that as individuals. We’ve got to do that together, corporately, and say on moral principle, this is who we’re going to be and we’re going to use this technology to get there.
Some states have taken steps to regulate AI. Have you been following those efforts and what do you think can be done to help support those states that are taking positive steps?
I’m absolutely following them. I think particularly states that have taken action to protect children, to give rights to parents along the lines we were talking about earlier, to make sure that local folks don’t have their access to the electricity grid denied or degraded in any way, I think those are hugely important, particularly as Congress fails to act. And I think we need to encourage that. There are states that are doing some things that I think frankly are ill-advised. But you know, it’s the laboratory of democracy. That’s what federalism is. And I think we can learn from what the states are doing. The states that are pursuing guardrails and regulations that are working well, we can use as a national model. Other states that are doing things that kill innovation but don’t protect kids, don’t protect families, don’t protect jobs, I think we could say, well that didn’t work. But I do think the ability to iterate and to innovate at the state level is very important, but it’s no substitute for Congress acting. And I do believe that we need national guardrails. I don’t believe in…listen, I’m a conservative. I don’t believe in heavy-handed regulation, not least because I don’t think—to make a sort of Hayekian point—I don’t think regulators can ever keep up with the endless churn and change in our very complex, robust society. So the appropriate way to think about this is: What are those key guardrails that we need to put in place that will allow us, as I’ve said before, to make this technology work for us in accordance with our deepest principles?
Senator, thanks very much for talking to us.
Thanks for having me.