Future Facing: Maxwell Scholars Respond to the Rapid Rise of AI and Autonomous Systems
June 8, 2023
Amid the rapid rise of artificial intelligence and autonomous systems, Maxwell scholars are gathering critical data, designing policy and informing future leaders.
In early 2022, the City of Syracuse’s Surveillance Technology Working Group met to discuss a proposal from the police department to install street cameras that automatically scan license plates as an aid for investigating crime. While the technology had the potential to help detectives identify and apprehend suspects, it also raised real concerns about privacy and oversight. How would data be used and stored, and who would have access to it?
The Surveillance Technology Working Group—a mix of city employees and community members—was created by Syracuse Mayor Ben Walsh ’05 M.P.A. to address these kinds of policy questions and give the public a voice in the process. In the end, the group voted to approve the license plate scanners with important stipulations. The data can only be used for identifying vehicles and occupants that are part of an active criminal investigation or have been reported missing. It cannot be used for immigration enforcement, and it must be purged after a set time.
Johannes Himmelreich is one member of the Syracuse group, appointed by the mayor, with significant experience in weighing issues of technology and policy. An assistant professor of public administration and international affairs at the Maxwell School, Himmelreich’s research focuses on the ethics and governance of technologies such as self-driving cars, autonomous weapons and machine learning in the public sector.
“I think it’s really important work that the mayor has started,” says Himmelreich. “Sometimes, the work of the group is understanding: What is the technology? Is it a surveillance technology? What do we use it for, and what are the risks and trade-offs? This is a way of collaborative, participatory policymaking that has been very successful.”
As autonomous systems and artificial intelligence (AI) continue to advance, the need to understand these new technologies and ensure they are used safely and ethically is more acute than ever.
Himmelreich is among numerous scholars at Maxwell who are rising to this challenge—applying the tools of the social sciences to emerging technologies. He and colleagues across disciplines are conducting important research on everything from drones to robotics to generative AI tools like ChatGPT, helping produce data and shape policy that impacts the American public and beyond. Their scholarship also enriches the education of students, who will no doubt navigate these technologies in their future careers.
CREATING A HUB
Syracuse University has been ahead of the curve in focusing on the policy and social impact of emerging technologies, and much of that is centered on an institute housed in the Maxwell School.
About five years ago—amid increasing conversations about autonomous vehicles, robotics and AI—a number of academic leaders began laying the groundwork for a Universitywide initiative focused on the intersection of technology, policy and society. To gauge interest on and off campus, Jamie Winders, a professor of geography and the environment at the Maxwell School and Syracuse University’s associate provost for faculty affairs, met with a wide range of University scholars and outside experts.
“What became immediately clear was that not only did we have a critical mass of faculty interest across all schools and colleges, but also that we as a University had an opportunity to approach this area in ways that were different from what we were seeing elsewhere,” Winders recalls. “I spoke with about 100 industry leaders, advocates and policymakers, and when they talked about how they saw these fields developing, they kept pointing to the absence of work where technology meets policy, and on wider societal impacts and public perception.”
To address that gap, in 2019, the University launched the Autonomous Systems Policy Institute (ASPI). Mike Haynie, the University’s vice chancellor for strategic initiatives and innovation, was a driving force, along with Maxwell Dean David M. Van Slyke, who named Winders its founding director. “We had the opportunity to position ASPI as interdisciplinary at its core,” she says. “We thought of our existing interests in autonomous systems as three circles on a Venn diagram. We have faculty who are really interested in the technology and design aspects; we have folks interested in the policy, law and governance; and we have many interested in the societal impacts. From the beginning, ASPI sat at the center of that Venn diagram.”
The institute now has more than 60 affiliated faculty researchers, connecting scholars from the social sciences, humanities, computer science and engineering, information studies, law, and communications.
Among the Maxwell faculty who serve as senior research associates with ASPI are Johannes Himmelreich; geographer Jane Read, who teaches and does research on drone usage; Austin Zwick in policy studies, who studies smart cities and the social and economic transformation brought about by technological change; sociologist Aaron Benanav, author of the book “Automation and the Future of Work” (Verso, 2020), which explores the much-discussed “rise of the robots” and its impact on the labor market; and political scientist Baobao Zhang, whose research focuses on trust in digital technology and the governance of artificial intelligence. Zhang and Himmelreich, along with colleagues from other universities, are editors of the forthcoming “Oxford Handbook of AI Governance” (Oxford University Press).
One core function of ASPI is to foster collaboration among scholars in different fields. Its Artificial Intelligence Research Working Group, for instance, meets monthly for faculty to share ideas and projects. “I work with computer scientists, experimental psychologists, philosophers and communication studies scholars,” Zhang says, “and it’s great that ASPI exists as a hub to support us.”
In early April, ASPI coordinated a panel of faculty from across the University to tackle a tech issue that has garnered much media attention in recent months: ChatGPT.
CROSSING DISCIPLINES
The April ChatGPT panel included a newcomer whose addition to Maxwell and the University complements efforts to harness the social sciences and shape decisions about technology.
Hamid Ekbia joined the University in January 2023 as a University Professor in the Maxwell School and takes the helm as director of ASPI this July. In keeping with the collaborative nature of ASPI, Ekbia has long worked across disciplinary lines. Initially trained as an engineer in his native Iran and at UCLA, he was drawn by advances in artificial intelligence and went on to earn a Ph.D. in computer science and cognitive science from Indiana University Bloomington.
Before joining Syracuse, Ekbia served as professor of informatics, cognitive science and international studies at Bloomington. Throughout his career, Ekbia has taken a wide-angle view of technology and its impact. Beginning with his doctoral work, he says, “I went beyond cognitive science to look at the social, cultural, economic and political aspects of technology and especially computers. That’s what I’ve been doing for the last roughly 20 years.”
Among his projects is “Heteromation, and Other Stories of Computing and Capitalism” (MIT Press, 2017). Co-authored with Bonnie Nardi, the book explores automated systems in grocery stores, banks and airports and on platforms like Facebook and Google, which rely on unpaid labor by the public that generates value for the corporations.
Ekbia considers himself a humanist and describes himself as a “poet of technology” in multiple senses—including as an acronym for the policy and ethics of technology, a formulation that is at the heart of ASPI. “I see these as closely intertwined,” he writes on his website For a Better Future, “with ethics guiding our thinking about the potential harms and benefits of technology, and policy giving the thinking teeth and legs.”
A key goal of ASPI, in Ekbia’s view, is to bring the broader population into the conversation about how emerging technologies are used and regulated. “The average user, as they say, does not have much of a voice so far,” he says. “Nobody comes and asks us what technology we’d like to have in our homes and offices and working spaces.”
GOVERNING AI
Bringing the public into the policymaking process is the focus of a major new research project by Baobao Zhang. A Yale graduate with an M.A. in statistics and Ph.D. in political science, she is one of 15 scholars from across the U.S. chosen by the philanthropic organization Schmidt Futures to serve in the inaugural cohort of AI2050 Early Career Fellows.
The fellowship provides Zhang with up to $200,000 over two years for multidisciplinary research in artificial intelligence. For her AI2050 project, Zhang is creating a mini-public of regular citizens to learn about a topic and make policy recommendations. She is working with the nonpartisan Center for New Democratic Processes to recruit a group of 40 participants, randomly selected from the U.S. adult population. Through a 40-hour process planned for this summer, this group will learn about AI systems from computer scientists, ethicists and social scientists and deliberate on how to classify risk from AI systems.
“AI is a highly technical field, as you can imagine, where most of the policy discussions that are happening are between experts: computer scientists—whether from industry or from universities—civil society groups and other academics,” says Zhang, assistant professor of political science. “But it’s important to have everyday people in the policymaking process, because they are the ones who are going to be impacted by the AI systems, whether benefiting from them or suffering the negative consequences.”
Navigating between the marketing hype about new technology and skepticism or alarm about it can be difficult for citizens and policymakers alike, Zhang says. She cites the example of large language models such as ChatGPT, which can generate remarkably cogent writing from a prompt but also false information—like providing a citation from a book that doesn’t exist.
“The question is, should we classify these large language models as high risk?” she says. “A general-purpose AI system like ChatGPT can do many things; it can play chess with you or write a joke. But it can also generate spear phishing emails. There are also researchers trying to fine-tune it to give medical diagnoses, which is pretty high risk. So as more and more of these general-purpose AI systems come online, we need to think about risk differently. The technology can be used in many sectors where it’s not very risky, but in some cases, it can really cause a lot of harm if not used correctly.”
EXPANDING CURRICULUM
Along with fostering collaborative research, ASPI supports opportunities for undergraduate students to delve into the field through courses such as Using Robots to Understand the Mind, Introduction to Unmanned Aerial Vehicles, and Ethics of Emerging Technology.
“It is important to shape the research agenda,” Jamie Winders notes. “But it’s as important to help produce the next generation of thought leaders in this area, who are excited about issues and also committed to the public good—who want to think about how to innovate in an equitable manner.”
Course offerings continue to grow. In the fall semester, Hamid Ekbia will introduce a course called AI and Humanity: Charting Possible Futures, designed as an introduction to the field for undergraduates with varying backgrounds—from the arts, engineering, and natural and social sciences to humanities, law and media.
A group of faculty connected with ASPI, led by Baobao Zhang, is also working toward introducing an undergraduate minor in artificial intelligence and public policy. The proposed minor would expand the curriculum with courses on topics such as governance and ethics of AI and the responsible design and auditing of algorithms, with the goal of equipping students with the technical and ethical skills to responsibly develop and deploy AI systems.
Opportunities to study these emerging technologies are expanding at the graduate level too.
One Maxwell student engaged in this area is Harneet Kaur, a social science Ph.D. student who specializes in education policy, labor economics and student mental health. Keenly interested in applications of AI, Kaur took a course on predictive analytics in the public sector with Leonard Lopoo, professor of public administration and international affairs, Paul Volcker Chair in Behavioral Economics, and director of the Maxwell X Lab.
“I got interested to learn more about how social policy research can be improved with machine learning techniques,” she says. To dig deeper, she joined the ASPI Grad Lab and adds, “I am excited to collaborate with like-minded researchers and push the boundaries of what’s possible in this dynamic and rapidly evolving field.”
In the area of executive education, David M. Van Slyke teaches a capstone course for the Maxwell-Whitman Defense Comptrollership Program in which students earn an MBA and an executive master of public administration (executive M.P.A.). Through a partnership with ASPI, some students undertook a final project titled “Autonomous Systems, COVID-19 and Public Health Response.”
Van Slyke’s students explored autonomous systems for everything from disaster relief and emergency management to improving health outcomes. For instance, they’ve looked at the use of drones to deliver supplies, including blood, to regions cut off from traditional physical infrastructure. Their studies apply to current headlines: Medical response drones have been used to deliver aid to civilians and soldiers in hard-to-reach areas of war-torn Ukraine.
“Too often legislation, policy and regulation are reactive to new forms of technology,” he says, adding that public-private partnerships provide an opportunity for a more collaborative approach to governing technology design, use and oversight.
“Partnerships can integrate intergovernmental and cross-sectoral perspectives and create an environment in which different stakeholders work together to achieve new forms of innovation that can have positive outcomes in the public interest,” adds Van Slyke. “That requires discussions, understanding and compromise. With a partnership orientation, governance need not be reactive and adversarial.”
POLICY IMPACT
The growing body of work on emerging technologies by Maxwell scholars is helping frame issues and shape policy beyond campus.
Last summer, for instance, Jamie Winders was invited to present at a White House summit on developing advanced air mobility systems that rely on automated or autonomous technologies.
As part of his ongoing research into machine learning in the public sector, Johannes Himmelreich is studying the use of automated risk scoring tools in unemployment insurance—where determinations about eligibility have a huge impact on individuals’ lives.
Amid the current debate about how large language models will impact employment, Aaron Benanav, assistant professor of sociology, argues that the mass job displacement by robots forecast a decade ago has not materialized—and that there are good reasons to doubt the same predictions about AI chatbots, and to focus instead on using these tools equitably and ethically.
“Without holding back technologies,” he wrote recently in The New Statesman, “we can dig out new channels for them to flow into, to ensure such innovations improve rather than harm society.”
Faculty scholarship on autonomous technologies and AI feeds not only into current policy debates but also into teaching. Himmelreich draws on his work for Syracuse’s Surveillance Technology Working Group to illustrate the benefits and risks of technologies like automated license plate readers. “The input we get from citizens, departments and other stakeholders allows me to convey to students the breadth of concerns and the difficulty of recommending policy.”
In one class, Himmelreich asks his students to list surveillance technologies they have come into contact with on that day. “We sometimes miss the local examples, even if they are right in our faces,” he says.
The local examples always resonate with students. “Students see the cameras on Euclid Avenue and wonder, what do they actually record? Do they record Wi-Fi signals? Do they record sound? They find it fascinating that there is a process in City Hall to look into things like this. They realize there are huge issues in local government to figure out whether we put up surveillance devices, for what purpose and with what oversight.”
While faculty sharpen their focus on applying social science scholarship and policy formulation to emerging technologies, Maxwell School alumni reflect the school’s longstanding strength in preparing leaders for success in the field.
After earning a master’s degree from Maxwell’s interdisciplinary international relations program, which draws on social science expertise from across the school, Scott Renda ’05 M.A. (IR) held a series of technology advisory and policy development roles at the U.S. Department of Commerce, U.S. Department of the Treasury and the Executive Office of the President at the White House. He currently works as senior manager of product management at Amazon Web Services (AWS), where he helps customers tackle data privacy, governance and analytics opportunities and challenges.
Kerstin Vignard ’96 M.A. (IR) is an international security policy professional whose 26-year career at the United Nations Institute for Disarmament Research included leading efforts to support governments to develop international normative and regulatory frameworks for increasingly autonomous weapon systems. In 2021, she was named to the list “100 brilliant women in AI ethics.”
Today, Vignard is a senior analyst at the Johns Hopkins Applied Physics Laboratory and research scholar at the Johns Hopkins Institute for Assured Autonomy, where she focuses on strengthening the linkages between technologists and policymakers on responsible innovation in the military domain. “It isn’t enough to have governments agree on high-level principles on AI and autonomy,” she says. “We will not succeed with operationalization—moving from principles to practice—without the practitioners themselves having a leading role.”
Another alumnus leading in the field is Travis Mason ’06 B.A. (PSc), a member of the Maxwell Advisory Board, who works in autonomous aviation systems.
Looking ahead, Hamid Ekbia aims to expand ASPI’s outreach with policymakers, building on the Maxwell School’s extensive connections and alumni community in Washington, D.C., and internationally.
“This is a unique moment where we can make a contribution, especially applying the strengths of Maxwell as a policy school to technology,” he says. “These technologies are very complex. There’s a lot of confusion about what they are, how they work, what they are capable of doing. So, one very specific role I see for ASPI is to clear the air about these technologies and educate people—not just the public but also policymakers, regulators and legislators. There are other players who do this, but oftentimes they are driven by a certain type of perspective. As an educational institution, we can be less biased and more balanced.”
By Jeffrey Pepper Rodgers
Published in the Spring 2023 issue of the Maxwell Perspective