Americans have a long history of concern about technology-driven government surveillance. We are now in the age of artificial intelligence, and with the Department of Homeland Security announcing the adoption of AI technology this week, that concern won’t go away anytime soon.

But federal agencies can alleviate some of the concerns. How? Through public participation.

The government’s use of technology has sparked public debate since at least 1928, when the Supreme Court ruled that law enforcement could use evidence gathered through wiretaps. Two years ago, a public outcry forced the IRS to shelve newly announced plans to use facial recognition technology to identify taxpayers. More recently, the Department of Homeland Security’s CBP One application has used facial recognition technology to identify asylum seekers, but like many such systems, it is worse at identifying asylum seekers with darker skin tones. That failure has drawn understandable criticism.

The Department of Homeland Security has a broad mandate that includes protecting the border, election infrastructure and cyberspace. But unlike most federal agencies, it also has many public-facing missions, such as Transportation Security Administration agents screening travellers at airports. That visibility gives the department a unique opportunity to work with the public to ensure responsible use of technology.

Slowness is often considered a characteristic of government. For example, in order to conduct our survey, we had to go through a 15-month approval process. That slowness has consequences: by the time we received approval, large language models had emerged, but because they fell outside the scope of our survey, we could not ask people about them.

Yet it is important to proceed with caution when deploying new technologies and to have a clear understanding of the benefits and risks, especially from the perspective of the groups most affected. Seen that way, a thoughtful process can be a feature, not a flaw; slowness can be an asset, not a hindrance.

If agencies such as the Department of Homeland Security take the time to understand what makes the public more comfortable with a technology’s use, the public may gain confidence. Better still, agencies that use technology to watch Americans could pull back the curtain and explain how and why they are doing so, much as a careful and thoughtful deployment process would. As our research has shown, people are less interested in understanding how a technology works than in knowing how it will be used – on them and on society.

The department understands this, which is why it asked us – researchers who study how technology intersects with public life – to survey Americans for insights about using technology in ways the public is more likely to support. The biggest takeaway from the 2021 survey of a representative sample of 2,800 adults was that Americans care more about how a technology is used than about which technology is used.

For example, we asked people whether they support the government using facial recognition to investigate crime, track immigrants, or identify people in public places like stadiums or polling stations. Respondents were more supportive of some uses than of others, such as identifying victims and potential crime suspects. They were more sceptical about the broadest uses of facial recognition, such as monitoring protests or polling stations. The same pattern held across different AI technologies.

Another important factor is the safeguards surrounding a particular technology’s use. In our survey, these safeguards included providing alternatives to using the technology, conducting regular audits to ensure that the technology is accurate and does not perform differently across demographic groups, and providing notice and transparency about how the technology is being used. We found that Americans want safeguards that are sensitive to the context in which the technology is deployed, such as whether it is used at the border or in densely populated cities, rather than a one-size-fits-all approach.

To its credit, the department has implemented some such safeguards, but they have not always been enforced uniformly. For example, while facial recognition is optional for travellers going through airport security, some people – including a US senator – have said they were unaware that it was not a requirement. Such inconsistencies can be confusing and can breed mistrust.

Nonetheless, there are opportunities for constructive engagement. Many respondents to our survey indicated that they were neutral or ambivalent about government use of technology, meaning that they had not yet decided whether the benefits of using a particular technology outweighed the risks. Far from being completely polarised on the issue, many Americans are willing to be persuaded one way or the other.

This gives government agencies an opening to work with this large group of persuadable Americans to build more trust in how the government uses new technologies on all of us. And, counterintuitively, the government’s reputation for moving slowly and cautiously may be an asset here.

By tanxuabc
