From Zero to Sixty: How AI Can Alter the Course of Public Safety

Today, technological advances are being developed and distributed to much of the world at a pace never seen before. Smartphones and connected devices inform millions of people about events around the world in real time, while smart sensors provide deeper insight into everything from traffic patterns to shopping preferences. From this environment of ever-increasing data comes a need for rapid, smart analysis, making the next frontier of technology, artificial intelligence (AI), increasingly relevant.

Machines can now analyze massive amounts of data using machine learning (ML) algorithms and offer recommendations based on the context of the questions their human users ask. Even before the digital age, government agencies supporting the criminal justice system collected and referenced massive amounts of documentation to do their jobs. Given the information needs within the dynamic environment of public safety, it’s reasonable to wonder why this emerging technology has not been pursued as widely in the public sector as it has in many other industries.

Several reasons account for the delay in bringing AI into the public sector, but chief among them is that local governments are often slower than the private sector to adopt new technology because of budgetary concerns and infrastructure limitations. However, given the revolutionary potential of this technology and the life-and-death decisions being made within public safety every day, there is much more to consider.

Below are some common areas of consideration for the use of AI in public safety, along with the potential advantages and pitfalls as seen through the eyes of a typical tech pessimist and tech optimist.

What about data bias?  

The concern: Any AI analysis would be based on data that is entered or generated at some point by a human.

The Tech Pessimist: The data being analyzed by AI will retain the same bias as the individuals entering it. For example, if speed enforcement zones are set up in a specific community, the data will ultimately show many more traffic violations within that community. In this case, the AI program might recommend sending even more traffic enforcement personnel to that community to look for violations, reinforcing the original bias.
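
This feedback loop is easy to demonstrate. The Python sketch below uses invented numbers, not data from any real deployment: both communities have the same underlying violation rate, but because violations are only recorded where patrols are present, a naive recommendation engine keeps the initially over-patrolled community over-patrolled.

```python
import random

# Minimal sketch with hypothetical numbers: both communities have the SAME
# underlying violation rate, but the initial patrol skew compounds because
# units are reallocated based on *recorded* violations.
TRUE_VIOLATION_RATE = 0.1
patrols = {"community_a": 8, "community_b": 2}  # initial enforcement skew

for month in range(6):
    # Violations are only recorded where patrols are present to see them.
    recorded = {
        zone: sum(random.random() < TRUE_VIOLATION_RATE for _ in range(units * 100))
        for zone, units in patrols.items()
    }
    # Naive recommendation: split next month's 10 units in proportion to
    # recorded violations, which simply reinforces the original skew.
    total = sum(recorded.values()) or 1
    patrols = {zone: max(1, round(10 * count / total)) for zone, count in recorded.items()}
    print(f"month {month}: recorded={recorded} next_patrols={patrols}")
```

Even though neither community drives any differently, the allocation never equalizes, because the data reflects where enforcement looked rather than what actually happened.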

The Tech Optimist: Sophisticated ML algorithms can identify data bias and exclude it. A simple analysis could filter out self-initiated calls by the authorities and use the calls for service coming from the community at large as the basis for recommendations. A more complex algorithm could recognize missing data elements and recommend the redeployment of units while also warning supervisors of possible bias in the data being reported. This kind of analysis also presents an opportunity for law enforcement (LE) and local community leaders to review AI recommendations together and use them as a basis for improving engagement between LE agencies and the communities they serve.
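
As an illustration of that simple analysis, the sketch below filters a hypothetical computer-aided dispatch (CAD) export. The file name and column names ("call_source", "beat") are invented for the example; any real CAD system will label these fields differently.

```python
import pandas as pd

# Minimal sketch, assuming a hypothetical CAD export where "call_source"
# distinguishes officer-initiated activity from community calls for service.
calls = pd.read_csv("cad_calls.csv")

# Base recommendations on community demand, not on where enforcement was
# already directed, by excluding self-initiated activity.
community_calls = calls[calls["call_source"] != "officer_initiated"]

# Separately, flag beats where most records are self-initiated, a hint that
# recorded "demand" there reflects deployment decisions rather than residents.
self_initiated_share = calls.groupby("beat")["call_source"].apply(
    lambda s: (s == "officer_initiated").mean()
)
print(self_initiated_share[self_initiated_share > 0.5])
```

Even this two-step filter separates what the community asked for from what officers went looking for, and the flagged beats give supervisors a concrete starting point for the bias warnings described above.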

What about data manipulation?

The concern: As more people learn how the data is used in analysis, results could be influenced by external factors.

The Tech Pessimist: Even if LE and community leaders agree on the metrics of success, a desire to show improvement may skew how results are presented. Additionally, bad actors within the public could manipulate the system for their own ends. One example is the unlawful practice of “swatting,” in which an individual reports a false crime to send a SWAT team to another person’s home. These falsely reported crimes could erroneously influence analysis of crime data.

The Tech Optimist: As AI is used to analyze data, it can also function as an additional layer of data verification. Complex ML algorithms can combine data from multiple sources to identify anomalies that may indicate false reporting or unlawful patterns of behavior. In the case of swatting, since most of these incidents involve an aggressor and a victim who are not in the same community, an AI program would be able to quickly identify several red flags indicating a false call: an area code or mobile towers associated with the incoming call that do not match the address of the reported incident, the absence of other incidents reported in the area at that time, and the caller’s location history. The program would then alert emergency telecommunicators of a possibly false report and provide recommendations for verification based on other available information.
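
A minimal sketch of those checks in Python follows. Every field name and threshold here is hypothetical; a real system would draw on carrier metadata, CAD history and agency policy, and the flags are advisory only.

```python
from dataclasses import dataclass, field

# Hypothetical call record; a real system would populate this from carrier
# metadata and the computer-aided dispatch (CAD) system.
@dataclass
class IncomingCall:
    caller_area_code: str
    tower_distance_miles: float      # nearest cell tower vs. reported address
    local_area_codes: set = field(default_factory=set)
    corroborating_calls: int = 0     # other reports from the same area

def swatting_red_flags(call: IncomingCall) -> list[str]:
    """Return advisory flags for the telecommunicator; a human decides the response."""
    flags = []
    if call.caller_area_code not in call.local_area_codes:
        flags.append("caller area code does not match incident location")
    if call.tower_distance_miles > 50:   # invented threshold
        flags.append("call routed through towers far from the reported address")
    if call.corroborating_calls == 0:
        flags.append("no other calls reporting this incident")
    return flags

call = IncomingCall("212", tower_distance_miles=900.0, local_area_codes={"303", "720"})
print(swatting_red_flags(call))
```

Note that the function only surfaces flags for the telecommunicator; it never dismisses a call on its own.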

What about system reliability?

The concern: Even the best computer systems crash and are subject to glitches.

The Tech Pessimist: Even the comparatively simple software systems common today often don’t work as designed, and patches are required to fix errors in code. The situation is even more complicated in public safety, where life-and-death decisions must be made quickly and various systems must interface with one another to transfer data and comply with information and regulatory requirements. How can we expect AI in public safety to avoid making recommendations that could result in harm? What happens when we rely on these systems and they fail?

The Tech Optimist: The best AI integration is one where people do what they do better than machines, and machines do what they do better than people. AI is simply a tool to enhance the analysis behind recommendations; the ultimate decision on what action to take will remain with people. In theory, AI can also achieve a level of analysis through ML where it detects bugs in a simulated environment and anticipates other possible issues before deployment in a live production system, minimizing the coding errors that have long been an issue for human programmers.


Regardless of how optimistic or pessimistic someone is about these emerging technologies, the artificial intelligence genie is already out of the bottle. To ensure positive outcomes for AI programs, different community and public safety groups need to build a coalition of trust, with each community determining what a successful AI program in public safety would look like. Success should be defined collectively across the spectrum of public safety as part of the broader health of the community.

With a proactive and thoughtful approach to incorporating AI and ML into public safety, communities will be better prepared to leverage these new technologies as productive tools rather than hindrances. The ultimate goal for merging public safety with AI is to create an environment of mutual trust that prevents victimization whenever possible, as that will be the true marker of success.