Feb 6, 2019
As self-driving cars continue to develop, vast amounts of data will be collected through their navigational technologies.
As companies collect and exploit this closely guarded mapping data, concerns emerge over ownership, privacy, public safety, and cybersecurity. At the same time, the geospatial data can be used to draw new maps that identify the spaces where people live and travel. Presently, that data is housed in technological and corporate black boxes. According to a Dartmouth study published in Cartographic Perspectives, the social relevance and far-reaching effects of this information mean those black boxes need greater transparency.
As self-driving cars make sense of the world around them, they collect huge quantities of information: where pedestrians cross the street, traffic and congestion patterns, which businesses and houses have Wi-Fi, and more, all of which can potentially be monetized. Although companies may have various economic interests, including intellectual property, in safeguarding geospatial data, private citizens, local governments, and other stakeholders also have a vested interest in using that data to make informed decisions about controlling traffic, allocating public funds, planning urban infrastructure, and other similar projects of public interest.
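To make the kind of information at stake concrete, here is a minimal, purely hypothetical sketch in Python of how such geospatial observations might be structured; the GeoObservation fields, coordinates, and values are illustrative assumptions, not any manufacturer's actual format.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json


@dataclass
class GeoObservation:
    """One hypothetical record a self-driving car might log while navigating."""
    timestamp: str    # ISO 8601 timestamp, UTC
    latitude: float   # WGS 84 coordinates of the observation
    longitude: float
    kind: str         # e.g. "pedestrian_crossing", "congestion", "wifi_ssid"
    detail: dict      # free-form attributes specific to the observation type


# A few illustrative records of the kinds of information described above.
observations = [
    GeoObservation(
        timestamp=datetime(2019, 2, 6, 8, 15, tzinfo=timezone.utc).isoformat(),
        latitude=43.7044, longitude=-72.2887,
        kind="pedestrian_crossing",
        detail={"pedestrian_count": 4, "mid_block": True},
    ),
    GeoObservation(
        timestamp=datetime(2019, 2, 6, 8, 16, tzinfo=timezone.utc).isoformat(),
        latitude=43.7051, longitude=-72.2901,
        kind="congestion",
        detail={"average_speed_kph": 11, "lane_count": 2},
    ),
]

# Once structured like this, the data is straightforward to aggregate, map,
# or monetize, which is precisely why questions of access and ownership arise.
print(json.dumps([asdict(o) for o in observations], indent=2))
```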
Self-driving cars have the potential to transform our transportation network and society at large. This carries enormous consequences given that the data and technology are likely to fundamentally reshape the way our cities and communities operate. Right now, the geospatial data obtained by a self-driving car exists in technological and corporate black boxes. We don’t know who can see the data, appropriate it or profit from it. With insufficient government regulation of data from self-driving cars, this raises significant concerns regarding privacy, security and public safety.
Luis F. Alvarez León, Study Author and Assistant Professor, Department of Geography, Dartmouth College.
The author discussed how open source design, hacking, and legislation are avenues that can be pursued to help open the black box, allowing the government and consumers alike to access the data collected by corporations. While each of these three approaches comes with potential rewards and risks, they can help frame the public debate over the use and ownership of geospatial data from autonomous cars.
- Autonomous cars depend on computerized systems to run. When these systems are locked in closed networks managed by automobile manufacturers, it can be difficult for users to gain access to the data. The study explores how legislation can help make such data more accessible. Car manufacturers generally consider themselves the sole arbiters of the data relating to their vehicles, asserting that they "own the data"; however, legislation has offered pushback, and the author cites a number of instances, such as arguments around the right to repair.
- When self-driving vehicles, including their assembly, components, data, and operation, are engineered via an open source framework, the public could gain access to the data more easily and thus achieve a better understanding of its potential applications and implications, the author suggests.
According to the study, companies like Udacity, an online education company, offer a Self-Driving Car Engineer Nanodegree program in which students learn to create and improve code for autonomous systems. While manufacturers may face intellectual property and economic tradeoffs, open source design has an integral role to play in enabling greater transparency.
- Apart from open source design and legislation, hacking presents a systemic risk for autonomous cars, yet it has also been used to make automated systems and car data more transparent and to hold autonomous car firms more accountable. In 2013, a pair of security researchers hacked into a Ford Escape and a Toyota Prius, and in 2015 they remotely hacked a 2014 Jeep Cherokee, highlighting security flaws in non-autonomous vehicles. Such security risks are likely to run much deeper in fully autonomous vehicles.
While hacking is a generalized hazard for self-driving vehicles, specific examples of hacking in the context of research and advocacy have demonstrated the importance of developing secure systems. Recent security breaches at Facebook and Equifax highlight the broader risks to consumers' digital data.
If we're going to adopt self-driving cars, then we should really make absolutely sure that they are as secure as they can be. This requires input from parties outside of the corporations who are building those very systems, such as government, advocacy groups and civil society at large.
Luis F. Alvarez León, Study Author and Assistant Professor, Department of Geography, Dartmouth College.
When it comes to self-driving vehicles, Michigan, California, and Arizona are presently among the most hospitable states in the United States, acting as testing grounds for firms like Waymo, which began as Google's Self-Driving Car Project. Local regulatory battles exist, often with pushback from advocacy groups and citizens; even so, other states may open up to this new mode of transportation in the years ahead. Just weeks ago, Waymo announced that it will build a manufacturing center in southeast Michigan as it looks to expand its fleet. As the study points out, oversight of the autonomous car sector cannot be left to the vehicle manufacturers themselves; it is up to the government and the public to help define how this new technology, and the mapping of communities that follows from it, will impact society.