Better Bombing with Machine Learning
If you haven’t noticed that Machine Learning – or Artificial Intelligence, depending on your particular project – has come for OpenStreetMap, then you’ve spent the last couple of years under a rock.
Practically every major business wanting to prove its IT engineering salt has come up with some kind of project that uses Machine Learning to process aerial imagery, and OpenStreetMap is an integral part of many of these experiments – at the very least, OpenStreetMap data is used for training the algorithms, but frequently the results are also made available to OpenStreetMap, in the hope that our community will provide valuable feedback that will further improve the machines, or lend some ethical legitimacy (“we’re doing this for good!”).
Working with organisations that use OpenStreetMap for humanitarian purposes, the purveyors of such machine-made data will often generate favourable headlines about how their algorithm-detected building outlines helped in the aftermath of a flood here, or how their machines were able to distinguish schools from other buildings with such-and-such accuracy and thereby save lives there.
In my view, what we’re dealing with here is clearly military technology. Being able to detect buildings from the air, to perhaps even trace roads, power grids, and communications lines, to automatically distinguish hospitals from schools from government buildings – these are essential ingredients of future bombing technology. This is every general’s wet dream. There is absolutely no question in my mind that the algorithms that are today built and trained with OpenStreetMap will soon help guide bombs and missiles – whether to avoid the schools or to target them on purpose is not going to be the algorithm’s concern.
Now, every technology can be used by the military, and not everything the military does is about killing people. But nothing we in OpenStreetMap have ever collaborated in has had such a direct link to bombing as the automated evaluation of aerial imagery. I am taken aback at the utter naïveté with which humanitarian organisations partner with the purveyors of machine learning. What I’m seeing is a bunch of engineers happily building the moral equivalent of the next nuclear bomb, at best not thinking about possible consequences, at worst being directed to ignore them for financial gain. I find it disingenuous to claim that you’re developing some sort of machine learning thing to aid humanitarian purposes. No you’re not – you’re abusing some humanitarian project as a fig leaf for your military research.
Now, we at OpenStreetMap have no influence over what our data is used for – the open license does not allow us to discriminate against any field of endeavour. (Machines that have been trained with our share-alike data should fall under the share-alike provisions of the license, but that’s a point for a separate discussion.)
What we can control is just how jubilantly we welcome the results of this military research. I think we should be very skeptical when people reach out to us and offer us any form of cooperation that deals with the automatic processing of aerial imagery. Before we applaud their efforts, give them a platform to whitewash their research with a humanitarian fig leaf, or even participate in training their machines by adding their data to our database, we should ask very tough questions. We should ask whether the business in question is aware of the dual use of these algorithms, and what ethical guidelines are in place to ensure that “humanitarian” work done in and with OSM is not actively contributing to the creation of better bombing bots.
OpenStreetMap has never been an automatic image recognition project. There is innocence in having individual human beings trace their neighbourhood buildings from aerial imagery. This approach works, but it works slowly, and that has opened us up to seduction by the purveyors of weaponizable automatic algorithms.
Let us be aware that every time we allow automatically traced data into our database, we’re complicit in someone, somewhere, building the better killing machine.
There Are 7 Comments So Far
June 6th, 2019 at 4:43 pm
Having worked for & with electronics companies with military interests in the 1980s, I’d be rather surprised if the state of the art is not rather better than it appears from corporate activity in OSM. Virtually all the techniques currently in use were active parts of projects (EU or UK and French government funding) back then: low-level processing such as edge detection, DEM extraction (from paired images), neural networks & other AI for feature detection. No doubt there have been technical advances, but the single biggest factor is probably hardware improvements. The interesting thing from these projects is that the bits which got commercialised, despite the obvious military interest, were using paired images to build models for facial reconstruction surgery, and inter-visibility analysis for locating mobile phone masts.
Another military technology was object-oriented simulation (basically for battlefields); the same type of technology was (perhaps still is) used for detailed road traffic simulation models (e.g. Paramics in Edinburgh).
The bottom line is that although things have military possibilities, there are often many other uses. Unfortunately, it was my experience that people could always think of a military use, even when civilian possibilities were much greater. I’d think the same applies now, especially as many militaries don’t give a damn about not targeting hospitals.
June 9th, 2019 at 10:37 am
Is there any specific project which ‘inspired’ you to write this or was it just a general feeling of wanting to draw attention to the topic?
June 9th, 2019 at 11:51 am
While I agree that maps are, sadly, also useful for evil things, specifically for military use, I am not convinced that imports based on machine learning are especially problematic.
“every time we allow automatically traced data into our database, we’re complicit” – is adding automatically detected buildings in any way worse than humans mapping buildings? If anything, I would expect it to lower the quality of OSM data for training automatic pattern matchers: it now becomes necessary to recognize what was actually added or reviewed by a human and what was imported and never verified.
It may change in cases where such imports are curated by humans, but in almost (?) all cases buildings are dumped into the database, never to be reviewed or edited.
Though it may explain part of the “we will map buildings, all buildings, and solely buildings” attitude present in some organized mapping.
I agree that “whitewash their research with a humanitarian fig leaf” may be a real problem.
June 9th, 2019 at 11:37 pm
These are interesting thoughts and insights. But – I think you are somewhat underestimating the capacity and status of military applications, and/or perhaps overestimating the importance of the current work you are referring to.
I wasn’t directly working with these things in the military, but let me tell you, just off the top of my head and from memory, what my neighbors were doing.
Massive amounts of imagery and other data were taken by an aircraft with special sensors and equipment. The data was downlinked to the ground in real time through a secure encrypted datalink, which meant analysts were working with it well before the plane landed.
Images were automatically analyzed and rendered in ways much more advanced than anything you mention or have probably seen or imagined. Every building, fence, tree, and stone was rendered electronically in 3D. The data was run alongside and analyzed against earlier imagery, different maps, satellite imagery, and radar scans at different bandwidths. There was also certain-wavelength photography that could, for example, detect chlorophyll, which made it easy to distinguish real plants from camouflage nets. The radar imagery gave the material footprint, so the resulting output would give away the material in the picture – metal, organic, or whatever. It also meant you had a map where tunnels and structures many meters below ground were clearly visible. The data was merged with signals, and even acoustic and seismic data. All this was run as a timeline and automatically analyzed, revealing trends and movements. The enormous amount of data, which could cover areas as huge as entire countries, was processed in seconds.
And this was exactly twenty years ago.
June 10th, 2019 at 8:15 am
Sorry Mr. Ramm,
how naive must one be to think that the military depends on OSM when it comes to security or to improving its methods?
Best
June 10th, 2019 at 2:30 pm
Better stop using satellite imagery, GPS, the internet, computers, and, well, maps themselves.
Personally, in terms of plausible future risk, I think we should worry more about the carbon footprint of machine learning and other technologies. https://www.fastcompany.com/90360528/the-code-that-powers-our-lives-has-a-hidden-environmental-toll
June 11th, 2019 at 9:00 pm
I spent a while trying to understand what you want to communicate, but in the end I found your blog post to be alarmist: spreading fear, uncertainty, and doubt.
Instead of trying to argue here I would love to talk about these and related topics in a small group of interested people for example at the next hack weekend or geo conference.
Let me finish with this small anecdote: the algorithm behind osrm’s graph partitioner was once used to figure out which streets to block and which bridges to destroy in order to win wars. Now it powers sub-millisecond routing queries in the Open Source Routing Machine, making users – and your clients – happy.
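To make the dual use concrete: the idea at the heart of such partitioning is the minimum cut – the smallest set of edges whose removal disconnects one part of a network from another. Here is a minimal sketch of that idea, assuming the networkx library; the toy road network and its node names are invented purely for illustration.

```python
# Minimal sketch: a minimum edge cut on a hypothetical toy road network.
# Read one way, the cut is the cheapest set of links to sever; read
# another, it is the small boundary a routing engine exploits when it
# partitions a graph for fast queries. Assumes networkx is installed.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("depot", "junction_a"), ("depot", "junction_b"),
    ("junction_a", "bridge_1"), ("junction_b", "bridge_2"),
    ("bridge_1", "city"), ("bridge_2", "city"),
])

# Fewest edges whose removal separates "depot" from "city".
cut = nx.minimum_edge_cut(G, "depot", "city")
print(cut)  # some 2-edge cut, e.g. the two bridge links
```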
Disclaimer: I used to work on osrm and robosat; used to work for Mapbox before I quit last year; all opinions here are mine and mine only, though.