If you’ve ever used Google’s mapping products, at some point you’ve probably dropped into Street View. This year marks ten years since those 360° street-level images were first released, and since then, the company has “snapped more than 80 billion photos in thousands of cities and 85 countries”.
Wired reports that Google has now begun deploying its next-generation Street View rigs, featuring cameras with much higher resolution. But the ultimate goal is not just offering a better look at your next vacation destination.
[The cameras are] there to feed clearer, closer shots of buildings and street signs into Google’s image recognition algorithms.
Those algorithms can pore over millions of signs and storefronts without getting tired. By hoovering up vast amounts of information visible on the world’s streets—signs, business names, perhaps even opening hours posted in the window of your corner deli—Google hopes to improve its already formidable digital mapping database. The company, built on the back of algorithms that indexed the web, is using the same strategy on the real world.
The detailed analysis of signs in store windows, however, is just the beginning of what Google and others will be able to do with this visual data.
How much more could Google extract from Street View using image processing algorithms? A lot.
Earlier this year Stanford researchers, including professor Fei-Fei Li, now chief scientist at Google’s cloud division, showed they could predict income, race, and voting patterns for US cities with software that logs the make, model, and year of cars in Street View photos.
Ok, that’s a little creepy.
Anyway, on this tenth anniversary of Street View, there’s really not much that can be done about taking pictures in public. Everyone has a camera and cameras are everywhere, not just in cars from huge data collection companies. And, as a photographer (strictly amateur), I believe in minimal restrictions when it comes to photography.
But when those millions (billions?) of discrete pictures are turned into massive data sets and processed by complex, invisible algorithms, it’s another story. A story that is very much still being written.