The Surveillance Classroom

During the 2016 holiday season, Amazon’s Alexa devices were huge sellers. Google was second in the category with Home. Apple has just started shipping its Siri-enabled HomePod and will probably sell a bunch of them.

So tens of millions of homes now have always-on, internet-connected microphones capturing every sound, and more are coming. This despite the many cautions from privacy experts about allowing large corporations access to a new continuous stream of audio data.

But who cares if the artificially intelligent software powering these devices is buggy? Does it matter that Amazon, Google, and Apple are vague about how they are using that information and who has access to it? Let’s bring these boxes into the classroom!

Michael Horn, co-author of Disrupting Class, the hot education-change book from a decade ago, says Alexa and her friends are “the next technology that could disrupt the classroom”.

It’s not entirely clear why Horn believes a “voice-activated” classroom would improve student learning, other than that a superintendent he interviewed is concerned that kids “will come in and will be used to voice-activated environments and technology-based learning programs”.

That’s nothing new. For a few decades (at least) we have been throwing technology into the classroom based on the premise that kids have the stuff at home. That approach hasn’t been especially successful, and Alexa is not likely to change that.

But these days, a major reason for using many, if not most, new classroom technologies is collecting and analyzing data.

These devices could also send teachers real-time data to help them know where and how they should intervene with individual students. Eastwood imagines that over time these technologies would also know the different students based on their reading levels, numeracy, background knowledge, and other areas, such that it could provide access to the appropriate OER content to support that specific child in continuing her learning.

Maybe I’m wrong, but I think it’s better to have a teacher or other adult listening to kids.

Anyway, Horn raises a lot of questions about the use of Alexa and her peers in the classroom, but his last one is probably the most salient: “What is the best use of big data and artificial intelligence in education?” Before ending, he also touches, very briefly, on the security of that data: “And there are bound to be privacy concerns.” As I said, briefly.

But the bottom line to all this is whether we want Amazon, Google, or Apple surveillance devices collecting data on everything that happens in the classroom. Horn seems to think the technology could be disruptive. It sounds creepy and rather invasive to me.

The image is from an article about a contest Amazon is running for developers, with cash prizes for the best Alexa apps that are “educational, fun, engaging or all of the above for kids under the age of 13”.

Your Attention. Now!

A man walks onto the TED stage and introduces himself: “I was a design ethicist at Google, where I studied how do you ethically steer people’s thoughts.”

My first thought was, how is it possible to “ethically” steer people’s thoughts? However, I think this particular speaker, now billed as a “design thinker”, may be worth listening to.

In his TED talk from last spring, Tristan Harris wants us to know about the “handful of people working at a handful of technology companies” who are working very hard to attract our attention and hang onto it for as long as possible. The better to sell that attention – us – to their advertisers. And they want to leave nothing to chance.

Because it’s not evolving randomly. There’s a hidden goal driving the direction of all of the technology we make, and that goal is the race for our attention. Because every new site — TED, elections, politicians, games, even meditation apps — have to compete for one thing, which is our attention, and there’s only so much of it. And the best way to get people’s attention is to know how someone’s mind works. And there’s a whole bunch of persuasive techniques that I learned in college at a lab called the Persuasive Technology Lab to get people’s attention.

Teachers especially need to understand what he’s talking about, since they work with some of the primary targets of these attention-seeking companies. If you teach high school students, or possibly even middle school, consider playing this talk in class and following it with a discussion. We need to help students understand what these adults are doing to them.

Finally, this is a good time to remember that, if you are not paying for a service, chances are you are the product, not the customer. Everything comes with a price and, on the web, that price is very often your information.

The Price of Privacy

Sign about not having anything to hide


I don’t think most people understand online privacy.

They’re pretty sure the NSA and other government agencies are sucking up their data, and probably have been for years. But they largely adopt the philosophy in the image and assume their phone calls and internet traffic are not important enough for anyone to notice or care about.

Their information is inconsequential, buried in a giant pile with trillions of other bits. Or they are resigned to the matter and offer a “what can you do” shrug. Or, worse, they support the idea that giving up some of their privacy will result in greater protection from whatever bogeyman is being presented to frighten them this week.

On the other hand, whatever their attitude towards government surveillance, most people seem quite complacent when it comes to Facebook, Google, Amazon, and other tech companies collecting their data. In fact, they upload their personal information to these sites at a furious pace every day. And a growing number of people are happily adding to that data pile by buying devices that keep a running record of what they say and do.

For some reason, people seem to assume these billion-dollar corporations (and a vast array of cool startups) have their best interests at heart. The default settings Facebook provides will assure their privacy. Amazon won’t tell anyone about the products they’ve bought. Google will keep their search history and email contents secret.

Sure. Except for the information provided to the marketing departments of thousands of companies, companies who pay large amounts of money so you can have “free” services. When all of that data is poured into some increasingly sophisticated algorithms, these companies gain some very valuable information: what you buy (or want to buy), where you travel, who you communicate with, and much, much more.
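To make that concrete, here is a toy sketch (every name, data point, and category below is invented for illustration, and real systems use statistical models far more sophisticated than a keyword list): a few innocuous-looking data streams, joined per person, can support an inference that no single stream reveals on its own.

```python
# Hypothetical data streams a company might hold about one person.
# Each stream looks harmless in isolation.
purchases = {"alex": ["prenatal vitamins", "unscented lotion"]}
searches = {"alex": ["morning sickness remedies"]}
check_ins = {"alex": ["obstetrics clinic"]}

# A crude stand-in for the "increasingly sophisticated algorithms":
# markers that each weakly suggest the same sensitive trait.
PREGNANCY_MARKERS = {"prenatal vitamins", "morning sickness remedies",
                     "obstetrics clinic"}

def infer(user):
    """Count how many independent streams contain a marker for the trait."""
    streams = [purchases, searches, check_ins]
    hits = sum(
        any(item in PREGNANCY_MARKERS for item in stream.get(user, []))
        for stream in streams
    )
    # Two or more weak signals from separate sources add up to a
    # confident, and very personal, conclusion.
    return "likely expecting" if hits >= 2 else "no inference"

print(infer("alex"))  # -> likely expecting
```

The point of the toy is the join: each dataset alone is noise, but combining them across sources is exactly what makes the aggregated profile valuable to marketers.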

Now, I could very easily drop into conspiracy theory territory in this post, and I don’t want to go there. Many of these free online services have great value. In fact, I add many little bits of personal info to Google’s massive data pile every day. I even teach classes on their map-related resources to help teachers use them with students, and I have no illusion that Google isn’t collecting information from interactions with those maps.

But I’m also very picky about which services I use and what information I will provide. For example, I don’t post anything to Facebook (I do have an account) for a variety of reasons, not the least of which is their TOS. And I’ve actually read more than a few terms of service documents, and keep a short list of interesting translations and sites that help others understand what they are agreeing to.

Regardless of my personal preferences, however, this is a topic that all teachers need to understand better. They must help their students learn to protect themselves online, and they must do a better job of evaluating the software and web services they bring into the classroom.

It may appear that Google, Microsoft, and the other companies (big and small) are keeping their free/cheap education web services “closed” to the outside world. But students (and you) are still providing data with everything they do. (Just look at what can be learned from a single photograph.) And those corporations are getting better every day at monetizing your information.

Applying a Little Magic Sauce

Speaking of artificial intelligence, how well can an algorithm really understand someone today?

Companies like Facebook and Google have hundreds of coders working hard in the back room to build bots that can analyze the online behavior of their members. Their goal: to better understand them in the “real” world.

Ok, the actual goal is to understand how to sell them more stuff and increase profits in the next quarter.

Anyway, a recent series on the Note To Self podcast looks at the Privacy Paradox and what online users can do to retain as much of their privacy as possible when confronted with all those upcoming social media bots.

During one segment, they mentioned the Magic Sauce project from the University of Cambridge, which is defined on their main page as “[a] personalisation engine that accurately predicts psychological traits from digital footprints of human behaviour”.

So, how accurate is their British magic?

I skipped the choice of having it dig through and analyze the pages I like on Facebook, but not because I’m afraid of what it might reveal. I have an account but never “like” anything and only rarely comment on the posts of others. The bot wouldn’t have enough stuff to work with.

The other choice is to paste a sample of writing from a blog or other source, and I have 14 years’ worth of that crap. So I selected a more-than-200-word post from this space, one without any quotations that would mix in someone else’s personality.

And this is what I got.

screenshot of magic sauce results

Big miss on the age, but thank you, bot. The rest, I have to admit, leans towards the accurate side, even if I don’t consider myself artistic or organized.

Of course, that was based on just one small sample of my life. The Cambridge Psychometrics Centre has a whole battery of tests to peel back your psychological profiles, including some “Fun” tests (ten minutes to discover your personality disorders?).

But that’s more than enough AI bot training for now.