Hey, Alexa. Explain Your Algorithms.

AI Cover

Lately I seem to be reading a lot about artificial intelligence. Between all the self-driving car projects and the many, many predictions about robots coming for our jobs (and our children), the topic is rather hard to avoid. It’s interesting but also somewhat scary, since we’re talking about creating machines that attempt to replicate, and even improve upon, the human decision-making process.

One of the better assessments of why we need to be cautious about allowing artificially intelligent systems to take over from human judgement comes from MIT’s Technology Review, whose senior editor for AI says “no one really knows how the most advanced algorithms do what they do”.

If a person makes a decision, it’s possible (theoretically) to learn how they arrived at that choice simply by asking them. Of course it’s not as easy with children, but most adults are able to offer some kind of logical process explaining their actions, even if that process is flawed and they arrive at the wrong conclusion.

It’s not so easy with machines.

There’s already an argument that being able to interrogate an AI system about how it reached its conclusions is a fundamental legal right. Starting in the summer of 2018, the European Union may require that companies be able to give users an explanation for decisions that automated systems reach. This might be impossible, even for systems that seem relatively simple on the surface, such as the apps and websites that use deep learning to serve ads or recommend songs. The computers that run those services have programmed themselves, and they have done it in ways we cannot understand. Even the engineers who build these apps cannot fully explain their behavior.

We’ve never before built machines that operate in ways their creators don’t understand. How well can we expect to communicate—and get along with—intelligent machines that could be unpredictable and inscrutable?

However, far below the level of television-ready robots and whatever Elon Musk is up to now, AI is a topic that educators should probably be watching.

More and more edtech companies are developing (and your school administrators are buying) “personalized” learning systems that include complex algorithms. These applications may fall short of being intelligent but will still collect huge amounts of data from students and then make “decisions” about the course of their educational life. 

It’s unlikely the salespeople can offer any clear explanation of how the systems work. Even the engineers who wrote the software may not have a good understanding of the whole package, or know whether there are errors in the code that could produce incorrect results.

And it’s not like you can ask the computer to explain itself.


The image is the logo from Steven Spielberg’s 2001 film A.I. Artificial Intelligence, a somewhat mediocre example of the genre. The movie would have been far different if Stanley Kubrick had lived to direct it.

Facing the Future

Person of Interest scene

Apple is heavily promoting the feature in its top-of-the-line iPhone X that scans and recognizes the owner’s face to unlock the device. I won’t be getting one.

Although there are probably a few bugs in the Face ID system, I’m not especially worried about any potential security issue of someone opening my phone because the software mistakes their face for mine. It’s just that the two-and-a-half-year-old phone I have now works fine, thank you.

However, on the broader topic of face recognition technology in the real world, a recent edition of the podcast IRL suggests we all need to pay attention.

We aren’t quite at the level of the techies in police and spy TV shows who can access almost any camera in the world and then identify faces with near 100% accuracy, but that future is closer than you might think.

For example, China is creating a database containing the faces of their entire population – 1.3 billion people – and a system that can “match a person’s face to his or her photo ID within three seconds and with 90% accuracy”. They plan to have it in place by 2020, just three years off.

But some applications are much closer to home. The photo management software that comes with most computers does a pretty good job of matching faces in your pictures. Google’s cloud-based Photos application has already collected several hundred million photos and you gotta wonder what they’re learning from all that data.
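Under the hood, most modern face matchers reduce each face photo to a numeric “embedding” vector produced by a neural network, then compare vectors. A minimal sketch of just that comparison step, with made-up four-dimensional vectors standing in for real embeddings (which typically have 128 or more dimensions), might look like:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors (1.0 = same direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Invented "embeddings" for illustration only; a real system would get
# these from a trained face-recognition network.
photo_id = [0.9, 0.1, 0.4, 0.2]
camera_a = [0.88, 0.12, 0.41, 0.19]  # same person, slightly different shot
camera_b = [0.1, 0.9, 0.2, 0.7]      # a different person

THRESHOLD = 0.99  # tuned to trade off false matches against misses

def same_person(e1, e2):
    return cosine_similarity(e1, e2) >= THRESHOLD

print(same_person(photo_id, camera_a))  # True
print(same_person(photo_id, camera_b))  # False
```

The hard part, of course, is the network that produces good embeddings; the matching itself is just geometry, which is why these systems scale so easily to millions of faces.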

Anyway, the podcast episode, produced by the Mozilla Foundation, is worth a half hour of your time.


The picture is a promotional scene from the television series Person of Interest. Their computer could do a whole lot more than just identify faces.

Go Ahead, I’m Listening…

During the holiday season, connected devices containing voice-activated assistants from Amazon (Alexa) and Google were among the most popular gifts. This week at the giant Consumer Electronics Show (CES), lots of companies demonstrated many more future products infused with Alexa, Google, and Apple’s Siri, including “smart” shower hardware.

But, according to the ACLU blog, you may want to think twice about placing an always-on, internet-connected microphone in your home.

Overall, digital assistants and other IoT devices create a triple threat to privacy: from government, corporations, and hackers.

It is a significant thing to allow a live microphone in your private space (just as it is to allow them in our public spaces). Once the hardware is in place, and receiving electricity, and connected to the Internet, then you’re reduced to placing your trust in the hands of two things that unfortunately are less than reliable these days: 1) software, and 2) policy.

The constant potential for accidental recording means that users do not necessarily have complete control over what audio gets transmitted to the cloud.

Once their audio is recorded and transmitted to a company, users depend for their privacy on good policies—how it is analyzed; how long and by whom it is stored, and in what form; how it is secured; who else it may be shared with; and any other purposes it may be used for. This includes corporate policies (caveat emptor), but also our nation’s laws and Constitution.

There are lots of pieces, technical and legal, that all have to work together to protect your information and privacy. I’m not convinced we’re there yet.

Heading off on only a slight detour: this issue of artificially intelligent assistants is something all of us educators need to watch. I’ve read of a few teachers who have placed Alexa and Google Home devices in their classrooms, although I’m not at all clear on the instructional purpose.

However, beyond that, many edtech companies are already building some form of data-collecting AI into their products. I fully expect to see always-listening, education-related devices being pitched to schools in the very near future, very likely with many of the same issues raised in this article.

Alexa: Don’t Screw Up My Kid

Articles about new technologies in the general media usually fall into one of two categories: breathless, this-is-the-coolest-thing-ever puff pieces or those it’s-gonna-kill-you-if-you’re-not-careful apocalyptic warnings. Occasionally writers manage to do both at the same time, but that’s rare.

A recent piece in the Washington Post leans toward that second theme by letting us know right in the headline that millions of kids are being shaped by know-it-all voice assistants. Those would be the little, connected, always-listening boxes like Amazon’s Alexa and Google’s Home that sit unobtrusively on a side table in your home waiting to answer all your questions. Or order another case of toilet paper.

Many parents have been startled and intrigued by the way these disembodied, know-it-all voices are impacting their kids’ behavior, making them more curious but also, at times, far less polite.

Wow. Must be something in a new study to make that claim, right?

But psychologists, technologists and linguists are only beginning to ponder the possible perils of surrounding kids with artificial intelligence, particularly as they traverse important stages of social and language development.

Siri 800x300

I would say we’re all beginning to ponder the possibilities, good and bad, of artificial intelligence, for society in general as well as for how it will affect children as they grow.

But are the ways kids interact with these devices any different from technologies of the past?1

Boosters of the technology say kids typically learn to acquire information using the prevailing technology of the moment — from the library card catalogue, to Google, to brief conversations with friendly, all-knowing voices. But what if these gadgets lead children, whose faces are already glued to screens, further away from situations where they learn important interpersonal skills?

I don’t think you need to be a “booster” of any technology to understand that most children, and even some of us old folks, have the remarkable ability to adapt to new tools for acquiring and using information. If you look closely, you might see that many of your students are doing a pretty good job of that already. And those important interpersonal skills? Kids seem to find ways to make those work as well.

Anyway, the writer goes on trying to make his case, adding a few anecdotes from parents, some quotes from a couple of academics, and a mention of a five-year-old study involving 90 children and a robot.

However, in the matter of how children interact with these relatively new, faceless, not-very-intelligent voices-in-a-box, there are a few points he only hints at that need greater emphasis.

First, if your child views Alexa as a “new robot sibling”, then you have some parenting to do. Start by reminding them that it’s only a plastic box with a small computer in it. That computer will respond to a relatively small set of fact-based questions and in that regard is no different from the encyclopedia at the library. And if they have no idea what a library is, unplug Alexa, get in the car and go there now.

Second, this is a wonderful opportunity for both of you to learn something about the whole concept of artificial intelligence. It doesn’t have to get complicated, but the question of how Alexa or Home (or Siri, probably the better known example from popular culture) works is a great jumping off point for investigation and inquiry. Teach your child and you will learn something in the process.
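One way to make that investigation concrete is to mimic the pipeline these assistants follow: wake word, speech-to-text, intent matching, canned response. Here is a deliberately toy sketch of just the intent-matching stage; the keywords, intents, and responses are all invented for illustration and bear no relation to how Amazon or Apple actually implement theirs:

```python
# Toy sketch of a voice assistant's text-handling stage.
# Real assistants run speech-to-text first; here we start from text.

INTENTS = {
    "weather": ["weather", "rain", "sunny", "forecast"],
    "time": ["time", "clock", "hour"],
    "music": ["play", "song", "music"],
}

RESPONSES = {
    "weather": "I don't really know the weather; I'd ask a server for it.",
    "time": "I'd read the system clock and say the time.",
    "music": "I'd send a request to a streaming service.",
    None: "Sorry, I don't understand that yet.",
}

def match_intent(utterance):
    """Pick the intent whose keywords appear most often in the utterance."""
    words = utterance.lower().split()
    best, best_score = None, 0
    for intent, keywords in INTENTS.items():
        score = sum(1 for w in words if w in keywords)
        if score > best_score:
            best, best_score = intent, score
    return best

def respond(utterance):
    return RESPONSES[match_intent(utterance)]

print(respond("What is the weather today"))
print(respond("Please play a song"))
print(respond("Tell me a story"))
```

Even a child can see the point once it’s laid out this way: the box isn’t understanding anything, it’s pattern-matching words against lists someone else wrote.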

Finally, stop blaming the technology! If a parent buys their child one of these…

Toy giant Mattel recently announced the birth of Aristotle, a home baby monitor launching this summer that “comforts, teaches and entertains” using AI from Microsoft. As children get older, they can ask or answer questions. The company says, “Aristotle was specifically designed to grow up with a child.”

…and then lets it do all the comforting, teaching, and entertaining, the problem is a lack of human intelligence, not the artificial kind.

Applying a Little Magic Sauce

Speaking of artificial intelligence, how well can an algorithm really understand someone today?

Companies like Facebook and Google have hundreds of coders working hard in the back room to build bots that can analyze the online behavior of their members. Their goal: to better understand them in the “real” world.

Ok, the actual goal is to understand how to sell them more stuff and increase profits in the next quarter.

Anyway, a recent series on the Note To Self podcast looks at the Privacy Paradox and what the online user can do to retain as much of it as possible when confronted with all those upcoming social media bots.

During one segment, they mentioned the Magic Sauce project from the University of Cambridge, which is defined on their main page as “[a] personalisation engine that accurately predicts psychological traits from digital footprints of human behaviour”.
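The general approach behind engines like this is to score features of your text against traits learned from labeled data. A deliberately crude sketch of the idea, with invented word lists and nothing resembling Cambridge’s actual model:

```python
# Crude illustration of scoring personality traits from text.
# The word lists below are invented for this example, not taken from
# any real psychometric model.

TRAIT_WORDS = {
    "openness": {"imagine", "art", "curious", "idea", "wonder"},
    "extraversion": {"party", "friends", "talk", "fun", "we"},
    "conscientiousness": {"plan", "schedule", "finish", "organize", "work"},
}

def score_traits(text):
    """Fraction of words in the text that hit each trait's word list."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    total = len(words) or 1
    return {trait: sum(w in vocab for w in words) / total
            for trait, vocab in TRAIT_WORDS.items()}

sample = "I plan my work carefully and finish every idea I imagine."
scores = score_traits(sample)
for trait, s in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{trait}: {s:.2f}")
```

Real systems replace the hand-made word lists with weights fitted to thousands of people who took actual personality tests, but the principle — text in, trait scores out — is the same.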

So, how accurate is their British magic?

I skipped the choice of having it dig through and analyze the pages I like on Facebook, but not because I’m afraid of what it might reveal. I have an account but never “like” anything2 and only rarely comment on the posts of others. The bot wouldn’t have enough stuff to work with.

The other choice is to paste in a sample of writing from a blog or other source, and I have 14 years’ worth of that crap. So I selected a more-than-200-word post from this space, one without any quotations, which would mix in someone else’s personality.

And this is what I got.

screenshot of magic sauce results

Big miss on the age, but thank you, bot. The rest, I have to admit, leans towards the accurate side, even if I don’t consider myself artistic or organized.

Of course, that was based on just one small sample of my life. The Cambridge Psychometrics Centre has a whole battery of tests to peel back your psychological profiles, including some “Fun” tests (ten minutes to discover your personality disorders?).

But that’s more than enough AI bot training for now.