Legislating Government Censorship

In May 2014, the high court of the European Union declared that EU citizens had a “right to be forgotten” online, derived from the Union’s stringent personal privacy laws. The information isn’t actually forgotten, of course, just removed from our collective memory, also known as Google.

The “right to be forgotten” in the European Union originated from a court ruling demanding that Google and other search engines remove links to a story that embarrassed a Spanish man by detailing a previous home repossession. The story was not factually inaccurate. But he insisted it was no longer relevant and that it embarrassed him, and the court agreed he had the right to have the information censored from search engines.

Recently, courts in the EU have found exceptions to that absolute right, but here in the US many lawmakers and pundits have speculated as to whether we should have the same rights as the Europeans.

This week, two members of the New York legislature decided the answer is yes, and have introduced their own interpretation that actually goes beyond the rights granted to European citizens. Because if anything is worth doing, it’s worth overdoing.


Their bill would require the removal of “content about such individual, and links or indexes to any of the same, that is ‘inaccurate’, ‘irrelevant’, ‘inadequate’ or ‘excessive’”, from both search engines and the original website, within 30 days of a request.

Basically, with some exceptions for information about certain crimes and matters of “significant current public interest”, the law requires anything posted on the web that someone claims is “no longer material to current public debate or discourse” must be forgotten. Under penalty of some heavy fines.

What could possibly go wrong with a poorly defined (at what point does content become “excessive”?) law like that?

So, under this bill, newspapers, scholarly works, copies of books on Google Books and Amazon, online encyclopedias (Wikipedia and others) — all would have to be censored whenever a judge and jury found (or the author expected them to find) that the speech was “no longer material to current public debate or discourse” (except when it was “related to convicted felonies” or “legal matters relating to violence” in which the subject played a “central and substantial” role). And of course the bill contains no exception even for material of genuine historical interest; after all, such speech would have to be removed if it was “no longer material to current public debate.” Nor is there an exception for autobiographic material, whether in a book, on a blog or anywhere else. Nor is there an exception for political figures, prominent businesspeople and others.

I’m not a Constitutional expert, but even I realize a law like this would never survive a First Amendment challenge.

But beyond the legal issues there is a far more concerning 800-pound gorilla. Right now we have far too many “leaders” who lust for tools that would allow the government to review and censor the online discussions of its citizens.

We don’t need a right to be forgotten in the US as much as we do a right to be left alone.

Alexa: Don’t Screw Up My Kid

Articles about new technologies in the general media usually fall into one of two categories: breathless, this-is-the-coolest-thing-ever puff pieces or those it’s-gonna-kill-you-if-you’re-not-careful apocalyptic warnings. Occasionally writers manage to do both at the same time, but that’s rare.

A recent piece in the Washington Post leans toward that second theme by letting us know right in the headline that millions of kids are being shaped by know-it-all voice assistants. Those would be the little, connected, always-listening boxes like Amazon’s Alexa and Google’s Home that sit unobtrusively on a side table in your home waiting to answer all your questions. Or order another case of toilet paper.

Many parents have been startled and intrigued by the way these disembodied, know-it-all voices are impacting their kids’ behavior, making them more curious but also, at times, far less polite.

Wow. Must be something in a new study to make that claim, right?

But psychologists, technologists and linguists are only beginning to ponder the possible perils of surrounding kids with artificial intelligence, particularly as they traverse important stages of social and language development.


I would say we’re all beginning to ponder the possibilities, good and bad, of artificial intelligence, for society in general as well as for how it will affect children as they grow.

But are the ways kids interact with these devices any different from technologies of the past?1

Boosters of the technology say kids typically learn to acquire information using the prevailing technology of the moment — from the library card catalogue, to Google, to brief conversations with friendly, all-knowing voices. But what if these gadgets lead children, whose faces are already glued to screens, further away from situations where they learn important interpersonal skills?

I don’t think you need to be a “booster” of any technology to understand that most children, and even some of us old folks, have the remarkable ability to adapt to new tools for acquiring and using information. If you look closely, you might see that many of your students are doing a pretty good job of that already. And those important interpersonal skills? Kids seem to find ways to make those work as well.

Anyway, the writer goes on trying to make his case, adding a few anecdotes from parents, some quotes from a couple of academics, and mentioning a five-year-old study involving 90 children and a robot.

However, in the matter of how children interact with these relatively new, faceless, not-very-intelligent voices-in-a-box, there are a few points he only hints at that need greater emphasis.

First, if your child views Alexa as a “new robot sibling”, then you have some parenting to do. Start by reminding them that it’s only a plastic box with a small computer in it. That computer will respond to a relatively small set of fact-based questions and in that regard is no different from the encyclopedia at the library. And if they have no idea what a library is, unplug Alexa, get in the car and go there now.

Second, this is a wonderful opportunity for both of you to learn something about the whole concept of artificial intelligence. It doesn’t have to get complicated, but the question of how Alexa or Home (or Siri, probably the better-known example from popular culture) works is a great jumping-off point for investigation and inquiry. Teach your child and you will learn something in the process.

Finally, stop blaming the technology! If a parent buys their child one of these…

Toy giant Mattel recently announced the birth of Aristotle, a home baby monitor launching this summer that “comforts, teaches and entertains” using AI from Microsoft. As children get older, they can ask or answer questions. The company says, “Aristotle was specifically designed to grow up with a child.”

…and then lets it do all the comforting, teaching, and entertaining, the problem is a lack of human intelligence, not the artificial kind.

Applying a Little Magic Sauce

Speaking of artificial intelligence, how well can an algorithm really understand someone today?

Companies like Facebook and Google have hundreds of coders working hard in the back room to build bots that can analyze the online behavior of their members. Their goal: to better understand them in the “real” world.

Ok, the actual goal is to understand how to sell them more stuff and increase profits in the next quarter.

Anyway, a recent series on the Note To Self podcast looks at the Privacy Paradox and what the online user can do to retain as much of it as possible when confronted with all those upcoming social media bots.

During one segment, they mentioned the Magic Sauce project from the University of Cambridge, which is defined on their main page as “[a] personalisation engine that accurately predicts psychological traits from digital footprints of human behaviour”.

So, how accurate is their British magic?

I skipped the choice of having it dig through and analyze the pages I like on Facebook, but not because I’m afraid of what it might reveal. I have an account but never “like” anything2 and only rarely comment on the posts of others. The bot wouldn’t have enough stuff to work with.

The other choice is to paste in a sample of writing from a blog or other source, and I have 14 years’ worth of that crap. So I selected a more-than-200-word post from this space, one without any quotations, which would mix in someone else’s personality.

And this is what I got.

[screenshot of Magic Sauce results]

Big miss on the age, but thank you, bot. The rest, I have to admit, leans towards the accurate side, even if I don’t consider myself artistic or organized.

Of course, that was based on just one small sample of my life. The Cambridge Psychometrics Centre has a whole battery of tests to peel back your psychological profiles, including some “Fun” tests (ten minutes to discover your personality disorders?).

But that’s more than enough AI bot training for now.

Reading The Digital Fine Print

Have you ever read Facebook’s Terms of Service and Privacy Policy? How about Twitter, Instagram, YouTube, or any of the many other social networking sites you likely interact with?

Those user agreements are not just a formality in the registration process. They are legally binding contracts between you and the company, which apply whether you read and understood the whole thing or not.

A lawyer who has read many of these documents explains some of the little pieces you missed, using Instagram as an example.

Instagram’s Terms of Service is a long document, most of which is pretty straightforward and reasonably fair. You agree not to harass other users, not to try to hack their code, and other things that I think we can all agree are pretty necessary to keep things functioning.

The licensing section, though, is what I’d like to examine a little more closely. Particularly, this paragraph:

Instagram does not claim ownership of any Content that you post on or through the Service. Instead, you hereby grant to Instagram a non-exclusive, fully paid and royalty-free, transferable, sub-licensable, worldwide license to use the Content that you post on or through the Service, subject to the Service’s Privacy Policy, available here http://instagram.com/legal/privacy/, including but not limited to sections 3 (“Sharing of Your Information”), 4 (“How We Store Your Information”), and 5 (“Your Choices About Your Information”).

That’s pretty clear, right? Well, maybe not.

Here, in one sentence, is what you’ve just agreed to: “You’ve just granted Instagram the right to do anything at all with your photos, without ever paying you a dime for any of it.”

But that’s not all.

Instagram and the others are not non-profit, public services. They are trying to make a lot of money. And since you didn’t give them any, the companies find other ways: “They provide a service for free, and in return you give them some information about you, which they sell to advertisers.”

Whether you know it or not, your data is pretty valuable stuff. Your browsing history provides a lot of information about you and, through the use of cookies and other technologies, Instagram and others are able to collect and share those particulars with anyone willing to pay for it.

The privacy policies for these services often talk about “anonymizing” your data, removing any specific information about you before passing it along to a third party. However, there are no promises.
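To make the idea of “anonymizing” concrete, here is a minimal sketch in Python of what that step can look like. Everything here is hypothetical: the field names, the salt, and the whole routine are an illustration, not anything Instagram has published. And note that replacing a user id with a salted hash is really pseudonymization, a weaker guarantee than true anonymization.

```python
import hashlib

def anonymize(record, pii_fields=("name", "email", "ip_address"), salt="example-salt"):
    """Return a copy of a user record with direct identifiers dropped and
    the user id replaced by a salted hash, so a third party can count
    unique users without learning who they are."""
    clean = {k: v for k, v in record.items() if k not in pii_fields}
    raw = (salt + str(record["user_id"])).encode("utf-8")
    clean["user_id"] = hashlib.sha256(raw).hexdigest()[:16]
    return clean

record = {
    "user_id": 12345,
    "name": "Jane Doe",
    "email": "jane@example.com",
    "ip_address": "203.0.113.7",
    "pages_viewed": ["travel", "cameras"],
}

print(anonymize(record))
```

The behavioral data (the pages viewed) survives intact, which is exactly why it’s still valuable to advertisers even after the “anonymizing” step.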

Does Instagram anonymize data in this way? Probably so, although it’s difficult if not impossible to verify that. But under the terms laid out in Instagram’s TOS, they are under no obligation to do so, and if they suddenly decided to stop and just straight-up sell all your personal info to advertisers, (1) they would be perfectly within their legal rights, and (2) you would probably never know about it.

Ok, if I just delete my account, all will be well, right? Probably not.

But under the TOS, Instagram is perfectly free to keep sharing those photos regardless of whether they’re deleted. The TOS is most likely worded that way to cover those times when a user deletes something, but cached copies continue to appear for a while. Nonetheless, the intent is irrelevant when the language is clear, and that language in the TOS is unambiguous.

Although this post (on a blog for photographers) examines Instagram, the terms of service and privacy policies on other social media sites are very similar, if not identical in the case of Facebook, which owns Instagram. And the writer is not recommending you abandon these services.

Nothing I’ve written here is meant to say that you shouldn’t use Facebook, Google, Instagram, Twitter, or any of the others. I use them myself, and just like you, when I signed up I clicked “OK” without reading the user agreement. Nonetheless, it’s worth taking a moment to examine why you’re using them and what you’re getting out of it, and consider how you can prepare now for contingencies that may arise down the road.

I’m not suggesting you quit either.

However, knowing what you’ve agreed to with a simple click of the mouse or tap of the screen is always a good thing. And once you understand it, explain it to your students, colleagues, and your own children. They deserve to know what they’ve legally agreed to as well.

Blogging Still Matters

Returning to the idea of a domain of one’s own, I ran across a post from longtime blogger Andy Baio, who mourns the “decline of independent blogging” but still believes blogs are “still worth fighting for”.

Ultimately, it comes down to two things: ownership and control.

Here, I control my words. Nobody can shut this site down, run annoying ads on it, or sell it to a phone company. Nobody can tell me what I can or can’t say, and I have complete control over the way it’s displayed. Nobody except me can change the URL structure, breaking 14 years of links to content on the web.

Ok, so none of us own a domain – we only rent it. And few people own the web server that distributes their work.

But by blogging at our own domain – outside of corporate platforms like Facebook, Tumblr (Verizon, by way of Yahoo), and Blogger (Google) – we still own and control our ideas and how they are first presented to the world.

Echoing Andy’s desire to see more independent bloggers, I firmly believe more educators should be posting out there on the open web. On their own domains. Telling the world what’s going on in their classrooms, schools, and districts (charter companies?). Discussing their ideas about learning. Reflecting on problems standing in their way. Contributing their unique voices to the mix on the web.

However, blogging is not enough. We also need to help each other build an audience and build communities around those educators who are willing to share in the open. And, on the other end, to teach our colleagues, parents, and even students why reading blogs is important, where to find the good ones, and how to easily build them into their routines (RSS still lives!).
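Since RSS really does still live, here’s a minimal sketch of how little machinery it takes to read a feed. The feed below is an inline stand-in for a real blog’s RSS URL, and the function only handles the basic RSS 2.0 shape; a real reader would fetch the feed over HTTP and cope with Atom and messier markup.

```python
import xml.etree.ElementTree as ET

# A tiny inline feed standing in for a real blog's RSS 2.0 document.
FEED = """<rss version="2.0">
  <channel>
    <title>Example Classroom Blog</title>
    <item><title>First post</title><link>https://example.com/1</link></item>
    <item><title>Second post</title><link>https://example.com/2</link></item>
  </channel>
</rss>"""

def latest_posts(feed_xml):
    """Parse an RSS 2.0 feed and return (title, link) pairs for each item."""
    root = ET.fromstring(feed_xml)
    return [(item.findtext("title"), item.findtext("link"))
            for item in root.iter("item")]

for title, link in latest_posts(FEED):
    print(title, "->", link)
```

That’s the whole trick behind every feed reader your colleagues might use: a list of titles and links, updated whenever the blogger posts, with no algorithm deciding what they see.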

How does that happen? I don’t quite know. Others have tried and largely failed (top 100 lists and trivial awards do not a community make). But I think it’s worth more effort, and I’m open to suggestions.

Right now, all of this is just an idea buzzing around my warped little mind. We’ll see if anything develops from it.