Thursday, June 2, 2016

Zombies of the Internet


Zombies, the living dead, permeate the internet. They shamble through corridors of information, sustained by content they committed to websites, blogs, videos and platforms when they were vibrant creatures. Some are great heroes, some are great villains and some never existed at all.

Most people, in theory, seek to avoid zombiehood. Wannabe Internet zombies crave the promise of mindless immortality: some to perpetuate the good they pursued in life, some to leave behind a legacy of the hate and destruction they spread.

The zombies of fiction are flesh-eating, single-minded, corrupted humans. On the internet we are talking about the flesh of society at large, the single-mindedness of the written word and the corruption of ideas and personas.

Hitler's internet zombie may well achieve the Thousand Year Reich the madman could not attain in life. Even now, in various corners of the internet, Hitler's zombie is being remade into a less malignant creature to serve assorted agendas. At the same time, the personas of Martin Luther King and other righteous people are undergoing constant attempts to recast them as evil and destructive. You can't control your zombie or what other people do to it. It is tempting to put Hitler's zombie in a tutu and a clown nose, but history is already getting distorted enough.

Zombies are durable, but not sturdy. They may last, but bits easily fall off and get lost. Worse yet, as artificial intelligence advances, they will still look rotten, but they will sound like the rest of us.


Sunday, May 15, 2016

The Problem with the Internet - People

And God said, "Let there be bytes," and there were bytes. God saw that the bytes were good... Then God said, "Let us make mankind in our image"... and everything started going wrong.



The Internet is not inherently bad, broken or evil. The technology and platforms are neutral. Facebook, Google, Twitter, Yahoo are not evil. Reddit may be a little evil, and 4Chan - well, there will always be problem areas. Ah, but you say you have seen bad things on Yahoo News, Bing and Google search results. Yes. And that is not malice, just a program or algorithm doing impartially what it is meant to do.

All the technology on the internet works on variations of a simple formula: if "A" then "B". If you make an event "A" (click a box, enter a URL, search a word), an action ("then") is taken (open a form, redirect a user, deliver an ad) and a result "B" happens (a form for new members is delivered, you are sent to a traffic report website or you get an ad for ice cream). This formula is why a picture on your phone goes straight to your Instagram and why Google knows what you ordered from Amazon. The reality is much more mind-bendingly complex, but hopefully you get the idea.
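
To make the formula concrete, here is a minimal sketch in Python. The event names and the rules table are invented purely for illustration; no platform actually runs on a lookup this small, but the shape of the logic is the same.

# Toy illustration of the "if A then B" pattern: an event (A) triggers an
# action, which produces a result (B). Event names here are made up.
def handle_event(event):
    rules = {
        "click_signup_box": "deliver_new_member_form",      # A: click a box   -> B: membership form
        "enter_traffic_url": "redirect_to_traffic_report",   # A: enter a URL   -> B: traffic site
        "search_ice_cream": "show_ice_cream_ad",             # A: search a word -> B: ice cream ad
    }
    return rules.get(event, "do_nothing")

for event in ["click_signup_box", "search_ice_cream", "something_unexpected"]:
    print(event, "->", handle_event(event))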

It all works fairly well until you add people into the equation. People cannot leave well enough alone. Comment sections on news articles and forums were once hailed as the gateway to a new form of democracy, until people learned they could curse, be abusive and post links to get-rich-quick schemes.

Malefactors come in all shapes, sizes, ages and sexes. Agendas range from lofty and altruistic to mercenary, exploitative and truly bizarre.  We The People clog the internet with hate, racism, rumors, innuendo, bad financial opportunities and cat pictures. Us. Not the technology. Not the companies. Not the platforms. 

But the internet cannot exist without people. Maybe Facebook's news feed had bias. So what. Some human bias infects whatever we touch: Fox News, MSNBC, the New York Times and any content with editors, moderators or curators. We are human. We bring our biases with us, no matter what.


Thursday, April 21, 2016

In Celebration of the Imperfect Internet

The internet is not perfect, the platforms are not perfect and we all know internet users are not perfect. Users expect the platforms to be perfect, the platforms expect the users to be perfect, but the imperfection of the internet makes the flaws of everyone involved absolutely ideal.



It is good and important that the internet is flawed. Perfect things don't evolve. You can't make perfect better. Improvement is the path to perfection.  We have seen the internet, apps, technology, processes and aspirations improve because they are not perfect. Better does not mean perfect. None of the platforms are perfect. They all have flaws that need to be fixed.

Users arguably have even more room for growth than the platforms. Inexperience, misconceptions, bad habits and bad attitudes are rampant in civil society, government, institutions and even start-up internet companies. But this too is a good thing. It motivates and energizes users to demand something closer to their own idea of a better internet. Sometimes, when it doesn't exist, they make it themselves.

We want perfect. We want safe. We need dynamic and innovative. The only way to get what we need while getting closer to what we want is to accept the wonderfully imperfect way we have to get there.


Monday, April 4, 2016

25 Faces of Hate

We don't rate or judge web content in any meaningful, objective way; not blogs, YouTube channels, Facebook pages, Instagram, Snapchat or Twitter accounts. For the vast majority of people, web content is simply good or bad and has x number of likes, shares or views. There have been no successful attempts to establish the parameters that define web content.

The sad result is that web content is viewed very narrowly as good or bad, with likes or dislikes. Equally, the reaction is often polar and subjective. The internet isn't that simple.



Not as a solution, but as a place to start the thought process, here are five parameters to give context to web content. For the sake of argument and simplicity, let's assign each parameter a positive, neutral or negative value.

#1- intent - benevolent to malevolent
#2- depth - pictures, articles or combinations
#3- width - how many followers, readers, re-posts
#4- frequency - number of posts, quantity of content
#5- noise - original or shared content

This means, given the five parameters with plus, neutral and minus values, there are 3 x 3 x 3 x 3 x 3 = 243 possible combinations of criteria. Clearly, the criteria are not scientific, far more variations exist for each parameter and not all 243 represent what might be considered hateful content. So, to be fair, let's say that 10 percent of the 243 variations represent hateful content. That means there are about 25 variations of hate, each one different in its components, dynamics and impact. Each variation has its own optimum tactics the public can use for addressing, blunting, de-fanging and healing from that specific variation of hate.
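
As a quick check of the arithmetic, here is a small Python sketch. The parameter names and the 10 percent figure are simply the working assumptions above, not measured values.

# Enumerate every combination of the five parameters, each taking a
# positive, neutral or negative value, then apply the 10 percent assumption.
from itertools import product

parameters = ["intent", "depth", "width", "frequency", "noise"]
values = ["positive", "neutral", "negative"]

combinations = list(product(values, repeat=len(parameters)))
print(len(combinations))                # 3 x 3 x 3 x 3 x 3 = 243
print(round(0.10 * len(combinations)))  # about 24, the post's "roughly 25 faces of hate"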

Of the 25 faces of hate, no two are the same and no two have the same answer, solution or response. If it were easy to fight hate, there wouldn't be so much of it.

Friday, March 25, 2016

What Tay Taught us When the Internet Taught Her Hate Speech

http://blog.adl.org/extremism/what-tay-taught-us-when-the-internet-taught-her-hate-speech

It's tough being born as a teenager. Yesterday, Microsoft launched its new artificial intelligence (AI) computer bot, named Tay and envisioned as a teenage girl, and she had a very rough first day. She was immediately besieged by excited techies, the curious and the haters. In a few hours, she was drawn into tens of thousands of exchanges. In the process, racists, anti-Semites, misogynists and other haters manipulated her into repeating some highly offensive statements. Microsoft may have taught Tay to converse and to retweet, but they failed to recognize that she would need to engage in some critical thinking, and to know how to recognize when someone else was saying something offensive.

Microsoft should have probably anticipated the problems Tay might encounter. However, Microsoft did not program Tay to spew hate. It was clearly the Internet's dark forces who came out to meet Tay and do their damage.

Microsoft and Tay are not alone in facing this type of problem. Every major Internet platform, interactive app and online business has experienced something similar at some time. These hiccups are all learning experiences. In this case, Tay taught Microsoft and all of us a lesson. We need to be better aware of how quickly things can get ugly on the Internet, how important critical thinking is to all tech users, and how, despite our best efforts, the worst bigots and haters online are never far from the surface.

Innovation, experimentation and adventure in technology are necessary and important, and should never be discouraged. Tay's first exposure to people didn't go as well as it might have. But we hope everyone has learned something along the way. Tay 2.0 should be very interesting.

Saturday, March 19, 2016

It's Not an Information Highway

The Internet is not an information highway. It is not that orderly.

It is an information ocean.  Vast and chaotic. There are monsters and submerged obstacles below. Storms above.

Changes in technology, culture, companies, law, policy and events create tides, currents and swells. We find ourselves blown, unexpectedly, into unfriendly ports.

We need to float and navigate. Sometimes we need to swim and sometimes to hold our breath so we don't drown. Unforeseen challenges are always just over the horizon and coming from every angle.

This is not a highway, not even close.

But as bad as it can be, it is important. It is the ultimate source of food for our minds and hearts.


Monday, January 11, 2016

Tech-ing as Good as the Bad Guys

Are you frustrated by hate on the Internet? Are you frustrated by not getting the response you want from the platforms? Do you know what your problem is? You're not as good at using the Internet as the bad guys.

The haters, terrorists and malefactors seem to have all the time in the world to figure out technology, the Internet and the best way to confound the platforms. They have to. Manipulating and circumventing tech companies' prohibited-use policies and mechanisms is critical to distributing their vitriol.

It seems like we are always chasing after the bad. This is, in part, because we stubbornly cling to the idea that people are looking for the best uses of technology, not the worst. Of course, if you are someone intent on creating mayhem, destruction and distress, that is what you would define as the best use of technology. The bad players are out there and trying to use the best tech in the worst ways, and with great dedication.

Our most common response to hate is to turn it off, shut it down or shout it down. That is simply not enough.  The bad guys don't just shout down the good, they work diligently to out-maneuver it. They collaborate and build on each other's work to create targeted false narratives, dialogues and distortions.  The lesson we need to learn from them starts there.

We need to anticipate how good products will be manipulated and how definitions of prohibited content will be conflated. Current events and history, good and bad, need to be viewed as potential weapons against truth and common sense. They need to be defended before they are assailed. We cannot just promote good speech, inspiring messages and social good. Outmaneuvering bad speech, anticipating its emergence and developing collaborative efforts need to be a genuine priority - perhaps even beyond liking, friending and selfies.

This is not the fun side of technology. It is not something that would occur to most people to be concerned about. Unfortunately, it is an all too reflexive behavior for those who are compromising the promise of technology. We need to use tech, policy and our determination as well as the bad guys do, maybe even better.







