Sunday, February 23, 2020

Bananas in Kaunas and the Illusion of a Safe Internet - Anything Can Look Like Progress.

In 1995, just a few short years after Lithuania reasserted its independence from Russia, I traveled to Kaunas. It still very much had the look and feel of a Soviet Bloc country, with small signs of change peeking through the cracks in the old authoritarianism. The most obvious sign of change was bananas. You couldn’t walk through Old Town Kaunas without seeing several tables of banana vendors, and banana peels filled every public trash receptacle. We were told this was the most immediately visible impact of the Russians’ retreat. For all the propagandistic “glory” of the Russian governing system, it could not manage to transport the delicate, quickly perishing banana to market in the northern countries. Bananas started arriving when the Russians left.

Of course, in 1995, the fifty-year legacy of Russian domination was not automatically dispelled by the arrival of bananas. This indicator of a new capacity for transportation and commerce was not a goal instantly achieved, but the start of a long road to reconstruction. Yet because the Soviet systems in Lithuania were so dysfunctional, bananas seemed like a great victory.

Small things, when so little has come before, can seem like a great accomplishment, a success. It took almost another twenty years for Lithuania to reclaim its identity and economy after more than fifty years of Soviet neglect. Entrenched systems, habits and practices, however obviously ineffective, are hard to shed. We often accept little victories as complete solutions in order to avoid the heavy, difficult work that real solutions require.

Sometime after 1995, as the Internet reached the critical mass to go commercial, it became obvious to observers that something was seriously wrong. Destructive, harmful and vicious material was emerging on every platform that allowed public content. Content moderation, filtering and meaningful policies about prohibited material were nonexistent. Worse yet, when the problem was pointed out, it was dismissed, excused, minimized and ignored by platforms, companies and governments. An emerging problem was given all the room it needed to become established, and it did.

By 2006, David Duke, the National Socialist Movement, many chapters of the KKK and various other racist, neo-Nazi and fascist groups had websites, YouTube channels and storefronts on Amazon, and were aggressively exploring the then-new Facebook and Twitter. The extremist news websites Vanguard News Network and Stormfront were being cited as sources in Google’s news feed. When this was brought to the attention of the companies, there was again an unwillingness to look the problem in the face: a refusal to consider that extremists were mounting a coordinated effort to exploit the entire internet ecosystem, and a refusal to consider supporting research into the problem.

The burden of proving that hate was prevalent on the Internet and becoming normalized was left to civil society groups, and it would have to be carried without access to deep data or the financial and technical support of the major industry players. In 2006, this meant that proving the range and depth of the problem would be difficult and expensive at best.

Companies began to slowly improve their policies around 2010, largely in response to the threat of legislation in the EU and lawsuits in the US. By 2015 the movement to improve the Internet appeared to be gathering steam. Hate, however, was still prevalent and still evolving, and had largely been accepted as a necessary evil.

In August 2017, the Unite the Right rally in Charlottesville, Virginia finally demonstrated to the internet industry and to America the result of the hate that had been spreading online. Although officially called to protest the potential removal of a Confederate monument, the rally saw extremist groups openly calling for armed confrontation. In the end, one person died and many were injured when an extremist group member rammed a crowd with a car.

Finally, more concerted efforts were put into action. This came twenty years after the first indications that hateful content was poisoning the web, and ten years after a conclave at which the tech industry was asked for help studying the problem. In that time, hate and misinformation continued to permeate the medium.

The efforts now in place appear to be significant. During an announced six-week “monitoring exercise,” conducted by trusted flagger organizations in the EU at the request of the European Commission (EC) to check whether social media companies were upholding their side of agreements on enforcement of their Terms of Service, the platforms’ enforcement met the requirements. However, when the International Network Against Cyber Hate (INACH), an EC monitoring organization, conducted a similar but unannounced exercise, the outcomes were completely different.

As recently as this week, Facebook and Twitter refused to remove maliciously edited videos that made it appear that Nancy Pelosi tore up a copy of Donald Trump’s State of the Union address during a tribute to veteran servicemen.

This reluctance by the major platforms to act consistently against destructive content diminishes their efforts to date and makes any ongoing efforts seem disingenuous. Are the platforms truly fighting for a safer internet, or is this just like seeing bananas in Kaunas in 1995: a nice symbol, but not really a solution to the problem?
