Saturday, March 28, 2020

Internet Victim's Fund




Is anyone in favor of cyber hate? Is anyone in favor of online hate targeting people for their physical characteristics, age, sexual identity or religion? I doubt it. So why does this type of abusive, exploitative, degrading content persist? The answer is simple, and basic to our current attitude toward the internet: hate and falsehoods online are orders of magnitude easier to create than they are to challenge or remove. There are no mechanisms that level the playing field between fact and fiction, truth and falsehood, attack and defense. Hate and lies can take minutes to post, but days or months, if ever, to remove.

Some of the more responsible internet platforms, especially since the 2016 election, have instituted forms, programs, policies and departments to try to address the problem. But the problem is not isolated to elections, or to any one platform. Targeting of an individual or group often crosses mediums and platforms. When one channel or account is deleted, a backup account immediately takes over, all with a few easy clicks. The only way for victims to fight this battle is with a massive investment of time: investigating, filing reports or pursuing civil lawsuits to obtain information about the perpetrators. All of this takes expertise, lawyers and paperwork, and all of it involves money.

The result of pushing back against multi-platform abuse, exploitation or targeting is that the victim bears a huge burden, while the instigator may have an account suspended, if that. The few cases where there has been legal action represent a very small fraction of the real problem.

It is almost impossible to strike a balance between the posting and the challenging of bad content. It is possible, however, to give victims and targets of abuse a better set of tools to respond to bad situations. This must include offering experts, advocates and, when necessary, financial support to oppose abusers and exploiters. Not all aggressive speech merits or requires punitive action, but for far too long we have failed to err on the side of the victim and have given the victimizers almost free rein.

Once bad actors realize that anti-social, abusive, targeted, aggressively caustic and destructive behavior will be met with responses supported by industry, and may carry serious consequences, then and only then will progress be made against cyber hate.

Thursday, March 26, 2020

A Fire Drill Conceived by Stephen King




One thing I have heard repeatedly in conversations about the coronavirus pandemic and the U.S. national and local responses is, “at least we will be ready for next time.” This is a horrifying thought. It’s as if Stephen King had been asked to organize a fire drill to prepare for the end of the world.

I’m not sure which part of the newfound awareness from this practice disaster is least comforting. That segments of the national government are more concerned with money than life? That our fellow citizens are prone to panic buying of stupid things? That it took over a month for the federal government to admit there was actually a serious problem? That it took a near catastrophe to realize that segments of the news media and major internet platforms have no sense of what a valid information source looks like? Maybe it was disturbing that, rather than focusing on the problem, there was a distinct undertone of trying to blame the virus’s origin on “someone.” Or perhaps that there is a real sentiment among some Americans that people should be allowed to die, as long as they are not their neighbors or in their community.

It almost feels like the revelation of an obvious yet important lesson. Like, next time, as a hurricane approaches, I won’t let the cat out. Never did see that cat again after Hurricane Sandy!

Worst of all, to me, is the complacent acknowledgement that this will happen again in some form. In a way, we were lucky this time: it was not Ebola, bubonic plague or something even worse.

Ultimately and sadly, just like in a Stephen King novel, it is not the disease that poses the worst danger, but us: the people and companies who shamelessly profit from disaster, or who guard their profits by refusing to admit there is a disaster.

Yes, I learned a few things. I need to read more Stephen King. And I need to think about who I want to cough on first when I catch whatever the next plague turns out to be.

Wednesday, March 25, 2020

Time to Take the Internet to Court





Almost all of the user safety measures implemented by major online companies have been put in place out of fear: fear of litigation, regulation, legislation. Altruism has very rarely carried the day when it comes to discussions with platforms about user protection. Companies have a history of refusing to act with an abundance of caution when it comes to user safety, or even to act with concern for victim safety, until something disastrous happens and they have no choice. This has been true of many industries over the years, but the internet industry has always maintained that it was different and operated from a higher moral standard for society.

For the bulk of the internet’s existence, platforms have offered users a Terms of Service (ToS) or other policy outlining the standards of behavior for users, the repercussions for violating those standards and the protections for users’ information. Unfortunately, ToS are not usually considered legally binding, and many companies have seen fit to ignore stated obligations or to modify their ToS to eliminate any embarrassing or inconvenient clauses.

Are the ToS a legally binding commitment or not? More than a few platforms argue that the ToS, for the majority of services, are not offered in exchange for money, goods or services, which is one of the main characteristics of a binding agreement. However, that logic ignores the fact that most ToS allow platforms to sell or use the user’s data, which makes the users themselves a form of payment or product. It certainly ignores the fact that it is the users who enable the platforms to exist and prosper. Platforms may well maintain that they protect user data, but without protecting the real people behind the data, the results will eventually go wrong.

Why haven’t ToS been tested in court? It is a risky strategy. If a court finds in a company’s favor, ToS become meaningless, and companies will have nothing legally compelling them to enforce their policies or to respond to requests to do so. If a court decides against the companies, the burden of formulating and enforcing workable policies this late in the game would be daunting for any industry. Either way, a court decision on ToS would have extensive impact, regardless of which way the court goes. As it should. This is an issue that has been left unresolved for far too long.

The solutions are not simple, but the first order of business is to establish, whether in court, by legislation or by mutual and binding agreement, that ToS and similar user safety and assurance policies are legally binding, and that a company’s failure to enforce its stated policies and standards is the equivalent of a breach of warranty. Until internet users can have reasonable faith in a platform’s policies, and know they have recourse if the platform fails to enforce them, all the moderators, bots and artificial-intelligence content watchdogs in the world cannot truly fix the problems of abuse, exploitation, hatred, propaganda and racism the world is subjected to daily.

Thinking Faster than the Speed of Hate

Jonathan Vick, Acting Deputy Director, International Network Against Cyber Hate (INACH)