Thursday, March 26, 2020

A Fire Drill Conceived by Stephen King




One thing I have heard repeatedly in conversations about the coronavirus pandemic and the U.S. national and local responses is, “at least we will be ready for next time.” This is a horrifying thought. It’s as if Stephen King were asked to organize a fire drill to prepare for the end of the world.

I’m not sure which part of the newfound awareness from this practice disaster is least comforting. That segments of the national government are more concerned with money than life? That our fellow citizens are prone to panic buying of stupid things? That it took over a month for the federal government to admit there was actually a serious problem? That it took a near catastrophe to realize segments of the news media and major internet platforms have no sense of what a valid information source looks like? Maybe it was disturbing that, rather than focus on the problem, there was a distinct undertone of trying to blame the virus’s origin on “someone”? Or perhaps that there is a real sentiment among some Americans that people should be allowed to die, so long as they are not our neighbors or in our communities.

It almost feels like the revelation of an obvious yet important lesson. Like learning that next time, as a hurricane approaches, I won’t let the cat out. I never did see that cat again after Hurricane Sandy!

Worst of all, to me, is the complacent acknowledgement that this will happen again in some form. This time, the thinking goes, we were almost lucky: it was not Ebola, bubonic plague or something even deadlier.

Ultimately and sadly, just as in a Stephen King novel, it is not the disease that poses the worst danger, but us: the people and companies who shamelessly profit from disaster, or who guard their profits by refusing to admit there is a disaster.

Yes, I learned a few things. I need to read more Stephen King. And I need to think about who I want to cough on first when I catch whatever the next plague turns out to be.

Wednesday, March 25, 2020

Time to Take the Internet to Court





Almost all of the user safety measures implemented by major online companies have been put in place out of fear: fear of litigation, regulation, legislation. Altruism has rarely carried the day in discussions with platforms about user protection. Companies have a history of refusing to act with an abundance of caution on user safety, or even with concern for victim safety, until something disastrous happens and they have no choice. This has been true of many industries over the years, but the internet industry has always maintained that it was something different, working from a higher moral standard for society.

For the bulk of the internet’s existence, platforms have offered users a Terms of Service (ToS) or similar policy outlining the standards of behavior expected of users, the repercussions for violating those standards, and the protections for users’ information. Unfortunately, ToS are not usually considered legally binding, and many companies have seen fit to ignore their stated obligations, or to modify their policies to eliminate any embarrassing or inconvenient clauses.

Are the ToS a legally binding commitment or not? More than a few platforms maintain that the ToS, for the majority of services, are not offered in exchange for money, goods or services, which is one of the main characteristics of a binding agreement. That logic, however, ignores that most ToS allow platforms to sell or use the user’s data, making the users themselves a form of payment, service or product. It certainly ignores the fact that it is the users who enable the platforms to exist and prosper. Platforms may well maintain that they protect user data, but without protecting the real people behind the data, the results will eventually go wrong.

Why haven’t ToS been tested in court? It is a risky strategy. If a court finds in the companies’ favor, ToS become meaningless, and nothing will legally compel companies to enforce their policies or respond to requests to do so. If a court decides against the companies, the burden of formulating and enforcing livable policies this late in the game would be daunting for any industry. Either way, a court decision on ToS would have extensive impact. As it should. This is an issue that has been left unresolved for far too long.

The solutions are not simple, but the first order of business is to establish, whether in court, by legislation or by mutual and binding agreement, that ToS and similar user safety and assurance policies are legally binding, and that a company’s failure to enforce its stated policies and standards is the equivalent of a breach of warranty. Until internet users can have reasonable faith in a platform’s policies, and know they have recourse if the platform fails to enforce them, all the moderators, bots and artificial intelligence content watchdogs in the world cannot truly fix the problems of abuse, exploitation, hatred, propaganda and racism the world is subjected to daily.

Sunday, February 23, 2020

Bananas in Kaunas and the Illusion of a Safe Internet - Anything Can Look Like Progress.




In 1995, just a few short years after Lithuania reasserted its independence from Russia, I traveled to Kaunas. It still very much had the look and feel of a Soviet Bloc country, with small signs of change peeking through the cracks in the old authoritarianism. The most obvious sign of change was bananas. You couldn’t walk through Old Town Kaunas without seeing several tables of banana vendors, and banana peels in every public trash receptacle. We were told that this was the immediate, visible impact of the Russians’ retreat. For all the propagandistic “glory” of the Russian governing system, it could not manage to effectively transport the delicate, quickly perishing banana to market in the northern countries. Bananas started arriving when the Russians left.

Of course, in 1995, the fifty-year legacy of Russian domination was not automatically dispelled with the arrival of bananas. This indicator of a new transportation and commerce capability was not a goal instantly achieved, but the start of a long road to reconstruction. However, because the Soviet systems in Lithuania were so dysfunctional, bananas seemed like a great victory.

Small things, when so little has been done before, can seem like a great accomplishment, a success. It took almost another twenty years for Lithuania to reclaim its identity and economy after more than fifty years of Soviet neglect. Entrenched systems, habits and practices, however obviously ineffective, are hard to shed. We often opt to accept little victories as complete solutions in order to avoid the heavy, difficult work that real solutions require.

Sometime after 1995, as the Internet achieved the critical mass to go commercial, it became obvious to observers that something was seriously wrong. Destructive, harmful and vicious content was emerging on every platform that allowed public content, while content moderation, filtering and meaningful policies about prohibited material were nonexistent. Worse yet, when the problem was pointed out, it was dismissed, excused, minimized and ignored by platforms, companies and government. An emerging problem was given all the room it needed to become established, and it did.

By 2006, David Duke, the National Socialist Movement, many chapters of the KKK and various other racist, neo-Nazi and fascist groups had websites, YouTube channels and storefronts on Amazon, and were aggressively exploring the then-new Facebook and Twitter. Extremist news websites Vanguard News Network and Stormfront were being cited as sources in Google’s news feed. When this was brought to the attention of the companies, there was again an unwillingness to look the problem in the face: a refusal to consider that extremists were mounting a coordinated effort to exploit the entire internet ecosystem, and a refusal to consider supporting research into the problem.

The burden of proving that hate was prevalent on the Internet and becoming normalized was left to civil society groups, without access to deep data or the financial and technical support of the major industry players. In 2006, this meant that proving the range and depth of the problem would be difficult and expensive at best.

Companies began slowly improving their policies in 2010, largely in response to the threat of legislation in the EU and lawsuits in the US. By 2015 the movement to improve the Internet appeared to be gathering steam. However, hate was still prevalent and still evolving; it had largely been accepted as a necessary evil.

In August 2017, the Unite the Right rally in Charlottesville, Virginia finally demonstrated to the internet industry and to America the result of the hate that had been spreading online. Although officially called to protest the potential removal of a Confederate statue, extremist groups openly called for armed confrontation. In the end, one person died and many were injured when an extremist group member rammed a crowd with a car.

Finally, more concerted efforts were put into action. This was twenty years after the first indications that hateful content was poisoning the web, and ten years after a conclave with the tech industry asked for help studying the problem. In that time, hate and misinformation continued to permeate the medium.

The efforts now in place appear to be significant. During an announced six-week “monitoring exercise,” conducted at the request of the European Commission (EC) by trusted flagger organizations in the EU to check whether social media companies were upholding their end of agreements regarding enforcement of Terms of Service, the platforms’ enforcement complied with the requirements. However, when the International Network Against Cyber Hate (INACH), an EC monitoring organization, conducted a similar but unannounced exercise, the outcomes were completely different.

As recently as this week, Facebook and Twitter refused to remove maliciously altered videos that made it appear Nancy Pelosi was tearing up a copy of Donald Trump’s State of the Union address during a tribute to veteran servicemen.

This reticence by major platforms to act consistently against destructive content diminishes their efforts to date and makes any ongoing efforts seem disingenuous. Are the platforms truly fighting for a safer internet, or is this just like seeing bananas in Kaunas in 1995: a nice symbol, but not really addressing the problem?

Monday, January 6, 2020

Intolerance for Hate






For far too many years, most anti-hate groups have preached respect and acceptance for abhorrent, corrosive and destructive beliefs. That is what free speech is all about, isn’t it? Where has that gotten us? Not to a good place, that much is obvious. In our best effort to defend freedom of speech, we have instead enabled overt sectarianism in government, blatantly racist and hateful internet content, and social divisions founded on extremist propaganda. This is not the purpose of free speech, but here we are.

The twisted interpretations of free speech and freedom of religion promoted by right-wing forces are as much a perversion of the founding fathers’ intent as Al-Qaida’s ethos is a perversion of Islam. For starters, free speech was never meant as a weapon against religious or social groups; it was meant as a protection for speaking out against an unjust government. Equally, freedom of religion was meant to protect personal religious practice; it was never intended as a vehicle for imposing religious strictures on segments of society, groups or individuals. Anyone claiming that freedom of religion exists to protect their beliefs at the expense of others is attempting to twist our founding principles for their own purposes.

Free speech in a government context is a law, which most U.S. jurisdictions have a fairly good handle on. In a civil context, free speech is a social contract agreed upon by fellow citizens as a foundation for frank interaction.

In a social context, free speech is not a law. It is limited by the society and, although it may extend beyond social conventions, it is not unlimited in itself. In that sense, anyone who invokes free speech as an excuse to be dangerous, abusive or hateful surrenders their right to that protection under the social contract. Increasingly, interpretations of free speech laws are leaning in this direction.

Freedom of religion works similarly, yet is more complex. “Love thy neighbor…” and “Do unto others…” may not be the most important principles of religion for some people, but they are cornerstones of every major religion in some form. At the heart of the Constitution is the First Amendment, respecting the establishment and practice of any religion. As a nation we have always been committed to supporting the practice of religion in its full spectrum. We accept that to have faith, people do not need to follow rules defined by others about the calendar, worship, clothes, food or sexuality. The implication by anyone that any such practice invalidates a person’s religiosity is itself a denial of freedom of religion. “Do unto others as you would have them do unto you” is not called the "Golden Rule" for nothing.

Yet, in our efforts to defend free speech and freedom of religion, we have inadvertently allowed horrible hate, propaganda and incitement against our neighbors. In trying to prove that sunlight is the best disinfectant, we have gotten burned. Extremists and hatemongers made the seemingly reasonable argument that censoring their hate would damage the principles of free speech and freedom of religion, while all along, damaging the founding principles of our democracy was their actual goal. We blinked. We were not brave or bold. We erred on the side of caution.

The result has not been good.

Voicing intolerance of hate, bigotry, propaganda, distortion and falsehood is the ultimate exercise of free speech. This challenge comes with great responsibility. We must be ready to know how to defend truth, how to define hate speech, how to define our principles and how to defend what we say. This is all new to most of us, and we may get it wrong. We need to start teaching children how to recognize and advocate truth, and how to acknowledge and celebrate honesty. We may now need to be intolerant of hate as never before, so that there can be a future where free speech is not a weapon but is embraced as the gift it was intended to be.

Monday, November 25, 2019

Planning For History



The Casualties of a War Without Heroes

The community of internet users is starving for a leader. Not a leader in size, technology, finance, data, algorithms. A leader in thought, imagination and daring vision.

We had such leaders in simpler times, but when things got complicated in 2005–2006, no one wanted to discuss policy, threats, impact or how we should protect users. No one wanted to be that leader. The internet would self-regulate. Users would be the leaders. That was the idea, anyway.

In the internet world we all strive to be the first, the best, the most successful, the brightest. Yet it seems none of us wants to take on the responsibilities inherent in being the leader: taking responsibility for the bad as well as the good. Even assessing the bad and the good takes a thick skin that much of our digital world seems to have lost overnight. The few industry thought leaders that do exist lead in finding ways to allow dangerous, hateful and unregulated content. Very few companies in the web world proactively examine their impact on the young, the vulnerable or the susceptible segments of the world.

Being the leading social media platform, search engine, blog or forum, or the leading e-commerce, fundraising or currency exchange, carries an inherent responsibility. In part, that responsibility comes from the information each company manages and from the success it has achieved by marketing its users’ data. Platforms owe their users a safe, secure, stable product, and, if they plan to keep those users, their best efforts to keep the product current and growing. There are also obligations to the industry community and to staff. However, the obligations to owners, investors, stockholders and profit are often prioritized over all the others.

There has been progress. Money is now being actively funneled toward the study, detection and removal of hateful, exploitative and dangerous content. But the solution is not as easy as saying, “we are on the right path.” The new research, the new monitoring and enforcement efforts, and the money now being spent all show that there was indeed a problem. For over ten years, internet users young and old have been subjected to seriously problematic content. Just as some people have benefited greatly from the internet, others have been greatly harmed. Personal lives, careers, self-esteem and even basic human judgement and trust have, for some, suffered. In these cases, the damage was often inflicted with a few mouse-clicks. Repairing the harm is not nearly as easy. Addressing the damage from cyber-stalking, exploitation, reputation assassination, bullying and abuse can easily take orders of magnitude more effort than it took to inflict in the first place.

It is time to look towards healing, repairing and rebuilding. Trying to bring back to life some of what has died in so many internet users. Maybe even making them whole again.

Just as it is easy to inflict pain on others online, it was equally easy not to address the problem. And just as it takes many times more effort to address the hate than to make it, healing the residual damage caused by the hate will take significant effort.

The time to start is now. We need to begin enabling people, not just to flag hateful content, but to level the playing field by making those who post hateful content responsible for defending it with the same effort required by their targets to have it removed.

This starts with making resources available to targets and victims of bullying, abuse and cyberhate. Those most often victimized are predominantly marginalized, underrepresented and under-resourced.

Targeted Groups and People Need to Be Empowered.

Victims need funds and guidance to hire experts to undertake the critical tasks required to make the case for their plight to the internet platforms. To gain the platforms’ attention and establish the credibility of their situations, victims must identify the volume, frequency, nature and character of the problem material, as well as any underlying connection the material may have to established hate groups, organizations or political movements. This is no small undertaking. This type of research often requires technology, resources and experience not available to many. The documentation, presentation and implementation of solutions is often a specialized practice, and if legal assistance is required there are certainly expenses. The resources needed can easily represent tens of thousands of dollars in time and money, a burden the targets must bear and the perpetrators did not.

Another option is for targets of abuse to be trained in the tasks necessary to identify, characterize and mitigate cyber abuse, and to communicate it to platforms. This may not be as expensive, but there is a learning curve. Not everyone is equally capable of doing the necessary research, and sometimes time is of the essence.

Counseling for victims must be made available. Regardless of how they have been affected, whether by bullying, revenge porn or scams, they all need to know they are not alone. Support groups led by industry-sponsored experts could provide an immeasurable benefit to victims. The perpetrators of abuse must also be made aware that their victims have allies and resources. Just as victims often give up trying to fight the hate, perhaps abusers will be dissuaded when they realize victims are well equipped and supported to thwart them.

As with physical abuse, cyber-abuse is also cyclical. Victims become angry, resentful and desensitized, which makes it easy for them to become the next generation of abusers.

Investing in the future of the Internet, not just in the technology or the applications but in developing better users, benefits everyone. This is not something that should be left to third parties alone.

This is not just about making a more civil internet, but about the survival of the internet. About keeping regulation limited and responsibility high. About making room for all opinions while safeguarding facts, reality and history. Democracy is about speaking up without fear, and democratization of the internet starts, first and foremost, with users being able to participate without fear. Users who need a parachute should have a parachute, users who need a safety net should have a safety net, and users who want a trapeze should have one. None of this is beyond our capabilities. None of this is beyond justification.

Tuesday, October 15, 2019

Open letter to Google, Facebook and Twitter in response to Gizmodo article “Google, Facebook, and Twitter Tell Biden Campaign They Won't Remove Defamatory Trump Ad”

https://gizmodo.com/google-facebook-and-twitter-tell-biden-…
I just saw the various platform responses refusing to remove a false and misleading ad regarding Joe Biden and his family which was placed by the Trump campaign.
Although it is easy to appreciate that Facebook, Google, Twitter and others are not directly responsible for the truthfulness of ads run on their services, it is also inappropriate to do nothing when damaging falsehoods are clear. The perpetrators of false advertising are exploiting the credibility of the platforms, and lies foisted on the public with the apparent complicity of internet platforms, intentional or not, degrade the public’s trust in all online content.
Platforms can take a few simple steps to fulfill a basic obligation to users, to contextualize the ads in question, and to facilitate correction of the problem.
1) A disclaimer placed on all political advertising warning the audience that the content has not been reviewed for accuracy and may contain misleading information.
2) All advertising submissions should require the originator to aver that, to the best of their knowledge, the content they are providing is correct and truthful, and that, if it is found to be wrong, they will correct the ad or not object to its removal.
3) A warning that repeated submission of false and misleading ads may result in banning of the advertiser, product or sponsor.
4) An advisory that all false ads will be reported to appropriate regulatory and law enforcement agencies.
The problem with the manipulation of the 2016 election was not just that the public was manipulated by foreign governments, agencies, individuals or agents, but that the U.S. population was manipulated at all. The abuse of internet advertising and platforms during an election for political gain, regardless of political affiliation, is just wrong. U.S. voters learned a lot in the wake of the 2016 election. Did Silicon Valley?

Sunday, September 2, 2018

Desperately Seeking Digital Salvation




Our love affair with the internet is ending. It is as if we have discovered that a long-standing lover has hidden numerous arrests for DWI, and then been told, "it's nothing to worry about."

The US only woke up to the cultural infidelities of the internet when they threatened democracy. It seems abusing Blacks, women, Jews, Muslims, LGBTQ people or the physically challenged for the past ten years wasn't really considered much of a problem. The companies were glad to let it go, and the US public defended the digital industry's free speech assertions. But the Europeans were far less sympathetic and, it seems, basically correct.

Abuse and exploitation by hate groups, extremists, foreign antagonists and a variety of malcontents have certainly taken the shine off the internet. Add toxic online social and gaming environments, over-reaching data mining and user activity tracking, and the entire picture looks bad. Testifying in Europe and in Washington, D.C., everyone from ISPs and hosts to game developers and corporate giants is left with the task of trying to paint a friendly face on an ugly canvas.

Being realistic, the internet is not entirely evil or bad, but where it has been bad, it has been awful. Some of those bad places have been very large, very influential and have done some serious damage. It is no longer practical or possible to ignore the downside of an unregulated, unmoderated internet.

Companies are not ignorant of the problem or of its business implications. Users can be fickle; ask MySpace. The same goes for advertisers. Politicians are always looking for issues to build or sustain careers. Companies know all this and are now scrambling to make corrections that are long overdue.

Users and communities have been told numerous times about wonderful new adjustments that will greatly improve the livability of the internet. Yet old issues, left dormant and unattended, emerge unexpectedly to unleash daunting new problems.

Some things take a lot of effort to reverse. It's hard to change the recipe once the cake has been baked. In those cases, it is important to make the recipe sound good and the icing look attractive. You really want to get it right the next time. Meanwhile, all your guests are asking if there's ice cream or fruit or something else, because everybody knows bad cake when they taste it.



Thinking Faster than the Speed of Hate

Jonathan Vick, Acting Deputy Director, International Network Against Cyber Hate (INACH)