Monday, November 25, 2019

Planning For History

The Casualties of a War Without Heroes

The community of internet users is starving for a leader. Not a leader in size, technology, finance, data or algorithms. A leader in thought, imagination and daring vision.

We had such leaders in simpler times, but when things got complicated in 2005–2006, no one wanted to discuss policy, threats, impact or how we should protect users. No one wanted to be that leader. The internet would self-regulate. Users would be the leaders. That was the idea, anyway.

In the internet world we all strive to be the first, the best, the most successful, the brightest. Yet it seems none of us wants to take on the responsibilities inherent in being the leader: taking responsibility for the bad as well as the good. Even assessing the bad and the good takes a thick skin that much of our digital world seems to have lost overnight. The few industry thought leaders who do exist lead in finding ways to allow dangerous, hateful and unregulated content. Very few companies in the web world are proactively examining their impact on the young, the vulnerable or the susceptible segments of the world.

Being the leading social media platform, search engine, blog or forum, or the leading e-commerce, fundraising or currency-exchange service carries an inherent responsibility. In part, that responsibility comes from the information each company manages and the success it has achieved by marketing its users' data. They owe the users a safe, secure, stable product. Ultimately, if they plan to keep their users, they owe them their best efforts to keep the product current and growing. There is also an obligation to the industry community and to staff. However, there are other obligations, to owners, investors, stockholders and profit, which are often prioritized over all the rest.

There has been progress. Money is now being actively funneled toward the study, detection and removal of hateful, exploitative and dangerous content. But the solution is not as easy as saying, “we are on the right path.” The new research, the new monitoring and enforcement efforts, the money now being spent all show that there was indeed a problem. For over ten years internet users, young and old, have been subjected to seriously problematic content. Just as some people have benefited greatly from the internet, others have been greatly harmed. Personal lives, careers, self-esteem and even basic human judgement and trust have, for some, suffered. In these cases, the damage was often inflicted with a few mouse clicks. Repairing the harm is not nearly as easy. For the average person, addressing the damage from cyberstalking, exploitation, reputation assassination, bullying and abuse can easily take orders of magnitude more effort than it took to inflict in the first place.

It is time to look toward healing, repairing and rebuilding: trying to bring back to life some of what has died in so many internet users, maybe even making them whole again.

Just as it is easy to inflict pain on others online, it has been equally easy not to address the problem. And just as it takes many times more effort to address hate than to create it, healing the residual damage the hate has caused will take significant effort.

The time to start is now. We need to enable people not just to flag hateful content, but to level the playing field by making those who post hateful content responsible for defending it with the same effort their targets must spend to have it removed.

This starts with making resources available to targets and victims of bullying, abuse and cyberhate. Those most often victimized are predominantly marginalized, underrepresented and under-resourced.

Targeted Groups and People Need to Be Empowered

Victims need funds and guidance to hire experts who can help undertake the critical tasks required to make a case for their plight to the internet platforms. To gain the attention of the platforms and to establish credibility for their situations, victims must identify the volume, frequency, nature and character of the problem material, as well as any underlying connection the material may have to established hate groups, organizations or political movements. This is no small undertaking. This type of research often requires technology, resources and experience not available to many. The documentation, presentation and implementation of solutions is often a specialized practice, and if legal assistance is required there are certainly expenses. The resources needed can easily represent tens of thousands of dollars in time and money, a burden the targets must bear that the perpetrator never did.

Another option is for targets of abuse to acquire training in how to identify, characterize, report and mitigate cyber abuse themselves. This may not be as expensive, but there is a learning curve. Not everyone is equally capable of doing the necessary research, and sometimes time is of the essence.

Counseling for victims must be made available. Regardless of how they have been affected, whether by bullying, revenge porn or scams, they all need to know they are not alone. Support groups led by industry-sponsored experts could provide an immeasurable benefit to victims. The perpetrators of abuse must also be made aware that their victims have allies and resources. Just as victims often give up trying to fight the hate, perhaps abusers will be dissuaded when they realize victims are well equipped and supported to thwart them.

As with physical abuse, cyber abuse is cyclical. Victims become angry, resentful and desensitized, which makes it easy for them to become the next generation of abusers.

Investing in the future of the internet, not just in the technology or the applications but in developing better users as well, benefits everyone. This is not something that should be left to third parties alone.

This is not just about making a more civil internet, but about the survival of the internet. About keeping regulation limited and responsibility high. About making room for all opinions while safeguarding facts, reality and history. Democracy is about speaking up without fear. Democratization of the internet starts, first and foremost, with users being able to participate without fear. Users who need a parachute should have a parachute, users who need a safety net should have a safety net, and users who want a trapeze should have one. None of this is beyond our capabilities. None of this is beyond justification.

Tuesday, October 15, 2019

Open letter to Google, Facebook and Twitter in response to Gizmodo article “Google, Facebook, and Twitter Tell Biden Campaign They Won't Remove Defamatory Trump Ad”

https://gizmodo.com/google-facebook-and-twitter-tell-biden-…
I just saw the various platform responses refusing to remove a false and misleading ad regarding Joe Biden and his family, placed by the Trump campaign.
Although it is easy to appreciate that Facebook, Google, Twitter and others are not directly responsible for the truthfulness of ads run on their services, it is also inappropriate for anyone to do nothing when damaging falsehoods are clear. The perpetrators of false advertising are exploiting the credibility of the platforms. Equally, lies foisted on the public with the apparent complicity of internet platforms, intentional or not, degrade the public trust in all online content.
Platforms can take a few simple steps to fulfill a basic obligation to users: to contextualize the ads in question and to facilitate correction of the problem.
1) A disclaimer placed on all political advertising warning the audience that the content has not been reviewed for accuracy and may contain misleading information.
2) All advertising submissions should require the originator to aver that, to the best of their knowledge, the content they are providing is correct and truthful, and that if it is found to be wrong, they will correct the ad or not object to its removal.
3) A warning that repeated submission of false and misleading ads may result in the banning of the advertiser, product or sponsor.
4) An advisory that all false ads will be reported to appropriate regulatory and law enforcement agencies.
The problem with the manipulation of the 2016 election was not just that the public was manipulated by foreign governments, agencies, individuals or agents, but that the U.S. population was manipulated at all. The abuse of internet advertising and platforms during an election for political gain, regardless of political affiliation, is simply wrong. U.S. voters learned a lot in the wake of the 2016 election. Did Silicon Valley?
