Sunday, January 28, 2018

Time for a Universal Online Code of Conduct

Is there anyone who feels we don't need a Universal Code of Conduct (UCoC) for the internet? Probably. But those are most likely people who make a point of facilitating the abuse of others. Almost every civil society, community protection, civil rights or anti-hate group has considered, drafted or published such a thing. Many government agencies have created such documents, and even some companies have policies stated as universal across their services.

These policy statements go by various names: Terms of Service, Code of Conduct, User Agreement, Community Guidelines. With minor variations, it all boils down to the same concept: an agreement on what the companies, users, community and government can expect from each other - or it should.


All the groups, agencies and companies who build such agreements do it from a highly subjective and self-serving perspective. Everyone has an agenda. This immediately compromises any hope of creating an agreement with parity. Understandably, no one is readily willing to accept a Code of Conduct they had no hand in crafting. Despite the ultimate advantage of a combined effort, there will always be companies, people, groups or governments who will never accept anything but their own concept of free speech. That is, ironically, an example of free speech too.

There are companies ready and willing to cooperate and experiment in collaboration on a UCoC. Let's call them Tier 1 companies. The next group of companies agrees that some form of standards for online behavior is needed, but has reservations about anyone but themselves defining those limits on their platforms. Then there are the Tier 3 companies who don't feel such a UCoC applies to them, or who object to such an overreaching policy on principles of free speech and freedom of choice. Ultimately, companies are businesses and are only responsible for acting within the law, but they also need to keep customers in order to stay in business.

Communities need to stay aware of the power they have over companies and the responsibility that power carries. Enough users and public sentiment can make or break an online company. Any company. Consumers can push companies to mold policy in a desired direction. The trap is pushing a company to the point where its service is no longer competitive or its policies are so restrictive as to be unmanageable. Unprofitable businesses have a habit of closing.

Terms of Service as a contract has never been successfully tested in court. The companies don't want to lose and find that the ToS or other terms are legally enforceable. Civil agencies and consumers don't want to lose and find that the ToS is meaningless. Odds are a court challenge is just a matter of time.

It is a far better idea for a congress of involved parties to establish an agreed, mutual UCoC than to wait for some disaster to push us there.









Saturday, January 27, 2018

Deleting Hate Does Not Stop Hate

The EU is, rightly, very excited that its efforts to push the internet companies to better address hate speech have resulted in an estimated 70% removal rate. But the grim reality is that removing hate speech does not stop hate.

If only it were that simple.



Hate speech online is a manifestation of real-world sentiments. Removing it does not, by itself, change the reality creating it. During the period of the EU's increased online enforcement, far-right groups and candidates still flourished in the European political arena, and anti-Muslim, anti-immigrant and anti-Semitic displays blossomed across the EU. The American experience over the past year, with neo-Nazi groups rebranded as the alt-right marching in average communities, also shows that muffling hate speech alone does not address the root problem of racist propaganda and divisive politics.

By focusing on the outward manifestations and not the underlying issues, the public and the governments are handing the companies a job they can never complete.

Perhaps the most productive aspect of the EU's efforts is that they demonstrate to the tech companies that more investment was always needed. Now it is more obvious than ever that investing in better ideas and more advanced automated methods is far more cost-effective and manageable than any army of reviewers. The companies have always maintained that the answer lay in social solutions. Perhaps we are finally arriving at a place where the two tactics meet.

Governments and society are finally beginning to understand how vast and complex the problem is. Companies are coming to grips with the scope of the impact and damage a weaponized internet can cause. Everyone sees that no single solution, and certainly no one approach alone, is working. The problem mutates and adapts too rapidly.

We have managed to come together in the past to defeat a daunting enemy. It looks like we need that attitude again.



Bring Out Your Bots - We Can Play This Game Too



Weaponized social media is being used to destabilize cultures, societies and governments. Yes, by Russia, but by many others as well. Physical invasion or overthrow is obviously too expensive. Forget spies and infiltration; that is too unpredictable. The internet is a cheap, powerful and effective weapon.



People and governments are investigating what is happening, when it started and how it was done. There is no question it happened and continues to happen. These weapons are equally available to everyone.

Rather than study, discuss or argue, perhaps it is time to initiate our own bot armies. Engaging in botwars comes with responsibilities. These are powerful tools of war. How much destruction and collateral damage can we stomach? As Westerners, once we embrace weaponized bots, we will become very good at using them.

Think about nuclear weapons. We determined early on that the only sane, conceivable use was defensive, and hopefully as nothing more than a deterrent. What about bots? What are the limits? How far are we willing to go?




Hate Is Not An Accident


Ignorance of what constitutes hate is somehow expected to be an acceptable apology. When someone significant, influential or just "known" gets caught saying something awful, hateful or racist, the most popular excuse is, "I didn't know that was hate. It was an accident. I didn't mean it." Such ignorance is only achieved through intentional inattention.



Giving air to hate is promoting hate.

Admitting ignorance of what hate is does not un-type or un-say the words. Anyone who objects to hate speech has an obligation to know what it is. If someone is truly sorry about uttering hate, they need to work against hate. The apology is in the actions.

In the world of hate and racism, speaking hate is considered a slip that betrays the speaker's true feelings, and retraction or denial is seen as bowing to pressure from "society's masters." So when Donald Trump, Mel Gibson, Roy Moore or Louis Farrakhan post hateful videos or say things bolstering racism, it makes little difference whether they claim it was an accident, a mistake or a misstatement. The hate immediately becomes spray-painted on a psychological wall in our society. Removing it takes cleaning, not simply disowning.

Only sixty days after Donald Trump re-posted anti-Muslim videos from the hate group Britain First does he now admit it "might" have been a mistake. If it was unclear whether the videos were hate, the source he pulled them from certainly clears up any confusion for anyone paying attention.

It is hard to believe an intelligent "least racist person ever" could have done something this problematic, not addressed it for two months, and then expect any apology to be accepted as genuine.

Hate is not an accident.

Monday, January 1, 2018

A Critically Realistic New Year To All.

Things Go Wrong On The Internet.

No other industry in history has had such an impact, grown as fast or involved such complexity as the internet, together with smart and mobile devices. The next best example is the auto industry, which equally transformed the nation. The difference is that it took the car industry 70 years to reach the dominance the internet has achieved in 20.

Moving at such a high speed, things are going to go wrong.

The first automotive safety standards were self-imposed by the industry in the 1930s, when Ford made safety glass mandatory in all its vehicles, GM started doing crash tests and Chrysler recessed dashboard controls. Government-mandated safety standards in the U.S. and in most of the world did not arrive for another 20 years.

Was it appropriate for government to impose regulations? Absolutely. Was it right that consumers had input? Certainly. But until the government created a workable regulatory framework and the public articulated its needs, the industry took matters into its own hands.

So it goes with the internet.

Unfortunately, most safety measures, in almost every industry, are in response to knowledge gained through unfortunate incidents.

Today's internet looks and acts nothing like the internet of 1995, 2005 or even 2015! The speed at which government would have to enact safety regulations for today's internet is wholly uncharacteristic of government, if such speed is even possible.

The public is much more interested in finger-pointing and embarrassing companies than in understanding the underlying causes of the issues and the complexity of the solutions. They are not interested in understanding that solutions can, by themselves, cause more problems.

Solutions must involve the companies, whether the companies, the public or government like it or not. If left unassisted, the companies will make their own decisions.

There is no magic wand, but there is an answer.

Responsibility - all the way around.



Companies need to take responsibility for the content on their services. Even if they are not legally responsible, they are still profiting from the content, good or bad. Public education is also critical. 

Users need to take responsibility for what they post. They need to take responsibility for being informed about the platforms they use.

Governments need to take responsibility for being informed and for protecting consumers from undue threats and harm. They need to train law enforcement properly on cyber issues and develop laws to protect vulnerable populations.

Our one voice can be a chorus, or it will be a mob.






Friday, December 15, 2017

Fighting online hate groups: How an organization is stopping propaganda from spreading

Steve Spriester sits down with Jonathan Vick from the Anti-Defamation League

By Steve Spriester - Anchor

https://www.ksat.com/news/spriester-sessions/fighting-online-hate-groups-how-an-organization-is-stopping-propaganda-from-spreading 

SAN ANTONIO - Jonathan Vick’s job at the Anti-Defamation League is to track and stop hate groups that spread their propaganda online, making it harder for those who peddle hate and easier for those who need help.
“In my mind, anyone who can justify victimizing or targeting any one group can turn that into an ability to target absolutely every group, and that's the kind of fight that we're fighting,” Vick said.

Vick and his team had been tracking online chatter and warned state officials of the violence to come before the torch march through the University of Virginia campus in Charlottesville, which echoed anti-Jewish rallies in Nazi Germany.
“I wouldn't call it alarming. I'd call it incredibly disappointing and very sad,” Vick said. “What happened in Charlottesville pointed out that it could happen in relatively smaller communities, and that everybody at this point has experienced this sort of phenomenon in one way, shape or form.”
Founded in 1913 to stop discrimination against the Jewish people, the Anti-Defamation League’s stated goal is to secure justice and fair treatment for all people. The internet is its latest challenge, as it sometimes serves as a modern day megaphone for those spreading hate.
Vick and his team try to disrupt or even shut down hate groups, appealing to companies such as Google, Facebook and Twitter to take down hateful sites and posts.
“The reality is that most of them come to a real human civilized choice, and then that tends to be the issue that disrupts some of these groups rather significantly,” Vick said.
Vick said he has ways of dealing with hate on a daily basis.
“(I) spend time with family, engage in the community, talk to people, hear people, share my ideas and have a dialogue,” he said.
Vick jokes that fighting hate is kind of like a family business. His father-in-law was a Holocaust survivor and a Nazi hunter. He faced the world’s worst people toe to toe. However, Vick’s battles play out online, as the internet has become a place where many people spread hate.
“I feel we've made tremendous progress,” Vick said.
The Anti-Defamation League fights anti-Semitic groups, anti-Muslim groups, bigotry, racism, homophobia and other forms of hate. It also helps people who are targets of hateful memes.

Monday, November 13, 2017

Recreational Racists



We live for clicks, likes, followers and the validation they bring, even if it is the two-dimensional, hollow validation of strangers rather than the substantive approval of our family or community, online or off.

Under pressure from social media companies and friends, people open accounts only to find their social circle and popularity do not necessarily soar as they have for others. Out of frustration, a user may post something inappropriate, only to find it gets more attention than anything else they had posted before. Despite having done something bad, the attention feels good. The user may start looking for the best things to be horrible about in order to gain the most attention possible.

Even if they do not start out as inherently bad people, that is who their community soon becomes. Click me, you like me. People go where they are liked, even if that place is bad.

We, the internet community, don't help. We share the worst things we see and follow accounts and causes we despise (who does that?). Maybe for the right reasons, but a share is a share and a follower is a follower. Some people would rather be hated and followed than not followed at all. We have seen numerous cases of haters who, when unmasked, demand they have a right to say what they want, then beg not to be outed (look up @sweepyface and Violentacrez) and promise never to do it again. They claim they didn't mean the horrible things they said. Andrew Anglin, who runs the Daily Stormer website and is now being sued for his online viciousness, recently claimed his website was an experiment and not the rampant anti-Semitism it appeared to be - a hobby Nazi.

Of course there are genuine haters, racists and bigots who would love nothing more than to lead a lynch mob, beat immigrants with impunity or burn minority-owned businesses. We certainly have enough of those people. We don't need to foster weekend warriors for hate.

In a world measured in likes and views, people will resort to any online behavior, as long as it gets reposted, retweeted or shared. Sensationalism, distortion and artifice become king. Attention-getting behavior is nothing new; just ask any third-grade teacher. It's all about insecurity.


We need to call out bad behavior and hate online from those we know. On some issues it is NOT OK to agree to disagree. We need to acknowledge and value the people around us, so people feel real validation and are not forced into destructive behavior to find acceptance online.

