Dangerous speech is a topic that affects everyone and has become a serious problem in modern society. Rachel Brown presented the kinds of tools we can use to fight it.
Photos by: Vanja Čerimagić
Intergroup conflict is always linked to direct or indirect communication between the groups involved, and this is particularly noticeable in conflicts playing out on social media. There are two parts to untangling this: first, how technologies allow these conflicts to spark; and second, the political and social climate from which the conflict arises.
Mobile phones, as a new technology, were used in conflicts in Kenya during the 2012 elections. This is where we can see how new technologies change the dynamics between conflicting groups, and how the spread of misinformation through fast communication can be prevented. New information technologies bring three critical components: speed, distance, and coordination. By playing on fear in certain social groups, information was shared almost instantaneously. This can be countered by tapping into existing behaviours within the current ecosystem and building social trust in these groups, which was done through door-to-door action and speaking to individuals in person. To counter the speed of fear, a subscription service was built up in order to strengthen that trust and prevent fear from spreading in these circles. The biggest challenge is that rumours and misinformation are hard to counter once they are out; the main tactic is to be proactive and stop misinformation before it spreads.
Real-time coordination with monitoring teams in the field is necessary to prevent the worst scenarios from happening. Only by supporting positive action and breaking the power of peer pressure is it possible to counter misinformation and dangerous speech on the ground.
Understanding the social norms of a given society, recognising its key influencers, and knowing the community hubs most used for spreading information are all key to understanding how these societies work and how to fight dangerous speech effectively through information technologies.
Risk mitigation is extremely important when it comes to dangerous information, as are direct steps to prevent its effects. The main problem is that misinformation is very sticky, and it is hard to undo the avalanche effect it can produce. Eliminating our own social group biases, which are closely tied to our geographical borders, gives us a broader spectrum of critical thinking and opens our minds to the positive stories happening around the world. This is where information technologies can directly help in the fight against dangerous speech.
One big step is building communities where people who are subjected to dangerous speech can feel they are not alone and will seek help from that same community.
New technologies also raise a couple of ethical questions: is manipulation good if it brings peace, and is using bots and internet trolls a legitimate way to fight dangerous speech? These questions lie in a moral grey area, and each of us has to ask whether we are willing to cross that line, even for the greater good.
Presentations from this POINT 7.0 session are available HERE