
How to fix Twitter and all social media

Those who discuss the future of Twitter and other social-media platforms often fall into two opposing camps. One camp supports near-total freedom of speech for individuals; the other wants to modulate speech by moderating content and tweaking the ways in which information is transmitted.

This looks like an old-fashioned confrontation between idealists and realists, but in this case both sides hold an equally pessimistic view. The current major social-media platforms generally try to moderate speech, yet their efforts are never enough. The resulting speech spreads both personal and societal harm, and it has propelled authoritarian movements in the United States, Hungary, Brazil and elsewhere. Meanwhile, social-media platforms with more aggressive, centralized speech control contribute to the success of authoritarian regimes such as China's.

My purpose here is to suggest a logical third option, one that could be prototyped and tested on a platform such as Twitter. In this approach, the platform requires users to form groups through free association, and then to post only through those groups, with the groups' imprimatur. It may not be immediately clear why this simple, powerful idea helps us escape the dilemmas of supporting online speech. Let me explain.

Consider a seemingly unrelated problem: How can we use finance to improve the lives of deeply impoverished people? Finance depends on trust, but very poor people have no credit history. Banks lack the resources to evaluate each individual who comes to them with no prior record.

Muhammad Yunus, the pioneer of microlending, found an answer: ask people to vouch for one another. Groups formed through free association, not individuals, applied for loans. Members of these groups extended trust based on knowing one another. What we might call “quality” arose from the bottom up rather than being imposed from the top down. When one member got into trouble honestly, the others were motivated to help.

Microlending has been a qualified success. It helps some people rise out of extreme poverty, though it does not accomplish everything its boosters hoped. Here, however, we are interested only in its mechanism of bottom-up quality control through a shared stake in the group. In that sense, microlending works: loans are repaid more reliably than in traditional finance.

Microlending is a trendy topic in idealistic tech circles and a constant trope at TED and Davos conferences. I suspect it partly inspired the idea that user reviews should guide online commerce. But one of the central ideas of microlending was to bring people together in groups. How might groups form on a social-media platform? Much as one starts a zine, a band or a partnership: you find some people you click with, people you trust, and you work together to create a brand, a name applied to your group's shared feed of posts. Only groups of this kind would be able to post, not individuals, though individuals would still identify themselves, just as they do when playing in a band or writing for a magazine. Individuals could join multiple groups, and groups would self-govern; this need not be a heavy lift.
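The posting model just described, groups post under a shared brand, individuals remain identified, one person may belong to several groups, can be sketched in a few lines. This is a minimal illustration of the idea, not any platform's actual API; the class and field names are my own assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class Post:
    group: str    # the brand under which the post appears
    author: str   # the individual member, still identified by name
    text: str

@dataclass
class Group:
    name: str
    members: set = field(default_factory=set)
    feed: list = field(default_factory=list)

    def post(self, author: str, text: str) -> Post:
        # Only members may post, and only under the group's brand.
        if author not in self.members:
            raise PermissionError(f"{author} is not a member of {self.name}")
        p = Post(group=self.name, author=author, text=text)
        self.feed.append(p)
        return p

# An individual ("bo" here) can belong to several groups at once.
zine = Group("night-zine", members={"ana", "bo"})
band = Group("garage-band", members={"bo", "cy"})
zine.post("ana", "Issue #1 is out.")
band.post("bo", "New demo tonight.")
```

The key design point is that there is no free-standing post: every post carries a group brand, so readers always see speech in the context of the association that stands behind it.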

Platforms like Facebook and Reddit have superficially similar structures, groups and subreddits, but those exist mainly to let people share notifications and invitations to view and post in certain places. The groups I am describing, sometimes called “mediators of individual data” or “data trusts,” are different: members share both good and bad consequences, just as a microlending group shares the benefits and obligations of a loan. This mechanism has already emerged to a degree in some good, small subreddits, and even on a large software-development platform. The broader movement built around this notion, so-called “data dignity,” has surfaced in places around the world and in new legal frameworks. My proposal here is to formalize data trusts in code and make them the basis of platforms.

Groups on existing platforms come in all sizes; some have millions of members. The groups I have in mind would, as a rule, be quite small. The point is for the people in a group to know one another well enough to vouch for one another, to pursue quality, and to purge bots from their ranks. Perhaps the size limit should be in the low hundreds, in keeping with our cognitive capacity to keep track of friends and family. Or perhaps it should be smaller than that; 60 people, or 40, might work better. I say: test these ideas. Let's find out.

Whatever their size, groups would be self-governing. Some might have a review process before anything is posted; others might let members post as they see fit. Some groups would have strict membership requirements; others might have loose standards. It is a replay of the old story of people building social institutions and negotiating the inevitable trade-offs, but people would be doing it on their own terms.

What if a bunch of horrible people decided to form a group? Their collective speech would be no worse than their individual speech was before, only now it would be received in a different, and better, social-cognitive environment. Nazi magazines existed before the internet, but they were labeled as what they were and were not confused with the ambient social perception of the world.

We perceive our world in part through social cues. We rely on the people around us to help us detect danger and direct our attention. (Point at something imaginary on a crowded street and you will see the effect.) Facebook's own scientists famously reported, in a peer-reviewed journal, that tweaking the content-feed algorithm could make people miserable; the subjects did not know what was happening to them. Although data from such experiments is not fully available for public scrutiny, the evidence suggests that negative emotions (e.g., vanity or paranoia) are more easily induced than positive ones (e.g., optimism or self-esteem). This is why social media is such a tempting tool for psychological warfare: it can be used to poison a society, perhaps with the help of a bot army.

The sheer number of people who can post overwhelms our personal and institutional capacity to understand the context of the speech flowing around us. When horrible speech is mixed into the ambient feed, the world feels horrible. But when the online experience comes only from branded sources, and, again, these groups form through free association, then we can compartmentalize what we see. The number of groups on a social platform of the kind I imagine might be a hundredth of the number of individuals. Groups rescale the experience of online society so that it matches the cognitive capacities of individuals.

Groups also encourage better posting. When individuals post online, they are motivated to seek attention, or, more charitably, relevance, and that demands constant posting. This virtual hamster wheel makes people more abrasive, stoking hatred between one's followers and opposing camps. After all, you have to hook your followers anew every day. As a member of a group, one can post less often, and think for longer, and still watch the brand succeed.

When someone in a group starts to act out or get weird, the other members have an incentive to speak up. We all act like jerks online sometimes; in a group, our fellow members pay a price for our behavior. You might not like your friends bugging you about how you are acting, but that nudge is less annoying and less coercive than the alternatives, and it makes online society less nasty. If you get too annoyed, you can leave the group and join another; or you might decide that restraining yourself is worth it to stay with a good brand.

Groups would be motivated to make sure their members are genuine and to purge bots, since all members share in whatever benefits membership brings. I have my own hopes for how this could work: I would like to see groups agree to ease members' economic uncertainty by splitting rewards, such as micropayments, subscription revenue or prize money, so that everyone benefits somewhat from helping the group, while members who contribute more are compensated more. That is roughly how we manage rewards inside tech companies.

But each group must set its own rules. Whatever the reward, money or something less tangible, it should be divided among members according to a zero-sum logic. Since anything that goes to a bot does not go to the real members, individuals would be motivated to eject fake accounts from their own groups, instead of hoping the platform provides that service.
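The zero-sum logic can be made concrete with a toy payout function. This is an illustration under my own assumptions (a fixed pool in integer cents, per-member contribution scores), not any real platform's payout scheme; the point to notice is that ejecting a bot automatically raises every genuine member's share.

```python
def split_rewards(pool_cents: int, contributions: dict) -> dict:
    """Divide a fixed (zero-sum) reward pool among members in
    proportion to their contribution scores."""
    total = sum(contributions.values())
    if total == 0:
        return {m: 0 for m in contributions}
    shares = {m: pool_cents * c // total for m, c in contributions.items()}
    # Hand the integer-rounding remainder to the top contributors,
    # so the pool is fully distributed: zero-sum, nothing left over.
    remainder = pool_cents - sum(shares.values())
    for m in sorted(contributions, key=contributions.get, reverse=True)[:remainder]:
        shares[m] += 1
    return shares

# A bot siphons value from the fixed pool...
with_bot = split_rewards(1000, {"ana": 5, "bo": 3, "bot42": 2})
# ...so ejecting it raises every real member's payout.
without_bot = split_rewards(1000, {"ana": 5, "bo": 3})
```

Because the pool is fixed, bot-hunting is not altruism; it is in each member's direct financial interest, which is exactly the incentive alignment the proposal counts on.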

In proposing all this, I am arguing against my own nature. I don't want to be a member of anything. I want to be singular and hard to categorize. And yet, whenever I examine my own practice, I find that it improves when I work within some group. I publish books through publishers, realize my tech designs through tech companies, publish scientific papers in established journals, and so on.

Engaging with groups has not dulled my personal weirdness, and it has improved the quality of what I do. You can rely on other people without losing your identity. It may seem strange that I need to make this point, but tech culture is rooted in a cowboy myth that celebrates the lone individual. (I grew up around real cowboys in rural New Mexico, and they worked in teams, so this myth must be about movie cowboys.)

Tech culture has created a wild west of real and fake personas, and its territory is overrun with mania, bias and harassment. I am not suggesting that data dignity is a perfect or complete solution, or that it should displace every other idea in play. But despite the urgent need, no idea is working well enough right now, and data dignity at least resembles the social structures that served us before the internet, so we should try it. Would resetting a platform like Twitter into smaller, self-governing groups make it better? Let's find out.
