Platformer Archives - Science and Nerds

Reddit’s CEO explains why he’s still big on the blockchain

Source: https://www.theverge.com/23323098/reddit-steve-huffman-interview-blockchain-avatars


Crypto critics have had a very good year. Much of what they predicted has come to pass: that high-profile projects would be revealed as Ponzi schemes; that security lapses would lead to massive thefts; and that average people would be left holding the bag. Bitcoin, which was trading around $67,000 in November, is at $21,570 today; plenty of other cryptocurrencies are much worse off.

As I noted here last month, though, on social networks it can feel like the crypto crash really never happened. Over the past few months, Facebook and Instagram added ways for users to showcase non-fungible tokens, and Spotify let artists use NFTs as their profile pictures.

The origin of that piece was the news that Reddit, the venerable social news aggregator and discussion site, had released its first “blockchain-backed collectible avatars” — what other companies call NFTs. Created by a handful of artists, the avatars riff on Reddit’s “Snoo” alien mascot and can be used as profile pictures on the site.

In the video game industry, selling NFTs has often raised a furor among gamers angry at what they perceive to be pure greed. But while Reddit takes a 5 percent cut of sales from the NFTs, its announcement came and went without much fuss.

This month, the company went a step further: beginning on August 17th, it began to give away collectible avatars for free to 10 million users. About 50,000 have been given away so far to long-time Redditors who have made popular contributions to their communities; the remainder will be given out over the next few weeks. The only catch is that users have to set up a crypto wallet to receive one — the first step, Reddit hopes, in encouraging millions of users to begin exploring the concept of “portable identity.”

Some people hear phrases like that and their eyes roll back so far in their head that they may never recover. But a significant number of executives at tech platforms big and small continue to believe that somewhere in all this lies a solution to many of the criticisms we have of our current lives online: that a small number of companies have too much power; that our data is locked up and inaccessible to us; that platforms ask us to share more of ourselves than we are comfortable with.

Steve Huffman is one of those executives. A co-founder of Reddit and its CEO since 2015, Huffman says he has been fascinated by blockchain technologies since the early days of Bitcoin, when he would marvel at the fact that some people were willing to pay as much as seven cents for one.

Avatars were a natural starting point for Reddit’s blockchain explorations, he said, since people use the feature differently on Reddit than they might on iMessage or Snapchat. There, avatars typically represent users somewhat accurately, as cartoon versions of themselves. Reddit avatars, on the other hand, are aliens, meant to highlight aspects of a user’s personality without giving away their true identity.

The avatar feature launched about a year and a half ago, and roughly one-third of logged-in Reddit users have created avatars so far, he said.

“You can reveal your hobby without revealing your race. You can reveal some of your style without revealing your age or your gender,” Huffman told me at the company’s offices in the Mid-Market neighborhood of San Francisco. “This idea of capturing some of your own identity there, on your own terms, is a very Reddit thing to do. I wish we had done this 10 years ago — because it really fits Reddit.”

[Image: Reddit’s artist-made collectible avatars, showing multiple variations of the Snoo mascot in different themes and outfits. Credit: Reddit]

Letting artists sell avatars on the platform was linked to a separate goal of helping Redditors earn money from other users, Huffman said. But he also wants to move away from a world where digital goods are controlled by the platform that creates them.

“What crypto adds is the ability to own digital things,” he said. “Before that, you could create digital goods, like in a game. But those goods ultimately exist at the pleasure of the company that creates them, or runs the database. And so it’s not actual ownership — I’d like to get closer to actual ownership.”

Already, he notes, a user can choose to display their digital Reddit avatar as part of a collection on Instagram, or use it as a profile picture on Twitter. The companies didn’t sign a partnership deal to make that happen — it just works because the blockchains run on an open standard.

“We don’t work with any of these companies,” Huffman said. “I can’t say we’re the best of friends. But now we have this third-party database — an interoperability layer. I think that’s really cool.”
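
To make that “interoperability layer” idea concrete: because the avatars live on a public blockchain, any application can look up who owns one and where its artwork lives without striking a deal with Reddit. The sketch below is a rough illustration only, assuming the avatars expose the common ERC-721 read functions; it uses the web3.py library, and the RPC endpoint, contract address, and token ID are placeholders rather than Reddit’s actual values.

    # Rough sketch of the "third-party database" idea: any app can check who
    # owns a given avatar NFT and fetch its metadata without a partnership,
    # assuming the avatars follow the standard ERC-721 interface. The RPC URL,
    # contract address, and token ID below are placeholders.
    from web3 import Web3

    ERC721_READ_ABI = [
        {"name": "ownerOf", "type": "function", "stateMutability": "view",
         "inputs": [{"name": "tokenId", "type": "uint256"}],
         "outputs": [{"name": "", "type": "address"}]},
        {"name": "tokenURI", "type": "function", "stateMutability": "view",
         "inputs": [{"name": "tokenId", "type": "uint256"}],
         "outputs": [{"name": "", "type": "string"}]},
    ]

    def lookup_avatar(rpc_url: str, contract_address: str, token_id: int):
        """Return (current_owner, metadata_uri) for one token."""
        w3 = Web3(Web3.HTTPProvider(rpc_url))
        nft = w3.eth.contract(address=Web3.to_checksum_address(contract_address),
                              abi=ERC721_READ_ABI)
        owner = nft.functions.ownerOf(token_id).call()
        metadata_uri = nft.functions.tokenURI(token_id).call()
        return owner, metadata_uri

    # Example call with placeholder values:
    # owner, uri = lookup_avatar("https://polygon-rpc.com", "0x...", 1)

Any site that can run a read like this can verify an avatar’s owner and render its image, which is the whole of the “open standard” point Huffman is making.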

Of course, that’s not to say that many people will be racing to bring their Snoos to Twitter. But it does hint at what could be possible in the future, Huffman said. People may start by bringing a profile picture with them around the internet; later, other bits of identity could become portable as well. Reddit users reward each other with “karma” points for sharing high-quality links or making good comments; what if you could log in to other websites with your Reddit karma?

The company took a step in that direction in 2020, when it announced a still-ongoing experiment called “community points.” Based on Reddit’s existing karma model, community points enable subreddits to award users tokens that exist on the Ethereum blockchain. The company has proposed a variety of uses for these tokens, including polls that give more weight to active participants in the community, prizes for contests, and enabling users with good reputations to access extra features in other apps and websites.
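
As a toy illustration of the weighted-poll idea only — not Reddit’s implementation — each ballot can simply be counted in proportion to the voter’s point balance. The names and numbers below are made-up example data.

    # Toy sketch of a token-weighted poll: each ballot counts in proportion
    # to the voter's community-point balance. Not Reddit's code; the votes
    # and balances are made-up example data.
    from collections import defaultdict

    def tally(votes: dict, balances: dict) -> dict:
        """votes maps user -> chosen option; balances maps user -> point balance."""
        totals = defaultdict(float)
        for user, option in votes.items():
            totals[option] += balances.get(user, 0.0)
        return dict(totals)

    votes = {"alice": "yes", "bob": "no", "carol": "yes"}
    balances = {"alice": 120.0, "bob": 40.0, "carol": 15.0}
    print(tally(votes, balances))  # {'yes': 135.0, 'no': 40.0}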

The company’s avatar bet, in a way, is even bigger: an effort to get 10 million people to begin actively using crypto wallets — without ever calling them crypto wallets or (hopefully) using any crypto-related words at all.

“We try really hard not to use any crypto words — it just confuses people,” Huffman said. “The ecosystem is so confusing. I can’t figure out half this stuff.”

Huffman offered a metaphor to make his point.

“One of my frustrations with the crypto community is like, you know how everything in open source is just a little bit worse? But they’re like, ‘it’s amazing! Look at this Microsoft Word replacement, it’s just as good!’ No it’s not. Crypto’s like that, again.”

Like every crypto builder, Huffman faces the challenge of abstracting away all those confusing terms and arcane user experiences and replacing them with something approachable and valuable. Few if any companies have hit that bar to date, despite years of trying and billions of dollars invested.

Reddit also has to conduct its explorations without triggering the sort of backlash that video game companies have experienced when they announce blockchain projects. Huffman told me he is confident that the company can — and said the recent crash makes it easier to focus on building utility.

“I was anti-profiteering, anti-bubble,” he said. “All of this crypto stuff I found extremely distasteful and counter-productive for years, which is why we don’t talk about it. Because I didn’t want to get sucked into the hype, which I think is parasitic and delusional. Our business model is never, let’s sell NFTs to crypto people to make a bunch of money. Our goal is to put identity and reputation into a third-party database.”

The next steps, then, are to get people using wallets, and from there see what other parts of Reddit might be devolved away from the platform to users and their communities.

I enjoyed talking to Huffman, who brings the right kinds of skepticism to a technology that has been poisoned by wide-eyed optimists and outright scammers. At the moment it can feel as if the tech world is investing most of its energy into a set of Legos that no one has figured out how to put together — using building blocks like identity, ownership, and decentralization to make … something, and hope that the end product is worth all the effort.

“The backlash usually comes from profiteering — we’re distinctly and intentionally not doing that,” Huffman told me. “The product may or may not work. But its success won’t come at the expense of our users.”



How a viral NFT project survived the crypto crash

Source: https://www.theverge.com/23312790/loot-dom-hofmann-nft-project-developer-community


A year ago I told you about Loot, a collection of non-fungible tokens that had inspired an energetic community to form around it. A series of brief, text-based descriptions of fantasy genre items like swords and amulets, Loot captured the imaginations of builders and speculators who wondered if the underlying NFTs might someday serve as the basis for graphic novels, movies, video games and more — an open-source art project that could eventually become the foundation of a crypto-flavored Marvel Cinematic Universe.

Loot was also the first NFT project that I found personally compelling, at least as a subject for journalism. Its creator, Dom Hofmann, was well known to me as a co-founder of the short-form video pioneer Vine and the idiosyncratic social network Peach. It was an art project more than a startup — Hofmann made 7,777 randomized “bags” of loot available for free to anyone who would pay the transaction fees necessary to mint them on Ethereum. And the project blew up overnight: five days after launch, Coindesk reported, Loot bags had generated sales of $46 million, and had a market capitalization of $180 million.

Of course, in those days lots of NFTs were selling for eye-popping sums. What made Loot stand out was the way a community of builders stepped forward almost immediately to begin building out an ecosystem: creating art for the items contained in the bags; forming guilds for people who owned the same “rare” items; and writing smart contracts to let people trade the items held within their NFTs.

The dream was that all this would coalesce into something more than the sum of its parts. But 2022 has been catastrophic for the larger NFT market, wiping out the majority of the notional value for even the most richly valued collections. And so as we near the anniversary of Loot’s launch on August 27th, I got curious: what had happened to Loot in the year since it went viral?

“I think it’s doing well, in the sense that the things that are being built on top of it right now are really interesting, or really ambitious, and in some cases both,” Hofmann told me over the phone Thursday. “A lot of them are closer to the types of things that I thought we might see in the early days of the project.”

Hofmann had spent the previous hour and a half on the phone with a developer planning to launch a Loot-related project, something he said he does fairly frequently. His full-time job is running Sup, a 12-person game studio working on various projects that he mostly declined to discuss on the record. But Hofmann continues to lurk in Loot forums, piping up when he feels like he has something to offer, and messages frequently with Loot developers.

That development continues despite the fact that the speculative mania around Loot has cooled. The project’s market cap has fallen to just under $6 million, and the price of the cheapest Loot bag available for sale has fallen from about $20,000 when I wrote about it to about $1,438 today.

So what are developers building? One project all the people I spoke with pointed me to is HyperLoot, which its creators call a “visual building block on top of Loot.” Essentially, Loot owners log in to its website with their crypto wallets and HyperLoot generates a picture of whatever is in their bag on the body of a digital adventurer. That image could then serve as your profile picture, the basis for a novel, a character in a game, and so on. Like the original Loot, HyperLoot creations are released under the CC0 “no rights reserved” license, effectively placing them in the public domain.
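
Because Loot stores each bag’s item text on-chain, a project like HyperLoot can read a bag’s contents directly from the contract once a user connects a wallet. The sketch below shows the general idea in Python with web3.py; the per-slot getter names follow the commonly documented Loot contract interface and should be treated as assumptions to verify against the real ABI, and the RPC URL, contract address, and token ID are placeholders.

    # Rough sketch of reading a Loot bag's items straight from the chain.
    # The getter names (getWeapon, getChest, ...) follow the commonly
    # documented Loot interface; check them against the contract's actual
    # ABI. RPC URL, contract address, and token ID are placeholders.
    from web3 import Web3

    SLOT_GETTERS = ["getWeapon", "getChest", "getHead", "getWaist",
                    "getFoot", "getHand", "getNeck", "getRing"]

    LOOT_ABI = [
        {"name": getter, "type": "function", "stateMutability": "view",
         "inputs": [{"name": "tokenId", "type": "uint256"}],
         "outputs": [{"name": "", "type": "string"}]}
        for getter in SLOT_GETTERS
    ]

    def read_bag(rpc_url: str, loot_address: str, token_id: int) -> list:
        """Return the eight item strings held in one Loot bag."""
        w3 = Web3(Web3.HTTPProvider(rpc_url))
        loot = w3.eth.contract(address=Web3.to_checksum_address(loot_address),
                               abi=LOOT_ABI)
        return [getattr(loot.functions, getter)(token_id).call()
                for getter in SLOT_GETTERS]

    # items = read_bag("https://<your-eth-rpc>", "0x...", 1234)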

HyperLoot is building a software development kit that will allow people to build using these creations more easily, co-founder Thanakron Tandavas told me over direct messages. The company is also building its own video game, CC0 Wars — essentially a Super Smash Bros.-style fighting game, populated by NFT characters in the public domain. Think Bored Apes vs. CryptoKitties, coming eventually to a console near you.

“I feel that the same success could happen to Loot like other successful open-source projects that embrace permission-less innovations, [like] Linux or Android,” Tandavas told me.

The artist Shahruz, who uses one name, told me about a similar public-domain project he’s working on called 32 Swords. It’s “an elimination reality series” in which players must have both Loot and another NFT known as a Manny. The game is human chess, with each team given 12 hours to discuss and vote on their next move. As players are eliminated, their Loot bags are given away to the other participants or spectators.

Shahruz, too, was drawn to the idea of creating art from the public domain.

“I think there’s a lot of unique kinds of fun to be had working with existing projects and communities this way, instead of only creating new properties all the time,” he told me.


So what is there to learn from Loot a year out?

One, Hofmann told me, the early wave of hype around the project wasn’t as helpful as you might have guessed. Because the project was public domain, the early days were essentially a free-for-all of people making cheap clones and other associated projects that distracted from the more serious developers. It took a while for the hype to subside and the better projects to rise to the top.

Two, early enthusiasts likely underestimated just how challenging it is for public-domain projects to coalesce. At the height of Loot’s hype cycle, plenty of folks wrote blog posts and tweetstorms about how Loot could form the basis of a video game. But it’s a really long way from writing a list of magical items to shipping Elden Ring, and the entropic forces that can cause multi-year delays for even the best-funded video games are magnified exponentially for games built by part-timers and volunteers.

Even in 2021, it was obvious that turning Loot into a video game would be incredibly difficult. But some of the intermediate steps have proven difficult, too.

One of the aspects of NFTs that Hofmann finds appealing is “composability,” the idea that blockchain applications are modular and can be joined together like Legos. In the weeks after Loot launched, other developers “composed” various secondary elements meant to work with it in this way: a compendium of beasts in the Loot universe, a list of dungeons, some visual elements, and so on.

But the result was a kind of Frankenstein’s monster that was less than the sum of its parts. You can write a new smart contract that attempts to join the pieces together, but as soon as someone wants to add a new piece to the stack you might have to start over.

“It’s really, really, really difficult to make a bunch of pieces of something that are supposed to be related to one another, relate to one another — and also be open to future pieces that need to connect,” Hofmann said. “Being future-proof is sort of difficult.”

Three, while much of the focus on NFTs over the past few months has been on crashing prices, Hofmann said that one of the biggest reasons web3 projects aren’t making much progress lately is how expensive Ethereum is to use. Transaction fees routinely run $10 or more to take a single action, making projects inaccessible to anyone other than rich nerds.

“Because of that, I think there is a whole slew of use cases — I would even say the majority of use cases — that are not within the reach,” Hofmann said. “Under normal conditions, you’re talking about closer to $10 and maybe even $100 to perform a transaction. … The price is prohibitive.”

All that said, Hofmann remains deeply interested in the technology. Making films used to be expensive, too, he said — recalling the time when, as a young teenager, he bought a book about filmmaking and learned he was expected to scrape together $50,000 for an entry-level film camera. Today every smartphone comes with a fully capable camera; that barrier to entry has disappeared.

And he still likes the basic properties of NFTs: the fact that they represent a piece of data that lies outside an application; the fact that the data has a verified owner; the fact that the data can be transferred to another verified owner.

“Those are pretty big inversions of the way most of the internet works right now,” he said. “Removing the value part from it altogether, just thinking about what it means to be the owner of the data and being able to prove that you’re the owner of the data. What that enables, just as a technology primitive, it’s super interesting. That’s not something that exists right now.”

One project Hofmann is working on, Corruptions, attempts to build on those properties. It’s a role-playing game that takes place entirely on the blockchain — it doesn’t even have a website — with holders of the relevant NFTs playing by interacting with Discord bots. Among the game’s more outlandish features is the CPC, or “community-playable character,” whose actions are voted on by the community.

I don’t know if Corruptions — or Loot — will ever reach an audience of more than a few thousand people. (Hofmann estimates Loot has about 200 developers currently involved in some capacity.)

But I’m struck by the degree to which developers like him continue working to make something out of the NFT and its strange properties. No one I spoke with today suggested that developers have gotten particularly close to cracking the code — just that it all continues to feel interesting enough to keep working on.

A year from now, Hofmann said, there would likely be Loot-related fiction in the world, and perhaps the first Loot-related video games. And as other CC0 projects join the public domain, they will likely continue to collide with Loot in surprising new ways.

“It just moves so fast,” Hofmann said of the NFT landscape. “And it changes so much.”



Why it’s taking so long to encrypt Facebook Messenger

Source: https://www.theverge.com/2022/8/12/23302807/facebook-messenger-encryption-abortion-kansas


After a high-profile incident in which subpoenaed Facebook messages led to felony charges for a 17-year-old girl and her mother in a Nebraska abortion case, Meta said Thursday that it would expand testing of end-to-end encryption in Messenger ahead of a planned global rollout.

This week, the company will automatically begin to add end-to-end encryption in Messenger chats for more people. In the coming weeks, it will also increase the number of people who can begin using end-to-end encryption on direct messages in Instagram.

Meanwhile, the company has begun to test a feature called “secure storage” that will allow users to restore their chat history when they install Messenger on a new device. Backups can be locked by a PIN, and the feature is designed to prevent the company or anyone else from being able to read their contents.
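
“Secure storage” as described follows the general pattern of client-side encrypted backups: a key derived from the user’s PIN encrypts the history before it is uploaded, so the server only ever holds ciphertext. The sketch below illustrates that pattern generically; it is not Meta’s implementation, and the iteration count, PIN, and message contents are arbitrary example values.

    # Generic sketch of a PIN-protected backup: derive a key from the PIN,
    # encrypt the chat history on the client, and store only the ciphertext.
    # This illustrates the pattern; it is not Meta's actual design.
    import os
    import hashlib
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    def derive_key(pin: str, salt: bytes) -> bytes:
        # PBKDF2 with a high iteration count slows down PIN guessing; real
        # systems also add server-side rate limits or hardware protection.
        return hashlib.pbkdf2_hmac("sha256", pin.encode(), salt, 600_000)

    def encrypt_backup(pin: str, chat_history: bytes) -> dict:
        salt, nonce = os.urandom(16), os.urandom(12)
        key = derive_key(pin, salt)
        ciphertext = AESGCM(key).encrypt(nonce, chat_history, None)
        return {"salt": salt, "nonce": nonce, "ciphertext": ciphertext}

    def decrypt_backup(pin: str, blob: dict) -> bytes:
        key = derive_key(pin, blob["salt"])
        return AESGCM(key).decrypt(blob["nonce"], blob["ciphertext"], None)

    blob = encrypt_backup("482916", b"hello from my old phone")
    assert decrypt_backup("482916", blob) == b"hello from my old phone"

Because only the PIN holder can rederive the key, whoever stores the blob cannot read it, which is the property the feature is aiming for.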

The global rollout is expected to be completed next year.

Meta told Wired that it had long planned to make these announcements, and that the fact that they came so soon after the abortion case came to light was a coincidence. I’m less interested in the timing, though, than the practical challenges of making encrypted messaging the default for hundreds of millions of people. In recent conversations with Meta employees, I’ve come to understand more about what’s taking so long — and how consumer apathy toward encryption has created challenges for the company as it works to create a secure messaging app that its user base will actually use.

It has now been three years since Mark Zuckerberg announced, amid an ongoing shift away from public feeds toward private chats, that going forward the company’s products would embrace encryption and privacy. At the time, WhatsApp was already encrypted end to end; the next step was to bring the same level of protection to Messenger and Instagram. Doing so required that the apps be rebuilt almost from scratch — and teams have encountered a number of roadblocks along the way.

The first is that end-to-end encryption can be a pain to use. This is often the tradeoff we make in exchange for more security, of course. But average people may be less inclined to use a messaging app that requires them to set a PIN to restore old messages, or displays information about the security of their messages that they find confusing or off-putting.

The second, related challenge is that most people don’t know what end-to-end encryption is. Or, if they’ve heard of it, they might not be able to distinguish it from other, less secure forms of encryption. Gmail, among many other platforms, encrypts messages only when a message is in transit between Google’s servers and your device. This is known as transport layer security, and it offers most users good protection, but Google — or law enforcement — can still read the contents of your messages.

Meta’s user research has shown that people grow concerned when you tell them you’re adding end-to-end encryption, one employee told me, because it scares them that the company might have been reading their messages before now. Users also sometimes assume new features are added for Meta’s benefit, rather than their own — that’s one reason the company labeled the stored-message feature “secure storage,” rather than “automatic backups,” so as to emphasize security in the branding.

When the company surveyed users earlier this year, only a minority identified as being significantly concerned about their privacy, I’m told.

On Tuesday, I wrote that companies like Meta should consider going beyond end-to-end encryption to make messages disappear by default. One employee told me this week that the company has considered doing so, but usage of the feature in Messenger to date — where it is available as an option — has been so low that making it a default has generated little enthusiasm internally.

On the contrary, I’m told, access to old messages is a high priority for many Messenger users. Messing with that too much could send users scrambling for communications apps like the ones they’re used to — the kind that keep your chat history stored on a server, where law enforcement may be able to request and read it.

A third challenge is that end-to-end encryption can be difficult to maintain even within Facebook, I’m told. Messenger is integrated into the product in ways that can break encryption — Watch Together, for example, lets people message each other while watching live video. But that inserts a third person into the chat, making encryption much more difficult.

There’s more. Encryption won’t work unless everyone is using an up-to-date version of Messenger; lots of people don’t update their apps. It’s also tough to pack encryption into a sister app like Messenger Lite, which is designed to have a small file size so it can be used by users with older phones or limited data access. End-to-end encryption technology takes up a lot of megabytes.

I bring all this up not to excuse Meta for failing to roll out end-to-end encryption up to now. The company has been working on the project steadily for three years, and while I wish it were moving faster, I’m sympathetic to some of the concerns that employees raised with me over the past few days.

At the same time, I think Meta’s challenges in bringing encryption to the masses in its messaging app raise real questions about the appetite for security in these products. Activists and journalists take it for granted that they should be using encrypted messaging apps already, ideally one with no server-side storage of messages, such as Signal.

But Meta’s research shows that average people still haven’t gotten — well, the message. And it’s an open question how the events of 2022, as well as whatever we’re in for in the next few years, may change that.

(Employees told me that Meta’s push to add encryption picked up after the invasion of Ukraine earlier this year, when stories about Russian military personnel searching captives’ phones drew attention to the dangers of permanently stored, easily accessible messages.)

For all the attention the Nebraska case got, it had almost nothing to do with the overturning of Roe vs. Wade: Nebraska already banned abortion after 20 weeks, and the medical abortion at the heart of this case — which took place at 28 weeks — would have been illegal under state law even had Roe been upheld.

Yes, Meta turned over the suspects’ messages upon being subpoenaed, but there’s nothing surprising about that, either: the company received 214,777 requests for user data in the second half of last year, covering 364,642 different accounts, and it produced at least some data 72.8 percent of the time. Facebook cooperating with law enforcement is the rule, not the exception.

In another way, though, this has everything to do with Roe. Untold numbers of women will now be seeking abortion care out of state, possibly violating state law to do so, and they’ll need to communicate about it with their partners, family, and friends. The coming months and years will bring many more stories like the Nebraska case, drawing fresh attention each time to how useful tech platforms are to law enforcement in gathering evidence.

It’s possible the general apathy toward encryption of most Facebook users will survive the coming storm of privacy invasions. But it strikes me as much more likely that the culture will shift to demand that companies collect and store less data, and do a better job educating people about how to use their products safely.

If there’s a silver lining in any of this, it’s that the rise in criminal prosecutions for abortion could create a massive new constituency organized to defend encryption. From India to the European Union to the United States, lawmakers and regulators have been working to undermine secure messages for many years now. To date, it has been preserved thanks in part to a loose coalition of activists, academics, civil society groups, tech platforms, and journalists: in short, some of the people who rely upon it most.

But with Roe overturned, the number of people for whom encrypted messaging is now a necessity has grown markedly. A cultural shift toward encryption could help preserve and expand access to secure messaging, both in the United States and around the world.

That shift will take time. But there’s much that tech platforms can do now, and here’s hoping they will.



Meta might let anti-vax posts back onto Facebook and Instagram

Source: https://www.theverge.com/2022/7/27/23280404/facebook-instagram-covid-antivax-misinformation-oversight-board-review


Today, let’s talk about a settled question that Meta has decided to re-open: what should the company do about misinformation related to COVID-19?

Since the earliest days of the pandemic, Meta has sought to remove false claims about the disease from Facebook and Instagram. And for just as long, the company has faced criticism that it hasn’t done a very good job. A year ago this month, asked about the role “platforms like Facebook” played in spreading misinformation about the disease, President Biden said “they’re killing people” — though he walked his statement back a day later.

Still, Biden voiced a fear that is deeply held among Meta critics: that the platform’s huge user base and algorithmic recommendations often combine to help fringe conspiracy theories reach huge mainstream audiences, promoting vaccine hesitancy, resistance to wearing masks, and other public health harms.

The pandemic is not close to over — an estimated 439 people died of COVID in the past day, up 34 percent in the past two weeks. And highly infectious Omicron subvariants continue to tear through the country, raising fears of a surge in cases of long COVID — a condition that experts say has already been “a mass disabling event.” An estimated 1 in 13 American adults reported having long COVID symptoms earlier this month, according to the US Centers for Disease Control and Prevention.

Despite that, Meta is now considering whether to relax some of the restrictions it has placed on COVID-related misinformation, including whether to continue removing posts that make false claims about vaccines, masks, social distancing, and related subjects. It has asked the Oversight Board — an independent group funded by Meta to help it make difficult calls relating to speech — for an advisory opinion on how to proceed.

Nick Clegg, the company’s president of global affairs, explained Tuesday in a blog post:

In many countries, where vaccination rates are relatively high, life is increasingly returning to normal. But this isn’t the case everywhere and the course of the pandemic will continue to vary significantly around the globe — especially in countries with low vaccination rates and less developed healthcare systems. It is important that any policy Meta implements be appropriate for the full range of circumstances countries find themselves in.

Meta is fundamentally committed to free expression and we believe our apps are an important way for people to make their voices heard. But some misinformation can lead to an imminent risk of physical harm, and we have a responsibility not to let this content proliferate. The policies in our Community Standards seek to protect free expression while preventing this dangerous content. But resolving the inherent tensions between free expression and safety isn’t easy, especially when confronted with unprecedented and fast-moving challenges, as we have been in the pandemic. That’s why we are seeking the advice of the Oversight Board in this case. Its guidance will also help us respond to future public health emergencies.

For all the criticism Meta has received over its enforcement of health misinformation rules, by some measures the steps it took clearly had a positive effect on the platform. The company estimates it has taken down more than 25 million posts under its stricter policies, which now require the removal of 80 separate false claims about the disease and its vaccines.

At the same time, the platform arguably has at times overreached. In May 2021, I wrote about Meta’s decision to reverse an earlier ban on discussing the possibility that COVID-19 leaked from a Chinese lab. The company made that decision amidst a spike in hateful violence against Asian people, fearing that conspiracy theories related to the disease’s origin could be used to justify further attacks.

But as debate about the virus’ origin intensified, Meta began allowing people to speculate again. (To date, no consensus on the issue has emerged.) I wrote at the time that the company probably should not have taken a position on the issue in the first place, instead using its existing hate-speech policies to moderate racist posts:

I generally favor an interventionist approach when it comes to conspiracy theories on social networks: given the harm done by adherents to QAnon, Boogaloo, and other extremist movements, I see real value in platforms reducing their reach and even removing them entirely.

On some questions, though, platform intervention may do more harm than good. Banning the lab-leak hypothesis gave it the appearance of forbidden knowledge, when acknowledging the reality — that it is unlikely, but an open question — may have been just dull enough to prevent it from catching fire in those fever swamps.

Last week, I asked Clegg why the company had decided to ask the board for a second opinion on health misinformation now. One, he said, Meta assumes there will be future pandemics that bring with them their own set of policy issues. The company wants to get some expert guidance now so it can act more thoughtfully the next time around. And two, he said, the Oversight Board can take months to produce an opinion. Meta wanted to get that process started now.

But more than anything, he said, the company wanted a check on its power — to have the board, with which it signed a new three-year, $150 million operating deal this month, weigh in on what have been some fairly stringent policies.

“This was a very dramatic extension of our most exacting sanction,” Clegg told me. “We haven’t done it on this scale in such a short period of time before. … If you have awesome power, it is all the more important that you exercise that awesome power thoughtfully, accountably, and transparently. It would be curious and eccentric, in my view, not to refer this to the Oversight Board.”

Indeed, weighing in on policies like this is one of the two core duties of the board. The primary duty is to hear appeals from users who believe their posts should be restored after being removed, or taken down after being left up in error. When the board takes those cases, its decisions are binding, and Meta has so far always honored its findings.

The board’s other key duty is to offer opinions on how Meta ought to change its policies. Sometimes it attaches those opinions to decisions in individual cases; other times, as with the COVID policies, Meta asks the board about something. Unlike cases about single posts, the board’s opinions here aren’t binding — but to date, Meta has adopted roughly two-thirds of the changes the board has proposed.

Some people continue to write the board off anyway. Since even before it began hearing cases in 2020, the board has been subject to withering complaints from critics who argue that it serves as little more than a public-relations function for a company so beleaguered it had to change its name last year.

And yet it’s also clear that Meta and other social platforms have a profound need for the kind of rudimentary justice system a board like this can provide. In its first year, the board received 1.1 million appeals from Meta’s users. Before the board existed, they had no recourse when Facebook made a mistake beyond some limited automated systems. And every tough question about speech was ultimately made by one person — Mark Zuckerberg — with no room for appeal.

It seems obvious to me that a system where these cases are heard by an expert panel, rather than a lone CEO, is superior, even if it still leaves much to be desired.

So what happens now?

One possibility is that Meta’s policy teams want to relax restrictions on speech related to COVID policy, but want the cover that a decision from the Oversight Board would give them. They have reason to believe the board might come to that conclusion: it was stocked with free-speech advocates, and generally when they have ruled against Meta it has been in the name of restoring posts that the board believes were wrongfully removed.

That said, the company will also likely be in for a drubbing from left-leaning politicians and journalists, along with some number of users, if the board gives them the go-ahead to relax its policies and the company does so. Clegg told me that, should that happen, Facebook and Instagram would use other measures to reduce the spread of misinformation — adding fact-checks, for example, or reducing the distribution of false posts in feeds. But the mere existence of anti-vaxx content on Meta will lead to new criticism — and possibly new harms.

Another possibility is that the board won’t take the bait. Members could argue that removing health misinformation, while a drastic step, continues to be a necessary one — at least for now. The board remains relatively new, and mostly unknown to the general public, and I wonder what appetite members have to stand up for people’s right to spread lies about vaccines.

Whatever the board decides, Clegg said, Meta will move cautiously with any changes. At the same time, he said, the company wants to be judicious in how it deletes user posts.

“I think you should deploy the removal sanction very carefully,” he said. “You should set the bar really high. You don’t want private-sector companies to be removing stuff unless it really is demonstrably related to imminent, real-world harm.”



Why Medium failed

Source: https://www.theverge.com/2022/7/13/23206711/medium-fail-new-ceo-ev-williams


Few tech CEOs can claim to have steered the course of online conversation more than Ev Williams. In 1999 he co-founded Blogger, which helped to take blogging mainstream with a well-designed, free tool that sold to Google four years later. In 2006 Williams and his co-founder followed with Twitter, which remains one of the most influential social networks in the world.

Five years later, after a stint as Twitter CEO and much turmoil, Williams announced his next act: Medium, a publishing platform that sought to split the difference between blogs and tweets: medium-length posts, published occasionally rather than daily, that would seek to answer one question: “Now that we’ve made sharing information virtually effortless,” Williams wrote in 2012, “how do we increase depth of understanding, while also creating a level playing field that encourages ideas that come from anywhere?”

On Tuesday, a month short of 10 years of mostly fruitless attempts to answer that question, Williams is giving up. Here’s Ben Mullin at the New York Times:

Medium declined to make Mr. Williams available for an interview. He said in his post that he was leaving Medium because “change and renewal are healthy,” noting that August will be his 10th anniversary as chief executive.

“To be clear, Medium’s story is far from over,” Mr. Williams wrote.

Medium said that Mr. Williams would be stepping down as chief executive effective July 20 and that he would be replaced by Tony Stubblebine, the chief executive of the online coaching company Coach.me. Mr. Williams will become chairman of Medium’s board, a new position.

Williams said he would remain chairman of the board but also start “a new holding company/research lab” to work on other projects.

I last wrote about Medium in March 2021, days after Williams had announced his latest about-face for the company. Two years after launching a subscription-based group of publications dedicated to high-quality original journalism, and just after its workers fell one vote short of forming a union, Williams offered buyouts to all of its roughly 75 editorial employees.

As I wrote at the time, by some measures Medium was succeeding. It had started 2021 with around 700,000 paid subscriptions, and was on track for more than $35 million in revenue from its $5 monthly subscription offering. At the same time, internal data showed that it largely was not high-quality journalism that was leading readers to subscribe: it was random stories posted to the platform by independent writers that happened to get featured by the Google or Facebook algorithms.

Williams frequently spoke of his desire to make Medium a place for high-quality writing, one that elevated the national conversation. His annual appearances in the New York Times, proclaiming that the internet was broken and that he intended to fix it, became a running gag among a certain set of media obsessives. (OK, maybe just me.)

And yet by last year Williams was confronted by the fact that his revenues were being largely dictated by what performed well on search engines — the same business logic of the low-quality content farms of the late 2000s. It wasn’t a terrible business, exactly. But it wasn’t the business Williams had set out to build — nor was it the business he had hired for.

And so for the second time in his tenure as CEO — the first had been in 2015, when he laid off 50 people in a pivot away from advertising toward subscriptions — Williams upended the lives of dozens of journalists and flushed them off the platform.

Media is a famously unforgiving business, and Williams is far from the only CEO to struggle to build a sustainable company. And yet over the past decade, few have matched him for the sheer number of changes in direction he inflicted on investors, users, and employees.

Nieman Lab’s Laura Hazard Owen wrote an essential guide to Williams’ whipsawing in 2019, cataloging the many things Medium tried during his tenure, from its launch to the present day.

There’s no shame in a startup trying lots of different ideas. But Medium’s ideas were often coupled to the livelihoods of journalists and the publications they worked for. It’s one thing to have a singular vision and change your tactics along the way; Williams’ vision for what Medium was transformed almost continuously.

“When I first started at Medium I was really inspired by Ev,” one former employee told me today. “He’s a Vision Guy™, and that really shone through for me in the beginning. But vision only goes so far, and over the years it gave way to my increasing perception of him as out of touch and restless for results at any cost (see: the one thousand pivots in strategy over the last five years).”

Williams went back and forth on whether Medium should host its own publications or serve as a platform for others to build on. And while he dithered, Medium got caught in the middle.

On the high end, well-funded digital publishers from BuzzFeed to Vice to the Atlantic excelled at publishing high-quality journalism. And on the low end, Substack emerged to let solo creators develop thriving, sustainable careers by offering individual subscriptions. (See my ethics disclosure about Substack.) In such a world, Medium had no obvious advantage. With its owned and operated publications gone, it became a general-interest web magazine staffed by freelancers and dependent on Google.

Another former employee noted that, for all the pivots over the years, Williams always seemed a step behind.

“I’d say you could describe the Ev era of Medium as a series of digital publishing experiments that often felt of the zeitgeist without ever defining it,” the employee said. “A lot of the work Medium did over the years genuinely had an impact, but it often felt, for whatever reason, like Ev made it a point not to lean into this work. He meandered and never seemed satisfied. And eventually Twitter evolved to support more of the kind of publishing that had originally been native to Medium, and Substack came along and ate the platform’s lunch.”

“He’s a little bit of a mystery to me,” the employee added. “I hope a leadership shakeup is good for the company and the people who work there.”

A third former employee told me my assessment of Williams — essentially, a callous dilettante — was unfair.

“I think he was trying to solve a really hard problem, it kept not working, and he screwed a lot of people over to varying degrees by continually changing his approach,” the employee said. “But he really did try a lot of things, and it wasn’t necessarily obvious that they’d fail until somebody with a ton of money tried it.”

Medium declined to comment, as did incoming CEO Stubblebine. (“I need to get my feet under me first,” he told me over direct message. “But after that I expect to be pretty chatty.”)

The challenge for Stubblebine is the same as it was for Williams: after 10 long years, there is very little that Medium hasn’t already tried. It was always grandiose to suggest that a humble blogging platform could fix the internet. Now Williams has kicked himself upstairs, and it will be up to someone else to try to fix his company.



Elon Musk’s plan is to run Twitter off the top of his head

Source: https://www.theverge.com/2022/6/17/23172380/elon-musk-twitter-meeting-employee-reactions


Elon Musk appeared at a virtual town hall for Twitter employees on Thursday to answer their questions, and by the end of the session one question loomed larger than most: can you really run Twitter off the top of your head?

Over 60 or so freewheeling minutes, the world’s richest man fielded queries over remote work, layoffs, content moderation, and other questions of pressing interest to Twitter’s roughly 8,000 employees. Time and again, he offered workers hungry for concrete answers a jumble of sentence fragments.

Will Twitter’s mostly remote workforce be able to continue working from home? Maybe, if they’re “excellent,” Musk said, according to Bloomberg’s live blog. But it’s “much better if you are on location physically,” he said, according to the New York Times’ Mike Isaac.

What sort of content moderation policies will he support? “We should allow people to say what they want,” he said. But “it’s important to make Twitter as attractive as possible,” Musk said, according to Insider’s Kali Hays and Hugh Langley. “Really, that means not showing people content that they would find offensive. Or even frankly, boring, is not good. TikTok does a great job of making sure you’re not bored.”

What should Twitter become? A Chinese-style super app, like WeChat, he said. But also “entertaining,” like TikTok. But also payments, and also subscriptions. Somewhere in this hodgepodge of half-baked ideas, Musk said, is the recipe for Twitter to attract 1 billion users — about 770 million more than it has today.

I talked to a half-dozen or so employees after the meeting. Reactions were mixed but mostly negative. Musk won some points for sounding genuinely enthusiastic about the prospect of owning and running Twitter, which had been in question since he began his war of attrition against the company’s board in an effort to renege on the $44 billion deal.

Mostly, though, employees told me that the conversation had reinforced the beliefs they had about Musk heading into the meeting. If I were to make a word cloud of employee responses, some bigger ones would be: incoherent; rambling; uninspiring.

(Also at one point Musk mentioned that he has seen no evidence of alien life, and no one really knew what to make of that.)

The people I spoke with know what it takes to run Twitter, having done so themselves for many years. And it was jarring, they said, to hear someone speak with so much confidence about a company that at best Musk is only beginning to understand.

Indeed, given the shallow nature of his thinking, it’s worth considering that the hour Musk spent with employees today might have been the longest sustained period he has thought about what it will mean to run Twitter since he bought it.

Questions selected for Musk to answer Thursday were notably non-threatening. No one was allowed to ask about his criticism of current Twitter executives, his efforts to blow up the deal he signed, or the recent harassment allegations against him. He did get a question about potential layoffs, which he dodged, although he noted that Twitter is losing money and said “that’s not a great situation.” (Analysts believe Musk will likely cut about 20 percent of Twitter’s workforce.)

And as employees continue to wonder what the acquisition will mean for them, Musk offered bong-rip platitudes about the future of civilization. “I want Twitter to contribute to a better, long-lasting civilization where we better understand the nature of reality,” he said.

If there was any hint of Musk’s intended management style embedded in all this, it came as part of an answer to whether he would take the title of CEO upon the deal’s close. Musk takes a mischievous approach to corporate titles — he is the “Techno King” of Tesla, and his chief financial officer is “master of coin” — and so the question was more than fair.

And Musk’s answer was telling. From Insider again:

Regardless of his title, Musk does want to “drive the product in a particular direction” and intends to lead improvement on the platform’s software, product, and design.

“I don’t mind doing other things related to operating a company, but there are chores,” he said. “I really just want to make sure that the product evolves rapidly and in a good way. I don’t really care what title is, but obviously, people do need to listen to me.”

I read Musk here to say: running a company as CEO requires that you pay a lot of attention to the details, which I have no interest in doing. But I will still make a lot of pronouncements about what the product should be, and — “obviously”! — “people do need to listen to me.”

I found this interesting because, if you talk to people who have worked directly with Musk at Tesla, this is exactly how he runs that company, too. Musk sets priorities, often based seemingly on little more than a whim, and the details and any technological breakthroughs that might be required to achieve them are left to his workforce.

And so while one way of looking at Musk’s answers to Twitter employees today is that they are under-baked, it may be worth considering that this is as baked as some of them will ever get. “Figure out payments,” “make it more like TikTok,” “add 770 million users” — these seem to me like the sort of instructions that Musk’s new head of product might be given on day one. And whichever employees remain at their desks will be charged with making it so.

There are historical analogs for this kind of leadership — Steve Jobs is the obvious one — but there’s a reason why it’s relatively rare. Even-keeled, detail-oriented founders tend to outperform charismatic shamans in the long run, in part because they’re so much easier to work with. That’s good for recruitment, retention, and innovation.

And I think that’s even truer today than it was 20 years ago — it’s hard to imagine Jobs’ reputation surviving the Slack screenshot era. (“Personally I find it very disappointing that our CEO continues to park in spaces reserved for employees with disabilities”; 800 reactions; 900 reactions, etc.)

For years now, the easy rebuttal to this criticism of Musk has been the success of Tesla. And indeed, in many ways the company is a marvel, selling as many cars as it can manufacture to the most satisfied customers in the entire auto industry.

But when I look at the growing number of investigations into its Autopilot assisted-driving software, I see the consequences of Musk’s “you figure it out” leadership. Musk makes the promise — full self-driving! Coming this year! — and engineers scramble to make it a reality. And if they can’t, who will be brave enough to tell the Techno King? Might it not be easier to just ship it and deal with the consequences later?

But there are signs that this, too, is starting to fracture. On Thursday The Verge’s Loren Grush reported that employees at another Musk company, SpaceX, sent an open letter to their leadership denouncing Musk’s recent behavior. Grush writes:

The letter, reviewed by The Verge, describes how Musk’s actions and the recent allegations of sexual harassment against him are negatively affecting SpaceX’s reputation. […]

“Elon’s behavior in the public sphere is a frequent source of distraction and embarrassment for us, particularly in recent weeks,” the letter states. “As our CEO and most prominent spokesperson, Elon is seen as the face of SpaceX — every Tweet that Elon sends is a de facto public statement by the company. It is critical to make clear to our teams and to our potential talent pool that his messaging does not reflect our work, our mission, or our values.”

The sheer force of Musk’s personality has carried him a long way. But watching him stretch himself across ever more companies, while continuing to sow chaos via tweets, you wonder how long the act can go on. Tesla stock was down 8.5 percent on Thursday; it’s down 31 percent over the past six months. Even for the world’s richest man, the laws of gravity do occasionally apply.

Musk said he’ll be back for an encore Q&A with Twitter employees, and I’m sure they can’t wait. (“When he turned off his video at the end of the Q&A, his avatar appeared to be two hands in the shape of the number 69, an apparent reference to a sex position,” noted Reuters’ Sheila Dang.)

The employees I spoke with say they are doing their best to stay focused on work. Musk, no doubt, is focused on lowering the deal price. The future of Twitter hangs in the balance, and no one is less worried about it than Elon Musk. He has his whims, and they will be enough for now.



Source: https://www.theverge.com/2022/6/17/23172380/elon-musk-twitter-meeting-employee-reactions

Uvalde shows why online threats need to be taken more seriously
https://scienceandnerds.com/2022/06/01/uvalde-shows-why-online-threats-need-to-be-taken-more-seriously/
Wed, 01 Jun 2022 15:14:10 +0000

Source: https://www.theverge.com/23149885/uvalde-online-threads-yubo-moderation-failure


A week ago today, an 18-year-old man walked into an elementary school in Uvalde, Texas, and committed the latest in our nation’s never-ending series of senseless murders. And in the aftermath of that horror — 19 children dead, two teachers dead, 18 more injured — attention once again turned to what role platforms might have played in enabling the violence.

This question can feel both urgently necessary and also somehow beside the point. Necessary because people (often teenagers) are constantly being arrested after making threats on social media, and the Uvalde case shows once again why those threats must be taken more seriously.

And yet it’s also clear that America’s gun violence problem will not be solved at the level of platform policy or enforcement. It can be solved only by making it harder for people to acquire and use guns, particularly the assault weapons that figure in every single story like this one.

But around here we focus on platforms. And with that in mind, let’s take a look at what we’ve learned about the shooter’s online behavior in the week since the shooting. It speaks to issues around child safety and platforms that I’ve reported on here before — and points to some clear steps that platforms (and, if necessary, regulators) should take next.

Aside from a handful of private messages, the Uvalde shooter appears not to have used Facebook much. That and Instagram were once the default platforms for making threats like these, but new platforms are growing in popularity with young people. The Uvalde shooter liked one called Yubo, created by a French company called Twelve App. It’s a “live chilling” app similar to Houseparty, the app that Meerkat became after helping to launch the live-streaming craze in the United States in 2015.

It’s also apparently quite popular, with more than 18 million downloads in the United States alone, according to the market research firm Sensor Tower.

Like Houseparty, Yubo lets users broadcast themselves live to a small group of friends. The twist is that Yubo focuses on making new friends — finding people with similar interests and letting them chat. Particularly young people. “Yubo is a social live-streaming platform that celebrates the true essence of being young,” the company says. (Perhaps for that reason, it seems to have attracted more than its share of older men and their unwanted sexual advances.)

In the days after the massacre, reporters discovered that Yubo appears to have been the shooter’s primary social app. He used it, among other things, to threaten rape — and school shootings. Here are Daniel A. Medina, Isabelle Chapman, Jeff Winter and Casey Tolan at CNN:

Three users said they witnessed Salvador Ramos threaten to commit sexual violence or carry out school shootings on Yubo, an app that is used by tens of millions of young people around the world.

The users all said they reported Ramos’ account to Yubo over the threats. But it appeared, they said, that Ramos was able to maintain a presence on the platform. CNN reviewed one Yubo direct message in which Ramos allegedly sent a user the $2,000 receipt for his online gun purchase from a Georgia-based firearm manufacturer.

At the Washington Post, Silvia Foster-Frau, Cat Zakrzewski, Naomi Nix and Drew Harwell found a similar pattern of behavior:

A 16-year-old boy in Austin, who said he saw Ramos frequently in Yubo panels, told The Post that Ramos frequently made aggressive, sexual comments to young women on the app and sent him a death threat during one panel in January.

“I witnessed him harass girls and threaten them with sexual assault, like rape and kidnapping,” said the teen. “It was not like a single occurrence. It was frequent.”

He and his friends reported Ramos’s account to Yubo for bullying and other infractions dozens of times. He never heard back, he said, and the account remained active.

Yubo told the network that it is cooperating with the investigation, but declined to offer any details on why the shooter was able to remain on the platform despite having been reported for making threats over and over again.

It can seem shocking that a person who repeatedly makes violent threats, and is reported for doing so to the platform, fails to see any consequences. And yet for years now, children have been telling us that this is a regular occurrence for them.

In May of last year, I wrote about a report based on a survey of minors by Thorn, a nonprofit organization that builds technology to defend children from sexual abuse. Here are two findings from that survey that are relevant to the Uvalde case, from my column about it:

  • Children are more than twice as likely to use platform blocking and reporting tools as they are to tell parents and other caregivers about what happened: 83 percent of 9- to 17-year-olds who reported having an online sexual interaction reacted with reporting, blocking, or muting the offender, while only 37 percent said they told a parent, trusted adult, or peer.
  • The majority of children who block or report other users say those same users quickly find them again online: more than half of children who blocked someone said they were contacted by the same person again, either through a new account or a different platform. This was true both for people children knew in real life (54 percent) and people they had only met online (51 percent).

In short: most kids use platform reporting tools instead of telling parents or other caregivers about threats online, but in most cases those reporting tools aren’t effective. In our interview last year, Julie Cordua, Thorn’s CEO, likened platform reporting tools to fire alarms that have had their wires cut. In the Uvalde case, we see what happens when those alarms aren’t connected to effective enforcement mechanisms.

If there’s any room for optimism here, it’s in the fact that criminals really do seem to be moving away from better-defended platforms to ones that are less established — and, in some cases, have fewer policy and enforcement tools. Surely part of that is simply evidence of changing tastes — Discord and Twitch are much more popular with the average teenager today than Facebook or perhaps even Instagram is. But part of it is also that Meta, YouTube, and Twitter in particular have invested heavily in content moderation, making it harder for bad actors to make threats with impunity and evade bans. That speaks to the value of content moderation, to both companies and the world at large.

Peruse Yubo’s website and history and you will see a company that appears to be committed to good stewardship. The app has clearly posted community guidelines, albeit ones that have not been updated since 2020. It has a policy on ban evasion. And it uses facial-recognition technology in an effort to prevent users younger than 13 from signing up.

The company also says that it uses machine-learning to scan live streams in an effort to find bad behavior, and scans text messages to look for private information that users might be about to share unwittingly, such as phone numbers. These are good, useful, and expensive tools that many other platforms do not offer.
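Yubo hasn’t published how that scanning works under the hood, so what follows is only a rough, hypothetical sketch of one slice of the idea (warning a user before they share a phone number), using a simple regular expression where a production system would presumably use trained models and many more signals. The pattern and the flag_private_info helper below are illustrative, not anything Yubo has described.

```python
import re

# Hypothetical sketch: flag text that looks like a US-style phone number before
# a chat message is sent. A real system would use trained models and more signals.
PHONE_PATTERN = re.compile(
    r"(\+?\d{1,2}[\s.-]?)?"     # optional country code
    r"(\(?\d{3}\)?[\s.-]?)"     # area code, with or without parentheses
    r"\d{3}[\s.-]?\d{4}"        # local number
)

def flag_private_info(message: str) -> list[str]:
    """Return any substrings of the message that look like phone numbers."""
    return [match.group(0) for match in PHONE_PATTERN.finditer(message)]

if __name__ == "__main__":
    msg = "text me at (512) 555-0147 instead"
    hits = flag_private_info(msg)
    if hits:
        print(f"Warning: this message appears to contain a phone number: {hits}")
```

Even a toy check like this points at the design question the column keeps returning to: detection only matters if it is wired to a warning or enforcement step that actually fires.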

At the same time, these are voluntary measures in a world where regulators still have not established minimum standards for content policy, moderation, enforcement, or reporting what they find — aka “transparency.” We know that Yubo had a policy against basically everything the Uvalde shooter did. We know that kids saw what he was doing online, grew concerned, and used the app’s reporting tools to try to prevent it from happening in the future.

And, as is usually the case in these situations, we know nothing about what happened next. Were the reports reviewed? By humans or machines? What did they find?

Platforms that allow users to create accounts should be required to let people report those accounts for bad behavior. (Did you know you still can’t report an account on iMessage, one of the world’s biggest communications services?) Platforms should also be required to let us know what they do with those reports, both individually (to the person who reported it) and in the aggregate (so we can understand bad behavior on platforms overall).
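To make “individually and in the aggregate” concrete, here is a minimal, hypothetical sketch of the record-keeping such a requirement implies; none of these field names or functions come from any real platform’s API. Each report gets a trackable outcome that can be sent back to the reporter, and the same records roll up into an aggregate transparency summary.

```python
from collections import Counter
from dataclasses import dataclass

# Hypothetical data model: every user report carries an individually trackable
# outcome, and outcomes can be rolled up for a public transparency report.
@dataclass
class Report:
    report_id: str
    reported_account: str
    reason: str                # e.g. "threat", "harassment"
    outcome: str = "pending"   # e.g. "pending", "account_removed", "no_action"

def close_report(report: Report, outcome: str) -> str:
    """Resolve a report and return the notice sent back to the person who filed it."""
    report.outcome = outcome
    return f"Your report {report.report_id} was reviewed; outcome: {outcome}."

def transparency_summary(reports: list[Report]) -> Counter:
    """Aggregate outcomes across all reports for public disclosure."""
    return Counter(r.outcome for r in reports)
```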

Doing so will sadly do nothing to stop the epidemic of gun violence in this country. But it will make good on the promise that apps like Yubo are making to their users when they let them report bad behavior — that they will act on the reports they receive, and work to prevent further harm.

Nobody forced Yubo to build the systems that Thorn’s Cordua rightly called “fire alarms.” But it did. The least that Yubo and other platforms can do now is offer us some evidence that those alarms are actually plugged in.



Why the company behind Pokémon Go is getting crypto-curious
https://scienceandnerds.com/2022/05/25/why-the-company-behind-pokemon-go-is-getting-crypto-curious/
Wed, 25 May 2022 14:47:47 +0000

Source: https://www.theverge.com/2022/5/25/23140932/niantic-crypto-web3-nft-spotx-john-hanke


It has been an extremely rough year for cryptocurrency projects, as crashing prices, daily revelations of scams, and the ongoing usability nightmare of blockchain-based computing have left venture capitalists’ next big bet looking more like a pipe dream than anything you could reasonably call “web3.”

Crypto-based video games, which last year looked as if they might attract a big audience to begin collecting non-fungible tokens, have instead drawn collective outrage at perceived developer greed. Game giants have largely been fleeing the space as a result.

And so, with that in mind, today let’s talk about a prominent company that’s going in the other direction.

“I think it’s something that could have the biggest impact on this industry — but it’s also probably the most controversial thing that we could talk about,” John Hanke said.

Hanke, CEO of Pokémon Go developer Niantic, was closing out the keynote address Tuesday morning at his company’s first-ever developer summit. Niantic had already unveiled the new version of its augmented-reality developer platform, Lightship, which includes a location-mapping feature called the Visual Positioning System. Hanke also announced Campfire, a social-networking app that opens to a map and lets people find and interact with players and events for Pokémon Go and Niantic’s other games.

As his talk wound down, though, he wanted to talk about one more thing: Niantic’s early explorations around incorporating blockchain technologies into its games.

Earlier this year, Niantic had met the team behind SpotX Games, a Miami-based company that describes itself as “a web3 innovation studio for the real-world metaverse.” Its specialty is creating crypto-based scavenger hunts that turn the experience of playing into digital collectibles.

“When we met them they started talking about using blockchain as a way to inspire people to go outside, discover new places, have fun with friends,” Hanke said Tuesday. “It was kind of like we were talking to ourselves.”

Hanke was taken with SpotX’s work, and acquired the studio.

At South by Southwest in March, SpotX demonstrated a scavenger hunt game that offered cash prizes for visiting different locations in Austin and interacting with them via web-based AR tools on mobile phones. Anyone who finished the game could generate an NFT highlighting the places that players visited.

Hanke loved that the NFT was more than a “pretty picture — it’s a memento of what you did.”

This offers a hint of where Niantic might try to take crypto-based gaming in the future, using the technology to authenticate and commemorate experiences as players move through the world. The idea behind putting all this on the blockchain, a SpotX employee told me during a demo Tuesday, is that the data’s hard to fake. (I don’t know why anyone would fake a visit to Austin, or winning a scavenger hunt, but as usual with crypto the technology is still ahead of the use cases.)
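Neither Niantic nor SpotX has described an implementation, so the following is only a toy, hypothetical illustration of that “hard to fake” property: if each check-in record commits to the hash of the one before it, quietly editing an old visit invalidates every later record. (A real blockchain adds distributed consensus on top of this; the sketch shows only the tamper-evidence half.)

```python
import hashlib
import json
import time

# Toy, hypothetical hash chain of check-ins. Each record includes the previous
# record's hash, so altering an earlier visit breaks every later hash.
def record_checkin(chain: list[dict], player: str, place: str) -> dict:
    prev_hash = chain[-1]["hash"] if chain else "genesis"
    body = {"player": player, "place": place, "ts": time.time(), "prev": prev_hash}
    body["hash"] = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append(body)
    return body

def verify(chain: list[dict]) -> bool:
    """Recompute every hash and link; any tampering with an earlier record shows up here."""
    prev = "genesis"
    for entry in chain:
        body = {k: v for k, v in entry.items() if k != "hash"}
        if entry["prev"] != prev:
            return False
        if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != entry["hash"]:
            return False
        prev = entry["hash"]
    return True
```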

“It’s early days here,” Hanke said on stage Tuesday. “You’re gonna be hearing more from us, I think, on this topic in the future.”


In the run-up to Tuesday’s event, I hopped on Zoom to discuss Niantic’s latest offerings with Hanke. The company proved with Pokémon Go that it could take a nascent technology like AR and take it massively mainstream, generating an estimated $5 billion in revenue in its first five years.

For that reason, I found myself interested in Hanke’s tentative embrace of web3. The company appears to be a long way from bringing the technology to a flagship property like Pokémon. But this year gamers have revolted at the mere suggestion that NFTs might eventually come to their favorite titles. For that reason, I wanted to know what about the blockchain appeals to Hanke and his team.

Like a lot of people, Hanke is drawn by crypto’s promise of decentralization — the idea that interacting with the web via wallets will make individual users more powerful at platforms’ expense.

On stage Tuesday, he said that today’s web3 debates reminded him of when he was a young founder in the dot-com era. In 2000, Hanke co-founded a company called Keyhole; Google acquired it four years later and turned it into Google Maps.

In the late 1990s, just as now, there were plenty of get-rich-quick schemes and hucksters, he said. But there were also important ideas on the verge of becoming big businesses.

“The potential for web3 is to move us back — to a more decentralized version of the internet,” he said. “And to recapture some of that spirit and vision that was there when it started.”

One question I had for Hanke is, even assuming that’s all true, why do you need a blockchain for it? His answer is that crypto can let you authenticate into websites using much more limited data than we give away today by logging in with Google, Facebook, and similar services.

Hanke told me:

Most people use an identity from one of the big companies. It’s our primary passport to everything that we do, to apps and online. In a sense, we’re selling our digital soul when we do that. It’s a habit that we’ve built up, and it’s kind of just the way things work right now. But web3 would enable us to have self-sovereign identities. So instead of using one of those buttons, you could use a web3 system that wouldn’t leak your personal information, and wouldn’t put a middleman between you and the service that you’re using that could intercept or store information in a way that you might not want to happen. So to me, that just really feels like the way it should work.

[…]

It’s uses like that that made me want to bet on it as an integral part of the future internet. Blockchains are actually useful in that context, because there’s no central authority.
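Hanke doesn’t spell out a protocol, but the usual “sign in with a wallet” pattern he is gesturing at is a challenge-response signature: the site issues a one-time nonce, the user signs it with a private key that never leaves their device, and the site checks the signature against a public key it already associates with the account. No email, password, or third-party login button is involved. Below is a minimal sketch of that flow, assuming an Ed25519 key pair as a stand-in for a wallet; the details are illustrative, not Niantic’s design.

```python
import secrets
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Hypothetical challenge-response login. Requires the third-party
# "cryptography" package; the key pair stands in for a crypto wallet.
wallet_key = Ed25519PrivateKey.generate()   # lives on the user's device
public_key = wallet_key.public_key()        # what the site already knows

# 1. The site issues a random, one-time challenge.
nonce = secrets.token_bytes(32)

# 2. The user "signs in" by signing the challenge with their private key.
signature = wallet_key.sign(nonce)

# 3. The site verifies the signature; a forgery raises InvalidSignature.
try:
    public_key.verify(signature, nonce)
    print("login accepted")
except InvalidSignature:
    print("login rejected")
```

Real wallet logins layer address derivation, replay protection, and human-readable signing prompts on top of this bare flow, which is where much of the onboarding friction discussed below comes from.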

I think in practice it could be quite difficult to make wallets more private and secure than our existing identity tools; Molly White has written persuasively that crypto wallets tend to share more data than we are comfortable with, since they are on public blockchains, and there are no guarantees anonymous wallets won’t be de-anonymized.

I also wonder whether consumer demand for decentralization is as strong as web3 founders are betting on. Centralized platforms enable many services that we have come to depend on, from password resets to transactions that can be reversed in the event of fraud. Decentralization so far has meant giving all that up, with disastrous consequences for the user experience. It’s no wonder so few people have set up a crypto wallet, relatively speaking.

At the same time, frustration with giants like Apple, Google, and Facebook is real, Hanke said:

Solving the wallet onboarding problem — everybody sees that as a huge thing. I don’t think it’s an impossible thing to solve. I do think that setting up and making a wallet today is not really for the faint of heart. [But] the reward there is very large. So we’ll see if consumers continue to care about those things — if they care more about them over time, and continue to be wary about people sort of looking over their shoulder and looking at their personal information continuously or not.

For what it’s worth, I think the issue here is less about whether consumers would be interested in more private methods of conducting business online than whether those methods are safe and convenient. Hanke told me he does see challenges around designing a good user interface, and that some “healthy skepticism” is appropriate.

My last web3 question for Hanke was why he thought average people would be excited to see NFTs and other crypto products in their games. He said those products let players reward creators more directly, with middlemen taking a much smaller cut than before, and that people love to back independent projects. (Equally interesting to me is what he did not say — that adding NFTs to video games would make them more fun to play.)

On one hand, it may be a little unfair to press Niantic this hard on what are clearly some very early plans. Hanke was straightforward about the fact that the company has more questions than answers about web3. NFTs may very well never come to Pokémon Go or any of its other big titles.

On the other, Hanke sure sounded serious to me. And Niantic has something that very few other crypto-curious game developers do: tens of millions of users. That gives the company an influence that its rivals lack. And, assuming that anyone can figure out how to make crypto useful or fun, the user base could give the still-private company a significant opportunity as well.

“Frankly, it’s really easy to dismiss that whole set of technologies, just based on some of the things we read about,” Hanke told developers. “I think that would be a huge mistake. I think there’s something really important about this technology.”



How Facebook undercut the Oversight Board
https://scienceandnerds.com/2022/05/12/how-facebook-undercut-the-oversight-board/
Thu, 12 May 2022 15:15:22 +0000

Source: https://www.theverge.com/23068243/facebook-meta-oversight-board-putin-russia-ukraine-decision


Today let’s talk about the highest-profile conflict to date between Meta and its Oversight Board, an independent organization the company established to help it navigate the most difficult questions related to policy and content moderation.

Since before the board was created, it has faced criticism that it primarily serves a public-relations function for the company formerly known as Facebook. The board relies on funding from Meta, it has a contractual relationship with it governing its use of user data, and its founding members were hand-picked by the company.

Aiding in the perception that it’s mostly a PR project is the fact that, to date, Meta and the board have rarely been in conflict. In the first quarter of its existence, the board made 18 recommendations to Meta, and the company implemented 14 of them. And even though it often rules against Facebook’s content moderators, ordering removed posts to be restored, none of those reversals has generated any significant controversy. (Also, from Facebook’s perspective, the more the board reverses it, the more credible it is, and thus the more blame it can shoulder for any unpopular calls.)

That’s what made this week’s statements, published by both sides, so noteworthy.

After Russia’s invasion of Ukraine in February, Meta had asked the board to issue an advisory opinion on how it should moderate content during wartime. The conflict had raised a series of difficult questions, including under what circumstances users can post photos of dead bodies or videos of prisoners of war criticizing the conflict.

And in the most prominent content moderation question of the invasion to date, Meta decided to temporarily permit calls for violence against Russian soldiers, Vladimir Putin, and others.

All of which raised important questions about the balance between free expression and user safety. But after asking the board to weigh in, Meta changed its mind — and asked board members to say nothing at all.

From the company’s blog post:

Late last month, Meta withdrew a policy advisory opinion (PAO) request related to Russia’s invasion of Ukraine that had previously been referred to the Oversight Board. This decision was not made lightly — the PAO was withdrawn due to ongoing safety and security concerns.

While the PAO has been withdrawn, we stand by our efforts related to the Russian invasion of Ukraine and believe we are taking the right steps to protect speech and balance the ongoing security concerns on the ground.

In response, the board said in a statement that it is “disappointed” by the move:

While the Board understands these concerns, we believe the request raises important issues and are disappointed by the company’s decision to withdraw it. The Board also notes the withdrawal of this request does not diminish Meta’s responsibility to carefully consider the ongoing content moderation issues which have arisen from this war, which the Board continues to follow. Indeed, the importance for the company to defend freedom of expression and human rights has only increased.

Both statements were extremely vague, so I spent a day talking with people familiar with the matter who could fill me in on what happened. Here’s what I’ve learned.

One of the most disturbing trends of the past year has been the way that authoritarian governments in general, and Russia in particular, have used the intimidation of employees on the ground to force platforms to do their bidding. Last fall, Apple and Google both removed from their respective stores an app that enabled anti-Putin forces to organize before an election. In the aftermath, we learned that Russian agents had threatened their employees, in person, with jail time or worse.

Life for those employees — and their families — has only become more difficult since Putin’s invasion. The country passed draconian laws outlawing truthful discussion of the war, and the combination of those laws and sanctions from the United States and Europe has forced many platforms to withdraw services from Russia entirely.

In the wake of Meta’s decision to allow calls for violence against the invaders, Russia said that Meta had engaged in “extremist” activities. That potentially put hundreds of Meta employees at risk of being jailed. And while the company has now successfully removed its employees from the country, the extremism language could mean that they will never be allowed to return to the country so long as they work at Meta. Moreover, it could mean that employees’ families in Russia could still be subject to persecution.

There is precedent for both outcomes under Russia’s extremism laws.

So what does the Oversight Board have to do with it?

Meta had asked for a fairly broad opinion about its approach to moderation and Russia. The board has already shown a willingness to make expansive policy recommendations, even on narrower cases submitted by users. After asking for the opinion, the company’s legal and security teams became concerned that anything the board said might somehow be used against employees or their families in Russia, either now or in the future.

Technically, the Oversight Board is a distinct entity from Meta. But plenty of Westerners still refuse to recognize that distinction, and company lawyers worried that Russia wouldn’t, either.

All of this is compounded by the fact that tech platforms have gotten little to no support to date, from either the United States or the European Union, in their struggles to keep key communication services up and running in Russia and Ukraine. It’s not obvious to me what western democracies could do to reduce platforms’ fears about how Russia might treat employees and their families. But discussions with executives at several big tech companies over the past year have made it clear that they all feel like they’re out on a limb.

All that said, the news still represents a significant blow to the Oversight Board’s already fragile credibility — and arguably reduces its value to Facebook. The company spent several years and $130 million to create an independent body to advise it on policy matters. To ask that body for its advice — advice that would not even be binding on the company — and then decide belatedly that such advice might be dangerous calls into question the point of the entire enterprise. If the Oversight Board’s only role is to handle the easy questions, why bother with it at all?

Facebook and the board declined to comment to me beyond their statements. It’s fair to note that despite the reversal here, the company has stood up to Russia in some important ways — including standing by that decision to let Ukrainians call for Putin’s death. Meta could have rolled over for Russia on that one, and chose not to.

At the same time, once again we find that at a crucial moment, Facebook executives fail to properly understand risk and public perception. Russia has been threatening platform employees since at least last September. Whatever danger there was for employees and their families existed well before the moment that Facebook sought an opinion from its board. To realize that only weeks later… well, talk about an oversight.

I’m on record as saying that the Oversight Board has changed Facebook for the better. And when it comes to authoritarians threatening platform employees, tech companies have distressingly few options available to them. The Russia case, in this as in so many other situations, was truly a no-win situation.

But that doesn’t mean it won’t have collateral damage for both Meta and its board. Critics always feared that if the stakes ever got high enough, Facebook would blink and decide to make all the relevant decisions itself. And then Vladimir Putin went and invaded his neighbor, and the critics were proven right.



Meta, TikTok and YouTube may finally have to start sharing data with researchers
https://scienceandnerds.com/2022/05/05/meta-tiktok-and-youtube-may-finally-have-to-start-sharing-data-with-researchers/
Thu, 05 May 2022 15:19:13 +0000

Source: https://www.theverge.com/2022/5/5/23058139/meta-tiktok-youtube-senate-transparency-hearing-research-us-europe


On Wednesday, Congress was treated to the unfamiliar spectacle of highly intelligent people, talking with nuance, about platform regulation. The occasion was a hearing, titled “Platform Transparency: Understanding the Impact of Social Media,” and it served as a chance for members of the Senate Judiciary Committee to consider the necessity of legislation that would require big tech platforms to make themselves available for study by qualified researchers and members of the public.

One such piece of legislation, the Platform Transparency and Accountability Act, was introduced in December by an (ever-so-slightly) bipartisan group of senators. One of those senators, Chris Coons of Delaware, led the Wednesday hearing; another, Sen. Amy Klobuchar of Minnesota, was present as well. Over a delightfully brisk hour and forty minutes, Coons and his assembled experts explored the necessity of requiring platforms to disclose data and the challenges of requiring them to do so in a constitutional way.

To the first point — why is this necessary? — the Senate called Brandon Silverman, co-founder of the transparency tool CrowdTangle. (I interviewed him here in March.) CrowdTangle is a tool that allows researchers, journalists and others to view the popularity of links and posts on Facebook in real time, and understand how they are spreading. Researchers studying the effects of social networks on democracy say we would benefit enormously from having similar insight into the spread of content on YouTube, TikTok, and other huge platforms.

Silverman was eloquent in describing how Facebook’s experience of acquiring CrowdTangle only to find that it could be used to embarrass the company made other platforms less likely to undertake similar voluntary measures to improve public understanding.

“Above all else, the single biggest challenge is that in the industry right now, you can simply get away without doing any transparency at all,” said Silverman, who left the company now known as Meta in October. “YouTube, TikTok, Telegram, and Snapchat represent some of the largest and most influential platforms in the United States, and they provide almost no functional transparency into their systems. And as a result, they avoid nearly all of the scrutiny and criticism that comes with it.”

He continued: “That reality has industry-wide implications, and it frequently led to conversations inside Facebook about whether or not it was better to simply do nothing, since you could easily get away with it.”

When we do hear about what happens inside a tech company, it’s often because a Frances Haugen-type employee decides to leak it. The overall effect of that is to paint a highly selective, irregular picture of what’s happening inside the biggest platforms, said Nate Persily, a professor at Stanford Law School who also testified today.

“We shouldn’t have to wait for whistleblowers to whistle,” Persily said. “This type of transparency legislation is about empowering outsiders to get a better idea of what’s happening inside these firms.”

So what would the legislation now under consideration actually do? The Stanford Policy Center had a nice recap of its core features:

* Allows researchers to submit proposals to the National Science Foundation. If the NSF supports a proposal, social-media platforms would be required to furnish the needed data, subject to privacy protections that could include anonymizing it or “white rooms” in which researchers could review sensitive material.

* Gives the Federal Trade Commission the authority to require regular disclosure of specific information by platforms, such as data about ad targeting.

* Allows the commission to require that platforms create basic research tools to study what content succeeds, similar to the basic design of the Meta-owned CrowdTangle.

* Bars social-media platforms from blocking independent research initiatives; both researchers and platforms would be given a legal safe harbor related to privacy concerns.

To date, much of the focus on regulating tech platforms has found members of Congress attempting to regulate speech, at both the individual and corporate level. Persily argued that starting instead with this kind of forced sunlight might be more effective.

“Once platforms know they’re being watched, it will change their behavior,” he said. “They will not be able to do certain things in secret that they’ve been able to up till now.” He added that platforms would likely change their products in response to heightened scrutiny as well.

OK, fine, but what are the tradeoffs? Daphne Keller, director of the Program on Platform Regulation at Stanford, testified that Congress should consider carefully what sorts of data it requires platforms to disclose. Among other things, any new requirements could be exploited by law enforcement to get around existing limits.

“Nothing about these transparency laws should change Americans’ protections under the Fourth Amendment or laws like the Stored Communications Act, and I don’t think that’s anyone’s intention here,” she said. “But clear drafting is essential to ensure that government can’t effectively bypass Fourth Amendment limits by harnessing the unprecedented surveillance power of private platforms.”

There are also First Amendment concerns around this sort of platform regulation, she noted, pointing to the failure in court of two recent state laws designed to force platforms to carry speech that violates their policies.

“I want transparency mandates to be constitutional, but there are serious challenges,” Keller said. “And I hope that you will put really good lawyers on that.”

Unfortunately, into every Senate hearing, a little Ted Cruz must fall. The Texas senator was the only participant on Wednesday to exhaust his allotted speaking time without asking a single question of the experts present. Cruz expressed great confusion about why he got relatively few new Twitter followers in the days before Elon Musk said he was going to buy it, but then got many more after the acquisition was announced.

“It is obvious someone flipped the switch,” the Texas Republican said. “The governors they had on that said ‘silence conservatives’ were flipped off. That is the only rational explanation.” (I know the word “governors” is used somewhat unconventionally here, but I listened to the tape five times and that’s what I heard.)

The actual explanation is that Musk has lots of conservative fans, they flocked back to the platform when they heard he was buying it, and from there Twitter’s recommendation algorithms kicked into gear.

But here even I must sympathize with Cruz, for all the reasons that today’s hearing was called in the first place. Absent legislation that requires platforms to explain how they work in greater detail, some people are always going to believe in the dumbest explanations possible. (Especially when those explanations serve a political purpose.) Cruz is what you get in a world with only voluntary transparency on the part of the platforms.

That said, we should still keep our expectations in check — there are limits on what platform disclosures can do for our discourse. It seems quite possible that you could explain exactly how Twitter works to Ted Cruz, and he would either fail to comprehend or willfully misunderstand you for political reasons. And even people who seek to understand recommender systems in good faith may fail to understand explanations on a technical level. “Transparency” isn’t a cure-all.

But… it’s a start? And seems much less fraught than lots of other proposed tech regulations, many of which find Congress attempting to regulate speech in ways that seem unlikely to survive First Amendment scrutiny.

Of course, where other countries hold hearings as a prelude to passing legislation, in the United States we typically hold hearings instead of passing legislation. And despite some Republican support for the measure — even Cruz said this one sounded fine to him — there’s no evidence that it’s gathering any particular momentum.

As usual, though, Europe is much further ahead of us. The Digital Services Act, which regulators reached an agreement on in April, includes provisions that would require big platforms to share data with qualified researchers. The law is expected to go into effect by next year. And so even if Congress dithers after today, transparency is coming to platforms one way or another. Here’s hoping it can begin to answer some very important questions.


