AH: Can you explain how that worked? Before, you were sorting based on who your friends were and the pages you followed. Now how do you decide what someone sees? How do you decide quality in an environment like this?

TA: When we rank your connected feed, we are looking at a lot of signals. We are looking at a lot of different aspects of the content itself and your previous behavior to help us understand, "Is this something that you would like to see at the top of your feed or not?" We look at your interaction history with a friend: how often you have commented on their posts, how often you have sent them a message, how often you have liked things, and how often you have shared them.
We also look at qualities of the content itself, and obviously there are integrity and community standards things that we look at. It is a host of different things to help us understand, "Is this something that you would want to see right now towards the top of your feed?"

It is actually similar to recommendations in some way. We can take a look at many things. Have you interacted with this topic or this type of content before? Have your friends done that? You might not be connected to this creator, but have you liked their post before? Have you commented on their stuff? Have you participated in a group that might be similar to the post from a public group that we are recommending? We look at a lot of characteristics to understand if this is something that you would be interested in. We try to understand the content behind this.
One of the things our team has been working on that I am actually excited about — and you're going to see this more and more throughout the product — is that periodically we ask people on a recommendation, "Do you want to see more of this or do you want to see less of this?" That is our way of asking you, "Help us, tell us what you're into."

The reason that I am excited about that, especially going back to the MSI conversation, is that we always struggled with the fact that one of the meaningful social interactions that a lot of people have when they see something on Facebook is that they might not like it. They might not comment on it. But they will go and talk to a friend about it. I am regularly saying to my wife, "Oh, I saw this thing on Facebook today," and we talk about it. Facebook doesn't know that loop happened.

Now, when Facebook shows me something and it says, "Hey Tom, do you want to see more or see less of it?" I can say, "Hey, I want to see more of it." I'm not necessarily sharing this with Facebook, but this is the type of thing that helps me have an interesting conversation with my wife. I think incorporating a lot of those user-preference-type signals is an exciting area for me and a lot of our teams. I think it is going to help make sure that the recommendations we show you are relevant.

Do people like recommendations? Do they not? I think we need to make the quality of them much better. This is why you have heard about some of the changes we have made to AI. In my team, we have pulled together some of the best and brightest AI engineers across the company to focus on recommendations quality, to figure out how to leverage what we know about the content, what we know about the person, and what they are telling us they want to see more or less of. I think these recommendations are going to get better and better, to the point where they feel just as good as, if not better than, some of your connected content in Feed. That is our aspiration.
AH: You mentioned goaling and objective function. I know Mark said at the time, "We expect time spent to decrease if MSI works." What was the goal for MSI? Did it work? What is your goal for the discovery engine? What are you goaling the teams towards, in terms of a north star metric, for the discovery engine?

TA: For MSI, we ended up creating almost a composite metric. The heart of it was looking at whether we were increasing the amount of social activity on the site, such as friend posting, commenting, and things of that nature. Now, we still look at all of that. There are different teams that look at how many comments people are sharing with one another and how many messages people are sending back and forth. There are a lot of different goals at different levels in the organization that are meant to reflect the health of the social ecosystem.
What I look at in particular is four buckets of goals, I would say. I would love to get across that there is no one goal, so I will give you the shape of that.

The first bucket: Are people visiting Facebook? Are they choosing to visit Facebook on a monthly basis or on a daily basis? If we have bad recommendations, people are not going to visit Facebook as often. They will go to another competitor that is doing recommendations. Are people choosing to visit?

The second bucket: We are a business, so are we actually growing revenue? Are these things monetizing? Monetizing video is different from monetizing Feed. There is a whole lot of work we need to do to make sure that the business is running as we make these format integrations and innovations.

The third bucket: What is the quality and reliability of the experience? Does Facebook load quickly? I used to be responsible for the performance and reliability of the Facebook app. We learned that making sure that all these interactions load right, feel right, and work well is an important part of the holistic experience. I also look at trust, safety, and integrity. How are we doing on prevalence, mitigating the harms, and adhering to community standards? The trust piece is increasingly also about privacy. How well are we doing in terms of our privacy commitments and different things like that? There are goals that ladder up to all of those at different levels of the organization.

The fourth bucket: Looking at sentiment, how much do people prefer Facebook for this over any other service? It gives me a complete picture of how we are doing. I want to make sure that Facebook is growing, and that people feel like, "This is a valuable thing to me," even as we have new generations using social media. I want to make sure that Facebook contributes to Meta's overall business. I want to make sure that Facebook feels fast, easy to use, and enjoyable. I want to make sure that Facebook is safe and that we do a good job of protecting your data. All of that boils down to how I think about the goals.
Shirin Ghaffary: I grew up with Facebook. I remember the "pivot to video" era when I was first starting in journalism, when everyone wanted to be a video journalist — including me — because of Facebook. There is the MSI stuff we talked about, and now you are leaning toward discovery. You have been at Facebook since 2010, so can you help us define some of these different eras that you have seen for News Feed? Can you briefly give us an overview of how you would demarcate the different eras?

TA: I joined the company in 2010 as an engineer, and I worked on our growth team for the first two years. I think our milestone was like 500 million users that we hit in my first year. The era that I really remember was the transition from web to mobile. This was around the time of our IPO, and I remember tons of headlines saying, "Facebook isn't going to succeed on mobile," "Facebook doesn't have a business," "Facebook is a web company, it's dead." I say this because I am used to living through skepticism as Facebook has made these big leaps, yet I have always seen us get to the other side.
The shift from web to mobile was a big one. And in terms of the technology lift and the engineering lift to do that, it was actually huge. Figuring out how to design and have a good mobile product when you were used to designing for the web was big. Our core feed did not change much, but it was a gigantic shift we had to make as a company under a lot of pressure during the IPO. That was memorable to me.

Alongside that was figuring out how to make money on mobile, because we were so used to making money through the web right-hand column ads. In hindsight we all laugh about it, but it was this huge mental breakthrough to be like, "Oh, we are going to show ads in Feed." It was like, "We can't possibly do that." Watching the organization go through that was interesting.

I am proud of Facebook for being able to integrate different formats. I think you see TikTok and YouTube trying to do this, but Facebook was able to integrate video. It was able to integrate other entity types too, like Groups, where you could form a community. I think that over the next several years, you saw Feed grow and be able to integrate these new format and content types. It was exciting to see how much you could use Feed to keep track of your friends, your communities, and following pages. That was this proliferation of us understanding the power of what Feed could do.

There was probably a phase after that where you started to see things shift to Stories. This was similar to the shift to mobile; we were like, "Let's get Stories out there," but people didn't use it for a while. It was sitting at the top of Feed, and everyone was saying, "Facebook Stories is a ghost town. It's going to fail." Then a year and a half later, people were just like, "Oh, cool. This is the way I want to share now," and it took off.

Those were a couple of the big transformations that I saw. I think the next one is the story of format innovation and wiring in new ways for people to express themselves. The format stuff that is happening now is around video and short-form video. It was Groups getting wired into the experience a few years ago — which went well — and now it's creators. I see our next piece as an expansion.

The technology piece that I think is new and exciting, similar to web to mobile, is the power of AI. The innovation that is happening in the AI space, and what some of these new AI architectures and models are capable of doing, is just so amazing. Who is going to be able to leverage the power of AI recommendations and marry them with the power of traditional, connected social networks? You see TikTok trying to do that.
A lot of folks and the press say to me, "Oh Tom, are you chasing TikTok?" I say, "Hey look, from where I sit, I see TikTok chasing Facebook."
I started my career on the growth team, and I see TikTok asking me to find my friends. I was scrolling through TikTok, and there was a unit that said "people you may know." I managed the "people you may know" team at Facebook as my first management job. We created a system, and then I'm literally seeing it on TikTok. I'm like, "I know what's going on here. This is what I worked on when I was growing the Facebook social network."

I think all of these companies are starting to try to figure out what is going to be the right blend of the AI algorithmic recommendations and the social interactions and recommendations. This is why I feel like Facebook is actually well-positioned to succeed. We have integrated formats in the past, with new entity types, Groups, and creators' Pages. We are pretty good at AI, and we are pretty good at social. I think how well we bring all these things together over the next year or two is going to determine how successful we are and how much people want to continue using Facebook. I'm pretty optimistic right now.
SG: That's a great point. I have noticed that with TikTok too. Does TikTok need to become more of a Facebook-like app and integrate your real-life friends in order to be successful?

TA: It seems like it's trying.

SG: It is definitely trying, but I will couch that for now. There are important questions I want to get to around the societal impact of News Feed — and these are all probably topics and discussions you have heard before. Just to jump back to meaningful social interactions, what is your response to the criticism, especially from Frances Haugen, that this MSI metric gave preference to extreme and polarizing content, and that the integrity team seemed to have found some evidence of this? Is there context missing here? How would you respond to that criticism?

TA: I think with any system of incentives you have folks that find a way to abuse it. It was funny, because one of the integrity problems that we were looking at before MSI was things like watchbait or clickbait. When you optimize a system for time spent, you get a different type of problem. You get people trying to trick you into watching or clicking on something that is not that valuable.
MSI came from the spirit of people wanting to feel like they can interact with one another on Facebook. Comments were a very good proxy for that. But we ended up seeing that some people left gnarly comments and abused that system. When we talk about integrity, we actually know that it is an adversarial problem, which means it is never solved. It just unfortunately changes.

When we introduced MSI, even with a lot of good research about what people wanted, you saw some of the adversarial behavior come in and people gaming the system. The reason we invest so much in integrity, and we have shared how much we have invested in it from a human headcount and a budget perspective, is because we know that it is this adversarial problem. A lot of the research that ended up being leaked out there was created by our integrity team. Some of it was created by people on my team who said, "Hey, look, this is how this is being abused. This is now what we need to do to fix this." That is why MSI and our implementation of it has changed over and over again as we understand how people's feelings about what connects them socially are changing and how these bad actors are abusing our system.

I am proud of the work and research that the integrity teams do. I spend a bunch of time with our integrity teams. They are deeply embedded in our product development process now, and I think that we have gotten quite good at this. As we went through that period, we saw a lot of different ways that people were distorting or abusing the good use cases for MSI for things that were not so great.
AH: Is it possible to foresee that abuse ahead of time? I mean, you are dealing with so many different inputs, outputs, and metrics, and you are looking at such a high level at a thing that has 3 billion users on it. Can you foresee this? I think the critics would say you either didn't want to, or Haugen would say you put profit above safety and that it mattered more that it was keeping people on the site. I think that is a very common criticism that gets leveled against the Feed. Can you respond to that?

TA: I have a little bit of trouble with, "Hey, could you not foresee this?" We were investing so much and we had these teams that were working and researching the problem. That was why a lot of that research was being produced as this evolved.

In terms of foreseeing it, I actually looked at some of the top integrity issues before MSI and they were just very different. As we change things, we know that some things we are going to be able to predict, and some things we are not going to be able to predict. Again, that is why we have this very active integrity team looking at how this unfolds on a very regular basis, and I think we have hired some of the best in the business to be able to analyze and think through all of this.

We went through a period when we were building these teams where we were not always thinking about the very adversarial nature of things. Actually building out the integrity muscle created a set of people and a culture that were able to say, "This is the way a bad actor is going to go in and abuse this system. This is the way that this thing is going to get compromised." When we were going through that period, we were still building out integrity. We were spending so much to build up those teams.

One of the things I talked about is the history of different phases of Facebook. I remember the history of expansion of different teams, because when I was leading engineering, I had to figure out hiring and stuff. In that period, we were expanding those integrity teams so rapidly because we knew, "Hey, we've got to build this muscle and invest here."
SG: The second biggest criticism we have seen about Feed's history is around the problem of low-quality content. In a recent quarter, for example, there was a suspected scam that got taken down after it got millions of views and it was at the top of the report. How do you get the quality of content higher on Facebook, especially when recommendations from people who may not be your friends are going to play an even greater role in what people see?

TA: There are a couple of things that we are looking at here. One is what I mentioned before, which is incorporating more signals that reflect people telling us what they want or do not want. See more, or see less. We have been looking at this issue of watchbait recently, which is where somebody is trying to get you to watch a video to its completion, because if you produce a video of a certain length you can monetize it. There are all these incentives to get you to watch the video. Sometimes the video is just terrible. You watch until the end, you're like, "This video sucked." It's just a bad experience.
We are looking at it like, "Okay, what are the types of things that help us understand that this was a bad video? Were there angry reactions on this video? When we ask people, do they want to see more or see less of it?" Whatever signal you got from the fact that people watched a lot of this video, you need to discount, because there are these other, more qualitative signals showing that people did not actually like the experience of watching it. We are doing a lot of work around things of that nature.
We talked about this with video in the past, but I speak about these things because Reels and video are top of mind for the recommendation systems right now. We are looking at different aspects of originality. Was this posted by the person that created it? There might be types of content with limited originality. Somebody reacts to a reel, so they are showing somebody else's content, but they are adding commentary or something interesting on top of it. Then there are other things that are just low originality, like somebody took somebody else's video and posted it to their account.

How do we reward more of the folks that are creating original content, or maybe the limited-originality content? They are adding something new to the conversation, versus folks that are just recycling or reposting. Doing this at scale is something that we are working through with Reels and some of our other content types. That is a piece that I am pretty excited about, and the team talks about it a lot. We do want Facebook to be attractive to creators, and we want creators who put effort into creating original content to feel like they are getting rewarded through distribution and the monetization work that we are doing.
SG: That makes sense. In the long term, the quality of the content has to go up if you want people to stay there. In Feed's past, there was a time when it felt like a lot of what you would see was political content and people debating the news of the day. Then we have seen Facebook shift away from that, especially with the direction now, and there was a point during the last US presidential election when Facebook shut off political group recommendations. Now that you are recommending more to people, how do you allow for recommendations that don't exacerbate the worst in people? It seems like a tough problem. Is the answer to shut off political recommendations across the board? How do you balance that mix of recommending things that could be politically controversial or politically polarizing?

TA: That is a great question. Just to clarify, we still are not recommending political content in Groups. That was not a temporary thing; that was a permanent thing that we decided to do. In terms of recommendations overall, the first thing that we look for is the types of things people want.
I will tell you, I am not getting a lot of research that says young adults want more political content on Facebook. I am not seeing a ton of research that says anybody wants a lot more political content on Facebook. That is why we have been reexamining where we are showing political content, and you are not seeing it show up in a lot of our recommendations channels. It's because that is not what people want to see.
I actually think we learned this in the pandemic, too. People are fatigued by a lot of the political discussions and things like that. Everything was just so heavy. One of the reasons why short-form video took off is because it's entertaining, fun, and uplifting. It is a different vibe than what people were seeing with the doomscrolling. The content we want to reward is the content people want to see on Facebook right now.

In terms of how we prevent potentially bad content from coming into recommendations, we have two sets of guidelines. One is our community standards: "This content is not allowed on Facebook." If something violates our community standards, it does not matter if you are connected to it or not, we don't put it into your feed or recommendations.

We also have our recommendations guidelines. They are a distinct set of policies and guidelines that govern what we show in recommendations. That is an even higher bar than our community standards, because this is content that Facebook and Meta is recommending. We are going to be keeping a very high bar in terms of those guidelines. I actually hope that all of the other companies that are doing recommendations publish their guidelines or the type of transparency reports that we do. As more and more of these social services get into recommendations, it is going to be important to have this collective understanding of how these companies are making these decisions. How are they enforcing these things?

We have this infrastructure with our policies, with our enforcement, and with the reports that we put out to continue sharing how we are doing. You even mentioned things like the WVCR, right? We are going to continue to publish that. We just put that out there. Some of the things on there, I'm like, "Oh gosh, okay, that is something we need to figure out how to do better." I think that it is important that we are always putting out the work on how we are approaching this.
SG: Definitely. I think those widely viewed content reports, as well as tools like CrowdTangle in the past, have been helpful for people to better understand Facebook. I have to say that one of the biggest concerns I hear from experts — researchers studying social media or other journalists — is that it's hard to even get a grasp on what is going on with Facebook. What are people seeing on Facebook? Everyone's experience is different, because there has been debate even inside Facebook about whether CrowdTangle, for example, is presenting a holistic view of what is being seen. What do you say to those who think Facebook's feeds are a black box? How are we supposed to make sense of it all? How transparent do you want to be about this going forward? How transparent can Facebook really be on this?

TA: There are a couple of things here. I think we need to evaluate Facebook, as well as all these other companies, on how well they do at continuous enforcement of their community standards.
So what does that mean? First, you need to publish your community standards and say what's allowed and what's not, and then have a process to modify it. We have obviously been trying things like the oversight board to have an external body give us feedback on how well our community standards are working. I think you need to be very clear on what the rules are here.
Then you need to have accountability for how you are enforcing the rules. We have our transparency report, and we talk about how much content we are taking down. It is being externally audited in terms of its methodology and in terms of its results. We are trying to have a lot of other eyes take a look at this.

Let's say Facebook didn't exist tomorrow. Well, you are still going to have a video go live on Twitch, that is then pulled down and put onto YouTube, that is then turned into short-form video clips on TikTok, that are then reposted on Twitter. That whole ecosystem is shaped by a bunch of different companies. Facebook is one, and we are an important one. We recognize that we have a big responsibility, but if you are just looking at Facebook and specifically how Facebook does this particular thing in their algorithm, then you are missing the bigger picture. How is each company approaching this problem? How is each company enforcing this problem? Who is auditing and providing some sense of whether or not this is even a legitimate way of attacking the problem? That is the conversation that I think we need to be having. Facebook, and Meta more broadly, has been doing a good job leading there.
SG: I guess it is a question of who should be doing that. I know Facebook and Meta have said that they welcome regulation in some of these areas. Do you think that needs to go to an outside body?

TA: I think it could. I think the oversight board is an interesting example of us trying something like this. We have a lot of our policy folks working with groups from other tech companies on evolving things like our approach to transparency. There is a bunch of work just in terms of the industry getting together and figuring out how we collectively do this.
I think there is a legislative angle as well. Facebook has historically shared that we are supportive of regulation in this space, but I think it needs to be well-thought-out regulation. We have been very open to having that dialogue. I don't think it's just one thing that you can do, but a collective set of things. I think what you have seen Meta do is start to model some of the ingredients of the broader solution that we think could be viable.
AH: Just covering you guys for a while now, it feels like a lot of the criticism that you get comes from the opacity around ranking and Feed. People feel like you guys have your thumb on the scale in a way that is harmful to a particular group. I remember when I came to MPK in 2016, you did a whiteboard draw-out of how ranking works. People don't get that it is personalized for each person. I guess there is still this tension of the company having responsibility for what happens versus what humans are responsible for. When is it the human's fault, and when is it the machine or Facebook's fault?

When you are going to this future of recommendations, you have to expect that there is going to be even more scrutiny. Potentially, it also looks like it is going in the direction of legal liability for what Facebook recommends, at least in certain markets. How do you reckon with that push that you guys are doing? Do you feel like you have to do the product because it's what the users want? The responsibility and the scrutiny that it is going to bring you has already been bubbling, and it has only just been based on the connected graph so far. Now you are going to be really going into recommendations. Does that scare you at all? I mean, that is a big tension you guys are going to have to reckon with.
TA: It doesn't scare me, but I recognize the responsibility that we have, and that is weighty. The reason I am okay with recommendations is because of how much we have built up in this space. We have talked about spending $40 billion over several years and just how many people we have working on this. If there is any company that has risen to the challenge of trying to do this at scale, I think it's Meta. We are well-positioned to do this for recommendations.
As I look at the product features for the discovery engine, our integrity teams are in there and our road maps are aligned. It is something that we're looking at very much upfront. But I also think it is important to have that broader conversation. Like I said, we are not the only people in the recommendation space at all.
If Facebook or Meta disappeared tomorrow, AI is not going away, recommendations aren't going away, and large-scale networks aren't going away. My main thing is, "How do we have a dialogue where Facebook and Meta have a seat at the table and are in a leadership position?" But we are also looking more broadly at what the industry needs to do here.
I think getting the legislation and the regulation right is going to be important. If it is overly punitive, like, "Hey, if any bad recommendation comes through, we are going to fine you some exorbitant amount of money," well, you know what? In any process, people make mistakes. People make mistakes when they manufacture a car on an assembly line. Again, that is why you have these continuous quality and enforcement things that we have, like integrity and the transparency reports, because we are constantly trying to get better and hold ourselves accountable to that.

You have to be careful about how it is crafted, because of the downside of a very strict regulation that says, "make no mistakes ever." Maybe then companies aren't going to provide this, and maybe that is actually a big disservice to the world. Maybe companies in other countries or other jurisdictions are going to grow and this is going to be available to other folks. These are all of the types of conversations we are having. We have to look at the consequences, intended and unintended, of recommendations. We also have to look at the consequences, intended and unintended, of regulation so we can strike the right balance here.
AH: Do you feel a sense of responsibility globally? I think the criticism that Haugen and others have leveled is that Facebook traditionally ignored safety in countries outside North America and Europe. Do you feel like you have a more global grasp of what the implications of this are now?

TA: Facebook and Meta serve a global community. We look at our programs to make sure that, no matter who is using the product and from where, we apply those safety measures and those integrity measures. In different regions, it requires different expertise and different approaches. We released a human rights report, I think in the last week or so. I don't know any other tech company that does this. What you are seeing is us investing here and making progress. This is us trying to tell the world what we are doing about it and that we do think about this globally.
I think a lot of the advancements that we are making in AI are going to help us there. We talked about some work that our FAIR team had done. It is one of the most sophisticated AI translation systems that has ever been created. Well, guess what? That AI is being used to help us with integrity issues in different languages across the world. We can take an AI that might have been trained on one set of languages and actually apply it to another set of languages because of all the investments that we have made in AI.
I think we are doing quite a lot through the technology and policies that we are creating and the external relationships that we are building to be able to tackle this on a global scale. I do hope that is coming through, because that is definitely where we are thinking internally.
AH: I know discovery engine was a big announcement. Is there anything we should expect to see going forward?

TA: This is an exciting time for Facebook. We definitely have one foot in our legacy and history, which is what you see in the Feeds launch that we did today. But we also have a big foot in the future of what the next generation is looking for from social media. It is a fun time managing that transition. Like we talked about earlier, Facebook has done this multiple times; we have been through this big evolution. This is another big one, but I think on the other side of it, we are going to say, "Of course we went in this direction. This is what people wanted out of social media." I am pretty optimistic about where we are headed.
AH: It's a brave new world. I'm glad we got to catch up on this. Thank you.

TA: I really enjoyed the conversation. Thanks for having me.
SG: Thank you so much.

Source: https://www.theverge.com/23328278/facebook-tom-alison-interview-instagram-meta-zuckerberg-news-feed-discovery-engine-tiktok