Meta has published a new overview of how it’s working to improve your Reels recommendations, by using user response surveys to better gauge which elements are driving interest and engagement.
No doubt you’ve seen these yourself in the Reels feed: prompts shown in between videos that ask how you felt about the Reel you just watched. Meta says that it’s deployed this approach on a large scale, and based on the feedback provided, it’s gleaned more info to help refine and improve its Reels recommendations.
As explained by Meta:
“By weighting responses to correct for sampling and nonresponse bias, we built a comprehensive dataset that accurately reflects real user preferences – moving beyond implicit engagement signals to leverage direct, real-time user feedback.”
So rather than just using likes, shares, and watch time as indicators of interest, Meta’s expanding its inputs to include direct user feedback that can further improve its recommendations.
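To give a rough idea of what that survey weighting involves, here’s a minimal sketch of an inverse-probability scheme for correcting sampling and nonresponse bias. The cohorts, rates, and field names here are illustrative assumptions, not Meta’s actual implementation.

```python
from collections import defaultdict

# Hypothetical survey records: each notes the user's cohort, whether they
# responded, and (if so) whether they said they enjoyed the Reel.
responses = [
    {"cohort": "heavy_watcher", "responded": True,  "enjoyed": True},
    {"cohort": "heavy_watcher", "responded": True,  "enjoyed": False},
    {"cohort": "light_watcher", "responded": True,  "enjoyed": True},
    {"cohort": "light_watcher", "responded": False, "enjoyed": None},
    {"cohort": "light_watcher", "responded": False, "enjoyed": None},
]

# Assumed sampling rates per cohort; in practice these would come from
# delivery logs rather than being hard-coded.
sampling_rate = {"heavy_watcher": 0.10, "light_watcher": 0.02}

def weighted_enjoyment(records):
    """Estimate the share of users who enjoyed a Reel, weighting each response
    by the inverse of its cohort's sampling rate times observed response rate."""
    counts = defaultdict(lambda: [0, 0])  # cohort -> [responded, sampled]
    for r in records:
        counts[r["cohort"]][1] += 1
        if r["responded"]:
            counts[r["cohort"]][0] += 1

    num, den = 0.0, 0.0
    for r in records:
        if not r["responded"]:
            continue
        responded, sampled = counts[r["cohort"]]
        # Inverse-probability weight: under-sampled or low-response cohorts
        # count for more, so the estimate reflects the full user base.
        weight = 1.0 / (sampling_rate[r["cohort"]] * (responded / sampled))
        num += weight * (1.0 if r["enjoyed"] else 0.0)
        den += weight
    return num / den

print(f"Weighted enjoyment estimate: {weighted_enjoyment(responses):.2f}")
```

The point of the weighting is simply that a raw average of whoever happens to answer would over-represent the heaviest, most responsive users, while the corrected estimate better reflects real preferences across the audience.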
And apparently it’s working.
According to Meta, before it deployed these surveys, its recommendation systems were only achieving a 48.3% alignment with true user interests. But now, following the implementation of learnings based on these surveys, that’s increased to more than 70%.
“By integrating survey-based measurement with machine learning, we are creating a more engaging and personalized experience – delivering content on Facebook Reels that feels truly tailored to each user and encourages repeat visits. While survey-driven modeling has already improved our recommendations, there remain important opportunities for improvement, such as better serving users with sparse engagement histories, reducing bias in survey sampling and delivery, further personalizing recommendations for diverse user cohorts and improving the diversity of recommendations.”
This approach isn’t new; Pinterest, for example, has detailed how it uses similar surveys to gather feedback and improve its recommendation systems.
But the rate of improvement is impressive, and it’ll be interesting to see whether this does lead to a significant improvement in relevance for your Reels suggestions.
Though, really, Meta’s still trailing TikTok in this respect.
TikTok’s almighty “For You” feed algorithm remains the benchmark for compulsive engagement, keeping users scrolling through the app for hours and hours on end.
So what does TikTok’s algorithm have that Meta’s doesn’t?
Primarily, TikTok seems to have developed a better system for entity recognition within clips, which gives its system more data to go on when matching content to your preferences.
TikTok is also very secretive about how its algorithm works, and won’t reveal much about this particular element, though we do know that its system can identify very specific visual elements within clips.
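For a sense of what frame-level entity tagging could look like in principle, here’s a small sketch that samples frames from a clip and runs an off-the-shelf image classifier over them, keeping the most frequent labels as coarse tags. This uses torchvision’s ResNet-50 purely as a stand-in and is an assumption about the general technique, not TikTok’s actual pipeline.

```python
import torch
from torchvision.io import read_video
from torchvision.models import resnet50, ResNet50_Weights

def tag_clip(path: str, every_nth_frame: int = 30, top_k: int = 3) -> list[str]:
    """Sample frames from a video and return the most frequent class labels."""
    weights = ResNet50_Weights.DEFAULT
    model = resnet50(weights=weights).eval()
    preprocess = weights.transforms()
    categories = weights.meta["categories"]

    # Decode the clip into a (frames, channels, height, width) tensor.
    frames, _, _ = read_video(path, output_format="TCHW", pts_unit="sec")

    counts: dict[str, int] = {}
    with torch.no_grad():
        for frame in frames[::every_nth_frame]:
            logits = model(preprocess(frame).unsqueeze(0))
            label = categories[logits.argmax(dim=1).item()]
            counts[label] = counts.get(label, 0) + 1

    # The most common frame-level labels become coarse tags for the clip,
    # which a recommender could then match against a user's viewing history.
    return sorted(counts, key=counts.get, reverse=True)[:top_k]

print(tag_clip("example_reel.mp4"))  # hypothetical local file
```

A production system would obviously go far beyond generic image classes, into faces, objects, settings, and scenes, but the basic flow of turning pixels into matchable attributes is the same.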
Back in 2019, The Intercept came across a set of guiding principles for TikTok moderators, which included a range of very specific instructions for dealing with certain visual cues.
As per The Intercept:
“[TikTok] instructed moderators to suppress posts created by users deemed too ugly, poor, or disabled for the platform [as well as] videos showing rural poverty, slums, beer bellies, and crooked smiles. One document goes so far as to instruct moderators to scan uploads for cracked walls and ‘disreputable decorations’ in users’ own homes.”
These guidelines were intended to maximize the aspirational nature of the platform, which would then drive more growth. TikTok admitted that such parameters did, at one time, exist, but it also clarified that these specific qualifiers were never enacted on TikTok itself, having been copied from an earlier document intended only for Douyin, the Chinese version of the app.
Though their very existence suggests that TikTok can systematically detect these elements. I mean, you could assume that TikTok’s moderators were expected to manage this manually, and reject videos including these elements based on human detection. But the scale of both platforms (TikTok and Douyin each have hundreds of millions of users) would make that an impossible task, rendering these notes utterly useless, unless the system could detect such elements through computer vision.
That’s where TikTok really wins out: it can understand a lot more about what you’re looking at, and then factor that into your recommendations. So if you spend time watching a video of a blonde-haired man with blue eyes, you can bet that you’re going to see more content from similar-looking creators.
Expand that to any number of physical traits and background elements and you can see how TikTok is better able to align with your specific preferences.
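Here’s an illustrative sketch of how detected visual attributes could be matched against a user’s watch history. The tags and numbers are made up for the example; it just shows the basic matching idea.

```python
from collections import Counter

def update_profile(profile: Counter, clip_tags: list[str], watch_seconds: float) -> None:
    """Credit each detected attribute in proportion to how long the user watched."""
    for tag in clip_tags:
        profile[tag] += watch_seconds

def score_clip(profile: Counter, clip_tags: list[str]) -> float:
    """Score a candidate clip by the user's accumulated affinity for its attributes."""
    total = sum(profile.values()) or 1.0
    return sum(profile[tag] / total for tag in clip_tags)

# Hypothetical watch history: long dwell on one kind of clip, brief on another.
history_profile: Counter = Counter()
update_profile(history_profile, ["blond_hair", "blue_eyes", "gym"], watch_seconds=42.0)
update_profile(history_profile, ["rural_scene", "cracked_wall"], watch_seconds=3.0)

candidates = {
    "clip_a": ["blond_hair", "beach"],
    "clip_b": ["rural_scene", "cracked_wall"],
}
ranked = sorted(candidates, key=lambda c: score_clip(history_profile, candidates[c]), reverse=True)
print(ranked)  # clip_a ranks first, matching the attributes the user watched longest
```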
So while TikTok uses the more common matching signals too, in terms of likes, watch time, etc., it’s also working to keep users glued to their phones by aligning with their more primal leanings. And if the true depth of that process were ever made public, TikTok would likely come under intense scrutiny, because it’s using psychological biases and leanings to compel its users, based, potentially, on problematic and even harmful traits.
That’s where Meta’s losing out, because it can’t implement the same depth of understanding to improve its systems. Theoretically, it could lean on more psychographic measures based on user history on Facebook, and for older users who’ve uploaded more of their personal data to the app, that might be effective. But mostly, Meta is relying on more common algorithm signals, and now user surveys, to improve the Reels feed.
Are your recommendations looking better of late? This could be why, and it should also mean that your content is being shown to more engaged audiences.