
A Team Project about fighting the spread of misinformation on Twitter.
Duration: April – June 2021
Role: Design Lead
Team Members:
Arshay Rao, Xinyun Yu, Maya Xu, Brandon Wettenstein, Yangqi Zhang

User Scenario
You’re scrolling through your Twitter feed on a nice Sunday afternoon. You’ve seen some funny content, some posts from your mutuals, and some shocking news. You see a headline from a news source stating that a big-name celebrity has ended up in the hospital. Curious, you research the claim, only to find that no other news source has reported on the story. You return to your feed with a bitter taste in your mouth.
Negative things that you see on social media stick with you. That’s why our project is designed to give you the ability to do something about misinformation that you witness instead of having to sit by as incorrect information spreads throughout the web.
Problem Statement
Many social media platforms perpetuate environments where blatant misinformation is allowed to proliferate without giving users enough tools to control their feeds.
Our Objective
Our team was tasked with designing user-friendly methods of allowing users to either avoid or fight back against the spread of misinformation on social media. We spent the early stages of our design process narrowing our focus to specific stakeholder groups that we could interview, as social media is a deeply personal environment and different users have different preferences and expectations for their experience.
User Research
We each interviewed a potential stakeholder from a different age range in order to get an idea of the kinds of people who are most affected or annoyed by misinformation. I interviewed a working adult in her thirties and asked about her experiences with social media. When evaluating social media based on the prevalence of misinformation as well as the overall trustworthiness of an individual platform, the interviewee explained how user customization and user curation were crucial components.
User customization allows users to choose which content topics are most salient to them and to filter out misinformation. Following specific users on sites such as Twitter achieves a similar purpose; however, individual users are not infallible when it comes to posting misinformation.
Stakeholders
Through our interviews, we decided on working adults and elderly adults as our target groups. Our interviews suggested that working adults use social media for more purposes than any other age group (interpersonal communication, news, advertising, networking, etc.), while elderly adults were more prone to being misled or negatively affected by misinformation than any other stakeholder group. Using our newly formed stakeholder groups, we brainstormed various user personas in order to envision other users who would benefit from our design solution.
User Personas

The above persona is one of many that we created for this project. I decided to create a persona of an elderly adult based on what I knew of my family’s experiences with social media. The process of creating this user persona allowed me to better empathize with the stakeholder group and think critically about which design solutions would be most effective for them. Most elderly stakeholders that I know personally are inexperienced when it comes to social media and would greatly benefit from ways to filter out misinformation. With the added perspectives from the user persona process in mind, we decided to proceed to the next step.
Our Design Process

Working remotely during the Covid-19 pandemic made it difficult for us to brainstorm in person during the design process. Nevertheless, we were able to perform a virtual wall walk and determine what user tendencies look like when it comes to social media usage.

We also constructed multiple design models with the intent of closely aligning our goals and expectations with those of our stakeholders. We consulted more users to learn about shared habits and frustrations surrounding social media, then decided we needed to start coming up with concrete solutions. We noted how working adults were more likely to do their own independent research when presented with misinformation, while elderly adults were more inherently mistrustful of what they saw on social media. We believed that we would be better off giving users additional ways to block misinformation, as fact-checking was not something we felt was a high priority based on our stakeholder interactions. Instead of generalizing our solution to every individual platform, we decided to focus on Twitter.
Converging on a Design Solution
The solution we found most promising is a browser extension that blocks specific keywords, topics, and accounts in order to reduce misinformation. What differentiates this solution from features already on Twitter is that it lets users set general content preferences that are then applied automatically across the platform. The sketches below showcase the types of settings that would be available to users.
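As a rough illustration of the core blocking idea (not part of our actual prototype), an extension's content script could hide any tweet whose text matches one of the user's blocked keywords. The function name and structure below are purely hypothetical:

```javascript
// Hypothetical sketch: decide whether a tweet should be hidden based on a
// user's blocked-keyword list. Matching is case-insensitive substring search.
function shouldHideTweet(tweetText, blockedKeywords) {
  const text = tweetText.toLowerCase();
  return blockedKeywords.some((kw) => text.includes(kw.toLowerCase()));
}

// Example usage with an illustrative blocklist.
const blocked = ["miracle cure", "hoax"];
console.log(shouldHideTweet("New MIRACLE CURE discovered!", blocked)); // true
console.log(shouldHideTweet("Celebrity spotted at a cafe", blocked)); // false
```

In a real extension, a check like this would run over each tweet element in the feed and hide matches, with the blocklist stored in the user's general preferences.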
Sketches/Paper Prototypes

This first sketch showcases what the interface could look like when applying general preferences in a side bar.

This second sketch shows an alternate model where the blocker is applied to each specific account as determined by the user.
Figma Prototypes

The above image shows a proof of concept for what the topic blocker would look like as well as features that we have envisioned. We are currently in the final phases of working with stakeholders in order to finalize our design.
Future Goals
As we progress through the project, our main goals moving forward are to produce high-fidelity prototypes and hopefully implement a working build that can serve as a proof of concept.
Potential Metrics for Success
If this project were to be properly implemented on Twitter, it would be interesting to see what types of topics are blocked most commonly as well as which topics are the biggest culprits when it comes to spreading misinformation.
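As a hypothetical sketch of how such a metric could be computed, block events could be logged as simple topic strings and then tallied and ranked. The event format and function name here are illustrative assumptions, not part of our prototype:

```javascript
// Hypothetical sketch: count how often each topic is blocked and rank
// topics from most- to least-blocked.
function tallyBlockedTopics(events) {
  const counts = {};
  for (const topic of events) {
    counts[topic] = (counts[topic] || 0) + 1;
  }
  // Sort [topic, count] pairs by count, descending.
  return Object.entries(counts).sort((a, b) => b[1] - a[1]);
}

// Example usage with made-up block events.
console.log(tallyBlockedTopics(["politics", "health", "politics"]));
// [["politics", 2], ["health", 1]]
```

A ranking like this could surface which topics attract the most blocking, which in turn hints at where misinformation is concentrated.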