Social media platforms use personalization algorithms to make content-curation decisions for each end user. These recommendation decisions are, in essence, speech: they convey a platform's predictions about which content is relevant to each user. Yet they also drive some of the internet's most serious harms. First, they accelerate the spread of mis- and disinformation by exploiting the very biases and insecurities that fuel user engagement with such content. Second, they deepen social media addiction and related mental health harms by leveraging users' affective needs to push engagement ever higher. Third, they erode end user privacy and autonomy, serving as both a product of and an incentive for data collection.

As with other harmful speech, the remedy is often counterspeech, which free speech jurisprudence treats as the most speech-protective means of combating false or harmful expression. To counter problematic recommendation decisions, then, social media platforms, policymakers, and other stakeholders should empower end users to deploy counterspeech against the harmful effects of platform personalization. One vehicle for this solution is end user personalization inputs: expressions by end users about a platform's recommendation decisions. But industry-standard personalization inputs fail to provide effective countermeasures against problematic recommendation decisions. On most, if not all, major social media platforms, the existing inputs confer only limited ex post control over the platform's recommendation decisions. For end user personalization to fulfill the promise of counterspeech, I offer several proposals along key regulatory modalities, including redesigning the architecture of personalization inputs to confer robust ex ante capabilities that filter by content type and characteristics.