I have quite often warned someone about a platform that is clearly targeting white supremacists, Nazis, and other kinds of fascists as its userbase, while superficially claiming to be just "neutral".
These warnings rarely lead anyone to leave those platforms behind. And it usually takes a few years before it becomes publicly documented and accepted that yes, it really is a fascist platform.
On the one hand, the conclusion you could draw from this is that they should have listened to the warning and taken it seriously. "I told you so" and all that.
On the other hand, you might argue that they probably understood perfectly well what it was, but were actually okay with it as long as it wasn't too obvious, because a little bit of white supremacy helps them feel better as a white guy. In that reading, the real problem isn't that they ignored the warning, but that they are racist. And you'd probably be right.
But then the question becomes: how can we prevent this dynamic? Because however wrong and racist someone may be, that conclusion doesn't change the outcome: they have most likely been radicalized further right in the process. How can we avoid that happening?
How do we stop "people who have internalized racist views and never introspected on them" from walking into the just-about-tolerable bar and falling into a fascist pit? Because yes, it's their own responsibility if they do, but the consequences are borne by everyone else, and that's still a problem.