meta 

I'm growing very tired of dansup's nonsense already, and, by extension, of how many people are falling for his whole 'strongman' theater.

We do not need or want more dudes trying to claim the spotlight for fedi, thanks. The loud people are rarely the ones doing the important work, despite their own claims. And code usually isn't the important work either.

meta (2) 

To elaborate on that last point a bit: anyone can write some code. I am saying this as someone who does this professionally at a significant hourly rate. Sure, quality is going to vary between programmers, but the 'writing code' itself really isn't the difficult part.

You know what the actual hard work is? Dealing with the human factors. Understanding people's needs. Resolving conflicts. Fostering healthy and independent, resilient communities. Dealing with bigotry and other toxic behaviour. Doing this for years on end. Weaving all of these factors into the *design* of your code.

Notably, all those things that software developers are infamous for failing to do, and that are frequently feminine-coded or branded as "unskilled".

I really cannot emphasize strongly enough how much you *shouldn't* take someone as a role model just because they write a lot of code, or even nominally popular code.

meta, sensitive topics 

@joepie91@social.pixie.town I think it's incredibly important to keep in mind, too, that according to dansup he is the sole moderator of pixelfed.social, an instance reported to have an active user count of 256K. As someone who does fedi moderation at a much smaller scale, I have multiple administrators: not just because moderation cases need to involve multiple people, but because what if I'm unavailable to access the infrastructure? So many things can go wrong when one person has exclusive access, as dansup proudly claims to.

I'm especially concerned because it's an image-based platform. You need the extra attention to make sure gore (and a whole slurry of other content types, including CSAM) is not being distributed within the platform. I do not think dansup takes any of this seriously, and it makes me concerned. The thing is, I couldn't care less if he fucks up and goes to jail for a long time. I do care about my users being exposed to CSAM, gore, and all sorts of other unsavoury content because of his lack of content moderation and of a trust & safety team.

meta, sensitive topics 

@puppygirlhornypost2 @joepie91@social.pixie.town so every hour he spends developing his incredible code is an hour pixelfed goes unmoderated, noted

re: meta, sensitive topics 

@NotThatDeep @joepie91 @puppygirlhornypost2

Just this weekend, a British parenting forum (run on a commercial basis and funded by advertising) was overwhelmed with CSAM by trolls, in spite of apparently having multiple moderators working across time zones. The trolls deliberately picked a time when fewer UK-based folk would be awake (particularly for a forum aimed at parents, who generally have to keep "normie" hours to be able to look after their kids).

But the admins admit that even their normal mod team in London would not have been able to immediately deal with the onslaught of illegal content.

They are now turning to AI to attempt to moderate their content, and are going to have to hand over all sorts of user data to the Metropolitan Police at Scotland Yard.

bbc.co.uk/news/articles/c93qw3


re: meta, sensitive topics 

@vfrmedia @NotThatDeep @puppygirlhornypost2 FWIW I would tend to distrust anything Mumsnet has to say about moderation, given that that place is rife with transphobia and there seems to be no intention to do anything about it


re: meta, sensitive topics 

@joepie91@social.pixie.town @vfrmedia@cyberpunk.lol @NotThatDeep@transfem.social I'd also advise against any sort of automated CSAM detection using "AI". Even plain hash-based detection (fuzzy hashing, the technology used in PhotoDNA and similar projects) is already kind of sketchy. (I believe it's better than nothing; sure, it can only catch existing content, but that's still better than seeing it without warning.) Have you ever tried to upload an image to Discord? The explicit image detection is so bad that you can get the most random assortment of things flagged for no reason.
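(For readers unfamiliar with the fuzzy hashing mentioned above, here is a minimal toy sketch of the underlying idea: a perceptual "difference hash" where near-duplicate images yield nearby hashes, compared by Hamming distance rather than exact equality. The pixel grids and thresholds below are made up for illustration; real systems like PhotoDNA use resizing, more robust transforms, and curated databases of known-bad hashes.)

```python
# Toy perceptual hashing ("dHash") sketch: for each row of a grayscale
# pixel grid, record whether each pixel is brighter than its right
# neighbour. Similar images produce similar bit strings.

def dhash(pixels):
    """Difference hash over a 2D grid of grayscale values; returns a bit string."""
    bits = []
    for row in pixels:
        for left, right in zip(row, row[1:]):
            bits.append('1' if left > right else '0')
    return ''.join(bits)

def hamming(a, b):
    """Number of differing bits between two equal-length hashes."""
    return sum(x != y for x, y in zip(a, b))

# A tiny made-up 4x5 "image", a slightly brightened copy, and an
# unrelated image.
original = [
    [10, 20, 30, 40, 50],
    [50, 40, 30, 20, 10],
    [10, 30, 20, 40, 30],
    [60, 50, 40, 30, 20],
]
brightened = [[min(p + 5, 255) for p in row] for row in original]
unrelated = [
    [90, 10, 80, 20, 70],
    [10, 90, 20, 80, 30],
    [90, 10, 80, 20, 70],
    [10, 90, 20, 80, 30],
]

h1, h2, h3 = dhash(original), dhash(brightened), dhash(unrelated)
print(hamming(h1, h2))  # small distance: near-duplicate survives re-encoding
print(hamming(h1, h3))  # large distance: different image
```

The point of the thread stands either way: this catches only *known*, already-hashed content, and fuzzy matching inevitably trades false negatives against false positives.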

re: meta, sensitive topics 

@joepie91@social.pixie.town @vfrmedia@cyberpunk.lol @NotThatDeep@transfem.social plus as a victim of CSA (I was groomed online and did end up sharing pictures I regret as a minor) the thought of those pictures being used to train some AI tooling for detection makes me want to throw up.

re: meta, sensitive topics 

@puppygirlhornypost2 @joepie91 @NotThatDeep I doubt the AI will be effective at all (or it will just trigger on anything and deter harmless image sharing) - they will just lick whatever boots they need to in order to keep the advertisers happy and Metpol off their back. Which will unfortunately set a precedent for how any social networks/forums are treated in the UK.
