@thufie 🐈‍⬛🐈‍⬛🐈‍⬛🐈🐈🐈 she must not leave!
*June Update*
(Ty to everyone who donated and shared last month. Unhealthy air quality lately from wildfires. Shavuot שבועות was from the 1st to the 3rd this month.)
Pain leaves me in bed
Struggling financially
No vehicle
HVAC and insulation leaks bad
Busted electrical outlets
Busted plumbing
Leaking roof
Water damaged floors
Mom's recovering
Struggling to get disability
Need food delivered
Still need to pay for:
active goal 🌡️
goal met ✅
to be determined ⬜
• transit - $0/200 month 🌡️
• food - $0/100 month 🌡️
• medications - $12/100 month 🌡️
• medical masks - $10/10 month ✅
• home supplies - $0/50 month 🌡️
• braces - $0/7000 🌡️
• transition - $0/50000 ⬜
• phone - $0/350 ⬜
• computer - $0/1000 ⬜
• etc
Anything helps. Please boost / share 🔁
https://venmo.com/u/Phoenix-Elektra
https://www.paypal.me/anonymous356
https://cash.app/$PhoenixElektra
#trans #enby #disabled #wheelchairuser #neurodivergent #mutualaid #MutualAidRequest #aid #donate #crowdfund #ocart #chronicpain #chronicillness #chronicfatigue #bunny
@vultureculture that's a yeehaw
re: ai slop text
@astra_underscore yowlëg
bidirectional text formatting library and more
Skribidi seems like an easy way to handle text correctly in most circumstances without pulling in too many dependencies.
https://github.com/memononen/Skribidi
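To show what "handling text correctly" even means here: the core step is the Unicode Bidirectional Algorithm's logical-to-visual reordering. A minimal sketch using the python-bidi package just for illustration (this is NOT Skribidi's own API, which is C and which I haven't dug into yet):

```python
# pip install python-bidi  -- illustration only, not Skribidi's API
from bidi.algorithm import get_display

# Logical order: the Hebrew word is stored in reading order in memory.
logical = "shalom means שלום"

# get_display runs the Unicode Bidirectional Algorithm and returns the
# characters in visual order, with the right-to-left run reversed.
print(get_display(logical))
```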
So-called AI, (Orthodox) Jewish halacha q, religion, boost ok/welcome
Recently I heard someone here float the idea that the current "AI" LLM hype may halachically amount to idolatry.
But is there already rabbinical psak/advice or discussion of this out there?
Asking for a friend (myself) who might be forced into such stuff by their employer.
extracted wikipedia
English is like 247ish GB, but you need about twice that to actually extract any of it at all. Nothing can handle file listings this big.
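Well, almost nothing. What did eventually get me somewhere was streaming the tar sequentially instead of letting a GUI build the whole member index in memory first. A rough Python sketch (the filename is a placeholder, and it assumes the tar is already out of the 7z):

```python
import tarfile

# mode="r|" reads the archive as a forward-only stream, so members are
# yielded one at a time instead of indexing the whole huge tar in RAM.
with tarfile.open("enwiki.tar", mode="r|") as tar:  # placeholder filename
    for member in tar:
        print(member.name, member.size)
```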
extracting wikipedia
So every graphical tool uses tar wrong. Doing it manually works, but then I have to have enough disk space to extract the tar out of the 7zip archive first: around 200GB before the final extraction, just to get started.
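If anyone else hits this: you can dodge the intermediate ~200GB copy by piping 7z's output straight into a streaming tar reader, so the inner tar never lands on disk. A sketch, assuming the 7z CLI is installed and the archive holds a single tar (names are placeholders):

```python
import subprocess
import tarfile

# "7z e -so" extracts to stdout instead of writing the inner tar to disk.
proc = subprocess.Popen(
    ["7z", "e", "-so", "wikipedia.7z"],  # placeholder archive name
    stdout=subprocess.PIPE,
)

# mode="r|" consumes the tar as a forward-only stream from the pipe,
# so only the final extracted files ever touch the disk.
with tarfile.open(fileobj=proc.stdout, mode="r|") as tar:
    tar.extractall(path="extracted", filter="data")  # filter= needs 3.12+

proc.wait()
```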
extracting wikipedia
File Roller is next up to bat, and I am able to list the /en/ directory within root, but I've been stuck loading the file listing for articles/ with a severe CPU bottleneck and 5GB of disk reads queued up in memory waiting to see Dr. CPU. No more I/O on the disk containing Wikipedia for the past... 10 minutes? At least it isn't crashing? A slimegirl can hope.
extracting wikipedia
Process engrampa dumped core, coredump file size on disk 1G.
Engrampa seems to trigger a segfault when giving me file listings of the contents of the root folder, not even the "articles" directory.
Thufie BLM
languages: en:✔️ he:~ es:~ ru:~
Reluctant moderator on social.pixie.town
Most online member of the system
#yesbot #nobot #noarchive I'm in my 20s, working as a Computer Science researcher (not in "AI" 🙄). Also a YouTuber now, apparently, making YouTube Poops.
There is no judgment and no judge. Peace in the world.
I'm just a disoriented white girl trying her best.
Relationship Anarchist
programming languages?
C++, C, MIPS, x86, Java, Python, and a few others :P