@creitve I realize maybe this is not the most helpful response considering you were asking for a guide.
Web server software typically enforces modern TLS and prevents you from running something that is 15-years-old-compatible, because those old TLS versions have security issues that could be showstoppers for, IDK, money $$$ transactions or privacy concerns or something.
But for a personal site, no one cares; TLS is mostly window dressing, especially if you are trying to support 15-year-old devices anyways.
So what web server software you use will be a very important decision. You are looking for web server software that will allow you to specify the allowed TLS versions, including versions that have long since been deprecated for security flaws.
This goes for both cloud software offerings and self-hosted web servers. Look into the documentation for the web server software and see if it allows you to specify the allowed TLS versions.
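For example, if you happened to use Node as the web server, a rough sketch of what that looks like (the cert paths are placeholders, and depending on the OpenSSL build you may also need to loosen the cipher security level):

```typescript
// Sketch: a Node HTTPS server that explicitly allows deprecated TLS versions.
// The cert/key paths are placeholders. Node's tls options have accepted
// minVersion/maxVersion since Node 11.4.
import * as fs from "node:fs";
import * as https from "node:https";

const server = https.createServer(
  {
    key: fs.readFileSync("server-key.pem"),   // placeholder path
    cert: fs.readFileSync("server-cert.pem"), // placeholder path
    minVersion: "TLSv1",   // let ancient clients negotiate TLS 1.0
    maxVersion: "TLSv1.3", // modern clients still get the best they support
    // Depending on the OpenSSL build, TLS 1.0/1.1 may additionally require
    // loosening the cipher security level, e.g. ciphers: "DEFAULT@SECLEVEL=0".
  },
  (req, res) => {
    res.writeHead(200, { "Content-Type": "text/html" });
    res.end("<p>hello from the past</p>");
  }
);

server.listen(443);
```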
Besides that, you will probably target HTML 4, not HTML 5. There are plenty of resources for this, but like I said before, the most important thing would be to have such an "ancient" device, OS, and browser to test with. Cheers! 💚 💚
@creitve do you have a 15-year-old device set up to test with?
You will need:
* Old TLS
* Old HTML
And that's it I think? Mostly you just need a 2009 device and 2009 OS to test with.
I would worry about users "missing" each other and never being able to get a copy of whatever data they were supposed to receive from each other.
I would expect users would read silence as "this app doesn't work" or "no one likes me", not as... *takes deep breath*...
> "oh, I must have missed that message because the sender's client / the sender's friends' clients were never connected at the same time as mine, and my client was never connected during the server's retention period after it was sent"
Ultimately I think the server retention period will have to be cranked really high to make the app usable and prevent missed messages. And there's really no limit to "how high".
So at that point, the situation ends up looking rather silly: the server is going to be the primary data store whether we like it or not, simply because it's the one that folks can connect to.
I've worked with p2p stuff quite a bit, mostly around the network connections part of it, and I feel obligated to issue a warning -- on the "Peer-to-peer architecture research" post, you said:
* it will be PWA (it will be a web site)
* phone as primary data storage site
AFAIK this doesn't really work because:
* limited data capacity of browsers (unless app is text-only)
* extremely low availability/"reachability" of web browsers
IMO you should plan to store the data on a server as the primary storage site.
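To put a rough number on the browser storage point, you can just ask the browser what it's willing to give the origin; it's typically a slice of free disk space and it can still be evicted. A quick sketch, assuming a reasonably modern browser:

```typescript
// Sketch: ask the browser how much storage this origin is likely to get.
// StorageManager.estimate() is widely supported, but the numbers are estimates
// and the data can still be evicted under storage pressure.
async function logStorageQuota(): Promise<void> {
  if (!navigator.storage?.estimate) {
    console.log("StorageManager API not available in this browser");
    return;
  }
  const { usage = 0, quota = 0 } = await navigator.storage.estimate();
  console.log(`using ~${(usage / 1e6).toFixed(1)} MB of ~${(quota / 1e6).toFixed(1)} MB quota`);

  // Asking for "persistent" storage reduces (but doesn't eliminate) the risk
  // of the browser silently clearing everything.
  const persisted = await navigator.storage.persist?.();
  console.log(`persistent storage granted: ${persisted}`);
}
```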
Ultimately I think this "no permanent storage on server" goal inevitably leads to either "each user has a backblaze account and uploads their e2e encrypted data to an object storage bucket"
Or
"Each user runs their own server"
I think the issue of users not having data autonomy is easier to tackle as a social problem than as a technical problem. Even the best possible technical solutions still kinda need autonomous hosting.
Not to mention, I think the social solution just feels better anyways.
@hisaac ah nvm I thought you were talking about servers.
Yeah for macOS VMs I'd use ansible, or just shell scripts with `set -e` and `set -x`.
@hisaac ansible is ok, but it's slow, clunky, and can be hard to debug, and since playbooks are written in YAML, it's prone to YAML-syntax-related bugs.
I think in terms of ease of use, it's hard to beat docker / docker-compose for config management, as long as your apps can be configured with environment variables.
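To be clear about the pattern I mean: the app reads everything it needs from the environment, so the compose file (or systemd unit, or whatever launches the container) is the entire config. A sketch of the app side, with made-up variable names:

```typescript
// Sketch: app configuration pulled entirely from environment variables, so
// docker-compose (or a plain shell script) can own the config.
// The variable names here are made up for illustration.
interface AppConfig {
  listenPort: number;
  databaseUrl: string;
  logLevel: string;
}

function loadConfig(env: NodeJS.ProcessEnv = process.env): AppConfig {
  const required = (name: string): string => {
    const value = env[name];
    if (!value) throw new Error(`missing required environment variable: ${name}`);
    return value;
  };

  return {
    listenPort: Number(env.LISTEN_PORT ?? "8080"),
    databaseUrl: required("DATABASE_URL"),
    logLevel: env.LOG_LEVEL ?? "info",
  };
}

console.log(loadConfig());
```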
@notplants @decentral1se @j12i
Ideally you could delay calling `event.preventDefault();` until after you know that the quick-nav was successful. But `preventDefault()` only works if it's called synchronously during event dispatch, so with the async cache and fetch mechanic I don't think that's possible.
@notplants @decentral1se @j12i
Might wanna do the same thing on line 81 as an `else` statement for `if (newMainContent) {`, to handle all cases.
Really don't wanna get into a situation where user clicks a link and nothing happens.
@notplants @decentral1se @j12i
Very cool tbh 👀
line 84 (error handler) should probably do something to emulate clicking on the link and having it navigate normally, because otherwise a missing cache entry on click will result in a silent failure for the user.
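Something shaped roughly like this is what I mean -- I'm guessing at the surrounding code from the snippets I can see, so all the names here are placeholders:

```typescript
// Sketch of the click handler shape I mean: preventDefault() has to be called
// synchronously, so the fallback is to navigate manually whenever the
// cache/fetch path can't produce the new content. Names are placeholders.
async function handleQuickNavClick(event: MouseEvent, link: HTMLAnchorElement): Promise<void> {
  event.preventDefault(); // must happen before any await, or the normal navigation wins

  try {
    const response = await caches.match(link.href);
    const newMainContent = response ? await response.text() : null;

    if (newMainContent) {
      document.querySelector("main")!.innerHTML = newMainContent;
      history.pushState({}, "", link.href);
    } else {
      // the `else` for line 81: no cached entry, fall back to a normal page load
      window.location.href = link.href;
    }
  } catch (err) {
    // the line 84 error handler: never leave the user with a click that does nothing
    window.location.href = link.href;
  }
}
```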
@notplants Ah sorry I didn't understand what you were asking at first.
AFAIK that lspci output doesn't positively identify the GPU you have, but from this PDF it sounds like it should be Intel HD Graphics 4000?
https://psref.lenovo.com/syspool/Sys/PDF/withdrawnbook/ThinkPad_X230.pdf
And this Super User thread says the max res for that GPU would be 2560×1600 through DisplayPort
https://superuser.com/questions/1317793/does-a-thinkpad-x230-i7-support-a-4k-external-monitor
In general, as a very old laptop, it probably wouldn't be very usable with a high density screen even if the GPU did support it.
@notplants The segmented LCD displays on the old Nintendo Game & Watch have crisp edges 🤷
To actually respond to your question though, the metric you are looking for is "pixel density", similar to "DPI" (dots per inch) from the world of printing.
Celly Phones have really high pixel density screens. You could probably just peek at your friends' phones and see how they look to you, and then look up the model of phone online and see what its pixel density is, then compare that to laptop screens.
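If you'd rather compare actual numbers than eyeball it, the math is just the diagonal resolution divided by the diagonal size in inches. For example (the panel sizes here are just illustrative):

```typescript
// Pixel density (PPI) = diagonal resolution in pixels / diagonal size in inches.
const ppi = (width: number, height: number, diagonalInches: number): number =>
  Math.hypot(width, height) / diagonalInches;

console.log(ppi(2560, 1600, 13.3).toFixed(0)); // ~227 PPI, a "retina"-class 13" laptop panel
console.log(ppi(1680, 1050, 22).toFixed(0));   // ~90 PPI, a typical older desktop monitor
console.log(ppi(1170, 2532, 6.1).toFixed(0));  // ~457 PPI, a modern 6.1" phone screen
```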
Apple is the main hardware vendor for super high pixel density screens on laptops. There are also some other laptops that feature 4k screens, but in general those are the kind of products that I would personally try to avoid as much as possible -- expensive blinged-out and consumer-oriented products that sacrifice functionality for impressive sounding specs.
Cuz here's the thing: the number of pixels goes up with the square of the linear resolution, so doubling the width and height of the screen means four times as many pixels to push (1680×1050 is about 1.8 million pixels; 3840×2160 is about 8.3 million). So the finer the edges of the text get, the more power the machine will consume, the more complex the software to display things on the screen will have to be, etc.
I think Apple actually renders a lot of non-text things at 1/2 scale to give the machines decent performance. When I play video games, even on my 1680x1050 display, I turn down the rendering resolution to make the game smoother. When I'm working with video, especially capturing video, again, I turn down the resolution a lot so that people who I am presenting the video to can actually see WTF I'm doing.
Personally I don't think all the problems that come with using a high resolution display are worth it. If you really just want a 4k display on a laptop tho, I'm sure there are some products out there you could buy, even used ones these days.
@PastaThief you forgot the "never" option
@dumpsterqueer iPhones sometimes used to upload HEIC images to websites (even tho I know they are generally not supposed to). IDK if they still do this, but if so, I found/wrote some code that can read them and make sure they are rotated correctly https://git.sequentialread.com/forest/heic-converter-gui/src/branch/main/heic2jpg.go#L18
It uses the same wazero thing yall have been using
Are you working on a project to make our internet more resilient/secure/human-centered/inclusive/better/.../?
You could consider applying for funding.
We currently have 5 calls open:
NGI0 Core: Internet architecture
NGI0 Commons Fund: Reclaim the public nature of the internet
NGI TALER: Privacy-preserving digital payments
NGI MobiFree: More ethical and human mobile software
NGI Fediversity: Creating the hosting stack of the future
Deadline Oct 1
@AngryAnt @aras I was talking about IaaS, just the "hardware" part. My experience from running an IaaS myself has been that once you set it up, it just works and doesn't require constant effort.
The software has no marginal cost, so while it's cool that someone out there can support themselves financially by facilitating the monitoring and upgrades and stuff, they are gonna be really easy to compete with, and as the market grows, the amount they'll be able to bill should approach zero.
We don't shell out $100s per month to someone who's supposed to update our web browser for us... And the web browser is vastly more complicated than some dinky little server software like Mastodon.
I am a web technologist who is interested in supporting and building enjoyable ways for individuals, organizations, and communities to set up and maintain their own server infrastructure, including the hardware part.
I am currently working full time as an SRE 😫, but I am also heavily involved with Cyberia Computer Club and Layer Zero