@kawaiipunk you are making me drool, but I just want something green added, and btw can I drench it in hot sauce? ❤️🔥 that's the USA punk way, lol ❤️
@djh I'm starting to remember this now... The z-curve has slightly more discontinuities, IIRC, but it's also easier to find the sub-sections. My method is kinda "brute force": it's based on sampling a reduced-resolution version of the curve in order to decide where to split up the sections before running the actual DB queries.
But in my opinion, the most important thing to call out about all of this is that you don't need different database software / you don't need to modify the database at all. The app that talks to the database just needs to use this "one weird trick" to generate keys and query ranges.
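For reference, the trick is basically bit interleaving. Here's a minimal sketch in Python of z-order (Morton) key generation, assuming 2D 16-bit grid cells; the names and the lat/lon mapping are just for illustration, not from any particular project:

```python
# Minimal sketch of z-order (Morton) key generation by bit interleaving.
# Any ordinary key-value store or B-tree index can then answer 2D range
# queries with plain 1D key-range scans. Names, bit widths, and the
# lat/lon mapping here are illustrative, not from any particular project.

def interleave_bits(x: int, y: int, bits: int = 16) -> int:
    """Interleave the bits of x and y into a single z-order key."""
    key = 0
    for i in range(bits):
        key |= ((x >> i) & 1) << (2 * i)      # x goes on the even bits
        key |= ((y >> i) & 1) << (2 * i + 1)  # y goes on the odd bits
    return key

def zorder_key(lat: float, lon: float) -> int:
    """Snap lat/lon onto a 16-bit grid, then interleave."""
    x = int((lon + 180.0) / 360.0 * 0xFFFF)
    y = int((lat + 90.0) / 180.0 * 0xFFFF)
    return interleave_bits(x, y)

# A bounding-box query then becomes one or more key-range scans, e.g.:
#   SELECT * FROM points WHERE zkey BETWEEN :lo AND :hi
```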
I don't understand what the bigmin thing is. Are you talking about an in-memory search or a database (on disk)?
I made my own version of software that does this. It uses a different space-filling curve, but the idea is the same. Here's how I thought about the query running really far outside the requested area: essentially, you have to decide whether you want to prioritize fewer individual IO operations or less wasted IO bandwidth.
Since data on disks is laid out as one long sequence, it's typically faster for a disk to read a little more data in one single sequential scan than to read a bunch of different little bits and pieces.
My solution simply downsampled the curve until it was no sweat for the computer to make a list of every single point on the curve within the queried area. From there, I settled on an algorithm that looked at the points on the curve and decided how many segments to split them into. If the distance between two points was larger than the queried area, I wouldn't join those two points into a contiguous segment; I would split the segments at that point.
The cool thing: this one rule neatly expresses the trade-off between the number of IO operations and the amount of wasted bandwidth. And it's tweakable at query time, not at indexing time, so you can set a coefficient for that threshold depending on the performance characteristics of your disk when handling your given data set.
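Roughly, in Python, the splitting rule looks something like this (just a sketch, not my actual code; it reuses the interleave_bits helper from the sketch above, and the coefficient / area threshold are illustrative knobs):

```python
# Sketch of the "split wherever the gap is too big" rule described above.
# Inputs are grid-cell coordinates at the reduced (downsampled) resolution.
# Assumes interleave_bits() from the earlier sketch; coefficient is made up.

def key_ranges_for_box(x0, y0, x1, y1, coefficient=1.0):
    """Return a list of (lo, hi) key ranges covering the query box."""
    # 1. Enumerate every downsampled cell inside the box, sorted by curve key.
    keys = sorted(
        interleave_bits(x, y)
        for x in range(x0, x1 + 1)
        for y in range(y0, y1 + 1)
    )
    # 2. Start a new range whenever the gap between consecutive keys exceeds
    #    coefficient * (area of the box). Small coefficient => more ranges,
    #    less wasted bandwidth; large coefficient => fewer, longer scans.
    threshold = coefficient * (x1 - x0 + 1) * (y1 - y0 + 1)
    ranges = []
    start = prev = keys[0]
    for k in keys[1:]:
        if k - prev > threshold:
            ranges.append((start, prev))
            start = k
        prev = k
    ranges.append((start, prev))
    return ranges
```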
I'm blocking the UI thread until I know whether the email was accepted or rejected, so I can't just wait 30 seconds for a bounce notification and then assume success. No one would use this site if they had to wait 30 seconds every time they log in.
I had considered that option, but I decided against it because I want to be able to tell the difference between a timeout and a success.
@drahardja get in the Eva, Shinji
@gabek what do you think about this library? Have you seen it before? I felt like this might be right up your alley 😄
This is sick!!!! :O
Finally someone did it -- created something that has all the benefits of JSX but isn't based on DOM / JSDOM.
@notplants they do both: they build software that doesn't silently fail, and they also work really hard to maximize their sender reputation.
I think those are two separate things, but it's a lot easier to keep a good rep if you can even find out in the first place that your message was rejected!
@notplants yeah, they definitely do a lot of that stuff. IMO that's a whole different problem/concern. I know Microsoft will never accept my emails because I'm not big enough to get on their allow list... as George Carlin said, "it's a big club, and you ain't in it."
I just want to be able to know if the email was immediately rejected or not. IMO it's not too much to ask.
@notplants I believe things like this do exist; it's just not "normalized" as a feature that all SMTP server implementations should have.
@notplants well, I think email itself is practically sedimentary rock at this point; we can't change the protocols.
But I was proposing to just make a new thing on top, similar to what Mailgun, SendGrid, etc. did, just as a built-in feature of self-hostable SMTP servers instead of a proprietary service only. Basically the same thing I already did, except not based on tailing the logs :P
In my experience with SMTP for transactional email (logins, etc.), email servers will reject the messages directly; they don't accept them and then send a bounce, or accept them and then black-hole them. They might send them to the spam folder, but there's not much we can do about that.
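For example, with a plain synchronous SMTP client, the rejection shows up right in the submission call, as long as you're talking to the server that actually makes the accept/reject decision and not to a relay that accepts first and bounces later. A rough sketch using Python's smtplib (host and addresses are placeholders):

```python
# Sketch: synchronous SMTP submission where an immediate rejection is
# visible to the caller. Host and addresses are placeholders.
import smtplib

def send_login_email(message_bytes: bytes) -> bool:
    try:
        with smtplib.SMTP("mail.example.com", 25, timeout=10) as smtp:
            # sendmail() raises if the server rejects RCPT TO or DATA,
            # so the app can tell the user about the failure right away.
            smtp.sendmail("noreply@example.com",
                          ["user@example.org"],
                          message_bytes)
        return True
    except smtplib.SMTPRecipientsRefused:
        return False  # recipient rejected at RCPT TO time
    except smtplib.SMTPDataError:
        return False  # message rejected at DATA time
    except (smtplib.SMTPException, OSError):
        return False  # connection errors, timeouts, everything else
```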
@technomancy @graydon I have a soft spot in my heart for the NixOS and npm way, where each dependency gets to declare its own unique version of its own dependencies. So then you get like 36 different versions of the same dependency. Honestly, I would argue that saying you have to have only one single version of any given lib was a mistake :P
I work with JVM stuff at work a lot, and the way it's set up, the libraries will automatically get upgraded quite often (version ranges). This has broken things a few times; every time, it's been version conflicts between two different deps that want different versions of some other library. I believe if you told my coworkers to pin to specific versions of every library, they would tell you no. They would tell you, "we don't have enough time to manually upgrade all those pinned versions every time there's an automated CVE ticket." I guess a lot of businesses have found that it's easier to just update everything all the time than to hire people who can tell the difference between an actual vulnerability and some bullshit CVE. Also, there are compliance rules that they have to abide by.
I think all this stuff is always going to be imperfect and messy. The more code you add, the worse it gets. I think that's kind of a universal truth.
@technomancy I'll have to wait for the blog post 🤔
@technomancy I thought lock files were also supposed to act as TOFU for dependencies, so the file contents behind a version tag can't be modified after the fact.
@notplants I almost did this...
But I eventually kind of realized why they did what they did.
The problem is that SMTP submission (as it's implemented today) does not support delivery failures. The protocol simply has no place for them. So if your email message gets rejected by an email server, you have no way of knowing it happened.
That's why everyone started using a different protocol for submitting transactional email.
Especially for interactive systems like logins, it's crucial that the user can receive a warning when their email provider bounces the email.
For capsul, we ended up implementing our own super-janky version of this, based on tailing the logs from smtpd: https://git.cyberia.club/cyberia/smtpd-delivery-monitor
This is just another lump on the "email is fucking terrible and impossible to work with" ball of mud. It's no surprise to me that a lot of companies have sprung up around trying to solve these issues and reduce the pain, damn the consequences and burn the old way of doing things.
It's also no surprise to me that the open source community generally has no interest in doing that.
In my opinion, we really should be talking about better email server software and better protocols for email submission. I think that's a prerequisite for software like Ghost supporting non-commercial email providers.
I am a web technologist who is interested in supporting and building enjoyable ways for individuals, organizations, and communities to set up and maintain their own server infrastructure, including the hardware part.
I am currently working full time as an SRE 😫, but I am also heavily involved with Cyberia Computer Club and Layer Zero