Remote media download just works (tm) too; it's very easy to get the remote homeserver base URL properly with https://www.npmjs.com/package/@modular-matrix/autodiscover-client-configuration
and initial commit published!
https://git.pixie.town/f0x/synapse-media-proxy
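For context, the client discovery that library wraps boils down to a single .well-known fetch; a minimal sketch of the underlying spec flow (not the library's actual API, and it skips the PROMPT/FAIL_ERROR handling the spec asks for), using Node 18+'s built-in fetch:

```js
// Simplified client discovery: ask the server name where its
// client API actually lives via /.well-known/matrix/client
async function discoverBaseUrl(serverName) {
  const res = await fetch(`https://${serverName}/.well-known/matrix/client`);
  if (!res.ok) {
    // Real flow: a missing .well-known means "prompt the user";
    // here we just fall back to the server name itself for simplicity
    return `https://${serverName}`;
  }
  const wellKnown = await res.json();
  const baseUrl = wellKnown['m.homeserver']?.base_url;
  if (!baseUrl) throw new Error('malformed .well-known');
  return baseUrl.replace(/\/+$/, ''); // strip trailing slash
}

// discoverBaseUrl('matrix.org').then(console.log);
```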
:O I think I implemented the whole https://spec.matrix.org/unstable/server-server-api/#server-discovery Matrix server discovery flow!!! It's rather complex, with .well-knowns and SRV records and combinations of those.
Also nice that I could fork off of the client-spec counterpart already made by someone else :) https://www.npmjs.com/package/@modular-matrix/autodiscover-client-configuration
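The server-side flow, very roughly (a hand-wavy sketch of the spec steps, not the actual implementation in the repo; it skips IP literals, explicit ports on the original server name, and the deprecated _matrix._tcp record):

```js
const dns = require('node:dns').promises;

// Simplified server discovery: .well-known delegation first,
// then an SRV lookup, then the port-8448 fallback (Node 18+ fetch)
async function discoverServer(serverName) {
  let target = serverName;

  // https://<server>/.well-known/matrix/server may delegate to another host
  try {
    const res = await fetch(`https://${serverName}/.well-known/matrix/server`);
    if (res.ok) {
      const wellKnown = await res.json();
      if (wellKnown['m.server']) target = wellKnown['m.server'];
    }
  } catch (_) { /* unreachable .well-known is fine, keep going */ }

  // Delegated name may already carry an explicit port
  if (target.includes(':')) {
    const [host, port] = target.split(':');
    return { host, port: Number(port) };
  }

  // SRV record: _matrix-fed._tcp.<host>
  try {
    const [srv] = await dns.resolveSrv(`_matrix-fed._tcp.${target}`);
    if (srv) return { host: srv.name, port: srv.port };
  } catch (_) { /* no SRV record, fall through */ }

  // Fallback: the hostname itself on port 8448
  return { host: target, port: 8448 };
}
```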
submitted 2 things to TWIM https://matrix.org/blog/category/this-week-in-matrix for the first time in aaaages :3
soo good to just come across a library that does what you need *perfectly*. I was messing about with regexes to parse Content-Disposition stuff, and with this library I can do both the parsing and the formatting sooo much nicer (and it's used by express.js so it's Good(tm))
https://www.npmjs.com/package/content-disposition
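Both directions, roughly as in the package README (the example filename is just mine):

```js
const contentDisposition = require('content-disposition');

// Formatting: build a header value for serving a download;
// non-ASCII names get the RFC 5987 filename* form added automatically
const header = contentDisposition('fokshat.jpg');
// e.g. 'attachment; filename="fokshat.jpg"'

// Parsing: pull the filename back out of an upstream response header
const parsed = contentDisposition.parse(header);
// => { type: 'attachment', parameters: { filename: 'fokshat.jpg' } }
```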
think I'll set up a test synapse-media-proxy soon(tm), but I'd accompany it with a testing Synapse instance too. I think NixOS should make it real easy to get that part up and running quickly, and then I can get some real-world speed tests by just throwing test media links around :P
Monday though, I suppose... I should really study at least a bit for that fuckin midterm first
big refactor in preparation of thumbnailing support https://git.pixie.town/f0x/synapse-media-proxy/commit/7f2cb50b85ed30fbf6939087559ee5fab479e979
uhh, uhh I think I just fully implemented the thumbnailing in synapse-media-proxy??!?
https://git.pixie.town/f0x/synapse-media-proxy/commit/68829b5c115cd5ddbec4e528051360ba103b5111
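No idea if this matches the actual commit, but the core of Matrix-style thumbnailing (width, height, and a 'crop' or 'scale' method) maps pretty directly onto an image library; a minimal sketch assuming sharp is the resizer:

```js
const sharp = require('sharp'); // assumption, not necessarily what the proxy uses

// Matrix thumbnail params: width, height and method ('crop' or 'scale');
// those map roughly onto sharp's 'cover' and 'inside' fit modes
async function thumbnail(imageBuffer, width, height, method) {
  const fit = method === 'crop' ? 'cover' : 'inside';
  return sharp(imageBuffer)
    .resize(width, height, { fit })
    .jpeg({ quality: 80 })
    .toBuffer();
}
```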
And now you just get a proper error when trying to thumbnail an unsupported file (like a .txt lol), instead of crashing the server with an uncaught error :")
https://git.pixie.town/f0x/synapse-media-proxy/commit/8b867a48308ddd0437db37854698dc241a87b457
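Roughly what that looks like as an express-style handler (hypothetical names, reusing the thumbnail() helper sketched above): the resize throws on non-image input, and that gets turned into a Matrix-style JSON error instead of an uncaught exception.

```js
// Hypothetical handler shape: catch thumbnailing failures on unsupported
// files and answer with an error body instead of crashing the process
async function handleThumbnail(req, res, mediaBuffer) {
  try {
    const thumb = await thumbnail(
      mediaBuffer,
      Number(req.query.width),
      Number(req.query.height),
      req.query.method,
    );
    res.type('image/jpeg').send(thumb);
  } catch (err) {
    res.status(400).json({
      errcode: 'M_UNKNOWN',
      error: 'Cannot generate thumbnails for this content',
    });
  }
}
```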
ok, subscribing to streams when they become available works; subscribing to an already existing stream doesn't, because some of the data will already have been read out of it (and thus removed).
And it seems having multiple subscribers to the same stream isn't ideal either, as varying network speeds/stream consumption would cause a similar issue, hmmm
I guess this is the second yak-shaving adventure where I really dive deep into the internals of a Node subsystem (last time it was the module system, resulting in https://www.npmjs.com/package/@require-transpile/core)
I did the proper thing and looked at existing implementations! There's a module to split a stream across multiple consumers (nice), but nothing that keeps a buffer to backfill late joiners. This will integrate *perfectly* with my current architecture, because I'm already saving the whole stream into a buffer anyway (for later cache serves)
so (rough sketch after this list):
- first request comes in, upstream starts streaming to the first client
- second client requests that file while it's still streaming, it gets a new stream with the buffer up till now + then the new data
- upstream request finishes
- new clients get the whole cached buffer
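Something like this (just the shape of the idea, not the actual code; it ignores backpressure and error handling):

```js
const { PassThrough } = require('node:stream');

// Every chunk from upstream goes both into an in-memory buffer (for the
// cache and for late joiners) and out to every current subscriber
class BufferedBroadcast {
  constructor(upstream) {
    this.chunks = [];      // everything received so far
    this.finished = false;
    this.subscribers = [];

    upstream.on('data', (chunk) => {
      this.chunks.push(chunk);
      for (const sub of this.subscribers) sub.write(chunk);
    });
    upstream.on('end', () => {
      this.finished = true;
      for (const sub of this.subscribers) sub.end();
    });
  }

  // Late joiners first get everything buffered so far, then the live data
  subscribe() {
    const out = new PassThrough();
    for (const chunk of this.chunks) out.write(chunk);
    if (this.finished) {
      out.end();           // upstream already done: served purely from the buffer
    } else {
      this.subscribers.push(out);
    }
    return out;
  }
}

// usage: broadcast.subscribe().pipe(res) for each incoming request
```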
@f0x don’t forget to write the idea down…
@erikk done :)
best thing about synapse-media-proxy development was getting to look at fokshat.jpg in full-res a lot tbh (and some other test images)