I feel like, actually, the grand bulk of how “front-end” works now is interesting workarounds for browser limitations that don’t exist anymore.

but because change is expensive we just kinda kept using the elaborate workarounds despite not needing them

and because institutional memory can’t exist in an industry with a 5 year 95% turnover rate, most people don’t remember that we don’t need most of this shit anymore

there is also a component of the old embrace, extend, extinguish playbook here. There has always been a low wail of people wanting to replace the open web with some proprietary product, and microsoft and facebook may have actually succeeded with typescript and react

like, take SPAs and AJAX as an example. the limitation was that when you clicked anything on a traditional webpage, the whole screen would go blank while the next page’s html loaded.

so our elaborate workaround was to hold onto the html and css and just download and replace the content: a smoother transition.
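the whole workaround, in miniature, looks something like this (a rough sketch; the `#content` container id is a placeholder of mine):

```js
// intercept same-origin link clicks, fetch the next page, and swap only
// the content area instead of letting the browser do a full page load
document.addEventListener("click", async (event) => {
  const link = event.target.closest("a");
  if (!link || link.origin !== location.origin) return;
  event.preventDefault();

  const html = await (await fetch(link.href)).text();

  // parse the fetched page and pull out just its content area
  const nextDoc = new DOMParser().parseFromString(html, "text/html");
  document.querySelector("#content").replaceChildren(
    ...nextDoc.querySelector("#content").childNodes
  );
  history.pushState({}, "", link.href); // keep the url bar in sync
});
```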

but you give up the progressive renderer built into the browser, so it doesn’t scale well. and your js packages are so big now that you don’t notice browsers fixed the original problem

like, if you can get your page html and css small enough, usually by jettisoning those 5mb bundles of javascript, then between http2 and the little browser rendering tweaks that have accumulated over the years, it is *seamless* now. the image of the page you’re on is held until the next page finishes loading… so long as it loads fast enough

but it won’t ever load fast enough because you gotta have your “fast” framework

build tools and bundling were working around the overhead of making multiple http requests; it’s “quicker” if you stick all the javascript into one file and download it all at once.

browser optimisations have rendered that original problem nonexistent now. but we’re still bundling because… we built too much ecosystem around having a build tool?
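for contrast, skipping the bundler just means shipping source files as native ES modules and letting http2 multiplexing fetch them in parallel. a minimal sketch, with made-up file paths; the page carries nothing but `<script type="module" src="/js/main.js"></script>`:

```js
// /js/main.js: the browser resolves and fetches these imports itself,
// in parallel over the same http2 connection. no build step involved.
import { renderNav } from "/js/nav.js";
import { formatDate } from "/js/dates.js";

renderNav(document.querySelector("nav"));
document.querySelector("time").textContent = formatDate(new Date());
```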

now it’s comically hurting performance, cos the one file everyone must download up front has every random utility the dev pulled from npm, plus all its transitive dependencies, used anywhere on any page.
the old way, a one-off page incurred its cost on that one page. the new way, it’s a permanent accretion on every page load

i can already hear the future people “well actually”-ing me to tell me typescript and react are open source and free in some technical way, so i gotta just get in here up front: so is every other EEE strategy. that’s what the first E *means*. it’s what lures you into the sense of security so you sleepwalk into ecosystem lock-in. which is why, fundamentally, we can’t get rid of the build tool we don’t need anymore, cos apparently we need typescript

“we still need bundling because http2 pipelining can’t deal with transitive dependencies”

1. you don’t need transitive dependencies.
2. if you have convinced yourself you do, manifests and precaching work just fine (sketch below)
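a sketch of what i mean by precaching: a bog-standard service worker with a hand-written manifest of every module, transitive deps included, so repeat page loads never pay a request waterfall. the file list is made up for illustration:

```js
// sw.js: precache a declared manifest on install, serve from cache after
const PRECACHE = "site-v1";
const MANIFEST = ["/js/main.js", "/js/nav.js", "/js/dates.js", "/css/site.css"];

self.addEventListener("install", (event) => {
  event.waitUntil(caches.open(PRECACHE).then((cache) => cache.addAll(MANIFEST)));
});

self.addEventListener("fetch", (event) => {
  event.respondWith(
    caches.match(event.request).then((hit) => hit ?? fetch(event.request))
  );
});
```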

“but we need transitive dependencies” is such an engineer brain objection. you need a website. Computers serve humans. humans don’t serve computers. get it straight

i remember when we made websites faster by making them smaller in file size. not by forcing elaborate hacks onto packaging systems designed for server side frameworks

the argument used to be “dev experience” but I don’t think anyone is quite able to fool themselves into believing they enjoy working with these overengineered tools anymore. maybe it *was* fun when you felt like you were solving a real problem with sophisticated solutions. but after the 80th time updating packages for security patches and spending a day fixing the broken build i think the honeymoon is over

@zens I think the ppl saying "Dev experience" actually mean "corpo manager experience".

The frameworks do have benefits; they allow us to shed legacy and rename / deprecate things that have been named wrong for decades. For example, react replaced raw `element.innerHTML` assignment with a prop called `dangerouslySetInnerHTML`. Honestly this is the single best feature of react.
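For illustration, roughly what that looks like in use (component names are made up):

```jsx
function Comment({ body }) {
  // safe: react renders `body` as a text node, never as markup
  return <p>{body}</p>;
}

function LegacyWidget({ trustedHtml }) {
  // the renamed innerHTML: the danger is unmissable at the call site
  return <div dangerouslySetInnerHTML={{ __html: trustedHtml }} />;
}
```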

None of this matters if you are just trying to make a single-user web tool or a blog. But if you're a corporate manager and you want to throw whatever programmers you can manage to retain at a problem, react is the obvious choice. Chances are you aren't gonna get the budget it would take to hire ppl who have the 2-5 years of in-depth web platform experience it seems to take to do complex web apps in vanilla js without accidentally creating arbitrary code execution from user-provided content.

I've helped a few folks with their first frontend projects outside of work, and I've found trivial xss every single time.

Yes there are tons of ways to mitigate xss, but none of them really shut the door on it with an ultimate eternal sealing spell like react does.

@zens handlebars and virtually every other framework and templating language have the same thing?

@forestjohnson not exactly. React doesn't deal with template or html strings at all at runtime; it compiles to "hyperscript", aka 1000s of "document.createElement()"s in a trenchcoat. And the hyperscript it generates is always fundamentally xss-proof unless you use the "danger" functions.
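To make that concrete, here is roughly the shape jsx compiles into (the input string is an illustrative payload):

```js
import { createElement } from "react";

const userInput = "<img src=x onerror=alert(1)>"; // attacker-controlled

// <p className="comment">{userInput}</p> compiles to roughly:
const vnode = createElement("p", { className: "comment" }, userInput);

// when react renders `vnode`, the string child can only become a text
// node, so the payload shows up as literal text instead of executing
```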

Other frameworks like angular 1.4 that used string templates were never fundamentally xss-proof... They were like input sanitizers, so it was always a cat and mouse game between attacker and defender.

There is a fundamental difference between trying to sanitize inputs and explicitly separating code from data. Similar to the difference between parameterized SQL queries and the special "SQL template" tools that would try to sanitize inputs; there was a similar cat and mouse game with those.
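The same idea in miniature (the `db.query` client is hypothetical, in the style of node-postgres):

```js
async function findUser(db, userInput) {
  // string-building mixes code and data; sanitizing it is a losing game
  const unsafe = "SELECT * FROM users WHERE name = '" + userInput + "'"; // injectable

  // parameterization keeps them structurally separate: userInput can
  // only ever arrive as a value, never as sql syntax
  return db.query("SELECT * FROM users WHERE name = $1", [userInput]);
}
```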

@zens react is not the only game in town for this kind of strictly enforced code vs data separation, but it's definitely a popular example.

In vanilla js, one would simply stick to document.createElement and element.textContent and everything should be fine.
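A rough sketch of what that looks like (the comment data is made up):

```js
function renderComment(author, body) {
  const el = document.createElement("article");

  const name = document.createElement("strong");
  name.textContent = author; // text node: any markup in `author` stays inert

  const text = document.createElement("p");
  text.textContent = body; // ditto: the browser never parses `body` as html

  el.append(name, text);
  return el;
}

// the payload renders as literal text, not as a script
document.querySelector("#comments")?.append(
  renderComment("mallory", "<script>steal(cookies)</script>")
);
```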

Tools like htmx fundamentally don't do this: they load HTML from the server and execute it as code, so the server is responsible for making sure its templating is clean.
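For example, the server-side end of that bargain might look like this (the express route, `getComment`, and `escapeHtml` are assumptions of mine):

```js
// htmx will parse whatever this endpoint returns straight into the page,
// so escaping user content here is the entire trust boundary
app.get("/comments/:id", (req, res) => {
  const comment = getComment(req.params.id); // hypothetical data lookup
  res.send(`<p>${escapeHtml(comment.body)}</p>`); // skip escapeHtml once and it's xss
});
```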

It's harder to be sure it's right without access to the actual DOM implementation the browser uses. google search had an xss caused by differences between two different html parsers/serializers: the one they used on the server vs the one they used on the client side.
