If only “fullstack” or “typescript” devs weren’t so scared of CSS. They can optimize a weird join, they know the Big O notation of a function that operates on a list that’ll never ever exceed a size of like 1000 items so who the fuck cares, but as soon as you ask them about grid, layers, container queries, or even what things like houdini can hopefully do one day, they just collectively shit themselves.
What corporations and enterprise software development practices have done to the web makes me crash out, sorry. Very few of the people who actually care about CSS can make a living wage building things with it as their job.
We’ve been adding mountains of JS to “feel” fast, while making everything slower.
I disagree that that is a bad thing. I’m not providing evidence here, so disagree if you wish. In my opinion, users don’t care if something is fast. They don’t take out stopwatches to time the page transitions. They care if something feels fast. They want the web page to react immediately when they issue an action, and to have the impression that they’re not waiting too long. Feeling fast is way more important than being fast, even if the feeling comes with a performance hit.
It takes about 120ms for a human to detect (not react to) a stimulus [1] (for the gamers: that’s roughly 8 FPS). So if your page responds to a user action within that time frame, it feels instantaneous.
If you want to try it yourself, paste this code into your browser console. Then click anywhere on the page and see if the delay of 125ms feels annoying to you.
  let isBlack = true;
  document.body.addEventListener('mousedown', () => {
    setTimeout(() => {
      // toggle the background color 125ms after the click
      document.body.style.backgroundColor = isBlack ? 'red' : 'black';
      isBlack = !isBlack;
    }, 125);
  });
I dunno. As a user, if you give me code that takes 120ms for my CPU to execute, and tell me there is an alternative that takes 2-3x less time, I would look for someone who provides that instead.
No, I want it to actually be fast, and to stop using unholy amounts of RAM for basic tasks. I am not pulling out a stopwatch, you are correct, but when I try to use one of my still perfectly functional but older systems, I can feel the effects of all this JavaScript bullshit.
Maybe I missed it, but how do I share state between page transitions? For instance, I’m currently displaying a user list. How do I carry the user over to the details page without re-fetching it, and more interestingly, without re-instantiating the User instance from the data?
I imagine (though I’m the first to admit that I don’t know every modern API by heart) I would have to move the user to local storage before allowing the browser to navigate. That sounds annoying, since I’m either persisting the whole user list, or I need to check which link the user clicked, prevent the navigation, store the relevant user, and then navigate manually.
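Roughly what I have in mind, just as a sketch (the users array, the data-user-id attribute, and sessionStorage as the hand-off mechanism are all my own guesses at how you’d wire it up):

  // On the list page: intercept the click, stash the selected user,
  // then navigate manually.
  document.querySelectorAll('a.user-link').forEach((link) => {
    link.addEventListener('click', (event) => {
      event.preventDefault(); // stop the normal navigation
      const user = users.find((u) => String(u.id) === link.dataset.userId);
      sessionStorage.setItem('selectedUser', JSON.stringify(user));
      window.location.href = link.href; // navigate manually
    });
  });

  // On the details page: rebuild an instance from the stored plain object
  // (assuming User has a no-argument constructor).
  const stored = sessionStorage.getItem('selectedUser');
  const user = stored ? Object.assign(new User(), JSON.parse(stored)) : null;

Which is exactly the kind of ceremony I’d rather not write for every list-to-details transition.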
With an SPA the user list just lives in a JS variable. When I’m on the user’s details page, I just find the relevant user in that list without even making another HTTP request.
I get what you are asking for, but with the MPA approach I don’t think it’s even necessary to have a list of users on the client in the first place. I would guess the list of users is just available to the server, which pre-renders HTML from it and serves it to the client.
So we’re back to fully static pages and rendering HTML on the server? That sounds like a terrible idea. I don’t want to preload 10 different pages (for opening various filtering forms, creation forms, more pages of the user list, different lengths of the user list, different orderings of the list, and all combinations of the above) just in case a user needs one of them, which they mostly don’t.
Huh, wouldn’t plain SQL be perfectly fine for that use case? Make a GET request with your filter/sort params, server-cache the result, return the data with a client-cache header. I’ve been serving up customized SVG files like that in a personal project of mine and it’s been so much faster and cleaner than styling my SVGs within the JSX.
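To make that concrete, here’s a rough sketch rather than the actual code from my project (Express, the users table, and the db.query helper are placeholders for whatever your stack uses, and you’d want to whitelist anything that ends up in the ORDER BY):

  // Filter/sort on the server with plain SQL, then let HTTP caching do the rest.
  const express = require('express');
  const app = express();

  const SORTABLE = ['name', 'created_at']; // whitelist keeps the ORDER BY safe

  app.get('/users', async (req, res) => {
    const sort = SORTABLE.includes(req.query.sort) ? req.query.sort : 'name';
    const search = `%${req.query.q || ''}%`;

    // Parameterized WHERE clause; the ORDER BY column comes from the whitelist above.
    const rows = await db.query(
      `SELECT id, name, created_at FROM users WHERE name LIKE ? ORDER BY ${sort}`,
      [search]
    );

    res.set('Cache-Control', 'public, max-age=60'); // client-cache header
    res.json(rows);
  });

The server can additionally memoize the query result per (sort, q) pair if the data doesn’t change often, which is what I meant by server-caching.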
That would work, but it demands cooperation from the backend for the additional validation. It also means re-transmitting 8MB of customer data just so it can be arranged differently in the frontend. I don’t really want to move presentation logic that far to the back. If you want to display more data, you need to touch both the table-drawing logic and the sort-validation logic.
…I mean, I get it, I’ve written some very scuffed JS in my time because using .filter() right before displaying my data felt easier than getting the right data to begin with. Especially if the backend is in a different git repo and uses some wacky ORM library I’m not familiar with, while the Product Owner n e e d s everything deployed today.
But you can’t tell me that applying filter/sort to 8MB of data in the frontend is anything but mega scuffed. Imagine you need to debug that and don’t even have an intermediate step to see whether the wrong data is arriving, whether the filter/sort logic is wrong for a specific case, or whether it’s just the rendering. You’d always need a person who understands all three just to debug that one part.
Not to even mention how that would run on low-power devices with a bad internet connection. Or what that does to your SEO.
.filter() right before displaying my data felt easier than getting the right data to begin with
The issue here is that you don’t know what the right data is to begin with. SAP does what you’re suggesting. They demand you set the filters correctly before requesting any data, which is a terrible user experience.
Imagine you need to debug that and don’t even have an intermediate step to see whether the wrong data is arriving or whether filter/sort logic is wrong for a specific case, or whether it’s just the rendering.
That’s a strawman. Why would I not know what data arrives in the frontend? That’s what the network debugger is for. That’s what a breakpoint before the filter is for.
But you can’t tell me that applying filter/sort to 8MB of data in the frontend is anything but mega scuffed.
Personally, I find re-transmitting those 8MB of data for every different sorting request way worse. Remember that this isn’t even cacheable, because the data is different for different sorting requests.
Maybe we have different types of frontend and different levels of user proficiency in mind. In my case, I cannot possibly ask the user to know how they want a list sorted and filtered before seeing the list and the options; they’d throw the frontend in my face. If you have very knowledgeable users who know what they want from the get-go, then it might be possible to show them a form to sort and filter and only request the data when the user sends the form.
Not to even mention how that would run on low-power devices with a bad internet connection.
I don’t see how ‘bad connection’ is an argument in favor of re-requesting data just for it to be displayed in a different order. I’ve made this back-of-the-envelope calculation in another comment: for a good connection, latency is about 20ms. In that time a 1GHz processor can do 20 million operations. Take 10 operations for each comparison (to account for more complicated comparisons), and you can spend 2 million comparisons sorting the list in the time it takes to re-fetch it. (Keep in mind that rendering the HTML itself is also computationally expensive for tables.) At roughly n·log2(n) comparisons, 2 million comparisons is enough to sort a list of about 120,000 entries.
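If anyone wants to sanity-check those numbers, here is a quick snippet to paste into the console (the 120,000 figure comes from the estimate above; the random string objects are just a stand-in for real rows):

  // ~n·log2(n) comparisons for 120,000 entries is about 2 million.
  const n = 120000;
  console.log('estimated comparisons:', Math.round(n * Math.log2(n)));

  // Time an actual client-side sort of 120,000 small objects.
  const list = Array.from({ length: n }, () => ({ name: Math.random().toString(36).slice(2) }));
  console.time('client-side sort');
  list.sort((a, b) => (a.name < b.name ? -1 : a.name > b.name ? 1 : 0));
  console.timeEnd('client-side sort');

Compare the reported time against the request latency you see in the network tab.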