
Simplifying Web Development

Web development has become a fast-moving, ever-changing landscape of frameworks, technologies, and best practices. Approaching a new web development project can often be daunting: rapid advancements quickly polarize communities, and everyone's advice conflicts. Which frontend framework is best (for varying definitions of "best")? Which server package can support the performance constraints of your project? Which build tools, bundling tools, and source languages should you use? I've gathered many opinions and preferences about these questions myself, but decided to take a step back and evaluate what benefits these complex systems provide.


I was originally somewhat skeptical of Node.js, and I'm still wary of it for a myriad of reasons that I won't go into here. However, to me there's at least one really cool idea that comes out of running JavaScript on the server: if you define your pages almost entirely with JavaScript (basically replacing HTML with JavaScript DOM manipulation), and you have a DOM implementation on the server side, you can trivially mix server- and client-side rendering. And almost all of the code to interact with the page (whether on the server side or client side) can be shared! This is really useful because as a project grows, sometimes it becomes more appropriate to do something on the server rather than the client, or vice versa. It also reduces the number of languages you're mixing together, which is cognitively less burdensome during development. I've developed pages with a mix of PHP on the backend and JS on the frontend, sometimes with the PHP generating the JS. It wasn't terrible to reason about, but it's nice to have the flexibility of moving processing logic around.

Naturally, Node isn't completely necessary for rendering: one could potentially write or use a V8 engine plugin, or even a direct Node.js plugin, for any number of other extensible webservers. However, if you ever have server-side state, it becomes very convenient to run Node, since that state is then a JS value and can easily be used while rendering pages without wrapping or translating the types/values.

So in my quest for simplicity (but usefulness) in development, I decided to stick with Node for the primary backend. The Node community, while sometimes saturated with projects and libraries, does have standout support and Node.js itself is well designed.

Package Management

One thing that does not scream simplicity to me is having tens, hundreds, or even thousands of dependencies. Ideally, I like to be able to point to each dependency and say exactly why it's there and what it is providing. I realize this isn't always possible or even necessary for transitive dependencies, as long as you trust the direct dependencies you are using. But having fewer dependencies overall gives a much better sense of what your application may be doing.

Deploying Applications

Another weak spot I've encountered in Node development is deployment. If I have a webserver written in JS that runs on Node (likely using third-party packages for HTTP services), I want to be able to point Webpack or Rollup at the entry file and have all the necessary code to run that server bundled up into a single, hopefully small, file. I find this preferable for deployment and updates over checking out the code on each production node and running npm to install and run the application. Theoretically, you should always be able to run node myapp.js and everything needed would be right there. This is often possible with vanilla Node applications, but I've encountered situations where either Rollup or Webpack requires a lot of setup to get things right (especially with source translation from things like TypeScript or JSX), or, when they do work, they produce very large files. So once again, small[ish], well-defined dependencies are the key to making this step simple.

Server-Side Rendering

Originally I looked into both Next.js (React) and Nuxt.js (Vue) for server-side rendering, but was taken aback by their size. The ability to take existing React or Vue code and with (possibly) little or no modification have it rendered server-side is really powerful. What's more, these libraries handle development versus production builds, bundling, source translation, and a host of other goodies. But the systems are more complex than I'd like, pulling in 800-1000 dependent packages each! Not to mention I had difficulty packaging them into single deployable files.

I've used React, Angular (1 and 2), Vue, Knockout, and other libraries as well. In my search for far simpler libraries, I found Surplus, built on S.js. I must admit that it first stood out to me on Stefan Krause's JS frameworks benchmark. I know (and usually follow) the guideline that premature optimization ought to be avoided, but the fact that Surplus was so close to vanilla JS on every benchmark indicated to me that it was carefully crafted and was probably quite simple. (Not to mention that the particular project for which I was making a new web interface would need to ingest and process tens to hundreds of messages a second, where almost all messages would affect the view.)

S.js has such a simple API and set of functions that it only took me about an hour to grasp how everything was working when I was experimenting with it. Surplus uses JSX to make writing views declarative (which is a big plus in my book), but compiles down to code that uses S.js for updates and a few helper functions in the Surplus library for DOM manipulation. And both libraries have no dependencies! This was exactly what I was looking for. The only downside was that Surplus had no server-side rendering.
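To give a flavor of that model, here's a toy reimplementation of S.js's two core ideas: writable data signals, and computations that re-run when the signals they read change. This is a conceptual sketch only; S.js's real implementation (with batching, disposal, and so on) is quite different.

```javascript
// Toy model of S.js's two core primitives. S.data(v) is `data(v)` here and
// S(fn) is `compute(fn)`; this is an illustration, not S.js's actual code.
let currentComputation = null;

// A writable signal: call with no args to read (and subscribe the current
// computation), call with a value to write and notify subscribers.
function data(value) {
  const subscribers = new Set();
  return function signal(next) {
    if (arguments.length === 0) {
      if (currentComputation) subscribers.add(currentComputation);
      return value;
    }
    value = next;
    for (const fn of [...subscribers]) fn();
    return value;
  };
}

// A computation: runs immediately, and re-runs whenever a signal it read changes.
function compute(fn) {
  const run = () => {
    const prev = currentComputation;
    currentComputation = run;
    try { fn(); } finally { currentComputation = prev; }
  };
  run();
}

const count = data(1);
let rendered = "";
compute(() => { rendered = `count is ${count()}`; });
count(5);
console.log(rendered); // count is 5
```

Surplus's compiled JSX amounts to DOM-manipulating code wrapped in computations like these, so views update automatically when signals change.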

However, conceptually it's not such a complex thing: when a request comes in, translate the path to a source file, compile the JSX (for which Surplus includes its own compiler), then run the resulting JS code with a DOM implementation and get the outerHTML of the resulting DOM node.

Well, actually it is a little complex. But it really shouldn't be more than a couple hundred lines of code given you have some way of evaluating JS and some external implementation of the DOM.

Doing the Render Manually

I'd never used it before, but Node includes a vm module, which provides a variety of powerful functions to evaluate JS with various global contexts. It turned out to be pretty straightforward. I wrote a function that, given a file path, would compile it with Surplus (if it was user code rather than an installed module) and run it with my own implementation of Node's require and some exposed server-side state. The require implementation was made to behave similarly to Node's, looking in node_modules for modules and otherwise trying to find a source file matching the path. I have a pet peeve about relative require paths and how they hinder changing the file structure, so in my version I made it such that all user-file paths had to be relative to the 'root'. But that's not too relevant here.

There's a lot of opportunity for caching, so I added caching of the compiled code, caching of the evaluation results (when evaluating code that didn't rely on the server state, which I generalized to all the user-code), and per-request caching of all evaluation results (in which the state would be static). This was a bit of premature optimization, but it was so simple to include I couldn't resist.
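The three cache layers can be sketched as plain Maps; the keys and helper below are illustrative, not the actual implementation:

```javascript
// Three cache layers, sketched as Maps (keys are illustrative).
const compiledCache = new Map(); // path -> compiled JS; lives for the process
const resultCache = new Map();   // path -> exports, for state-independent user code
const makeRequestCache = () => new Map(); // per-request; state is fixed within it

// Generic wrapper: compute once per key, then reuse.
function cached(cache, key, produce) {
  if (!cache.has(key)) cache.set(key, produce());
  return cache.get(key);
}

// e.g. compile a file only on the first request for its path:
let compileCount = 0;
const compile = src => { compileCount += 1; return src.toUpperCase(); }; // stand-in
cached(compiledCache, "/a.js", () => compile("code"));
cached(compiledCache, "/a.js", () => compile("code"));
console.log(compileCount, compiledCache.get("/a.js")); // 1 CODE
```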

For the DOM implementation, I experimented with almost every DOM implementation you might find out there. For my specific use-case, I needed the DOM implementation to correctly handle SVG elements (createElementNS). Some larger libraries like jsdom worked, but they are noticeably slow, and much of the DOM stuff they implement would never be used in server-side rendering. I finally landed on domino, which had decent performance and rendered everything well. All I had to do was create a global context value called document that was an instance of a DOM Document, and then after evaluating the requested file get outerHTML on the Document.

I was happy to not run into any issues with Surplus or S.js: they happily ran on the server-side with the DOM implementation, and all values were evaluated appropriately.

Making Client-Side Work, Too

Now I had rendering working such that you could request the page and get the correct static HTML response. But that's not good enough! While I did appreciate the support I added for people who don't want to enable JS, I wanted the web interface to be progressive, and use JS when it was available for dynamic updates. This meant packaging up and sending sources, and having those sources work on the client side with the variables I had added to their global contexts when evaluating them. I added support for the evaluator to also track the dependencies of files that were run; each call to require() (which, again, was my implementation) recorded the dependencies of the file.
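That dependency tracking can be sketched as a wrapper around the require() implementation (the names here are illustrative):

```javascript
// Track which modules each evaluated file pulls in, so the server later
// knows exactly what to package for the client.
const dependencies = new Map(); // file path -> Set of required paths

function makeTrackingRequire(currentPath, realRequire) {
  return path => {
    if (!dependencies.has(currentPath)) dependencies.set(currentPath, new Set());
    dependencies.get(currentPath).add(path);
    return realRequire(path);
  };
}

// Usage: evaluate a page with the tracking require in its global context.
const realRequire = path => ({ loadedFrom: path }); // stand-in for the evaluator
const req = makeTrackingRequire("/pages/index.js", realRequire);
req("/lib/chart.js");
req("/lib/state.js");
console.log([...dependencies.get("/pages/index.js")]);
// [ '/lib/chart.js', '/lib/state.js' ]
```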

I then injected a <script> into the returned page that contained definitions of all of the global context variables, such as STATE (with the server state), and a require() implementation too. The script has a dictionary mapping each module name to a function that runs the source and returns the exports/module.exports, which the require() implementation uses. This works surprisingly well, albeit with no minification, pruning, or other smart processing occurring. So lots of room for improvement :)
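A sketch of how such an injected script might be generated; the exact shape of the real output differs, and buildClientScript is my name for illustration:

```javascript
// Build the <script> injected into the rendered page: the server state, a
// module map, and a tiny require() over it. Shape is illustrative.
function buildClientScript(state, modules) {
  const entries = Object.entries(modules)
    .map(([name, src]) => `${JSON.stringify(name)}: function(module, exports) { ${src} }`)
    .join(",\n  ");
  return `<script>
var STATE = ${JSON.stringify(state)};
var __modules = {
  ${entries}
};
var __cache = {};
function require(name) {
  if (!(name in __cache)) {
    var module = { exports: {} };
    __modules[name](module, module.exports);
    __cache[name] = module.exports;
  }
  return __cache[name];
}
</script>`;
}

const script = buildClientScript(
  { user: "sam" },
  { "/lib/greet.js": `module.exports = function(n) { return "hello " + n; };` }
);
// The page's own inline code then calls require() for its entry module.
```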

The one caveat here was that Surplus doesn't support what is commonly called 'hydration' of the page. That is, when the client-side JS runs for the first time, it actually replaces the entire body of the document with new DOM nodes created by running the packaged JS code. Proper hydration would keep the existing DOM (that was rendered on the server) and attach references as appropriate such that Surplus/S.js updates would operate on those existing DOM nodes. This is something I'd like to pursue further when I have the time.

State Updates

As state was updating on the server, I wanted the JS-enabled clients to get the updates too. This ended up being very simple: I added code to open a websocket connection in the client (for which the SSR code had an isServer value to check where you were running), and added the websocket server endpoint. As state updates were made (they came in as memory-efficient deltas), I could forward them to the client and the same code that updated state on the server-side was used to update it on the client-side. S.js and Surplus made these updates efficient and correct.
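The key point is that the delta-application code is shared verbatim between server and client. A sketch, assuming a simple { path, value } delta shape (the real deltas are more compact):

```javascript
// Shared by server and client: apply a delta to the state tree. The
// { path, value } shape here is an assumption for illustration.
function applyDelta(state, delta) {
  const keys = delta.path.split(".");
  let node = state;
  for (const key of keys.slice(0, -1)) node = node[key];
  node[keys[keys.length - 1]] = delta.value;
  return state;
}

// On the server, incoming deltas are applied and forwarded over the
// websocket; on the client (where isServer is false), the same function
// applies them to the local copy of the state.
const state = { sensors: { temp: 20 } };
applyDelta(state, { path: "sensors.temp", value: 21 });
console.log(state.sensors.temp); // 21
```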

SVG Manipulation

d3 has been the standard for SVG rendering for a while now, and while there are more competitors in this space (some specializing in 3D rendering or animations), I chose to use d3 where I could. It turns out that many of the d3 modules Just Work® as long as the DOM SVG implementation is correct. However, I wanted to create the SVG elements dynamically based on the server state (and the client's copy of the server state, too), so it became easier to just create the SVG elements manually with Surplus. I still used d3 for graph scales, axes, and tick formatting, though.
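For intuition, a linear scale of the kind d3 provides boils down to a single interpolation (toy version, not d3's API):

```javascript
// Toy version of what a d3 linear scale provides: map a value from a data
// domain onto a pixel range. The real code used d3's scales and axes.
function scaleLinear([d0, d1], [r0, r1]) {
  return v => r0 + ((v - d0) / (d1 - d0)) * (r1 - r0);
}

const x = scaleLinear([0, 100], [0, 500]); // data 0..100 -> 0..500 px
console.log(x(25)); // 125

// When creating the SVG nodes themselves, the namespace matters, which is
// why the server-side DOM had to support createElementNS, e.g.:
//   document.createElementNS("http://www.w3.org/2000/svg", "rect")
```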


Styling

Styling/CSS is not as fractured as the JS community, and it's still easy to find single-file stylesheets that provide lots of nice features. I decided to just use the minified Bootstrap stylesheet and created some wrapper components setting class names for the common components I wanted to use in Surplus.


Language Choice

If it wasn't obvious already, I implemented everything in ES6 JS (although not using that many advanced features of ES6). I much prefer strongly-typed languages to dynamically-typed ones, and have worked with TypeScript, Elm, and even PureScript a bit too. But for the sake of build simplicity, I stuck with normal JS. I would like to eventually implement a generic way of pre-processing source files, such that they could be written in another language or require other processing but could still be served up and compiled correctly by the rendering functions. Of course, for production builds they could be processed before the server ever loads them.


Build Scripts

At first I had installed webpack-cli to pull the server code together into a single file, but saw that it installed many more packages than just webpack. Looking at what was installed, I realized that it was much simpler (in my opinion) to write a small Node build script that used the webpack API rather than running the CLI. That build script ran webpack, stored the configuration (no separate webpack.config.js), and copied some files around where I wanted them, bundling the website root with the server code. I'm not sure why so many projects use webpack-cli or other executables in their package.json scripts rather than running a Node JS script. From what I've seen, they usually combine that with manual cp or other non-portable command-line tools. Node provides all of that functionality, and it is much more portable to do it in a script than to rely on separate systems' command-line behaviors! I've encountered way too many packages whose build scripts or test scripts simply fail on Windows (I don't use Windows much at all, but I have team members who do).


Results

I was really pleased with the results I saw. Overall, the web interface and backend I created:

  • Compiles to just under 4MB, of which ~1MB is site content, and of that on average 170kB is sent to cold-loading clients, including images rendered on the server-side.
  • Uses 5 external dependencies on the client side, all of which are integral and understood by me, and are relatively simple.
  • Uses about 50 dependencies on the server side. express accounts for all but two of them and could be replaced by something simpler. I didn't look into simplifying the HTTP server side, but the built-in Node modules look to be pretty powerful and easy; that's probably my next effort. The other two packages (which have no dependencies) are domino and the Surplus compiler.
  • Uses webpack for developers to make a production build, which pulls in many dependencies. But their scope is limited, as webpack is only used in a single stage of the development process.
  • Responds to requests that have thousands of SVG elements rendering on the server side (among other things) in ~0.5-2s (which was acceptable for what I was doing, as that case wasn't persistent). It also dynamically updates state and these thousands of SVG elements on JS-enabled clients without much performance tuning.

Unfortunately, I can't share the compelling evidence of these figures as the project for which I experimented with these technologies is not public. However, I did develop the Surplus SSR middleware in my own time, and the source is hosted on GitHub. At the bottom of the README there are a number of improvements I'd like to see in it. It hasn't had much development, so if you use it and find a bug, or a feature you'd like, create an issue!