10 JavaScript concepts you need to succeed with Node

Even with competition from newer runtimes like Deno and Bun, Node.js remains the flagship JavaScript platform on the server. Server-side Node frameworks like Express, build-chain tools like Webpack, and a host of developer-friendly utilities make Node a hugely popular way to leverage the power and expressiveness of JavaScript on the back end.

Of course, Node owes much of its popularity to JavaScript, the multiparadigm language that supports many different styles of programming. Originally developed as a dynamic scripting language to fill the gaps in HTML, modern JavaScript is used for functional programming, reactive programming, and object-oriented programming. Developers can focus on one style or move flexibly between them as different use cases require.

But JavaScript’s multiparadigm nature can be a double-edged sword. It is vital to understand how each of these paradigms actually works in JavaScript specifically. Given such a versatile language, it also doesn’t make much sense to just pick one style and insist it’s always the best. The language’s flexibility puts more of the burden of keeping things structured and maintainable on the developer. When working with Node, you have the additional challenge of remembering that it is a single-threaded platform; even the asynchronous code you write isn’t truly concurrent.

JavaScript can be a boon if used with care—or a bane if used recklessly. Here are the 10 JavaScript concepts you’ll need to write scalable code in Node.js.

Promises and async/await

JavaScript and Node let you perform many tasks at the same time in a “non-blocking” way. This means you can tell the platform to do several things at once, without waiting for other operations to finish. The main mechanisms used for asynchronous programming in JavaScript are Promise objects and the async/await keywords. (We’ll discuss the event loop in the next section.)

Promises may be hard to grasp at first, but once you understand them, they offer a very simple means to provide and consume asynchronous jobs. Most often in Node, we are on the consuming end:


import fs from 'fs/promises';

const filePath = 'example.txt'; // any readable text file

const fileReadPromise = fs.readFile(filePath, 'utf-8');
fileReadPromise.then(fileData => {
  console.log('Success... file content:');
  console.log(fileData);
});


The .then() syntax is used to handle what happens after a Promise has resolved. A Promise also provides a .catch() method for dealing with errors. Between them, the two give you a simple way to describe what happens after a Promise has been issued.
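For example, we could chain an error handler onto the same file-reading operation (a minimal sketch; the messages are just illustrative):

fs.readFile(filePath, 'utf-8')
  .then(fileData => {
    console.log('Success... file content:');
    console.log(fileData);
  })
  .catch(error => {
    console.error(`Could not read the file: ${error.message}`);
  });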

With async/await, we use a synchronous (or linear) syntax to describe the same kind of thing (note that we are ignoring error handling in this code):


// Reuses the fs/promises import and filePath from above.
async function readFileAsync() {
  const fileData = await fs.readFile(filePath, 'utf-8');
  console.log('Success... file content:');
  console.log(fileData);
}

readFileAsync();


This code does the same thing as the version using the Promise directly. We mark the readFileAsync function with the async keyword, which lets the interpreter know we will perform an asynchronous operation inside it. Then, when we call out to the actual operation (fs.readFile), we prefix the call with await.

Learn more about Promises and async/await in JavaScript.

The event loop

To effectively use Promise objects and async/await in your Node operations, you’ll also need to understand the Node event loop. When we say Node is single-threaded, the event loop is the reason why. If these concepts are new to you, don’t worry. Just make a mental note that inside Node (and similar platforms) your JavaScript code never runs on more than one thread at a time. This is different from concurrent platforms like Java or Go, where the platform can make direct use of operating system threads.

This is not a limitation you’ll usually have to wrestle with until you are dealing with scale. For now, it is just important to know how Node works with asynchronous operations.

Let’s say you initiate an asynchronous operation, for instance with a fetch() call:


const response = await fetch("https://anapioficeandfire.com/api/characters/232");

The operation is submitted to Node, which schedules the request with the operating system’s networking service. At that point, the event loop is free to do other things, including initiating other network requests that can proceed in parallel. When one of those in-flight requests is complete, the event loop is notified. When the event loop has time, it proceeds with whatever handlers you defined. Take this one, for example:


const characterData = response.json().then( /* do stuff */ );

When that handler finishes, the asynchronous operation is complete. This is the general flow for all asynchronous programming in Node.
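To see the event loop juggling several in-flight operations at once, here is a minimal sketch reusing the example API above (the specific character IDs, and the name field on the response, are assumptions for illustration):

const ids = [232, 233, 234];

// Start all three requests without awaiting; they proceed concurrently
// while the event loop stays free for other work.
const requests = ids.map(id =>
  fetch(`https://anapioficeandfire.com/api/characters/${id}`)
    .then(response => response.json())
);

// Promise.all resolves once every response has arrived.
const characters = await Promise.all(requests);
characters.forEach(character => console.log(character.name));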

Streams

Streams are how Node models flows of data in networking, the filesystem, and other channels. A stream represents a series of "things," like bytes coming in from or going out to a file. When something important occurs within the series, an event is raised and the handlers your application has registered are called.

If this sounds similar to our discussion of asynchronous programming, it’s because the two concepts are closely related. Streams are the code mechanism Node provides for dealing with data flows, which may be large, in a non-blocking fashion.

For example, Node’s fs (filesystem) standard library lets us open a file as you’ve just seen, but it also lets us consume events from the file so we can accept the data in chunks and be notified of the end of the file and any errors. This is critical when dealing with large files (where otherwise we’d load the entire dataset into memory, no bueno):


import fs from 'fs';

const filePath = 'hugefile.txt';
const readableStream = fs.createReadStream(filePath, { encoding: 'utf8' });

readableStream.on('data', (chunk) => {
  console.log(`Received ${chunk.length} characters of data.`);
});

readableStream.on('end', () => {
  console.log('Finished reading the file.');
});

readableStream.on('error', (error) => {
  console.error(`An error occurred: ${error.message}`);
});

The .on() method lets you name the event you are watching for and supply the function that does the work.
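Streams also compose: a readable stream can be piped into a writable one, so data flows through in chunks from source to destination. Here is a minimal sketch (the destination hugefile-copy.txt is just a placeholder):

import fs from 'fs';

// Copy a large file chunk by chunk; neither file is ever fully in memory.
fs.createReadStream('hugefile.txt')
  .pipe(fs.createWriteStream('hugefile-copy.txt'))
  .on('finish', () => console.log('Copy complete.'));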

Modules

In JavaScript, you use modules to organize code into manageable chunks. It’s common to import one or more modules into your code to make use of functionality defined by others, like the filesystem library:


import fs from 'fs/promises';

Or, you might import code from the project you are working on, from a local file:


import utils from './utils.mjs';

You can also import specific parts of the module if the module exports them by name:


import { specificUtil } from './utils.mjs'; 

Exporting a module is done like this:


export const utilA = () => { /* ... */ }; 
export function utilB() { /* ... */ }
const defaultValue = { /* ... */ }; 
export default defaultValue;

Notice that the default export is what you get when you do a straight import, whereas the named ones map to the named imports. Also notice that you can define exports with two syntaxes, as seen with utilA and utilB. You might also notice the use of the .mjs extension on the module files here. This is necessary unless you set "type": "module" in your package.json file. We’ll talk more about that in the NPM section below.
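To make the mapping concrete, here is how you could import both the default and the named exports from the utils module defined above:

import defaultValue, { utilA, utilB } from './utils.mjs';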

Important note: What we have looked at so far is ESM (ECMAScript module) syntax, which Node and JavaScript in general have standardized on. But there is another syntax, CommonJS, that much of Node has used for years. You will often see it in Node code:


// Import:
const fs = require('fs');
const myUtil = require('./utils.js');

// Export:
function utilityOne() { /* ... */ }
module.exports = { utilityOne };
// ...or, equivalently, attach to the exports object directly:
// exports.utilityOne = utilityOne;

Conceptually, the most important thing to take from this section is that breaking software up into components is absolutely key to managing complexity. Modules are the high-level mechanism for doing that in Node.

Closures and scope

When you define and execute a function in JavaScript and Node, the variables that exist around it are available to that function. This simple idea, known as a closure, gives us some serious power. The "scope" of the variables defined in the surrounding context is accessible to the inside of the nested function.

A standard function has access to those variables in its parent context:


let album = "Abbey Road";
let rank = function() {
  console.log(`${album} is very good!`); // output: "Abbey Road is very good!"
}
rank();

But in a closure, the inner function has access to the scope, even after the parent has completed:


function setupRanker() {
  let album = "Abbey Road";
  let rank = function() {
    console.log(`${album} is very good!`);
  };
  return rank; // Return the inner function
}

const rankerFunction = setupRanker();
rankerFunction(); // Still logs: "Abbey Road is very good!"

This is sometimes called “lexical scope” to highlight that these variables are available to the function, even after the runtime execution of the surrounding context has finished.

This is a toy example, but it makes clear that the rank() function has access to the album variable defined in the surrounding context. That is the core concept, and keeping it clearly in mind will help you navigate the more advanced ramifications as you encounter them.

Closures are a core feature of JavaScript and central to Node as well. They are also a cornerstone of functional programming.
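One practical payoff is that a closure lets a function carry private state between calls. Here is a minimal sketch of the classic counter factory (makeCounter is a hypothetical helper for illustration):

function makeCounter() {
  let count = 0; // private to each counter, thanks to the closure
  return () => ++count;
}

const nextTrackNumber = makeCounter();
console.log(nextTrackNumber()); // 1
console.log(nextTrackNumber()); // 2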

Classes

Modern JavaScript gives us a strong class-based syntax for defining objects. Understanding the concept of objects, and how to create and use them, is essential. Behind the scenes, JavaScript still uses what’s called prototypes, but classes are the more common approach. This is no surprise, given the widespread use of classes in many languages and their simplicity.

Although using objects leads us toward the more advanced realms of object-oriented programming, we don’t need to go there just to use objects and classes. A class simply defines a particular type of object:


// Definition:
class Album {
  #tracklist;
  constructor(title) {
    this.title = title;       
    this.#tracklist = [];
  }

  addTrack(trackName) {
    this.#tracklist.push(trackName);
  }
  displayAlbumInfo() {
    console.log(`Album: ${this.title}`);
    console.log(`Tracks: ${this.#tracklist.length}`);
  }

  static getGenreCategory() {
    return "Recorded Music";
  }
}

// Usage:
const album1 = new Album("Abbey Road");
const album2 = new Album("The Joshua Tree");

album1.addTrack("Carry That Weight");
album2.addTrack("In God's Country");

album1.displayAlbumInfo();
album2.displayAlbumInfo();

This example gives us an Album class for modeling musical albums. It’s pretty clear what it does, in that it holds data like a string title and an array tracklist, and each object gets its own copies of those bits. (Notice that the hash syntax, as in #tracklist, gives us private fields.) The static method, getGenreCategory(), belongs to the class itself rather than to any instance, so you call it as Album.getGenreCategory().

Conceptually, you just need to understand how much power this simple idea gives us for modeling interacting sets of data. The basic mechanism of "encapsulating" (that is, containing) related information and "behavior" (functions) together is a must-know for Node developers.
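Classes can also extend one another, which is how JavaScript expresses inheritance. Here is a minimal sketch building on the Album class above (LiveAlbum and its data are hypothetical):

class LiveAlbum extends Album {
  constructor(title, venue) {
    super(title); // run Album's constructor first
    this.venue = venue;
  }

  displayAlbumInfo() {
    super.displayAlbumInfo(); // reuse the parent's output...
    console.log(`Recorded at: ${this.venue}`); // ...then add our own
  }
}

const album3 = new LiveAlbum("Live at Leeds", "Leeds University");
album3.addTrack("Young Man Blues");
album3.displayAlbumInfo();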

NPM and the Node ecosystem

Most applications rely on external libraries to get their work done, and Node has a powerful package manager called NPM (Node package manager) for specifying and using them. A massive ecosystem has grown up around NPM.

NPM is built around a package.json file that sits at the root of your project. This is a JSON file with a variety of properties; the main one is "dependencies." NPM installs the dependencies you list there into the /node_modules directory of your app. When the app is running, it has access to those modules as imports.

That is the basic concept, but there are other important features, like the ability to define commands for your program in the scripts property, and setting the type property to "module" so your .js files default to ESM without needing the .mjs extension. Setting it is common practice even if your project isn’t itself intended to be consumed as a module. If you don’t set this property, Node will assume your .js files are CommonJS modules.
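Here is a minimal package.json pulling those pieces together (the name, script, and dependency version are just placeholders):

{
  "name": "my-node-app",
  "version": "1.0.0",
  "type": "module",
  "scripts": {
    "start": "node index.js"
  },
  "dependencies": {
    "express": "^4.18.0"
  }
}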

NPM is a pillar of Node development. There are other, newer tools (like yarn and pnpm) but NPM is the standard. Conceptually, the insides of your Node programs are JavaScript, but the outsides are NPM.

JSON

It’s hard to imagine JavaScript and Node without JSON, which stands for JavaScript Object Notation. In fact, it’s hard to imagine the modern programming universe without it. It’s a simple yet incredibly versatile means for describing and sharing data between applications.

The basics are simple:


{
  "artist": "Rembrandt",
  "work": "Self Portrait",
  "url": "https://id.rijksmuseum.nl/200107952"
}

There’s nothing much to see here: just a pair of curly braces, colons, and name/value pairs. You can also nest arrays and other JSON objects inside to build complete object graphs. Beyond that is a range of techniques and tools for manipulating JSON, starting with the built-in JSON.stringify and JSON.parse, which turn objects into JSON strings and back again.
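The round trip looks like this, reusing the example above:

const painting = {
  artist: 'Rembrandt',
  work: 'Self Portrait',
  url: 'https://id.rijksmuseum.nl/200107952'
};

const serialized = JSON.stringify(painting); // object -> JSON string
const restored = JSON.parse(serialized);     // JSON string -> object

console.log(serialized);
console.log(restored.artist); // "Rembrandt"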

JSON is more or less the way to send data over the wire, from client to server and elsewhere. It is also now common in datastores, especially NoSQL databases like MongoDB. Getting comfortable with JSON will make programming in Node much smoother.

Error handling

Good error handling is a key concept in Node. You’ll encounter two kinds of errors in Node: the standard runtime kind and the asynchronous kind.

Whereas async errors on Promises are handled with the .catch() callback, normal runtime errors are handled with try/catch blocks:


try {
  /* do something risky */
} catch (error) {
  /* do something with the error */
}

With a Promise, if an error occurs during the asynchronous operation, the .catch() callback executes and receives the error object.

When using async/await, you use the standard try/catch blocks.
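Revisiting the file-reading example from earlier, the two styles look something like this (assuming the fs/promises import and filePath from that section):

// Promise style: errors land in .catch().
fs.readFile(filePath, 'utf-8')
  .then(fileData => console.log(fileData))
  .catch(error => console.error(`Read failed: ${error.message}`));

// async/await style: errors land in the catch block.
async function readFileSafely() {
  try {
    const fileData = await fs.readFile(filePath, 'utf-8');
    console.log(fileData);
  } catch (error) {
    console.error(`Read failed: ${error.message}`);
  }
}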

Conceptually, the important thing to bear in mind is that things will go wrong, and catching errors is the main way to address that. We want to recover and continue the operation if possible. Otherwise, we want to make the error condition as painless as we can for the end user. We also want to log the error if possible. If we are interacting with an external service or client, we might be able to return an error code.

The best error handling is dictated by the circumstances. The main thing is to keep errors in the back of your mind. It’s the error conditions we don’t think about that usually get us, more than the ones we think about and get wrong. Also, beware of "swallowing" errors; that is, defining a catch block that does nothing at all.

Keep it flexible

At the beginning of this article, I mentioned that JavaScript can handle a variety of programming styles: functional, object-oriented, reactive, imperative, and procedural. Here we have a vast range of powerful concepts, some of the most important in programming. In the next paragraph, we’ll cover each of them in detail …

Just kidding! There’s no way to do justice to all these programming styles, even in a book-length treatment. The main point is that JavaScript is flexible enough to let you embrace them all. Use what you know and be open to new ideas.

You’ll often be faced with work that must be done, and there is always more than one way to do it. It’s normal to worry that there is a cooler, faster, or more efficient way to get the job done. You have to find a balance between doing the work now and researching newer technologies and best practices for later.

As you add to your understanding, you’ll naturally find tasks and areas where certain paradigms fit. Someone who truly loves programming will not indulge (much) in the "my paradigm/tool/framework is better than yours" game. A better approach, one that will serve you throughout your career, is: "This is the best tool I know for the job right now, and I’m open to improving."

Just like any other worthy discipline—say music or martial arts—the real master knows there’s always more to discover. In this case, the practice just happens to be building server-side JavaScript apps with Node.
