Build a Basic CRUD App with Angular and Node

This article was originally published on the Okta developer blog. Thank you for supporting the partners who make SitePoint possible.

In recent years, single page applications (SPAs) have become more and more popular. A SPA is a website that consists of just one page. That lone page acts as a container for a JavaScript application. The JavaScript is responsible for obtaining the content and rendering it within the container. The content is typically obtained from a web service and RESTful APIs have become the go-to choice in many situations. The part of the application making up the SPA is commonly known as the client or front-end, while the part responsible for the REST API is known as the server or back-end. In this tutorial, you will be developing a simple Angular single page app with a REST backend, based on Node and Express.

You’ll be using Angular as it follows the MVC pattern and cleanly separates the View from the Models. It is straightforward to create HTML templates that are dynamically filled with data and automatically updated whenever the data changes. I have come to love this framework because it is very powerful, has a huge community and excellent documentation.

For the server, you will be using Node with Express. Express is a framework that makes it easy to create REST APIs by allowing you to define code that runs for different requests on the server. Additional services can be plugged in globally, or depending on the request. There are a number of frameworks that build on top of Express and automate the task of turning your database models into an API. This tutorial will not make use of any of these in order to keep things focused.

Angular encourages the use of TypeScript. TypeScript adds typing information to JavaScript and, in my opinion, is the future of developing large scale applications in JavaScript. For this reason, you will be developing both client and server using TypeScript.

Here are the libraries you’ll be using for the client and the server:

  • Angular: The framework used to build the client application
  • Okta for Authorization: A plugin that manages single sign-on authorization using Okta, both on the client and the server
  • Angular Material: An Angular plugin that provides out-of-the-box Material Design
  • Node: The actual server running the JavaScript code
  • Express: A routing library for responding to server requests and building REST APIs
  • TypeORM: A database ORM library for TypeScript

Start Your Basic Angular Client Application

Let’s get started by implementing a basic client using Angular. The goal is to develop a product catalog which lets you manage products, their prices, and their stock levels. At the end of this section, you will have a simple application consisting of a top bar and two views, Home and Products. The Products view will not yet have any content and nothing will be password protected. This will be covered in the following sections.

To start you will need to install Angular. I will assume that you already have Node installed on your system and you can use the npm command. Type the following command into a terminal.

npm install -g @angular/cli@7.0.2

Depending on your system, you might need to run this command using sudo because it will install the package globally. The @angular/cli package provides the ng command that is used to manage Angular applications. Once installed, go to a directory of your choice and create your first Angular application using the following command.

ng new MyAngularClient

Using Angular 7, this will prompt you with two queries. The first asks you if you want to include routing. Answer yes to this. The second query relates to the type of style sheets you want to use. Leave this at the default CSS.

ng new will create a new directory called MyAngularClient and populate it with an application skeleton. Let’s take a bit of time to look at some of the files that the previous command created. In the src directory of the app, you will find a file called index.html that is the main page of the application. It doesn’t contain much and simply plays the role of a container. You will also see a styles.css file. This contains the global style sheet that is applied throughout the application. If you browse through the folders, you might notice a directory src/app containing six files.


These files define the main application component that will be inserted into the index.html. Here is a short description of each of the files:

  • app.component.css file contains the style sheets of the main app component. Styles can be defined locally for each component
  • app.component.html contains the HTML template of the component
  • app.component.ts file contains the code controlling the view
  • app.module.ts defines which modules your app will use
  • app-routing.module.ts is set up to define the routes for your application
  • app.component.spec.ts contains a skeleton for unit testing the app component

I will not be covering testing in this tutorial, but in real-life applications, you should make use of this feature. Before you can get started, you will need to install a few more packages. These will help you to quickly create a nicely designed, responsive layout. Navigate to the base directory of the client, MyAngularClient, and type the following command.

npm i @angular/material@7.0.2 @angular/cdk@7.0.2 @angular/animations@7.0.1 @angular/flex-layout@7.0.0-beta.19

The @angular/material and @angular/cdk libraries provide components based on Google’s Material Design, @angular/animations is used to provide smooth transitions, and @angular/flex-layout gives you the tools to make your design responsive.

Next, create the HTML template for the app component. Open src/app/app.component.html and replace the content with the following.

<mat-toolbar color="primary" class="expanded-toolbar">
  <button mat-button routerLink="/">{{title}}</button>

  <div fxLayout="row" fxShow="false" fxShow.gt-sm>
    <button mat-button routerLink="/"><mat-icon>home</mat-icon></button>
    <button mat-button routerLink="/products">Products</button>
    <button mat-button *ngIf="!isAuthenticated" (click)="login()"> Login </button>
    <button mat-button *ngIf="isAuthenticated" (click)="logout()"> Logout </button>
  </div>
  <button mat-button [mat-menu-trigger-for]="menu" fxHide="false" fxHide.gt-sm>
    <mat-icon>menu</mat-icon>
  </button>
</mat-toolbar>
<mat-menu x-position="before" #menu="matMenu">
  <button mat-menu-item routerLink="/"><mat-icon>home</mat-icon> Home</button>
  <button mat-menu-item routerLink="/products">Products</button>
  <button mat-menu-item *ngIf="!isAuthenticated" (click)="login()"> Login </button>
  <button mat-menu-item *ngIf="isAuthenticated" (click)="logout()"> Logout </button>
</mat-menu>
<router-outlet></router-outlet>

The mat-toolbar contains the material design toolbar, whereas router-outlet is the container that will be filled by the router. The app.component.ts file should be edited to contain the following.

import { Component } from '@angular/core';

@Component({
  selector: 'app-root',
  templateUrl: './app.component.html',
  styleUrls: ['./app.component.css']
})
export class AppComponent {
  public title = 'My Angular App';
  public isAuthenticated: boolean;

  constructor() {
    this.isAuthenticated = false;
  }

  login() {
  }

  logout() {
  }
}
This is the controller for the app component. You can see that it contains a property called isAuthenticated together with two methods login and logout. At the moment these don’t do anything. They will be implemented in the next section which covers user authentication with Okta. Now define all the modules you will be using. Replace the contents of app.module.ts with the code below:

import { BrowserModule } from '@angular/platform-browser';
import { NgModule } from '@angular/core';
import { BrowserAnimationsModule } from '@angular/platform-browser/animations';
import { FlexLayoutModule } from '@angular/flex-layout';
import {
  MatButtonModule,
  MatIconModule,
  MatMenuModule,
  MatToolbarModule
} from '@angular/material';
import { HttpClientModule } from '@angular/common/http';
import { FormsModule } from '@angular/forms';

import { AppRoutingModule } from './app-routing.module';
import { AppComponent } from './app.component';

@NgModule({
  declarations: [AppComponent],
  imports: [
    BrowserModule,
    BrowserAnimationsModule,
    HttpClientModule,
    FormsModule,
    FlexLayoutModule,
    MatToolbarModule,
    MatMenuModule,
    MatIconModule,
    MatButtonModule,
    AppRoutingModule
  ],
  providers: [],
  bootstrap: [AppComponent]
})
export class AppModule { }

Notice all the material design modules. The @angular/material library requires you to import a module for each type of component you wish to use in your app. Starting with Angular 7, the default application skeleton contains a separate file called app-routing.module.ts. Edit this to declare the following routes.

import { NgModule } from '@angular/core';
import { Routes, RouterModule } from '@angular/router';
import { ProductsComponent } from './products/products.component';
import { HomeComponent } from './home/home.component';

const routes: Routes = [
  {
    path: '',
    component: HomeComponent
  },
  {
    path: 'products',
    component: ProductsComponent
  }
];

@NgModule({
  imports: [RouterModule.forRoot(routes)],
  exports: [RouterModule]
})
export class AppRoutingModule { }

This defines two routes corresponding to the root path and to the products path. It also attaches the HomeComponent and the ProductsComponent to these routes. Create these components now. In the base directory of the Angular client, type the following commands.

ng generate component Products
ng generate component Home

This creates html, css, ts, and spec.ts files for each component. It also updates app.module.ts to declare the new components. Open up home.component.html in the src/app/home directory and paste the following content.

<div class="hero">
  <h1>Hello World</h1>
  <p class="lead">This is the homepage of your Angular app</p>
</div>
Include some styling in the home.component.css file too.

.hero {
  text-align: center;
  height: 90vh;
  display: flex;
  flex-direction: column;
  justify-content: center;
  font-family: sans-serif;
}
Leave the ProductsComponent empty for now. This will be implemented once you have created the back-end REST server and are able to fill it with some data. To make everything look beautiful, only two little tasks remain. Copy the following styles into src/styles.css.

@import "~@angular/material/prebuilt-themes/deeppurple-amber.css";

body {
  margin: 0;
  font-family: sans-serif;
}

.expanded-toolbar {
  justify-content: space-between;
}

h1 {
  text-align: center;
}
Finally, in order to render the Material Design Icons, add one line inside the <head> tags of the index.html file.

<link href="https://fonts.googleapis.com/icon?family=Material+Icons" rel="stylesheet">

You are now ready to fire up the Angular server and see what you have achieved so far. In the base directory of the client app, type the following command.

ng serve

Then open your browser and navigate to http://localhost:4200.

Add Authentication to Your Node + Angular App

If you have ever developed web applications from scratch, you will know how much work is involved just to allow users to register, verify, log on, and log out of your application. Using Okta, this process can be greatly simplified. To start off, you will need a developer account with Okta.

In your browser, navigate to https://developer.okta.com, click on Create Free Account, and enter your details.

Start building on Okta

Once you are done you will be taken to your developer dashboard. Click on the Add Application button to create a new application.

Add Application

Start by creating a new single page application. Choose Single Page App and click Next.

Create new Single Page App

On the next page, you will need to edit the default settings. Make sure that the port number is 4200. This is the default port for Angular applications.

My Angular App

That’s it. You should now see a Client ID which you will need to paste into your TypeScript code.

To implement authentication into the client, install the Okta library for Angular.

npm install @okta/okta-angular@1.0.7 --save-exact

In app.module.ts import the OktaAuthModule.

import { OktaAuthModule } from '@okta/okta-angular';

In the list of imports of the app module, add:

OktaAuthModule.initAuth({
  issuer: 'https://{yourOktaDomain}/oauth2/default',
  redirectUri: 'http://localhost:4200/implicit/callback',
  clientId: '{YourClientId}'
})

Here yourOktaDomain should be replaced by the development domain you see in your browser when you navigate to your Okta dashboard. YourClientId has to be replaced by the client ID that you obtained when registering your application. The code above makes the Okta Authentication Module available in your application. Use it in app.component.ts, and import the service.

import { OktaAuthService } from '@okta/okta-angular';

Modify the constructor to inject the service and subscribe to it.

constructor(public oktaAuth: OktaAuthService) {
  this.oktaAuth.$authenticationState.subscribe(
    (isAuthenticated: boolean) => this.isAuthenticated = isAuthenticated
  );
}

Now, any changes in the authentication status will be reflected in the isAuthenticated property. You will still need to initialize it when the component is loaded. Create an ngOnInit method and add implements OnInit to your class definition.

import { Component, OnInit } from '@angular/core';

export class AppComponent implements OnInit {
  async ngOnInit() {
    this.isAuthenticated = await this.oktaAuth.isAuthenticated();
  }
}

Finally, implement the login and logout method to react to the user interface and log the user in or out.

login() {
  this.oktaAuth.loginRedirect();
}

logout() {
  this.oktaAuth.logout();
}

In the routing module, you need to register the route that will be used for the login request. Open app-routing.module.ts and import OktaCallbackComponent and OktaAuthGuard.

import { OktaCallbackComponent, OktaAuthGuard } from '@okta/okta-angular';

Add another route to the routes array.

{
  path: 'implicit/callback',
  component: OktaCallbackComponent
}

This will allow the user to log in using the Login button. To protect the Products route from unauthorized access, add the following line to the products route.

{
  path: 'products',
  component: ProductsComponent,
  canActivate: [OktaAuthGuard]
}

That’s all there is to it. Now, when a user tries to access the Products view, they will be redirected to the Okta login page. Once logged on, the user will be redirected back to the Products view.

Implement a Node REST API

The next step is to implement a server based on Node and Express that will store product information. This will use a number of smaller libraries to make your life easier. To develop in TypeScript, you’ll need typescript and tsc. For the database abstraction layer, you will be using TypeORM. This is a convenient library that injects behavior into TypeScript classes and turns them into database models. Create a new directory to contain your server application, then run the following command in it.
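To give a taste of what that looks like, here is a minimal sketch of how a product could be modeled with TypeORM. The Product class and its fields are assumptions for this tutorial’s catalog, not code from the finished server:

import { Entity, PrimaryGeneratedColumn, Column } from 'typeorm';

// TypeORM's decorators turn this plain TypeScript class into a database model.
@Entity()
export class Product {
  @PrimaryGeneratedColumn()
  id: number;

  @Column()
  name: string;

  @Column()
  price: number;

  @Column()
  stock: number;
}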

npm init

Answer all the questions, then run:

npm install --save-exact express@4.16.4 @types/express@4.16.0 @okta/jwt-verifier@0.0.14 express-bearer-token@2.2.0 tsc@1.20150623.0 typescript@3.1.3 typeorm@0.2.8 sqlite3@4.0.3 cors@2.8.4 @types/cors@2.8.4

I will not cover all of these libraries in detail, but you will see that @okta/jwt-verifier is used to verify JSON Web Tokens and authenticate requests with them.
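To sketch how these pieces typically fit together: express-bearer-token extracts the token from the Authorization header, and @okta/jwt-verifier validates it. The middleware below illustrates that pattern; the oktaAuth name and the exact wiring are assumptions, not the article’s code:

import * as express from 'express';
const OktaJwtVerifier = require('@okta/jwt-verifier');

const oktaJwtVerifier = new OktaJwtVerifier({
  issuer: 'https://{yourOktaDomain}/oauth2/default',
  clientId: '{YourClientId}'
});

// Express middleware that rejects requests without a valid Okta access token.
async function oktaAuth(req: express.Request, res: express.Response, next: express.NextFunction) {
  try {
    const token = (req as any).token; // populated by express-bearer-token
    if (!token) {
      return res.status(401).send('Not Authorized');
    }
    await oktaJwtVerifier.verifyAccessToken(token);
    next();
  } catch (err) {
    return res.status(401).send(err.message);
  }
}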

In order to make TypeScript work, create a file tsconfig.json and paste in the following content.
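A configuration along the following lines works for a Node project using TypeORM. The exact options here are an assumption, so adapt them to your layout; the decorator settings, however, are required by TypeORM:

{
  "compilerOptions": {
    "target": "es6",
    "module": "commonjs",
    "moduleResolution": "node",
    "outDir": "dist",
    "experimentalDecorators": true,
    "emitDecoratorMetadata": true,
    "sourceMap": true
  },
  "include": ["src/**/*.ts"]
}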


Source: SitePoint

Strategies For Headless Projects With Structured Content Management Systems

Knut Melvær


This is the guide I wish I had during the last couple of years of running projects with headless Content Management Systems (CMSs). I’ve been a developer, a user-experience and technology consultant, a project manager, an information architect, and an author. The different hats have made me realize that even though we’ve had so-called “headless” CMSs for a while now, there’s still a way to go in thinking about how to use them best.

We are now at a place where many of us rely on JavaScript frameworks for frontend work, using design systems made of components and compositions, rather than just implementing flat page layouts. There’s a lot of traction towards the JAMstacks and isomorphic/universal apps that run both on the server and the client. The final piece of the puzzle then is how we manage all the content.

Traditional CMSs are adding APIs to serve content through network requests and the JSON format. In addition, “headless” CMSs have emerged to serve content exclusively through APIs. My argument in this article, though, is that we should spend less time talking about “headless” and more about “structured content”, because that is the essential quality of these systems. These systems have lots of implications for our craft, and we still have a way to go in terms of figuring out good patterns for how we should deal with these technologies.

Coming to technology consulting from a background in the humanities, I have learned a lot about how to organize and work with web projects that take a content-centric approach, both with the newer API-based as well as the traditional CMSs. I have come to appreciate how getting started early with actual live content from a CMS, in a cross-disciplinary setting, not only makes it possible to uncover complexities at an earlier stage but also lends agency to everyone involved, and gives opportunities to reflect on the challenges and possibilities of technology and design in their broadest sense.

Headless WordPress

Everyone knows that if a website is slow, users will abandon it. Let’s take a closer look at the basics of creating a decoupled WordPress. Read article →

In this article, I’ll suggest some overarching strategies, with some concrete, real-world examples on how to think about working with structured content. At the time of writing, I have just started working for a SaaS company that provides such a content management service, for hosting content delivered over APIs. I will make references to it, both because of my past experience with it in projects I was involved in as a consultant, but also because I think it aptly illustrates the points I want to make. So consider this a disclaimer of sorts.

That being said, I have been thinking about writing this article for a couple of years, and I have strived to make it applicable to whatever platform you choose to go with. So without further ado, let’s jump twenty years back in time in order to understand a bit more where we are today.

First Moves With Web Standards

In the early 2000s, the Web Standards movement inspired the field to change its ways of working. From a “layout-first” approach, it directed our attention towards how content on a page should be marked up semantically using HTML: a website’s menu isn’t a <table>, it’s a <nav>; a heading is not a <b>, it’s an <h1>. It was a significant step towards thinking about the different roles web content plays in order to help users find, identify, and take it in.

The Web Standards movement introduced the argument that semantic markup improved accessibility, which also improved a site’s ranking in Google’s search results. It also marked a shift in how we thought about web content. Your website was no longer the only place your content was represented. You also had to think about how your web pages were presented in other visual contexts, like in search results or screen readers. This was later fueled by social media and embedded previews of shared links. The mindset shifted from how the content should look, to what it should mean. This also happens to be the key to working with structured content.

With the adoption of pocket-size devices connected to the Internet, the web suddenly got a serious contender in apps. The competition, however, was mostly for the eyeballs of the end user. Many organizations still needed to distribute information about their products and services in both their apps and their different web presences. Concurrently, the web matured, and JavaScript and AJAX made it easier to connect different sources of content through APIs. Today, we have GraphQL and tooling that make content fetching and state management simpler. And so the bits of the technological puzzle begin to fall into place.

“Create Once, Publish Everywhere”

Though it’s mostly described as a “technological shift”, the embedding of content in JSON payloads (traveling along HTTP tubes) has an outsized impact on how we think about digital content and the workflows surrounding it. In some ways, it already has. Almost ten years ago, National Public Radio’s (NPR) Daniel Jacobson guest blogged at ProgrammableWeb about their approach, summed up in the acronym COPE, which stands for “Create Once, Publish Everywhere”. In the article, he introduces a content management system providing content to multiple digital interfaces through an API — not through an HTML rendering machine — as most CMSs at the time (and arguably now) did.

Illustration of NPR’s COPE system, originally published Oct 13, 2009: from the left, a data entry layer, a normalized data management layer, a flattened data management layer, then layers for APIs, filtering and rights, and finally the presentation layer.

NPR’s COPE “data management layer” is what would become the notion of “a headless CMS”. In the early days of COPE, it was achieved by structuring the content in XML. Today, JSON has become the dominant data format for transferring data over APIs, including internet of things devices, and other systems outside the web. If you want to exchange content with chatbots, voice interfaces, and even software for visual prototyping, you very often talk HTTP with a JSON accent.

“Uncoining” The Term “Headless CMS”

According to Google Trends, searches for “headless CMS” gained in popularity as late as 2015, i.e. six years after NPR’s COPE article. The term “headless” (at least in relation to digital technology and not late 18th-century French aristocracy), has been used a good while longer to talk about systems that run without a graphical user interface.

Note: One could argue that a command line interface is indeed “graphical” such as software on servers or testing environments (but let’s save that for another article).

I’m of two minds calling these new CMSs “headless”. We could as well call them “polycephalic” — that which has many heads. They are the Hydras and Cerberuses of CMSs. “Headless” also defines these systems by the capability they lack (i.e., a template engine for rendering web pages), instead of defining them by their true strength: making it possible to structure content without the constraints of the web. That being said, as of today, many of the solutions in this category could also be called “Nearly Headless Nick”, because the editing interface is still tightly coupled to the system. Their “headlessness” arises from their lack of a templating engine, that is, the machinery producing markup from content.

Note: I would almost definitely use a CMS called “Mimsy-Porpington” (known from the Harry Potter universe) though.

Instead, they make content available through an API, hence giving you more flexibility for how, what, and where you want to display and use this content. This makes them perfect companions to popular JavaScript frontend frameworks such as React, Angular, and Vue. And despite the claim of being able to deliver content to “websites, apps, and devices”, most of them are still limited by how web content works. This is most noticeable in the way most handle rich text — storing it either as HTML or Markdown.

Traditional CMSs have also started adding somewhat generic APIs in addition to their template rendering systems and call this “decoupled” as a way to distinguish themselves from their fresh competitors. “All this, and APIs, too!” is the claim. Some of these CMSs are also pretty agnostic when it comes to content modeling. For example, Craft CMS makes almost no assumptions about your content model when you first install it. WordPress is also moving towards using APIs for content delivery. I suspect the gap between the old players in the CMS field and the new will get narrower as we go along.

Nonetheless, putting content management behind APIs (instead of an HTML renderer) is an important step to more sophisticated ways of working in an age where an organization’s text, images, videos, and media are digitized and exposed to internal and external users and customers. It’s time, though, to move away from defining these systems by the frontend rendering capabilities they lack, towards what they really can do for us: give us a way to work with structured content. So, should we be calling them “Structured Content Management Systems”? As in, “No Bob, this isn’t your usual CMS. This is an SCMS, trust me, it’s going to be a thing.”

It’s Not About The Heads, It’s About Structured Content

The most radical change that Structured Content Management Systems (SCMSs) impose is a move away from arranging content according to a page hierarchy, towards being free to structure content for whatever purpose you see fit. Avoiding duplicate content is a clear advantage, because it increases reliability and decreases administrative burden (you don’t have to cope with duplicated content across multiple channels). In other words: Create Once, Publish Everywhere. If you just have to update your product description once, in one system, and it updates wherever your product is exposed to the user, that’s clearly an advantage.

While SCMS vendors frequently use “your website and an app” to justify thinking differently about page structure, you don’t have to cross the river to draw benefits from structured content. With the popularity of JavaScript frameworks, it’s more and more common to build websites as a composition of individual components that can be “filled” with different content depending on state and context. You may have a product card that appears in many different contexts throughout your web application. We’re seeing that modern web development is moving away from setting documents and pages towards composing components according to a mixture of user input, algorithms, and customization.

These trends in how design systems are made, and in how we are encouraged to work in teams through processes of testing, learning, and iteration, make the field of content management ripe for some new ways of thinking. Some patterns have emerged, but we still have a long way to go. Therefore, based on my experience from working in teams and projects that have put content front and center, and now as part of a team that builds a service for it (and I urge you to be aware of any bias here), I want to put forth some strategies that I believe can be helpful and create points for further discussion.

1. Approach Content In Multi-Disciplinary Teams

I believe it is a thing of the past that a graphic designer can hand over static, pixel-perfect pages to a frontend developer whose responsibility is to “implement” the design. We now make design systems consisting of smaller components, laid out in compositions that come with multiple possible states out of the box. More often than not, these components have to be resilient to user-generated input, which means that the sooner you introduce live content into the process, the better. A frontend developer’s responsibility isn’t to reproduce a graphic designer’s vision; it’s to maneuver a complex field of how browsers render HTML, CSS, and JavaScript, making sure that the user interfaces are responsive, accessible, and performant.

When working as a technology consultant at Netlife (a consultancy specialized in user experience), I saw great steps being made towards collaboration between developers, designers, and user researchers. Even though our content editors were always involved in the project from the get-go, their contributions didn’t enter the design workflow, mainly because of technical friction.

The bottleneck was often a legacy CMS we couldn’t touch, or the time it took to build the content structure because it was dependent on the design layout. This often resulted in work being doubled: we made an HTML prototype, often based on content parsed from Markdown files, which had to be re-implemented in the CMS stack once the user testing was done and everyone was pixel-perfect happy. This was often an expensive process, as limitations in the CMS were discovered late. It also created pressure on all parties to “get it right the first time” and left less space for the kind of experimentation you would want in a design project.

Multi-Disciplinary Work Requires Nimble Systems

Moving to an SCMS in which it took minutes to code up a content model (where fields and API were ready instantly) turned our process upside down — and for the better. I remember sitting with the content editor of the new website in the project’s first days, talking through how they worked and would like to work with their content. Rather quickly, we translated our conclusions into simple JavaScript objects that were instantly transformed into an editing environment in the browser, figuring out helpful titles and descriptions for the fields as we went. We talked about how they wanted text snippets they could reuse across different pages and contexts, which they called “nuggets” in-house, and which we then created then and there.

Allowing for this kind of exploration early in the project development — a content editor and a developer talking together while the interface was being made in front of us — felt powerful. We knew that we could continue designing the frontend in React while she and her colleagues began working with the content, without worrying about painting ourselves into a corner, like we often did with CMSs in which the structure was tightly coupled with how you had to code up the frontend part of it.

Example of a custom editor environment in Sanity, with a publication document open and the style guide carefully and contextually integrated with the fields.

A Content System Should Allow For Experimentation And Iteration

Creative redesign projects aside, a system for structured content should also allow you to continue improving, testing, and iterating on your content as part of your whole design system. UX designers should be able to quickly prototype with real content using tools like Sketch or Framer X. You should be able to augment content management with quantitative measurements, be it readability scales or how the content performs where it’s used.

Note: I used the term “UX designers” above despite having the opinion that we all should — in some way — relate to the process of making good user experiences. We’re all UX designers in our different strands of design.

Example of quantitative readability analysis in a rich text editor in Sanity.

Working with structured content requires a bit of getting used to if you’re accustomed to WYSIWYG-ing content directly on your web page layout. Yet, it lends itself to a conversation that is more in line with how the digital design field is moving. Structured content lets a team of designers, developers, content editors, user researchers, and project managers collectively think about how a system should work to support users’ needs and strategic goals. It also requires you to think differently about how content is structured, which takes us to the next strategy.

2. You Might Not Need A Pecking Order

One of the most notable changes for many is that systems for structured content are geared towards collections and lists of documents, and not the folder-like hierarchies that reflect website navigation structures. These structures stop making sense as soon as some of the content is to be used in other contexts, be it chatbots, print media, or other websites. Traditional CMSs have tried to mitigate this by allowing for reusable content blocks, but these still need to be placed on page layouts and are cumbersome to reason with through APIs.

Folder-based content management in Episerver. This screenshot isn’t old, by the way.
Each Page To Its Own

As laid out in The Core Model, when one of your main referrers is either Google or sharing on social media, you should consider every page a landing page. And if you look at the distribution of page views, you will notice that some of your pages are way more popular than others. Unless you are a news website, those tend not to be the news, but those that let the user achieve whatever they hoped to achieve on your website. They are where business is actually happening.

Your digital content should be in service of the intersection of your own strategic goals and the individual goals of your users. When the digital agency Bengler (Sanity’s predecessor) made the new website for OMA, they didn’t structure the content after an elaborate hierarchy of pages. They made content types that reflected the organization’s everyday reality, i.e. projects, persons, and publications. In fact, the OMA website is almost completely flat in terms of a content hierarchy, and the front page is generated from a mix of algorithmic and editorial rules.

The Sanity Studio with a “plan feature” document titled “Premium Support” open in an editor, showing how Sanity structures their content.

So, how to go about it? I believe in a mix of thinking about your content as a reflection of your organization’s mental model, and thinking about what it needs to be in order to be useful for whatever your users need it for.

Here’s a basic example: When building a page of employees, you should probably start with a content type called person. A person can have a name, contact info, an image, different organizational roles, and a short biography. A person document can be reused in contact lists, article author bylines, chat support interfaces, and building access badges. Perhaps you already have an in-house system that knows who these people are and that comes with an API? Great, then synchronize with that.
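Sketched as the kind of simple JavaScript object an SCMS like Sanity turns into an editing environment, such a person type could look like this (the exact field names are assumptions):

export default {
  name: 'person',
  type: 'document',
  title: 'Person',
  fields: [
    { name: 'name', type: 'string', title: 'Name' },
    { name: 'email', type: 'string', title: 'Contact info' },
    { name: 'image', type: 'image', title: 'Image' },
    // An array lets one person hold several organizational roles.
    { name: 'roles', type: 'array', of: [{ type: 'string' }], title: 'Organizational roles' },
    { name: 'biography', type: 'text', title: 'Short biography' }
  ]
};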

Don’t Get Lost In An Ontological Rabbit Hole

It’s useful to return to Google’s way of indexing web pages and how it’s trying to index the world’s information. That’s why Google is expending time and effort on linked data (RDFa, microformats, JSON-LD). If you annotate your web pages with JSON-LD elements, you will appear more prominently in search results. It’s also relevant when your information should be spoken by voice assistants and displayed in an assistant UI. If your content is already structured and easily available in an API, it will be relatively easy for you to implement it in these microformats.
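For instance, a page about a person could carry a small JSON-LD island like the one below, a sketch using schema.org’s Person type with placeholder values:

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Person",
  "name": "Jane Doe",
  "jobTitle": "Information Architect",
  "worksFor": { "@type": "Organization", "name": "Example Org" }
}
</script>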

I’m not sure I would recommend going all-in on the ontologies of schema.org and various linked data resources though, at least not for editorial purposes. You can quickly get lost in a rabbit hole of trying to make perfect platonic structures where it all fits.

Newsflash: It never will, because the world is a messy place, and because people think about stuff differently.

It’s more important to structure your content in a system that makes intuitive sense and lends itself to be adapted as needs change. This is why it’s important to start with content modelling early on in the design and development process — you need to learn about how it needs to be used.

Abstract From Reality, Not From CMS Conventions

It can be tempting to just follow whatever conventions your CMS comes with. Remember how WordPress will give you “Posts” and “Pages”, and suddenly everything needs to be fitted into those boxes? A WYSIWYG rich text field is flexible in that it allows you to put in whatever, but the content will not be structured and easily adaptable — it’s only flexible once. But you need some place to begin your mapping of a content model. My suggestion is to begin with talking to people, i.e. the authors and readers.

How do people talk about the content internally? What do people call different things? You could run a free-listing exercise, a method used by ethnographers to map folk-taxonomies. For example, you could ask:

“Name the different types of content in our organization.”

Or, on a more specific level:

“Can you name the different types of reports we have in this organization?”

The point with this survey is to tease out the internalized taxonomies people carry, and not their opinions or feelings about things (something that often tends to derail design processes). You don’t have to ask particularly many before having a pretty exhaustive list you can work from. You’ll probably find that parts of your list come from conventions in your current CMS (that’s good to know if you are to do some remodelling). Now you should talk with your editor and try to pin down what they need the content to do.

Some questions you can ask could be the following:

  • Do you need to use this content in more than one place? Where?
  • What are the different relationships between the content types?
  • Where do we need the content to be displayed today, and tomorrow?
  • In which ways do we need content to be sorted? Can the ordering be done algorithmically, by the user, or does it have to be done manually?
  • Are there systems or databases in other systems that we can synchronize with in order to prevent duplication?
  • Where do we want the canonical content to live? Should the SCMS be the source for it, or just augment existing content, e.g. marketing copy for products living in a product management system?

This doesn’t mean that you have to throw the traditional information architecture out with the now lukewarm bathwater. It still makes sense to have articles as a content type, if articles are a part of your organization’s content reality. But perhaps you don’t really need the abstract convention of categories, because these articles have references to the types of services or products mentioned in them. And this relation allows for querying these articles in circumstances where it makes sense, without requiring someone to have “article category management” as part of their job description.

The article is also what makes it hard to decouple content completely from the presentation layer. We are so used to thinking about the layout and styling of the article, but in an age where you are expected to host your own content on your own domain, and then syndicate it to platforms such as Medium, you have already given up control over visual presentation. This takes us to the next strategy.

3. Presentation Contexts Are Also Content Types

Be Redesign Ready

You want to be able to adapt and quickly change the navigation structure of your website as well, without having to either rebuild your whole content architecture or fight against a stringent folder-like interface. You also want to be able to have some content hierarchy, because it sometimes makes sense, and sometimes it goes deeper than two levels, which is where most interfaces in the API-first CMS department fail to deliver much help.

Interface for arranging content in a hierarchy (called “Structure”) in Craft CMS. Content defined by its place in one hierarchy may make sense in some cases, but it’s a legacy from menu navigation that stops making sense when the content is reused across channels or placed by software like targeting algorithms.

Interestingly, content management systems for chatbots tend to use similar hierarchical structures for arranging intent trees and dialog flows. This goes to say that content hierarchies play different roles in different channels, but often they provide ways of navigating through content. A way to approach this is to make types for navigation, where you can arrange content by references, and either build routes for web pages, menus, or paths for conversational interfaces.
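Such a navigation type can be sketched in the same schema style as the person example above; the field names here are assumptions, but the idea is that the route owns a slug and references the content it exposes:

export default {
  name: 'route',
  type: 'document',
  title: 'Route',
  fields: [
    { name: 'slug', type: 'slug', title: 'Path' },
    // The route points to content by reference instead of owning it.
    { name: 'page', type: 'reference', to: [{ type: 'page' }], title: 'Page' }
  ]
};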

Relationship Advice

References (or relationships) are what make a system for structured content possible, and they are really the core of everything we’re dealing with when it comes to content on the web (they’re the reason it’s metaphorically called the web in the first place). Being able to make references between bits of content is a very powerful thing, but it can also be costly in terms of how the backends are able to write and retrieve such data. So you may have to think differently if you have multitudes of documents, since scale seldom comes for free.

It’s also worth considering that you don’t always need an explicit reference to join data; most often it can be done by criteria that have to do with the content, e.g. “give me all persons and all buildings within this geolocation”. The building and persons don’t need to have an explicit reference to each other, as long as it’s implied in a location field on both content types.

Sanity Studio with a “route” document and an editor open: an example of a simple routing type. Notice that we have a “page” type, too.

Sanity Studio with a “page” document and an editor open. The page type is just a series of web-page-specific compositions where it’s possible to reuse other content types.

References between presentation types and other content types are useful when you can’t leave it to an algorithm in the presentation layer to join data. It may seem a bit cumbersome to explicitly draw these presentation types and make compositions of referred content, but it’s a solution to a problem you’ll often meet with SCMSs: it’s hard to know where content is being used. By including navigation types, you’ll explicitly tie content to presentation, but not to just one. This makes it possible to work with navigational structures independently of the content they lead to.

For example, in the screenshots, we have tied Google Experiments to the routes type, allowing us to add multiple pages composed from references to content, which means that we can run A/B-tests with next to no content duplication. Since we also get a warning if we try to delete content that is referenced by other documents, this way of structuring will keep us from deleting something we shouldn’t.

Relationships across content types is a double-edged sword. It increases sustainability and is key to avoid duplication. On the other hand, you can easily cut yourself because you make dependencies between content, which (if not made transparent) can lead to unintended changes across the channels where your data is displayed. It would, for example, be bad if we could remove a “page” used by a “route” without warning.

This leads us to the next strategy, which (granted!) is partly beyond the power of the normal user as of today since it has to do with how different systems are architected. Still, it’s worth thinking about.

4. Don’t Put Rich Text In A Corner

Rich Text Is More Than HTML

I can understand why HTML is given such prevalence in digital content, but it also comes from somewhere: it’s a subset of SGML, a generalized way of structuring machine-readable documents. As Claire L. Evans points out in the wonderful book “Broad Band: The Untold Story of the Women Who Made the Internet” (2018), there was already a vibrant community of people thinking about linked documents when HTML was introduced. Tim Berners-Lee’s proposal was a lot simpler than many of the other systems at the time, but that’s probably why it caught on and made the — as of now — open, free web possible.

When you’re in a browser on the world wide web, HTML is great. If you’re a writer who wants to publish something that ends up in simple HTML, Markdown is great. If you want your rich text content to be easily integrated into something that isn’t a browser, or a popular JavaScript-framework that lets you augment HTML with JavaScript in complex components (yes, we’re talking about React and Vue.js), having HTML in your API responses begins to be a bit of a hassle — especially if you need to parse it.

Almost everyone does it though, even the new kids on the block: I went through all the vendors on headlesscms.org and browsed through the documentation, and also signed up for those who didn’t mention it. With two exceptions, they all stored rich text either as HTML or Markdown. That’s fine if all you do is use Jekyll to render a website, or if you enjoy using dangerouslySetInnerHTML in React. But what if you want to reuse your content in interfaces that aren’t on the web? Or if you want more control and functionality in your rich text editor? Or just want it to be easier to render your rich text in one of the popular frontend frameworks, and have your components take care of different parts of your rich text content? Well, you’ll either have to find a smart way to parse that Markdown or HTML into what you need, or, more conveniently, just have it stored more sensibly in the first place.

For example, what if you want to output your rich text to a voice interface? We know that voice assistants are increasing in popularity. The most popular platforms for these assistants have the capabilities to get the text for spoken content through APIs. Then you want to take advantage of something like Speech Synthesis Markup Language. A system for portable text takes a more agnostic approach to rich text, which lets you adapt the same content for different kinds of interfaces.
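As a small illustration, here is roughly what a fragment of rich text could look like once rendered to SSML instead of HTML (the markup is hand-written for this example):

<speak>
  Welcome back.
  <break time="300ms"/>
  <emphasis level="moderate">Structured content</emphasis> travels well beyond the browser.
</speak>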

Example of a rich text editor with speech synthesis capabilities (compatible with, but not restricted to, SSML).

Recommended reading: Experimenting With The SpeechSynthesis Interface

Portable Text As An Agnostic Rich Text Model

Portable text is also useful when you’re primarily doing content for the web. What if you want to have the possibility to nest and augment your text with data structures, such as a rich text footnote, or an inline editorial comment? Or an alternative phrase or wording for A/B-testing cases? Markdown and HTML quickly fall short, and you’ll have to rely on adding something like special shortcode tags, which is how WordPress has solved it. With portable text, you have an agnostic representation of content structures, without having to marry a certain implementation. Your content ends up being more sustainable and flexible for new redesigns and implementations.
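For a feel of the format, a single paragraph stored as portable text looks roughly like this (simplified from the specification):

{
  "_type": "block",
  "style": "normal",
  "markDefs": [],
  "children": [
    {
      "_type": "span",
      "text": "Structured content travels well.",
      "marks": []
    }
  ]
}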

There are also other advantages to portable text, especially if you want to be able to edit content collaboratively and in real time (as you do in Google Docs); you need to store rich text in another structure than HTML. If you do, you’ll also be able to take advantage of microservices and bots, such as spaCy, in order to annotate and augment your content without locking the document.

As for now, portable text isn’t widely adopted, but we’re seeing movements towards it. The specification isn’t very complex and can be explored at github.com/portabletext/portabletext.

5. Make Sure Your SCMS Is In Service For Your Editors, And Not The Other Way Around

Digital content isn’t just used for your organization’s online leaflets anymore. For most of us, it encapsulates and defines how your organization is understood by the world, both by those within it and those outside: from product copy and micro texts to blog posts, chatbot responses, and strategy documents. We are millions of people who have to log into some CMS every day and navigate interfaces that were imagined twenty years ago, built on the assumptions of people who never made much effort to user test or challenge their interfaces. Countless hours have been wasted trying to fit a modern frontend experience into a page layout machine. Fortunately, this will soon be a thing of the past.

As a technology consultant, I had to read through pages of technical specifications whenever someone thought it was time to acquire a new CMS. There were demands about which server architecture it should run on (Windows servers, of course) and about the ability to render “carousels” and “edit web pages in place”, despite also requesting a “modular redesign”. When editors had been allowed to contribute to these specifications, their requests were often dated to what they had become used to. They seemed unaware that they could demand better user experiences, because enterprise software has to be big, lumpy, and boring.

This is partly the fault of those of us making these systems. We tend to communicate technology features and specifications, and less what the everyday situation of working with these systems looks like. Sure, for a frontend designer, something supporting GraphQL is shorthand for how conveniently she is able to work against the backend, but on a higher level, it’s about the system’s ability to accommodate emerging workflows, where a content model could survive visual redesigns and design systems should be resilient to changes in their content.

Questions To Ask Of Your (S)CMS

If we are to embrace design processes, we can’t know prior to solving the problem whether the user tasks are best solved by making carousels (newsflash: most probably not), or whether A/B-testing makes sense for your case, even though it sounds cool.

Instead, ask questions like this:

  • Is it possible, and how exactly will multi-disciplinary teams work with this system?
  • How easy is it to change and migrate the content model?
  • How does it deal with file and image assets?
  • Has the editorial interface been user tested?
  • To what extent can the system be configured and customized to special workflows and needs of the editorial team?
  • How easy is it to export the content in a moveable format?
  • How does the system accommodate collaboration?
  • Can content models be version controlled?
  • How easy is it to integrate the system with a larger ecosystem of flowing information?

The goal of these questions is to explore to what degree a content management system allows a cross-disciplinary team to work effortlessly together, without too many bottlenecks or long deployment cycles. They also push the focus to be more about what the content should be doing, and less about how things should look in a given context. Leave that for the design processes, where user testing will probably challenge the assumptions one may have when looking into getting a new content system.

There are, of course, many factors in addition to this that probably have to be taken into consideration. The easiest thing to assess is the fiscal cost of software licenses and API-related costs if you are on a hosted service. The invisible cost (in time and attention spent by the team working with the system) is harder to estimate. From my experience, many of the SCMSs in combination with one of the popular frontend frameworks can significantly cut development time and allow for an agile (there’s my coin for the swear jar) design process, with the caveat that your team must be prepared to solve some of the problems that come already solved out of the box with traditional CMSs.

Towards Structured Content

The ways we work with digital content has changed dramatically since the World Wide Web made working with interconnected documents mainstream. Organizations, businesses, and corporations have amassed gigabytes of this content, which now is stuck in rigid page hierarchies, HTML markup, and clunky user interfaces.

Using a Structured Content Management System can be a great way to free your content from a paradigm that is beginning to feel its age. But it isn’t a trivial exercise, and success comes from being able to work multi-disciplinarily and put your content model to the test. You need to get rid of some conventions you have grown used to from CMSs designed to output hierarchical websites. That means you need to think differently about ordering content, make presentation types in order to make it easier to orchestrate content across multiple channels, and consider how you structure rich text so that it can be used outside of HTML contexts.

This article deals with some of the high-level concerns working with SCMSs. There are, of course, loads of exciting challenges when you start working with this in your team. You have to rethink stuff we’ve taken for granted for many years, but that’s probably a good thing. Because we are forced to evaluate our content, not only from its place on a digital page but from its role in a larger system that works for whatever goals your organization and your users may have.

I believe that we can achieve content models that are more meaningful and easier to sustain in the long run, and that means saving time and expenses. It means more flexibility in terms of inventing new outputs and services, and less tie-in with software vendors. Because a well-made Structured Content Management System will make it easy for you to take your content and go elsewhere. And that makes for some interesting competition. Hopefully, all in favor of the users.


Source: Smashing Magazine

A Complete Guide To Routing In Angular

Ahmed Bouchefra


In case you’re still not quite familiar with Angular 7, I’d like to bring you closer to everything this impressive front-end framework has to offer. I’ll walk you through an Angular demo app that shows different concepts related to the Router, such as:

  • The router outlet,
  • Routes and paths,
  • Navigation.

I’ll also show you how to use Angular CLI v7 to generate a demo project where we’ll use the Angular router to implement routing and navigation. But first, allow me to introduce you to Angular and go over some of the important new features in its latest version.

Introducing Angular 7

Angular is one of the most popular front-end frameworks for building client-side web applications for the mobile and desktop web. It follows a component-based architecture where each component is an isolated and re-usable piece of code that controls a part of the app’s UI.

A component in Angular is a TypeScript class decorated with the @Component decorator. It has an attached template and CSS stylesheets that form the component’s view.

Angular 7, the latest version of Angular has been recently released with new features particularly in CLI tooling and performance, such as:

  • CLI Prompts: Common commands like ng add and ng new can now prompt the user to choose the functionalities to add to a project, such as routing and the stylesheet format.
  • Adding scrolling to Angular Material CDK (Component DevKit).
  • Adding drag and drop support to Angular Material CDK.
  • Projects now also default to using budget bundles, which will warn developers when their apps exceed size limits. By default, warnings are thrown when the bundle size exceeds 2MB and errors at 5MB. You can change these limits in your angular.json file, as shown in the sketch after this list.
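For reference, these budgets live under the production build configuration in angular.json; the fragment below shows the Angular 7 defaults (the surrounding path in the file is omitted):

"budgets": [
  {
    "type": "initial",
    "maximumWarning": "2mb",
    "maximumError": "5mb"
  }
]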

Introducing Angular Router

Angular Router is a powerful JavaScript router built and maintained by the Angular core team that can be installed from the @angular/router package. It provides a complete routing library with the possibility to have multiple router outlets, different path matching strategies, easy access to route parameters and route guards to protect components from unauthorized access.

The Angular router is a core part of the Angular platform. It enables developers to build Single Page Applications with multiple views and allow navigation between these views.

Let’s now see the essential Router concepts in more details.

The Router-Outlet

The Router-Outlet is a directive that’s available from the router library where the Router inserts the component that gets matched based on the current browser’s URL. You can add multiple outlets in your Angular application which enables you to implement advanced routing scenarios.
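In a template, the outlet is added as a plain element where the matched component should be rendered:

<router-outlet></router-outlet>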


Any component that gets matched by the Router will be rendered as a sibling of the router outlet.

Routes And Paths

Routes are definitions (objects) comprised of at least path and component (or redirectTo) attributes. The path refers to the part of the URL that determines a unique view that should be displayed, and component refers to the Angular component that needs to be associated with the path. Based on the route definitions that we provide (via a static RouterModule.forRoot(routes) method), the Router is able to navigate the user to a specific view.

Each Route maps a URL path to a component.

The path can be empty which denotes the default path of an application and it’s usually the start of the application.

The path can take a wildcard string (**). The router will select this route if the requested URL doesn’t match any paths for the defined routes. This can be used for displaying a “Not Found” view or redirecting to a specific view if no match is found.
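For example, a catch-all route could look like the following, where NotFoundComponent stands in for whatever component you want to display:

{ path: '**', component: NotFoundComponent }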

This is an example of a route:

{ path:  'contacts', component:  ContactListComponent}

If this route definition is provided to the Router configuration, the router will render ContactListComponent when the browser URL for the web application becomes /contacts.

Route Matching Strategies

The Angular Router provides different route matching strategies. The default strategy is simply checking if the current browser’s URL is prefixed with the path.

For example, our previous route:

{ path: 'contacts', component: ContactListComponent }

This could also be written as:

{ path: 'contacts', pathMatch: 'prefix', component: ContactListComponent }

The pathMatch attribute specifies the matching strategy. In this case, it’s prefix, which is the default.

The second matching strategy is full. When it’s specified for a route, the router will check if the path is exactly equal to the path of the current browser’s URL:

{ path: 'contacts', pathMatch: 'full', component: ContactListComponent }

Route Params

Creating routes with parameters is a common feature in web apps. Angular Router allows you to access parameters in different ways:

You can create a route parameter using the colon syntax. This is an example route with an id parameter:

{ path: 'contacts/:id', component: ContactDetailComponent }
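
To give an idea of what consuming that parameter looks like, here's a minimal sketch (assuming a ContactDetailComponent) that reads id from the ActivatedRoute service. The demo later in this article uses the reactive paramMap observable; this one-off read uses the snapshot instead:

import { Component, OnInit } from '@angular/core';
import { ActivatedRoute } from '@angular/router';

@Component({
  selector: 'app-contact-detail',
  template: 'Contact: {{ contactId }}'
})
export class ContactDetailComponent implements OnInit {
  contactId: string | null = null;

  constructor(private route: ActivatedRoute) { }

  ngOnInit() {
    // One-off read of the id parameter; fine when the component is re-created on each navigation
    this.contactId = this.route.snapshot.paramMap.get('id');
  }
}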

Route Guards

A route guard is a feature of the Angular Router that allows developers to run some logic when a route is requested and, based on that logic, allow or deny the user access to the route. It’s commonly used to check if a user is logged in and has the authorization before they can access a page.

You can add a route guard by implementing the CanActivate interface, available from the @angular/router package, and providing a canActivate() method, which holds the logic to allow or deny access to the route. For example, the following guard will always allow access to a route:

class MyGuard implements CanActivate {
  canActivate() {
    return true;
  }
}

You can then protect a route with the guard using the canActivate attribute:

{ path: 'contacts/:id', canActivate: [MyGuard], component: ContactDetailComponent }
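
A more realistic guard delegates the check to a service. This is only a sketch: AuthService and its isLoggedIn() method are hypothetical stand-ins for your own authentication logic:

import { Injectable } from '@angular/core';
import { CanActivate, Router } from '@angular/router';
import { AuthService } from './auth.service'; // hypothetical service

@Injectable({ providedIn: 'root' })
export class AuthGuard implements CanActivate {
  constructor(private auth: AuthService, private router: Router) { }

  canActivate(): boolean {
    if (this.auth.isLoggedIn()) { // hypothetical method
      return true;
    }
    // Deny access and send the user to a login page instead
    this.router.navigate(['/login']);
    return false;
  }
}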

The Angular Router provides the routerLink directive to create navigation links. This directive takes the path associated with the component to navigate to. For example:

<a [routerLink]="'/contacts'">Contacts</a>

Multiple Outlets And Auxiliary Routes

Angular Router supports multiple outlets in the same application.

A component has one associated primary route and can have auxiliary routes. Auxiliary routes enable developers to navigate multiple routes at the same time.

To create an auxiliary route, you’ll need a named router outlet where the component associated with the auxiliary route will be displayed.

<router-outlet name="outlet1"></router-outlet>

  • The outlet with no name is the primary outlet.
  • All outlets should have a name except for the primary outlet.

You can then specify the outlet where you want to render your component using the outlet attribute:

{ path: "contacts", component: ContactListComponent, outlet: "outlet1" }

Creating An Angular 7 Demo Project

In this section, we’ll see a practical example of how to set up and work with the Angular Router. You can see the live demo we’ll be creating and the GitHub repository for the project.

Installing Angular CLI v7

Angular CLI requires Node 8.9+, with NPM 5.5.1+. You need to make sure you have these requirements installed on your system then run the following command to install the latest version of Angular CLI:

$ npm install -g @angular/cli

This will install the Angular CLI globally.

Installing Angular CLI v7 (Large preview)

Note: You may want to use sudo to install packages globally, depending on your npm configuration.

Creating An Angular 7 Project

Creating a new project is one command away; you simply need to run the following command:

$ ng new angular7-router-demo

The CLI will ask you whether you would like to add routing (type N for No, because we’ll see how to add routing manually) and which stylesheet format you would like to use (choose CSS, the first option, then hit Enter). The CLI will then create a folder structure with the necessary files and install the project’s required dependencies.

Creating A Fake Back-End Service

Since we don’t have a real back-end to interact with, we’ll create a fake back-end using the angular-in-memory-web-api library which is an in-memory web API for Angular demos and tests that emulates CRUD operations over a REST API.

It works by intercepting the HttpClient requests sent to the remote server and redirects them to a local in-memory data store that we need to create.

To create the fake back-end, we need to follow these steps:

  1. First, we install the angular-in-memory-web-api module;
  2. Next, we create a service which returns fake data;
  3. Finally, we configure the application to use the fake back-end.

In your terminal run the following command to install the angular-in-memory-web-api module from npm:

$ npm install --save angular-in-memory-web-api

Next, generate a back-end service using:

$ ng g s backend

Open the src/app/backend.service.ts file and import InMemoryDbService from the angular-in-memory-web-api module:

import { InMemoryDbService } from 'angular-in-memory-web-api';

The service class needs to implement InMemoryDbService and then override the createDb() method:

import { Injectable } from '@angular/core';
import { InMemoryDbService } from 'angular-in-memory-web-api';

@Injectable({
  providedIn: 'root'
})
export class BackendService implements InMemoryDbService {

  constructor() { }

  createDb() {
    let contacts = [
      { id: 1, name: 'Contact 1', email: '' },
      { id: 2, name: 'Contact 2', email: '' },
      { id: 3, name: 'Contact 3', email: '' },
      { id: 4, name: 'Contact 4', email: '' }
    ];

    return { contacts };
  }
}

We simply create an array of contacts and return them. Each contact should have an id.

Finally, we simply need to import InMemoryWebApiModule into the app.module.ts file, and provide our fake back-end service.

import { HttpClientModule } from '@angular/common/http';
import { InMemoryWebApiModule } from 'angular-in-memory-web-api';
import { BackendService } from './backend.service';
/* ... */

@NgModule({
  declarations: [
    AppComponent
  ],
  imports: [
    BrowserModule,
    HttpClientModule, // required by the in-memory web API
    InMemoryWebApiModule.forRoot(BackendService)
  ],
  providers: [],
  bootstrap: [AppComponent]
})
export class AppModule { }

Next, create a ContactService that encapsulates the code for working with contacts:

$ ng g s contact

Open the src/app/contact.service.ts file and update it to look similar to the following code:

import { Injectable } from '@angular/core';
import { HttpClient } from '@angular/common/http';

@Injectable({
  providedIn: 'root'
})
export class ContactService {

  API_URL: string = "/api/";

  constructor(private http: HttpClient) { }

  getContacts() {
    return this.http.get(this.API_URL + 'contacts');
  }

  getContact(contactId) {
    return this.http.get(`${this.API_URL + 'contacts'}/${contactId}`);
  }
}

We added two methods:

  • getContacts()
    For getting all contacts.
  • getContact()
    For getting a contact by id.

You can set the API_URL to any URL you want, since we’re not going to use a real back-end: all requests will be intercepted and sent to the in-memory back-end.

Creating Our Angular Components

Before we can see how to use the different Router features, let’s first create a bunch of components in our project.

Head over to your terminal and run the following commands:

$ ng g c contact-list
$ ng g c contact-detail

This will generate two components, ContactListComponent and ContactDetailComponent, and add them to the main app module.

Setting Up Routing

In most cases, you’ll use the Angular CLI to create projects with routing set up, but in this case, we’ll add it manually so we can get a better idea of how routing works in Angular.

Adding The Routing Module

We need to add an AppRoutingModule, which will contain our application routes, and a router outlet where Angular will insert the currently matched component, depending on the browser’s current URL.

We’ll see:

  • How to create an Angular Module for routing and import it;
  • How to add routes to different components;
  • How to add the router outlet.

First, let’s start by creating a routing module in an app-routing.module.ts file. Inside src/app, create the file using:

$ cd angular7-router-demo/src/app
$ touch app-routing.module.ts

Open the file and add the following code:

import { NgModule } from '@angular/core';
import { Routes, RouterModule } from '@angular/router';

const routes: Routes = [];

@NgModule({
  imports: [RouterModule.forRoot(routes)],
  exports: [RouterModule]
})
export class AppRoutingModule { }

We start by importing NgModule, a TypeScript decorator used to create an Angular module, from the @angular/core package.

We also import the RouterModule and Routes classes from the @angular/router package. RouterModule provides static methods like RouterModule.forRoot() for passing a configuration object to the Router.

Next, we define a constant routes array of type Routes which will be used to hold information for each route.

Finally, we create and export a module called AppRoutingModule (you can call it whatever you want), which is simply a TypeScript class decorated with the @NgModule decorator that takes a metadata object. In the imports attribute of this object, we call the static RouterModule.forRoot(routes) method with the routes array as a parameter. In the exports array we add the RouterModule.

Importing The Routing Module

Next, we need to import this routing module into the main app module, which lives in the src/app/app.module.ts file:

import { BrowserModule } from '@angular/platform-browser';
import { NgModule } from '@angular/core';

import { AppRoutingModule } from './app-routing.module';
import { AppComponent } from './app.component';

@NgModule({
  declarations: [
    AppComponent
  ],
  imports: [
    BrowserModule,
    AppRoutingModule
  ],
  providers: [],
  bootstrap: [AppComponent]
})
export class AppModule { }

We import the AppRoutingModule from ./app-routing.module and we add it in the imports array of the main module.

Adding The Router Outlet

Finally, we need to add the router outlet. Open the src/app/app.component.html file, which contains the main app template, and add the <router-outlet> component:

<router-outlet></router-outlet>
This is where the Angular Router will render the component that corresponds to current browser’s path.

Those are all the steps we need to follow in order to manually set up routing inside an Angular project.

Creating Routes

Now, let’s add routes to our two components. Open the src/app/app-routing.module.ts file and add the following routes to the routes array:

const routes: Routes = [
  { path: 'contacts', component: ContactListComponent },
  { path: 'contact/:id', component: ContactDetailComponent }
];

Make sure to import the two components in the routing module:

import { ContactListComponent } from './contact-list/contact-list.component';
import { ContactDetailComponent } from './contact-detail/contact-detail.component';

Now we can access the two components from the /contacts and /contact/:id paths.

Next let’s add navigation links to our app template using the routerLink directive. Open the src/app/app.component.html and add the following code on top of the router outlet:

<h2><a [routerLink] = "'/contacts'">Contacts</a></h2>

Next, we need to display the list of contacts in ContactListComponent. Open the src/app/contact-list/contact-list.component.ts file, then add the following code:

import { Component, OnInit } from '@angular/core';
import { ContactService } from '../contact.service';

@Component({
  selector: 'app-contact-list',
  templateUrl: './contact-list.component.html',
  styleUrls: ['./contact-list.component.css']
})
export class ContactListComponent implements OnInit {

  contacts: any[] = [];

  constructor(private contactService: ContactService) { }

  ngOnInit() {
    this.contactService.getContacts().subscribe((data: any[]) => {
      this.contacts = data;
    });
  }
}
We create a contacts array to hold the contacts. Next, we inject ContactService and we call the getContacts() method of the instance (on the ngOnInit life-cycle event) to get contacts and assign them to the contacts array.

Next open the src/app/contact-list/contact-list.component.html file and add:

<table style="width:100%">
  <tr *ngFor="let contact of contacts">
    <td>{{ contact.name }}</td>
    <td>{{ contact.email }}</td>
    <td><a [routerLink]="['/contact', contact.id]">Go to details</a></td>
  </tr>
</table>

We loop through the contacts and display each contact’s name and email. We also create a link to each contact’s details component using the routerLink directive.

This is a screen shot of the component:

Contact list (Large preview)

When we click on the Go to details link, it will take us to ContactDetailComponent. The route has an id parameter; let’s see how we can access it from our component.

Open the src/app/contact-detail/contact-detail.component.ts file and change the code to look similar to the following code:

import { Component, OnInit } from '@angular/core';
import { ActivatedRoute } from '@angular/router';
import { ContactService } from '../contact.service';

@Component({
  selector: 'app-contact-detail',
  templateUrl: './contact-detail.component.html',
  styleUrls: ['./contact-detail.component.css']
})
export class ContactDetailComponent implements OnInit {

  contact: any;

  constructor(private contactService: ContactService, private route: ActivatedRoute) { }

  ngOnInit() {
    this.route.paramMap.subscribe(params => {
      this.contactService.getContact(params.get('id')).subscribe(c => {
        console.log(c);
        this.contact = c;
      });
    });
  }
}

We inject ContactService and ActivatedRoute into the component. In the ngOnInit() life-cycle hook, we retrieve the id parameter that’s passed by the route and use it to get the contact’s details, which we assign to a contact object.

Open the src/app/contact-detail/contact-detail.component.html file and add:

<h1>Contact #{{ contact?.id }}</h1>
<p>Name: {{ contact?.name }}</p>
<p>Email: {{ contact?.email }}</p>

Contact details (Large preview)

When we first visit our application, the outlet doesn’t render any component, so let’s redirect the empty path to the contacts path by adding the following route to the routes array:

{path: '', pathMatch: 'full', redirectTo: 'contacts'}  

We want to match the exact empty path, that’s why we specify the full match strategy.


In this tutorial, we’ve seen how to use the Angular Router to add routing and navigation into our application. We’ve seen different concepts like the Router outlet, routes, and paths and we created a demo to practically show the different concepts. You can access the code from this repository.


Source: Smashing Magazine

An Extensive Guide To Progressive Web Applications

Ankita Masand


It was my dad’s birthday, and I wanted to order a chocolate cake and a shirt for him. I headed over to Google to search for chocolate cakes and clicked on the first link in the search results. There was a blank screen for a few seconds; I didn’t understand what was happening. After a few seconds of staring patiently, my mobile screen filled with delicious-looking cakes. As soon as I clicked on one of them to check its details, I got an ugly fat popup, asking me to install an Android application so that I could get a silky smooth experience while ordering a cake.

That was disappointing. My conscience didn’t allow me to click on the “Install” button. All I wanted to do was order a small cake and be on my way.

I clicked on the cross icon at the very right of the popup to get out of it as soon as I could. But then the installation popup sat at the bottom of the screen, occupying one-fourth of the space. And with the flaky UI, scrolling down was a challenge. I somehow managed to order a Dutch cake.

After this terrible experience, my next challenge was to order a shirt for my dad. As before, I searched Google for shirts. I clicked on the first link, and in a blink, the entire content was right in front of me. Scrolling was smooth. No installation banner. I felt as if I was browsing a native application. There was a moment when my terrible internet connection gave up, but I was still able to see the content instead of a dinosaur game. Even with my janky internet, I managed to order a shirt and jeans for my dad. Most surprising of all, I was getting notifications about my order.

I would call this a silky smooth experience. These people were doing something right. Every website should do it for their users. It’s called a progressive web app.

As Alex Russell states in one of his blog posts:

“It happens on the web from time to time that powerful technologies come to exist without the benefit of marketing departments or slick packaging. They linger and grow at the peripheries, becoming old-hat to a tiny group while remaining nearly invisible to everyone else. Until someone names them.”

A Silky Smooth Experience On The Web, Sometimes Known As A Progressive Web Application

Progressive web applications (PWAs) are more of a methodology that involves a combination of technologies to make powerful web applications. With an improved user experience, people will spend more time on websites and see more advertisements. They tend to buy more, and with notification updates, they are more likely to visit often. The Financial Times abandoned its native apps in 2011 and built a web app using the best technologies available at the time. Now, the product has grown into a full-fledged PWA.

But why, after all this time, would you build a web app when a native app does the job well enough?

Let’s look into some of the metrics shared in Google IO 17.

Five billion devices are connected to the web, making the web the biggest platform in the history of computing. On the mobile web, 11.4 million monthly unique visitors go to the top 1000 web properties, and 4 million go to the top thousand apps. The mobile web garners around four times as many users as native applications. But this number drops sharply when it comes to engagement.

A user spends an average of 188.6 minutes in native apps and only 9.3 minutes on the mobile web. Native applications leverage the power of operating systems to send push notifications to give users important updates. They deliver a better user experience and boot more quickly than websites in a browser. Instead of typing a URL in the web browser, users just have to tap an app’s icon on the home screen.

Most visitors on the web are unlikely to come back, so developers came up with the workaround of showing them banners to install native applications, in an attempt to keep them deeply engaged. But then, users would have to go through the tiresome procedure of installing the binary of a native application. Forcing users to install an application is annoying and further reduces the chance that they will install it in the first place. The opportunity for the web is clear.

Recommended reading: Native And PWA: Choices, Not Challengers!

If web applications come with a rich user experience, push notifications, offline support and instant loading, they can conquer the world. This is what a progressive web application does.

A PWA delivers a rich user experience because it has several strengths:

  • Fast
    The UI is not flaky. Scrolling is smooth. And the app responds quickly to user interaction.

  • Reliable
    A normal website forces users to wait, doing nothing, while it is busy making rides to the server. A PWA, meanwhile, loads data instantaneously from the cache. A PWA works seamlessly, even on a 2G connection. Every network request to fetch an asset or piece of data goes through a service worker (more on that later), which first verifies whether the response for a particular request is already in the cache. When users get real content almost instantly, even on a poor connection, they trust the app more and view it as more reliable.

  • Engaging
    A PWA can earn a place on the user’s home screen. It offers a native app-like experience by providing a full-screen work area. It makes use of push notifications to keep users engaged.

Now that we know what PWAs bring to the table, let’s get into the details of what gives PWAs an edge over native applications. PWAs are built with technologies such as service workers, web app manifests, push notifications and IndexedDB/local data structure for caching. Let’s look into each in detail.

Service Workers

A service worker is a JavaScript file that runs in the background without interfering with the user’s interactions. All GET requests to the server go through a service worker. It acts like a client-side proxy. By intercepting network requests, it takes complete control over the response being sent back to the client. A PWA loads instantly because service workers eliminate the dependency on the network by responding with data from the cache.

A service worker can only intercept a network request that is in its scope. For example, a root-scoped service worker can intercept all of the fetch requests coming from a web page. A service worker operates as an event-driven system. It goes into a dormant state when it is not needed, thereby conserving memory. To use a service worker in a web application, we first have to register it on the page with JavaScript.

(function main () {

    /* navigator is a Web API that allows scripts to register themselves and carry out their activities. */
    if ('serviceWorker' in navigator) {
        console.log('Service Worker is supported in your browser')
        /* The register method takes in the path of the service worker file and returns a promise, which resolves to the registration object. */
        navigator.serviceWorker.register('./service-worker.js').then(registration => {
            console.log('Service Worker is registered!')
        })
    } else {
        console.log('Service Worker is not supported in your browser')
    }
})()

We first check whether the browser supports service workers. To register a service worker in a web application, we provide its URL as a parameter to the register function, available in navigator.serviceWorker (navigator is a web API that allows scripts to register themselves and carry out their activities). A service worker is registered only once. Registration does not happen on every page load. The browser downloads the service worker file (./service-worker.js) only if there is a byte difference between the existing activated service worker and the newer one or if its URL has changed.

The above service worker will intercept all requests coming from the root (/). To limit the scope of a service worker, we would pass an optional second parameter containing a scope key.

if ('serviceWorker' in navigator) {
    /* The register method takes in an optional second parameter, an object. To restrict the scope of a service worker, provide the scope key:
        scope: '/books' will intercept requests with '/books' in the URL. */
    navigator.serviceWorker.register('./service-worker.js', { scope: '/books' }).then(registration => {
        console.log('Service Worker for scope /books is registered', registration)
    })
}

The service worker above will intercept requests that have /books in the URL. For example, it will not intercept requests with /products, but it could very well intercept requests with /books/products.

As mentioned, a service worker operates as an event-driven system. It listens for events (install, activate, fetch, push) and accordingly calls the respective event handler. Some of these events are a part of the life cycle of a service worker, which goes through these events in sequence to get activated.


Once a service worker has been registered successfully, an installation event is fired. This is a good place to do the initialization work, like setting up the cache or creating object stores in IndexedDB. (IndexedDB will make more sense to you once we get into its details. For now, we can just say that it’s a key-value pair structure.)

self.addEventListener('install', (event) => {
    let CACHE_NAME = 'xyz-cache'
    let urlsToCache = ['/index.html', '/styles.css', '/app.js'] // example assets to pre-cache
    event.waitUntil(
        /* The open method available on caches takes in the name of the cache as the first parameter. It returns a promise that resolves to the instance of the cache.
        All of the URLs above can be added to the cache using the addAll method. */
        caches.open(CACHE_NAME)
            .then(cache => cache.addAll(urlsToCache))
    )
})

Here, we’re caching some of the files so that the next load is instant. self refers to the service worker instance. event.waitUntil makes the service worker wait until all of the code inside it has finished execution.


Once a service worker has been installed, it cannot yet listen for fetch requests. Rather, an activate event is fired. If no active service worker is operating on the website in the same scope, then the installed service worker gets activated immediately. However, if a website already has an active service worker, then the activation of a new service worker is delayed until all of the tabs operating on the old service worker are closed. This makes sense because the old service worker might be using the instance of the cache that is now modified in the newer one. So, the activation step is a good place to get rid of old caches.

self.addEventListener('activate', (event) => {
    let cacheWhitelist = ['products-v2'] // products-v2 is the name of the new cache

    event.waitUntil(
        caches.keys().then(cacheNames => {
            return Promise.all(
                cacheNames.map(cacheName => {
                    /* Deleting all of the caches except the ones in the cacheWhitelist array */
                    if (cacheWhitelist.indexOf(cacheName) === -1) {
                        return caches.delete(cacheName)
                    }
                })
            )
        })
    )
})
In the code above, we’re deleting the old cache. If the name of a cache doesn’t match the cacheWhitelist, it is deleted. To skip the waiting phase and immediately activate the service worker, we use self.skipWaiting().

self.addEventListener('install', (event) => {
    self.skipWaiting() // forces the waiting service worker to become the active one
    // The usual stuff
})

Once the service worker is activated, it can listen for fetch requests and push events.

Fetch Event Handler

Whenever a web page fires a fetch request for a resource over the network, the fetch event of the service worker gets called. The fetch event handler first looks for the requested resource in the cache. If it is present in the cache, then it returns the response with the cached resource. Otherwise, it initiates a fetch request to the server, and when the server sends back the response with the requested resource, it puts it in the cache for subsequent requests.

/* Fetch event handler for responding to GET requests with the cached assets */
self.addEventListener('fetch', (event) => {
    event.respondWith(
        caches.open('products-v2') // the same cache that was set up at install/activate time
            .then(cache => {
                /* Checking if the request is already present in the cache. If it is present, sending it directly to the client. */
                return cache.match(event.request).then(response => {
                    if (response) {
                        console.log('Cache hit! Fetching response from cache', event.request.url)
                        return response
                    }
                    /* If the request is not present in the cache, we fetch it from the server and then put it in the cache for subsequent requests. */
                    return fetch(event.request).then(response => {
                        cache.put(event.request, response.clone())
                        return response
                    })
                })
            })
    )
})
event.respondWith lets the service worker send a customized response to the client.

Offline-first is now a thing. For any non-critical request, we must serve the response from the cache, instead of making a ride to the server. If any asset is not present in the cache, we get it from the server and then cache it for subsequent requests.

Service workers only work on HTTPS websites because they have the power to manipulate the response of any fetch request. Someone with malicious intent might tamper with the response for a request on an HTTP website, so hosting a PWA on HTTPS is mandatory. Service workers do not interrupt the normal functioning of the DOM; they cannot communicate directly with the web page. To send a message to a web page, a service worker makes use of post messages.
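
As a quick sketch of that messaging channel (the message payload here is made up), the service worker can broadcast to the pages it controls, and a page can listen for those messages:

// In the service worker: broadcast a message to all controlled pages
self.clients.matchAll().then(clients => {
    clients.forEach(client => client.postMessage({ type: 'CACHE_UPDATED' }))
})

// In the web page: listen for messages coming from the service worker
navigator.serviceWorker.addEventListener('message', event => {
    console.log('Message from service worker:', event.data)
})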

Web Push Notifications

Let’s suppose you’re busy playing a game on your mobile, and a notification pops up telling you of a 30% discount on your favorite brand. Without any further ado, you click on the notification and shop your breath out. Getting live updates on, say, a cricket or football match or getting important emails and reminders as notifications is a big deal when it comes to engaging users with a product. This feature was only available in native applications until PWA came along. A PWA makes use of web push notifications to compete with this powerful feature that native apps provide out of the box. A user would still receive a web push notification even if the PWA is not open in any of the browser tabs and even if the browser is not open.

A web application has to ask permission of the user to send them push notifications.

Browser prompt asking permission for web push notifications. (Large preview)

Once the user confirms by clicking the “Allow” button, a unique subscription token is generated by the browser. This token is unique for this device. The format of the subscription token generated by Chrome is as follows:

     "endpoint": "",
     "expirationTime": null,
     "keys": {
          "p256dh": "BJsj63kz8RPZe8Lv1uu-6VSzT12RjxtWyWCzfa18RZ0-8sc5j80pmSF1YXAj0HnnrkyIimRgLo8ohhkzNA7lX4w",
          "auth": "TJXqKozSJxcWvtQasEUZpQ"

The endpoint contained in the token above will be unique for every subscription. On an average website, thousands of users would agree to receive push notifications, and for each of them, this endpoint would be unique. So, with the help of this endpoint, the application is able to target these users in the future by sending them push notifications. The expirationTime is the amount of time that the subscription is valid for a particular device. If the expirationTime is 20 days, it means that the push subscription of the user will expire after 20 days and the user won’t be able to receive push notifications on the older subscription. In this case, the browser will generate a new subscription token for that device. The auth and p256dh keys are used for encryption.

Now, to send push notifications to these thousands of users in the future, we first have to save their respective subscription tokens. It’s the job of the application server (the back-end server, maybe a Node.js script) to send push notifications to these users. This might sound as simple as making a POST request to the endpoint URL with the notification data in the request payload. However, it should be noted that if a user is not online when a push notification intended for them is triggered by the server, they should still get that notification once they come back online. The server would have to take care of such scenarios, along with sending thousands of requests to the users. A server keeping track of the user’s connection sounds complicated. So, something in the middle would be responsible for routing web push notifications from the server to the client. This is called a push service, and every browser has its own implementation of a push service. The browser has to tell the following information to the push service in order to send any notification:

  1. The time to live
    This is how long a message should be queued, in case it is not delivered to the user. Once this time has elapsed, the message will be removed from the queue.
  2. Urgency of the message
    This is so that the push service preserves the user’s battery by sending only high-priority messages.

The push service routes the messages to the client. Because push has to be received by the client even if its respective web application is not open in the browser, push events have to be listened to by something that continuously monitors in the background. You guessed it: That’s the job of the service worker. The service worker listens for push events and does the job of showing notifications to the user.

So, now we know that the browser, push service, service worker and application server work in harmony to send push notifications to the user. Let’s look into the implementation details.

Web Push Client

Asking permission of the user is a one-time thing. If a user has already granted permission to receive push notifications, we shouldn’t ask again. The permission value is saved in Notification.permission.

/* Notification.permission can have one of these three values: default, granted or denied. */
if (Notification.permission === 'default') {
    /* The Notification.requestPermission() method shows a notification permission prompt to the user. It returns a promise that resolves to the value of the permission. */
    Notification.requestPermission().then(result => {
        if (result === 'denied') {
            console.log('Permission denied')
        }

        if (result === 'granted') {
            console.log('Permission granted')
            /* This means the user has clicked the Allow button. We’re to get the subscription token generated by the browser and store it in our database.

            The subscription token can be fetched using the getSubscription method available on pushManager of the serviceWorkerRegistration object. If a subscription is not available, we subscribe using the subscribe method available on pushManager. The subscribe method takes in an object. */
            navigator.serviceWorker.ready.then(registration => { // resolves to the active service worker registration
                registration.pushManager.getSubscription()
                    .then(subscription => {
                        if (!subscription) {
                            const applicationServerKey = '' // VAPID public key
                            return registration.pushManager.subscribe({
                                userVisibleOnly: true, // All push notifications from the server should be displayed to the user
                                applicationServerKey
                            }).then(newSubscription => saveSubscriptionInDB(newSubscription, userId)) // saveSubscriptionInDB: a method to save the subscription token in the database
                        } else {
                            saveSubscriptionInDB(subscription, userId)
                        }
                    })
            })
        }
    })
}

In the subscribe method above, we’re passing userVisibleOnly and applicationServerKey to generate a subscription token. The userVisibleOnly property should always be true because it tells the browser that any push notification sent by the server will be shown to the client. To understand the purpose of applicationServerKey, let’s consider a scenario.

If some person gets ahold of your thousands of subscription tokens, they could very well send notifications to the endpoints contained in these subscriptions. There is no way for the endpoint to be linked to your unique identity. To provide a unique identity to the subscription tokens generated on your web application, we make use of the VAPID protocol. With VAPID, the application server voluntarily identifies itself to the push service while sending push notifications. We generate two keys like so:

const webpush = require('web-push')
const vapidKeys = webpush.generateVAPIDKeys()

web-push is an npm module. vapidKeys will have one public key and one private key. The application server key used above is the public key.

Web Push Server

The job of the web push server (application server) is straightforward. It sends a notification payload to the subscription tokens.

const options = {
    TTL: 24 * 60 * 60, // TTL is the time to live, the time that the notification will be queued in the push service
    vapidDetails: {
        subject: '',
        publicKey: '',
        privateKey: ''
    }
}

const data = {
    title: 'Update',
    body: 'Notification sent by the server'
}

/* The payload has to be a string or a buffer, so we stringify the data object */
webpush.sendNotification(subscription, JSON.stringify(data), options)

It uses the sendNotification method from the web push library.

Service Workers

The service worker shows the notification to the user as such:

self.addEventListener('push', (event) => {
    let data = event.data.json() // the payload sent by the server (title and body)
    let options = {
        body: data.body,
        icon: 'images/example.png'
    }
    event.waitUntil(
        /* The showNotification method is available on the registration object of the service worker.
        The first parameter to the showNotification method is the title of the notification, and the second parameter is an object. */
        self.registration.showNotification(data.title, options)
    )
})

Till now, we’ve seen how a service worker makes use of the cache to store requests and makes a PWA fast and reliable, and we’ve seen how web push notifications keep users engaged.

To store a bunch of data on the client side for offline support, we need a giant data structure. Let’s look into the Financial Times PWA. You’ve got to witness the power of this data structure for yourself. Load the URL in your browser, and then switch off your internet connection. Reload the page. Gah! Is it still working? It is. (Like I said, offline is the new black.) Data is not coming from the wires. It is being served from the house. Head over to the “Applications” tab of Chrome Developer Tools. Under “Storage”, you’ll find “IndexedDB”.

IndexedDB on Financial Times PWA. (Large preview)

Check out the “Articles” object store, and expand any of the items to see the magic for yourself. The Financial Times has stored this data for offline support. This data structure that lets us store a massive amount of data is called IndexedDB. IndexedDB is a JavaScript-based object-oriented database for storing structured data. We can create different object stores in this database for various purposes. For example, as we can see in the image above, “Resources”, “ArticleImages” and “Articles” are object stores. Each record in an object store is uniquely identified with a key. IndexedDB can even be used to store files and blobs.

Let’s try to understand IndexedDB by creating a database for storing books.

let openIdbRequest = window.indexedDB.open('booksdb', 1)

If the database booksdb doesn’t already exist, the code above will create a booksdb database. The second parameter to the open method is the version of the database. Specifying a version takes care of the schema-related changes that might happen in future. For example, booksdb now has only one table, but when the application grows, we intend to add two more tables to it. To make sure our database is in sync with the updated schema, we’ll specify a higher version than the previous one.

Calling the open method doesn’t open the database right away. It’s an asynchronous request that returns an IDBOpenDBRequest object. This object has success and error properties; we’ll have to write appropriate handlers for these properties to manage the state of our connection.

let dbInstance
openIdbRequest.onsuccess = (event) => {
    dbInstance = event.target.result
    console.log('booksdb is opened successfully')
}

openIdbRequest.onerror = (event) => {
    console.log('There was an error in opening booksdb database')
}

openIdbRequest.onupgradeneeded = (event) => {
    let db = event.target.result
    let objectstore = db.createObjectStore('books', { keyPath: 'id' })
}

To manage the creation or modification of object stores (object stores are analogous to SQL-based tables — they have a key-value structure), the onupgradeneeded handler is called on the openIdbRequest object. It will be invoked whenever the version of the database changes. In the code snippet above, we’re creating a books object store with id as the unique key.

Let’s say that, after deploying this piece of code, we have to create one more object store, called users. So, now the version of our database will be 2.

let openIdbRequest = window.indexedDB.open('booksdb', 2) // New Version - 2

/* Success and error event handlers remain the same.
The onupgradeneeded handler gets called when the version of the database changes. */
openIdbRequest.onupgradeneeded = (event) => {
    let db = event.target.result
    if (!db.objectStoreNames.contains('books')) {
        let objectstore = db.createObjectStore('books', { keyPath: 'id' })
    }

    let oldVersion = event.oldVersion
    let newVersion = event.newVersion

    /* The users object store is added in version 2. If the existing version is 1, it will be upgraded to 2, and the users object store will be created. */
    if (oldVersion === 1) {
        db.createObjectStore('users', { keyPath: 'id' })
    }
}

We’ve cached dbInstance in the success event handler of the open request. To retrieve or add data in IndexedDB, we’ll make use of dbInstance. Let’s add some book records to our books object store.

let transaction = dbInstance.transaction('books', 'readwrite') // a 'readwrite' transaction is needed to add records
let objectstore = transaction.objectStore('books')

let bookRecord = {
    id: '1',
    name: 'The Alchemist',
    author: 'Paulo Coelho'
}
let addBookRequest = objectstore.add(bookRecord)

addBookRequest.onsuccess = (event) => {
    console.log('Book record added successfully')
}

addBookRequest.onerror = (event) => {
    console.log('There was an error in adding book record')
}

We make use of transactions, especially while writing records on object stores. A transaction is simply a wrapper around an operation to ensure data integrity. If any of the actions in a transaction fails, then no action is performed on the database.
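
As a small sketch of that guarantee, a transaction exposes completion and abort events that fire once all of its requests have settled:

let transaction = dbInstance.transaction(['books'], 'readwrite')

transaction.oncomplete = (event) => {
    console.log('Transaction completed: all operations were applied')
}

transaction.onabort = (event) => {
    console.log('Transaction aborted: none of the operations were applied')
}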

Let’s modify a book record with the put method:

let modifyBookRequest = objectstore.put(bookRecord) // the put method takes in an object as the parameter

modifyBookRequest.onsuccess = (event) => {
    console.log('Book record updated successfully')
}

Let’s retrieve a book record with the get method:

let transaction = dbInstance.transaction('books')
let objectstore = transaction.objectStore('books')

/* The get method takes in the id of the record */
let getBookRequest = objectstore.get('1')

getBookRequest.onsuccess = (event) => {
    /* event.target.result contains the matched record */
    console.log('Book record', event.target.result)
}

getBookRequest.onerror = (event) => {
    console.log('Error while retrieving the book record.')
}

Adding An Icon To The Home Screen

Now that there is hardly any distinction between a PWA and a native application, it makes sense to offer a prime position to the PWA. If your website fulfills the basic criteria of a PWA (hosted on HTTPS, integrates with service workers and has a manifest.json) and after the user has spent some time on the web page, the browser will invoke a prompt at the bottom, asking the user to add the app to their home screen, as shown below:

Prompt to add Financial Times PWA on home screen. (Large preview)

When a user clicks on “Add FT to Home screen”, the PWA gets to set its foot on the home screen, as well as in the app drawer. When a user searches for any application on their phone, any PWAs that match the search query will be listed. They will also be seen in the system settings, which makes it easy for users to manage them. In this sense, a PWA behaves like a native application.

PWAs make use of manifest.json to provide this feature. Let’s look into a simple manifest.json file.

    "name": "Demo PWA",
     "short_name": "Demo",
     "start_url": "/?standalone",
     "background_color": "#9F0C3F",
     "theme_color": "#fff1e0",
     "display": "standalone",
     "icons": [{
          "src": "/lib/img/icons/xxhdpi.png?v2",
          "sizes": "192x192"

The short_name appears on the user’s home screen and in the system settings. The name appears in the chrome prompt and on the splash screen. The splash screen is what the user sees when the app is getting ready to launch. The start_url is the main screen of your app. It’s what users get when they tap an icon on the home screen. The background_color is used on the splash screen. The theme_color sets the color of the toolbar. The standalone value for display mode says that the app is to be operated in full-screen mode (hiding the browser’s toolbar). When a user installs a PWA, its size is merely in kilobytes, rather than the megabytes of native applications.
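
For the browser to discover the manifest, it has to be linked from the HTML of every page (a one-line sketch, assuming the file is served at /manifest.json):

<link rel="manifest" href="/manifest.json">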

Service workers, web push notifications, IndexedDB, and the home screen position make up for offline support, reliability, and engagement. It should be noted that a service worker doesn’t come to life and start doing its work on the very first load. The first load will still be slow until all of the static assets and other resources have been cached. We can implement some strategies to optimize the first load.

Bundling Assets

All of the resources, including the HTML, style sheets, images and JavaScript, are to be fetched from the server. The more files, the more HTTP requests needed to fetch them. We can use bundlers like webpack to bundle our static assets, hence reducing the number of HTTP requests to the server. webpack does a great job of further optimizing the bundle by using techniques such as code-splitting (i.e. bundling only those files that are required for the current page load, instead of bundling all of them together) and tree shaking (i.e. removing duplicate dependencies or dependencies that are imported but not used in the code).

Reducing Round Trips

One of the main reasons for slowness on the web is network latency. The time it takes for a byte to travel from A to B varies with the network connection. For example, a particular round trip might take 50 milliseconds over Wi-Fi, 500 milliseconds on a 3G connection, and 2500 milliseconds on a 2G connection. These requests are sent using the HTTP protocol, which means that while a particular connection is being used for a request, it cannot be used for any other requests until the response of the previous request is served. A website can make six asynchronous HTTP requests at a time, because six connections are available to a website to make HTTP requests. An average website makes roughly 100 requests; so, with a maximum of six connections available and a 100-millisecond round trip per request, a user might end up spending around 833 milliseconds on round-trip latency alone. (The calculation is 100 × 100 / 6 = 1666 milliseconds; we divide 1666 by 2 because we’re calculating the time spent on a round trip.) With HTTP/2 in place, the turnaround time is drastically reduced: HTTP/2 doesn’t suffer from head-of-line blocking at the connection level, so multiple requests can be sent simultaneously.

Most HTTP responses contain last-modified and etag headers. The last-modified header is the date when the file was last modified, and an etag is a unique value based on the contents of the file; it changes only when the contents of the file change. Both of these headers can be used to avoid downloading the file again if a cached version is already locally available. If the browser has a version of this file locally available, it can add either of these two headers in the request, as such:

ETag and Last-Modified headers. (Large preview)

The server can check whether the contents of the file have changed. If the contents of the file have not changed, then it responds with a status code of 304 (not modified).

If-None-Match header. (Large preview)

This indicates to the browser to use the locally available cached version of the file. By doing all of this, we’ve prevented the file from being downloaded.
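
Put together, the conditional request and response look roughly like this (a sketch with made-up values):

GET /styles.css HTTP/1.1
If-None-Match: "5d8c72a5edda8d"
If-Modified-Since: Mon, 03 Dec 2018 10:20:00 GMT

HTTP/1.1 304 Not Modified
ETag: "5d8c72a5edda8d"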

Faster responses are now in place, but our job is not done yet. We still have to parse the HTML, load the style sheets and make the web page interactive. It makes sense to show some empty boxes with a loader to the user, instead of a blank screen. While the HTML document is being parsed, when it comes across <script src='asset.js'></script>, it will make a synchronous HTTP request to the server to fetch asset.js, and the whole parsing process will be paused until the response comes back. Imagine having a dozen synchronous static asset references. These could very well be managed just by making use of the async keyword in script references, like <script src='asset.js' async></script>. With the async keyword, the browser will make an asynchronous request to fetch asset.js without hindering the parsing of the HTML. If a script file is required only at a later stage, we can defer downloading it until the entire HTML has been parsed. A script file can be deferred by using the defer keyword, like <script src='asset.js' defer></script>.


We’ve learned a lot of new things that make for a cool web application. Here’s a summary of all of the things we’ve explored in this article:

  1. How service workers make good use of the cache to speed up the loading of assets.
  2. How web push notifications work under the hood.
  3. How IndexedDB is used to store a massive amount of data.
  4. Some of the optimizations for an instant first load, like using HTTP/2 and adding headers like ETag, Last-Modified and If-None-Match, to prevent the downloading of valid cached assets.

That’s all, folks!


Source: Smashing Magazine

Cyber Monday: Elegant Themes Offers 25% OFF in Biggest Discount Ever

This sponsored article was created by our content partner, BAW Media. Thank you for supporting the partners who make SitePoint possible.

Little gems hidden in an array of Cyber Monday details are easy to miss. If you’re looking for the kind of top value deal that doesn’t come your way all that often, you won’t want to miss this one.

Perhaps you’ve already found something that made you squeal with excitement. If so, it’s a good thing you didn’t stop right there, because you’ve just come across this year’s #1 Cyber Monday deal.

This offer comes from Elegant Themes, the creator of Divi, the world’s most popular premium WordPress theme.

They’ve gone all out with their biggest ever discount: 25% off on their Developer and Lifetime accounts, and they’ve added in free Divi layouts.

About Elegant Themes

Here’s what you need to know about Elegant Themes’ wildly popular WordPress toolkit, if you didn’t already:

1. It’s the ultimate WordPress toolkit

An Elegant Themes membership gives you access to 87 themes and 3 plugins, one of which is Divi, the ultimate WordPress Theme and Visual Page Builder. Divi will change your approach to website building forever.

2. You’ll get unlimited use

The one-time fee gives you unlimited use, so you needn’t concern yourself about per-website pricing. There’s a 25% Cyber Monday discount on the fee by the way, which means your membership gives you access to the most value-packed collection of WordPress tools on the market.

3. The pricing plan is SIMPLE

There’s none of the “this plan gives you this, that plan gives you that” nonsense. It’s a single membership, a one-time fee, and you get the entire collection of themes and plugins. Period.

4. You’ll get products you can trust

Elegant Themes didn’t establish itself as a leader in WordPress theme and plugin development without some very good reasons. They’ve been at it for the past decade, during which time constant improvement of each and every product has been the norm.

What You Get with the Cyber Monday Deal

Let’s get into some of the details that make this Cyber Monday deal a not-to-be-missed opportunity. We strongly suspect that you’ll like what you see.

Divi – The World’s Most Popular Premium WordPress Theme

Divi is Elegant Themes’ flagship theme. If usage stats are any indication, Divi is the most widely used premium WordPress theme in the world. Calling Divi a theme is somewhat of an oversimplification, however.

A website-building framework would be a more accurate description; a framework that allows you to design beautiful websites without coding, and without requiring assistance from a collection of disjointed plugins.

To date, 500,591 users (and counting) are building or have built websites with Divi. They make up one of the most empowered WordPress communities on the web.

The Versatile Divi Builder

The Divi Builder is a visual drag-and-drop page builder. This page-building plugin uses the same visual page-building technology that helped make the Divi theme such a roaring success. The difference is that it’s a standalone product, so it can be used with any theme.

As is the case with the Divi theme, you can use the Divi Builder’s visual design interface to build anything and customize everything.

Extra – The Ultimate Magazine WordPress Theme Powered by the Divi Builder

Extra is a magazine theme that takes the Divi Builder framework and adds a newly-designed set of 40+ post-based modules to extend its power even further. Extra is ideal for creating blogs and online publications. The content modules serve as page-building content blocks (the easy way to do it).

Choose the content elements you need, customize them, arrange them, and you’re good to go!

Bloom – Email Opt-In and Lead Generation for WordPress

Bloom provides an easy way to gather leads and build a mailing list. It provides six different customizable opt-in types and a sophisticated set of visitor-targeting methods.

Email reigns supreme as a marketing tool. This easy-to-work-with plugin does your list building for you and gives you the means to convert a website’s visitors into followers and customers.

Monarch – The Premier Social Media Sharing Plugin for WordPress

Social Media has become the Internet’s lifeblood, and social sharing uses it as a positive force for businesses. Elegant Theme’s Monarch plugin enables its users to engage and empower online communities.

Monarch will help you get more shares and followers, and it will do so without negatively impacting website performance.

Wrapping Up

Given all that it offers, Elegant Themes’ Cyber Monday deal gives you a positively insane amount of value for your investment. After all, Elegant Themes’ products are so popular (Divi being the prime example) that there’s never been a need to offer discounts like this one to attract new customers.

Divi, Divi Builder, Extra, Bloom, and Monarch: you’ve seen what they can do, which makes this no-brainer of an offer a very wise investment.

The post Cyber Monday: Elegant Themes Offers 25% OFF in Biggest Discount Ever appeared first on SitePoint.

Source: Sitepoint

jQuery setTimeout() Function Examples

The JavaScript setTimeout function calls a function or executes a code snippet after a specified delay (in milliseconds). This might be useful if, for example, you wished to display a popup after a visitor has been browsing your page for a certain amount of time, or you want a short delay before removing a hover effect from an element (in case the user accidentally moused out).

Basic setTimeout Example

To demonstrate the concept, the following demo displays a popup, two seconds after the button is clicked.

See the Pen CSS3 animation effects for Magnific Popup by SitePoint (@SitePoint) on CodePen.


From the MDN documentation, the syntax for setTimeout is as follows:

var timeoutID = window.setTimeout(func, [delay, param1, param2, …]);
var timeoutID = window.setTimeout(code, [delay]);


  • timeoutID is a numerical id, which can be used in conjunction with clearTimeout() to cancel the timer.
  • func is the function to be executed.
  • code (in the alternate syntax) is a string of code to be executed.
  • delay is the number of milliseconds by which the function call should be delayed. If omitted, this defaults to 0.
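
As a quick sketch of those last two points, extra arguments are passed straight through to the function, and the returned id can cancel the timer:

function greet(name, punctuation) {
  console.log('Hello, ' + name + punctuation);
}

// Call greet after two seconds, passing 'World' and '!' along as arguments
var timeoutID = window.setTimeout(greet, 2000, 'World', '!');

// Changed our mind: cancel the pending call before it fires
window.clearTimeout(timeoutID);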

setTimeout vs window.setTimeout

You’ll notice that the syntax above uses window.setTimeout. Why is this?

Well, setTimeout and window.setTimeout are essentially the same, the only difference being that in the second statement we are referencing the setTimeout method as a property of the global window object.

In my opinion this adds complexity, for little or no benefit—if you’ve defined an alternative setTimeout method which would be found and returned in priority in the scope chain, then you’ve probably got bigger issues.

For the purposes of this tutorial, I’ll omit window, but ultimately which syntax you choose is up to you.

The post jQuery setTimeout() Function Examples appeared first on SitePoint.

Source: Sitepoint

Avoiding The Pitfalls Of Automatically Inlined Code

Leonardo Losoviz


Inlining is the process of including the contents of files directly in the HTML document: CSS files can be inlined inside a style element, and JavaScript files can be inlined inside a script element:

<style>
/* CSS contents here */
</style>

<script>
/* JS contents here */
</script>

By printing the code already in the HTML output, inlining avoids render-blocking requests and executes the code before the page is rendered. As such, it is useful for improving the perceived performance of the site (i.e. the time it takes for a page to become usable.) For instance, we can use the buffer of data delivered immediately when loading the site (around 14kb) to inline the critical styles, including styles of above-the-fold content (as had been done on the previous Smashing Magazine site), and font sizes and layout widths and heights to avoid a jumpy layout re-rendering when the rest of the data is delivered.

However, when overdone, inlining code can also have negative effects on the performance of the site: because the code is not cacheable, the same content is sent to the client repeatedly, and it can’t be pre-cached through service workers, or cached and accessed from a content delivery network. In addition, inline scripts are considered not safe when implementing a Content Security Policy (CSP). A sensible strategy, then, is to inline those critical portions of CSS and JS that make the site load faster, but to avoid inlining as much as possible otherwise.

With the objective of avoiding inlining, in this article we will explore how to convert inline code to static assets: Instead of printing the code in the HTML output, we save it to disk (effectively creating a static file) and add the corresponding <script> or <link> tag to load the file.

Let’s get started!

Recommended reading: WordPress Security As A Process

When To Avoid Inlining

There is no magic recipe to establish whether some code must be inlined or not; however, it can be pretty evident when some code must not be inlined: when it involves a big chunk of code, and when it is not needed immediately.

As an example, WordPress sites inline the JavaScript templates to render the Media Manager (accessible in the Media Library page under /wp-admin/upload.php), printing a sizable amount of code:

JavaScript templates inlined by the WordPress Media Manager.

Occupying a full 43kb, the size of this piece of code is not negligible, and since it sits at the bottom of the page it is not needed immediately. Hence, it would make plenty of sense to serve this code through static assets instead of printing it inside the HTML output.

Let’s see next how to transform inline code into static assets.

Triggering The Creation Of Static Files

If the contents (the ones to be inlined) come from a static file, then there is not much to do other than simply request that static file instead of inlining the code.

For dynamic code, though, we must plan how/when to generate the static file with its contents. For instance, if the site offers configuration options (such as changing the color scheme or the background image), when should the file containing the new values be generated? We have the following opportunities for creating the static files from the dynamic code:

  1. On request
    When a user accesses the content for the first time.
  2. On change
    When the source for the dynamic code (e.g. a configuration value) has changed.

Let’s consider on request first. The first time a user accesses the site, let’s say through /index.html, the static file (e.g. header-colors.css) doesn’t exist yet, so it must be generated then. The sequence of events is the following:

  1. The user requests /index.html;
  2. When processing the request, the server checks if the file header-colors.css exists. Since it does not, it obtains the source code and generates the file on disk;
  3. It returns a response to the client, including the tag <link rel="stylesheet" type="text/css" href="/staticfiles/header-colors.css">;
  4. The browser fetches all the resources included in the page, including header-colors.css;
  5. By then this file exists, so it is served.

However, the sequence of events could also be different, leading to an unsatisfactory outcome. For instance:

  1. The user requests /index.html;
  2. This file is already cached by the browser (or some other proxy, or through Service Workers), so the request is never sent to the server;
  3. The browser fetches all the resources included in the page, including header-colors.css. This file is, however, not cached in the browser, so the request is sent to the server;
  4. The server hasn’t generated header-colors.css yet (e.g. it was just restarted);
  5. It will return a 404.

Alternatively, we could generate header-colors.css not when requesting /index.html, but when requesting /header-colors.css itself. However, since this file initially doesn’t exist, the request is already treated as a 404. Even though we could hack our way around it, altering the headers to change the status code to a 200, and returning the content of the file, this is a terrible way of doing things, so we will not entertain this possibility (we are much better than this!)

That leaves only one option: generating the static file after its source has changed.

Creating The Static File When The Source Changes

Please notice that we can create dynamic code from both user-dependent and site-dependent sources. For instance, if the theme makes it possible to change the site’s background image and that option is configured by the site’s admin, then the static file can be generated as part of the deployment process. On the other hand, if the site allows its users to change the background image for their profiles, then the static file must be generated at runtime.

In a nutshell, we have these two cases:

  1. User Configuration
    The process must be triggered when the user updates a configuration.
  2. Site Configuration
    The process must be triggered when the admin updates a configuration for the site, or before deploying the site.

If we considered the two cases independently, for #2 we could design the process on any technology stack we wanted. However, we don’t want to implement two different solutions, but a unique solution which can tackle both cases. And because for #1 the process to generate the static file must be triggered on the running site, it is compelling to design this process around the same technology stack the site runs on.

When designing the process, our code will need to handle the specific circumstances of both #1 and #2:

  • Versioning
    The static file must be accessed with a “version” parameter, in order to invalidate the previous file upon the creation of a new static file. While #2 could simply have the same versioning as the site, #1 needs to use a dynamic version for each user, possibly saved in the database.
  • Location of the generated file
    #2 generates a unique static file for the whole site (e.g. /staticfiles/header-colors.css), while #1 creates a static file for each user (e.g. /staticfiles/users/leo/header-colors.css).
  • Triggering event
    While for #1 the file generation must happen at runtime, for #2 it can also happen as part of a build process in our staging environment.
  • Deployment and distribution
    Static files in #2 can be seamlessly integrated inside the site’s deployment bundle, presenting no challenges; static files in #1, however, cannot, so the process must handle additional concerns, such as multiple servers behind a load balancer (will the static files be created in 1 server only, or in all of them, and how?).

Let’s design and implement the process next. For each static file to be generated we must create an object containing the file’s metadata, calculate its content from the dynamic sources, and finally save the static file to disk. As a use case to guide the explanations below, we will generate the following static files:

  1. header-colors.css, with some style from values saved in the database
  2. welcomeuser-data.js, containing a JSON object with user data under some variable: window.welcomeUserData = {name: "Leo"};.

Below, I will describe the process to generate the static files for WordPress, for which we must base the stack on PHP and WordPress functions. The function to generate the static files before deployment can be triggered by loading a special page executing shortcode [create_static_files] as I have described in a previous article.

Further recommended reading: Making A Service Worker: A Case Study

Representing The File As An Object

We must model a file as a PHP object with all corresponding properties, so we can both save the file on disk in a specific location (e.g. either under /staticfiles/ or /staticfiles/users/leo/) and know how to request the file later on. For this, we create an interface Resource returning both the file’s metadata (filename, dir, type: “css” or “js”, version, and dependencies on other resources) and its content.

interface Resource {
  function get_filename();
  function get_dir();
  function get_type();
  function get_version();
  function get_dependencies();
  function get_content();
}

In order to make the code maintainable and reusable we follow the SOLID principles, for which we set an object inheritance scheme for resources to gradually add properties, starting from the abstract class ResourceBase from which all our Resource implementations will inherit:

abstract class ResourceBase implements Resource {

  function get_dependencies() {
    // By default, a file has no dependencies
    return array();
  }
}

Following SOLID, we create subclasses whenever properties differ. As stated earlier, the location of the generated static file, and the versioning to request it will be different depending on the file being about the user or site configuration:

abstract class UserResourceBase extends ResourceBase {

  function get_dir() {
    // A different file and folder for each user
    $user = wp_get_current_user();
    return "/staticfiles/users/{$user->user_login}/";
  }

  function get_version() {
    // Save the resource version for the user under her meta data.
    // When the file is regenerated, `update_user_meta` must be executed to increase the version number
    $user_id = get_current_user_id();
    $meta_key = "resource_version_".$this->get_filename();
    return get_user_meta($user_id, $meta_key, true);
  }
}

abstract class SiteResourceBase extends ResourceBase {

  function get_dir() {
    // All files are placed in the same folder
    return "/staticfiles/";
  }

  function get_version() {
    // Same versioning as the site, assumed defined under a constant
    return SITE_VERSION;
  }
}

Finally, at the last level, we implement the objects for the files we want to generate, adding the filename, the type of file, and the dynamic code through function get_content:

class HeaderColorsSiteResource extends SiteResourceBase {

  function get_filename() {
    return "header-colors";
  }

  function get_type() {
    return "css";
  }

  function get_content() {
    return sprintf(
      "
        .site-title a {
          color: #%s;
        }
      ", esc_attr(get_header_textcolor())
    );
  }
}

class WelcomeUserDataUserResource extends UserResourceBase {

  function get_filename() {
    return "welcomeuser-data";
  }

  function get_type() {
    return "js";
  }

  function get_content() {
    // Encode the user data as the JSON object described earlier
    $user = wp_get_current_user();
    return sprintf(
      "window.welcomeUserData = %s;",
      json_encode(array(
        "name" => $user->display_name
      ))
    );
  }
}

With this, we have modeled the file as a PHP object. Next, we need to save it to disk.

Saving The Static File To Disk

Saving a file to disk can be easily accomplished through the native functions provided by the language. In the case of PHP, this is accomplished through the function fwrite. In addition, we create a utility class ResourceUtils with functions providing the absolute path to the file on disk, and also its path relative to the site’s root:

class ResourceUtils {

  protected static function get_file_relative_path($fileObject) {
    return $fileObject->get_dir().$fileObject->get_filename().".".$fileObject->get_type();
  }

  static function get_file_path($fileObject) {
    // Notice that we must add constant WP_CONTENT_DIR to make the path absolute when saving the file
    return WP_CONTENT_DIR.self::get_file_relative_path($fileObject);
  }
}

class ResourceGenerator {

  static function save($fileObject) {
    $file_path = ResourceUtils::get_file_path($fileObject);
    $handle = fopen($file_path, "wb");
    $numbytes = fwrite($handle, $fileObject->get_content());
    fclose($handle);
  }
}

Then, whenever the source changes and the static file needs to be regenerated, we execute ResourceGenerator::save passing the object representing the file as a parameter. The code below regenerates, and saves on disk, files “header-colors.css” and “welcomeuser-data.js”:

// When header-colors.css needs to be regenerated, execute:
ResourceGenerator::save(new HeaderColorsSiteResource());

// When welcomeuser-data.js needs to be regenerated, execute:
ResourceGenerator::save(new WelcomeUserDataUserResource());

Once they exist, we can enqueue files to be loaded through the <script> and <link> tags.

Enqueuing The Static Files

Enqueuing the static files is no different from enqueuing any resource in WordPress: it’s done through the functions wp_enqueue_script and wp_enqueue_style. We simply iterate over all the object instances and use one function or the other depending on whether their get_type() value is "js" or "css".

We first add utility functions to provide the file’s URL, and to tell the type being either JS or CSS:

class ResourceUtils {

  // Continued from above...

  static function get_file_url($fileObject) {
    // Add the site URL before the file path
    return get_site_url().self::get_file_relative_path($fileObject);
  }

  static function is_css($fileObject) {
    return $fileObject->get_type() == "css";
  }

  static function is_js($fileObject) {
    return $fileObject->get_type() == "js";
  }
}

An instance of class ResourceEnqueuer will contain all the files that must be loaded; when invoked, its functions enqueue_scripts and enqueue_styles will do the enqueuing, by executing the corresponding WordPress functions (wp_enqueue_script and wp_enqueue_style respectively):

class ResourceEnqueuer {

  protected $fileObjects;

  function __construct($fileObjects) {
    $this->fileObjects = $fileObjects;
  }

  protected function get_file_properties($fileObject) {
    $handle = $fileObject->get_filename();
    $url = ResourceUtils::get_file_url($fileObject);
    $dependencies = $fileObject->get_dependencies();
    $version = $fileObject->get_version();

    return array($handle, $url, $dependencies, $version);
  }

  function enqueue_scripts() {
    // array_filter (rather than array_map) keeps only the JS resources
    $jsFileObjects = array_filter($this->fileObjects, array(ResourceUtils::class, 'is_js'));
    foreach ($jsFileObjects as $fileObject) {
      list($handle, $url, $dependencies, $version) = $this->get_file_properties($fileObject);
      wp_register_script($handle, $url, $dependencies, $version);
      wp_enqueue_script($handle);
    }
  }

  function enqueue_styles() {
    $cssFileObjects = array_filter($this->fileObjects, array(ResourceUtils::class, 'is_css'));
    foreach ($cssFileObjects as $fileObject) {
      list($handle, $url, $dependencies, $version) = $this->get_file_properties($fileObject);
      wp_register_style($handle, $url, $dependencies, $version);
      wp_enqueue_style($handle);
    }
  }
}

Finally, we instantiate an object of class ResourceEnqueuer with a list of the PHP objects representing each file, and add a WordPress hook to execute the enqueuing:

// Initialize with the corresponding object instances for each file to enqueue
$fileEnqueuer = new ResourceEnqueuer(array(
  new HeaderColorsSiteResource(),
  new WelcomeUserDataUserResource()
));

// Add the WordPress hooks to enqueue the resources
add_action('wp_enqueue_scripts', array($fileEnqueuer, 'enqueue_scripts'));
add_action('wp_print_styles', array($fileEnqueuer, 'enqueue_styles'));

That’s it: Once enqueued, the static files will be requested when loading the site in the client. We have succeeded in avoiding inline code and loading static resources instead.

Next, we can apply several improvements for additional performance gains.

Recommended reading: An Introduction To Automated Testing Of WordPress Plugins With PHPUnit

Bundling Files Together

Even though HTTP/2 has reduced the need for it, bundling files together still makes the site faster, because the compression of files (e.g. through GZip) will be more effective, and because browsers (such as Chrome) have a bigger overhead when processing many resources.

By now, we have modeled a file as a PHP object, which allows us to treat this object as an input to other processes. In particular, we can repeat the same process above to bundle all files of the same type together and serve the bundled version instead of the independent files. For this, we create a function get_content which simply extracts the content from every resource under $fileObjects and prints it again, producing the aggregation of all content from all resources:

abstract class SiteBundleBase extends SiteResourceBase {

  protected $fileObjects;

  function __construct($fileObjects) {
    $this->fileObjects = $fileObjects;
  }

  function get_content() {
    $content = "";
    foreach ($this->fileObjects as $fileObject) {
      $content .= $fileObject->get_content().PHP_EOL;
    }

    return $content;
  }
}

We can bundle all files together into the file bundled-styles.css by creating a class for this file:

class StylesSiteBundle extends SiteBundleBase {

  function get_filename() {
    return "bundled-styles";
  }

  function get_type() {
    return "css";
  }
}

Finally, we simply enqueue these bundled files, as before, instead of all the independent resources. For CSS, we create a bundle containing files header-colors.css, background-image.css and font-sizes.css, for which we simply instantiate StylesSiteBundle with the PHP object for each of these files (and likewise we can create the JS bundle file):

$fileObjects = array(
  // CSS
  new HeaderColorsSiteResource(),
  new BackgroundImageSiteResource(),
  new FontSizesSiteResource(),
  // JS
  new WelcomeUserDataUserResource(),
  new UserShoppingItemsUserResource()
);
$cssFileObjects = array_filter($fileObjects, array(ResourceUtils::class, 'is_css'));
$jsFileObjects = array_filter($fileObjects, array(ResourceUtils::class, 'is_js'));

// Use this definition of $fileEnqueuer instead of the previous one
$fileEnqueuer = new ResourceEnqueuer(array(
  new StylesSiteBundle($cssFileObjects),
  new ScriptsSiteBundle($jsFileObjects)
));

That’s it. Now we will be requesting only one JS file and one CSS file instead of many.

A final improvement for perceived performance involves prioritizing assets, by delaying loading those assets which are not needed immediately. Let’s tackle this next.

async/defer Attributes For JS Resources

We can add the attributes async and defer to the <script> tag to alter when the JavaScript file is downloaded, parsed and executed, so as to prioritize critical JavaScript and push everything non-critical to as late as possible, thus decreasing the site’s apparent loading time.

To implement this feature, following the SOLID principles, we could create a new interface JSResource (which inherits from Resource) containing functions is_async and is_defer. However, this would close the door on <style> tags eventually supporting these attributes too. So, with adaptability in mind, we take a more open-ended approach: we simply add a generic method get_attributes to the interface Resource, keeping it flexible enough to support any attribute (whether already existing or yet to be invented) for both <script> and <link> tags:

interface Resource {

  // Continued from above...

  function get_attributes();
}

abstract class ResourceBase implements Resource {

  // Continued from above...

  function get_attributes() {
    // By default, no extra attributes
    return '';
  }
}

WordPress doesn’t offer an easy way to add extra attributes to the enqueued resources, so we do it in a rather hacky way, adding a hook that replaces a string inside the tag through function add_script_tag_attributes:

class ResourceEnqueuerUtils {

  protected static $tag_attributes = array();

  static function add_tag_attributes($handle, $attributes) {
    self::$tag_attributes[$handle] = $attributes;
  }

  static function add_script_tag_attributes($tag, $handle, $src) {
    // Fall back to an empty string for handles that never registered attributes
    if ($attributes = self::$tag_attributes[$handle] ?? '') {
      $tag = str_replace(
        " src='${src}'>",
        " src='${src}' ".$attributes.">",
        $tag
      );
    }

    return $tag;
  }
}

// Initialize by connecting to the WordPress hook, which passes 3 arguments to the callback
add_filter(
  'script_loader_tag',
  array(ResourceEnqueuerUtils::class, 'add_script_tag_attributes'),
  PHP_INT_MAX,
  3
);

We add the attributes for a resource when creating the corresponding object instance:

abstract class ResourceBase implements Resource {

  // Continued from above...

  function __construct() {
    ResourceEnqueuerUtils::add_tag_attributes($this->get_filename(), $this->get_attributes());
  }
}

Finally, if resource welcomeuser-data.js doesn’t need to be executed immediately, we can then set it as defer:

class WelcomeUserDataUserResource extends UserResourceBase {

  // Continued from above...

  function get_attributes() {
    return "defer='defer'";
  }
}

Because the script is deferred, it will be executed only after the document has been parsed, bringing forward the point in time at which the user can interact with the site. Concerning performance gains, we are all set now!

There is one issue left to resolve before we can relax: what happens when the site is hosted on multiple servers?

Dealing With Multiple Servers Behind A Load Balancer

If our site is hosted on several servers behind a load balancer, and a user-configuration-dependent file is regenerated, the server handling the request must, somehow, upload the regenerated static file to all the other servers; otherwise, the other servers will serve a stale version of that file from that moment on. How do we do this? Having the servers communicate with each other is not just complex, but may ultimately prove unfeasible: What happens if the site runs on hundreds of servers, across different regions? Clearly, this is not an option.

The solution I came up with is to add a level of indirection: instead of requesting the static files from the site URL, they are requested from a location in the cloud, such as from an AWS S3 bucket. Then, upon regenerating the file, the server will immediately upload the new file to S3 and serve it from there. The implementation of this solution is explained in my previous article Sharing Data Among Multiple Servers Through AWS S3.


Conclusion

In this article, we have seen that inlining JS and CSS code is not always ideal, because the code must be sent repeatedly to the client, which can hurt performance if the amount of code is significant. We saw, as an example, how WordPress loads 43kb of scripts to print the Media Manager, which are pure JavaScript templates and could perfectly well be loaded as static resources.

Hence, we have devised a way to make the website faster by transforming the dynamic inline JS and CSS code into static resources. Static resources can be cached at several levels (in the client, through Service Workers, and on a CDN); they can be bundled together into a single JS and a single CSS file, improving the compression ratio (such as through GZip) and avoiding the browser overhead of processing several resources concurrently (such as in Chrome); and they allow us to add the async or defer attributes to the <script> tag to speed up user interactivity, thus improving the site’s apparent loading time.

As a beneficial side effect, splitting the code into static resources also makes the code more legible, dealing with units of code instead of big blobs of HTML, which makes the project easier to maintain.

The solution we developed was done in PHP and includes a few bits of code specific to WordPress; however, the code itself is extremely simple: barely a few interfaces defining properties, objects implementing those properties following the SOLID principles, and a function to save a file to disk. That’s pretty much it. The end result is clean and compact, straightforward to recreate for any other language and platform, and not difficult to introduce to an existing project — providing easy performance gains.


Source: Smashing Magazine

Quick Tip: How to Sort an Array of Objects in JavaScript

Sort an array of objects in JavaScript

If you have an array of objects that you need to sort into a certain order, the temptation might be to reach for a JavaScript library. Before you do, however, remember that you can do some pretty neat sorting with the native Array.sort function. In this article I’ll show you how to sort an array of objects in JavaScript with no fuss or bother.

To follow along with this article, you will need a knowledge of basic JavaScript concepts, such as declaring variables, writing functions, and conditional statements. I’ll also be using ES6 syntax. You can get a refresher on that here:

Basic Array Sorting

By default, the JavaScript Array.sort function converts each element of the array to be sorted into a string, and compares the strings in Unicode code point order.

const foo = [9, 2, 3, 'random', 'panda'];
foo.sort(); // returns [ 2, 3, 9, 'panda', 'random' ]

const bar = [4, 19, 30, function(){}, {key: 'value'}];
bar.sort(); // returns [ 19, 30, 4, { key: 'value' }, [Function] ]

You may be wondering why 30 comes before 4… not logical huh? Well, actually it is. This happens because each element in the array is first converted to a string, and "30" comes before "4" in Unicode order.
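You can verify the string comparison directly:

// As strings, '30' sorts before '4' because the code point for '3' (U+0033)
// is lower than the code point for '4' (U+0034)
console.log('30' < '4');     // true
console.log([4, 30].sort()); // [30, 4]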

It is also worth noting that, unlike many other JavaScript array functions, Array.sort actually changes, or mutates, the array it sorts.

const baz = ['hello world', 31, 5, 9, 12];
baz.sort(); // baz array is modified
console.log(baz); // shows [12, 31, 5, 9, "hello world"]

To avoid this, you can create a new instance of the array to be sorted and modify that instead.

const baz = ['hello world', 31, 5, 9, 12];
const newBaz = baz.slice().sort(); // new instance of baz array is created and sorted
console.log(baz); // ["hello world", 31, 5, 9, 12]
console.log(newBaz); // [12, 31, 5, 9, "hello world"]


Using Array.sort alone would not be very useful for sorting an array of objects. Thankfully, the function takes an optional compareFunction parameter, which causes the array elements to be sorted according to the return value of the compare function.
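As a quick sketch of how such a compare function works (the data below is made up for illustration):

const singers = [
  { name: 'Kurt Cobain', born: 1967 },
  { name: 'Steven Tyler', born: 1948 },
  { name: 'Karen Carpenter', born: 1950 },
];

// Return a negative number to sort a before b, and a positive one for the reverse
singers.sort((a, b) => a.born - b.born);

console.log(singers.map(s => s.name));
// ['Steven Tyler', 'Karen Carpenter', 'Kurt Cobain']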


Source: Sitepoint

Monthly Web Development Update 11/2018: Just-In-Time Design And Variable Font Fallbacks

Anselm Hannemann


How much does design affect the perception of our products and the users who interact with them? To me, it’s getting clearer that design makes all the difference and that unifying designs to a standard model like the Google Material Design Kit doesn’t work well. By using it, you’ll get a decent design that works from a technical perspective, of course. But you won’t create a unique experience with it, an experience that lasts or that reaches people on a personal level.

Now think about which websites you visit and if you enjoy being there, reading or even contributing content to the service. In my opinion, that’s something that Instagram manages to do very well. Good design fits your company’s purpose and adjusts to what visitors expect, making them feel comfortable where they are and enabling them to connect with the product. Standard solutions, however, might be nice and convenient, but they’ll always have that anonymous feel to them which prevents people from really caring for your product. It’s in our hands to shape a better experience.


  • Yes, Firefox 63 is here, but what does it bring? Web Components support including Custom Elements with built-in extends and Shadow DOM. prefers-reduced-motion media query support is available now, too, Developer Tools have gotten a font editor to make playing with web typography easier, and the accessibility inspector is enabled by default. The img element now supports the decoding attribute which can get sync, async, or auto values to hint the preferred decoding timing to the browser. Flexbox got some improvements as well, now supporting gap (row-gap, column-gap) properties. And last but not least, the Media Capabilities API, Async Clipboard API, and the SecurityPolicyViolationEvent interface which allows us to send CSP violations have also been added. Wow, what a release!
  • React 16.6 is out — that doesn’t sound like big news, does it? Well, this minor update brings React.lazy(), a method you can use to do code-splitting by wrapping a dynamic import in a call to React.lazy(). A huge step for better performance. There are also a couple of other useful new things in the update.
  • The latest Safari Tech Preview 68 brings <input type="color"> support and changes the default behavior of links that have target="_blank" to get rel="noopener" as an implied attribute. It also includes the new prefers-color-scheme media query which allows developers to adapt websites to the light or dark mode settings of macOS.
  • PageSpeed Insights, likely still the most commonly used performance analysis tool, is now powered by Project Lighthouse, which many of you will already have used on its own. A nice iteration that makes the tool way more accurate than before.


  • Explore structured learning paths to discover everything you need to know about building for the modern web: web.dev is the new resource by the Google Web team for developers.
  • No matter how you feel about Apple Maps — I guess most of us have experienced moments of frustration with it — this comparison of the map data they used until now with the data they currently gather for their revamped Maps is fascinating. I’m sure that the increased level of detail will help a lot of people around the world. Imagine how landscape architects could make use of this, or how rescue helpers could profit from that level of detail after an earthquake, for example.
From fast load times to accessibility — web.dev helps you make your site better.


  • Andrea Giammarchi wrote a polyfill library for Custom Elements that allows us to extend built-in elements in Safari. This is super nice as it allows us to extend native elements with our own custom features — something that works in Chrome and Firefox already, and now there’s this little polyfill for other browsers as well.
  • Custom elements are still very new and browser support varies. That’s why this html-parsed-element project is useful as it provides a base custom element class with a reliable parsedCallback method.



  • How do you build a color palette? Steve Schoger from RefactoringUI shares a great approach that meets real-life needs.
  • Matthew Ström’s article “Just-in-time Design” mentions a solution to minimize the disconnection between product design and product engineering. It’s about adopting the Just-in-time method for design. Something that my current team was very excited about and I’m happy to give it a try.
  • HolaBrief looks promising. It’s a tool that improves how we create design briefs, keeping everyone on the same page during the process.
  • Mental models are explanations of how we see the world. Teresa Man wrote about how we can apply mental models to product design and why it matters.
  • Shelby Rogers shares how we can build better 404 error pages.
Building Your Color Palette
Steve Schoger looks into color palettes that really work. (Image credit)


  • The color palette generator Palx lets you enter a base hex value and generates a full color palette based on it.


  • This neat Python tool is a great XSS detection utility.
  • Svetlin Nakov wrote a book about Practical Cryptography for Developers which is available for free. If you ever wanted to understand or know more about how private/public keys, hashing, ciphers, or signatures work, this is a great place to start.
  • Facebook claimed that they’d reveal who pays for political ads. Now VICE researched this new feature and posed as every single one of the current 100 U.S. senators to run ads ‘paid by them’. Pretty scary to see how one security failure that gives users more power than intended can change world politics.


  • I don’t like linking to paid, restricted articles, but this one made me think and you don’t need the full story to follow me. When Tesla announced that they’d ramp up Model 3 production to 24/7, a lot of people wanted to verify this, and a company that makes money by providing geolocation data captured smartphone location data from the workers around the Tesla factories to confirm whether this could be true. Another sad story of how easy it is to track someone without consent, even though this is more a case of mass surveillance than individual tracking.

Web Performance

  • Addy Osmani shares a performance case study of Netflix to improve Time-to-Interactive of the streaming service. This includes switching from React and other libraries to plain JavaScript, prefetching HTML, CSS, and (React) JavaScript and the usage of React.js on the server side. Quite interesting to see so many unconventional approaches and their benefits. But remember that what works for others doesn’t need to be the perfect approach for your project, so take it more as inspiration than blindly copying it.
  • Harry Roberts explains all the details that are important to know about CSS and Network Performance. A comprehensive collection that also provides some very interesting tips for when you have async scripts in your code.
  • I love the tiny ImageOptim app for batch optimizing my images for web distribution. But now there’s an impressive web app called “Squoosh” that lets you optimize images perfectly in your web browser and, as a bonus, you can also resize the image and choose which compression to use, including mozJPEG and WebP. Made by the Google Chrome team.


Redesigning your product and website for dark mode
How to design for dark mode while maintaining accessibility, readability, and a consistent feel for your brand? Andy Clarke shares some valuable tips. (Image credit)

Work & Life

Going Beyond…

  • Neil Stevenson on Steve Jobs, creativity and death and why this is a good story for life. Although copying Steve Jobs is likely not a good idea, Neil provides some different angles on how we might want to work, what to do with our lives, and why purpose matters for many of us.
  • Ryan Broderick reflects on what we did by inventing the internet. He concludes that all that radicalism in the world, those weird political views are all due to the invention of social media, chat software and the (not so sub-) culture of promoting and embracing all the bad things happening in our society. Remember 4chan, Reddit, and similar services, but also Facebook et al? They contribute and embrace not only good ideas but often stupid or even harmful ones. “This is how we radicalized the world” is a sad story to read but well-written and with a lot of inspiring thoughts about how we shape society through technology.
  • I’m sorry, this is another link about Bitcoin’s energy consumption, but it shows that Bitcoin mining alone could raise global temperatures above the critical limit (2°C) by 2033. It’s time to abandon this inefficient type of cryptocurrency. Now.
  • Wilderness is something special. And our planet has less and less of it, as this article describes. The map reveals that only very few countries have a lot of wilderness these days, giving rare animals and species a place to live, giving humans a way to explore nature, to relax, to go on adventures.
  • We definitely live in exciting times, but it makes me sad when I read that in the last forty years, wildlife population declined by 60%. That’s a pretty massive scale, and if this continues, the world will be another place when I’m old. Yes, when I am old, a lot of animals I knew and saw in nature will not exist anymore by then, and the next generation of humans will not be able to see them other than in a museum. It’s not entirely clear what the reasons are, but climate change might be one thing, and the ever-growing expansion of humans into wildlife areas probably contributes a lot to it, too.

Source: Smashing Magazine

Getting Started with Error Tracking

This article was created in partnership with Sentry. Thank you for supporting the partners who make SitePoint possible.

Writing code can be fun. Testing it is another matter. Of course, SitePoint readers always produce bug-free applications, but errors can still slip into the best production code. How can you detect those issues?


Writing software to test software is one option. Unit and integration testing can be adopted to verify functions and interfaces. Unfortunately:

  1. It can be difficult to write tests when product requirements are evolving.
  2. Are you sure your tests cover every option and pathway?
  3. Who’s testing your tests?

Tests help, but the industry still releases software with bugs because it’s impossible to cover every eventuality. Does a bug occur in a certain browser, on a particular OS, at a specific time of day?

In addition, browser testing is notoriously complicated owing to:

  • Multiple devices and applications. There’s a long tail of old, new, and obscure browsers across desktop PCs, tablets, smartphones, TVs, games consoles, smart watches, IoT devices, and more. It’s impossible to test everything.
  • User control. Any user can choose whether to download, block or modify any part of your application. For example, Firefox will block Google Analytics when tracking is disabled; recording an Analytics event could cause the whole application to fail.
  • Network failures. Even if the user permits every file you throw at them, there’s no guarantee they’ll receive all images, CSS, JavaScript and other assets. Travelling or using flaky hotel wi-fi exacerbates the problem.

User Feedback

Have you ever watched someone using your software? They always do something you never expected. I wince every time I see someone enter a URL into the search box.

Humans are adept at finding their own methods to complete tasks based on previous experience. Those processes may or may not be efficient, but they’ll rarely match your expectations because your experiences are different. A bug may occur because a sequence of tasks is tackled in a manner that seems illogical to you.

Additionally, the majority of users will never report a bug. They won’t know whether the fault occurred in your app, the browser, or the OS. Many may blame themselves, will not know who to contact, or simply switch to a competing product.

Users who do report issues will rarely be able to describe the problem unless they have software engineering expertise. It’s frustrating to be faced with dozens of “ProductX doesn’t work” issue tickets.

Ultimately, should we rely on customers to report problems?


Logging errors is a possibility but:


Source: Sitepoint