Object-Oriented PHP With Classes and Objects

In this article, we’re going to explore the basics of object-oriented programming in PHP. We’ll start with an introduction to classes and objects, and we’ll discuss a couple of advanced concepts like inheritance and polymorphism in the latter half of this article.

What Is Object-Oriented Programming (OOP)?

Object-oriented programming, commonly referred to as OOP, is an approach which helps you to develop complex applications in a way that’s easily maintainable and scalable over the long term. In the world of OOP, real-world entities such as Person, Car, or Animal are treated as objects. In object-oriented programming, you interact with your application by using objects. This contrasts with procedural programming, where you primarily interact with functions and global variables.

In OOP, there’s a concept of “class“, which is used to model or map a real-world entity to a template of data (properties) and functionality (methods). An “object” is an instance of a class, and you can create multiple instances of the same class. For example, there is a single Person class, but many person objects can be instances of this class—dan, zainab, hector, etc. 

The class defines properties. For example, for the Person class, we might have name, age, and phoneNumber. Then each person object will have its own values for those properties.

You can also define methods in the class that allow you to manipulate the values of object properties and perform operations on objects. As an example, you could define a save method which saves the object information to a database.

What Is a PHP Class?

A class is a template which represents a real-world entity, and it defines properties and methods of the entity. In this section, we’ll discuss the basic anatomy of a typical PHP class.

The best way to understand new concepts is with an example. So let’s have a look at the Employee class in the following snippet, which represents the employee entity.
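Here’s a minimal sketch of what such a class could look like. (The property and method names match the descriptions in the rest of this section; the implementation details are illustrative.)

class Employee
{
    // properties holding information about the employee
    private $first_name;
    private $last_name;
    private $age;

    // the constructor initializes the properties when an object is created
    public function __construct($first_name, $last_name, $age)
    {
        $this->first_name = $first_name;
        $this->last_name = $last_name;
        $this->age = $age;
    }

    // getter methods that return the values of the private properties
    public function getFirstName()
    {
        return $this->first_name;
    }

    public function getLastName()
    {
        return $this->last_name;
    }

    public function getAge()
    {
        return $this->age;
    }
}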

The class Employee statement in the first line defines the Employee class. Then, we go on to declare the properties, the constructor, and the other class methods.

Class Properties in PHP

You could think of class properties as variables that are used to hold information about the object. In the above example, we’ve defined three properties: first_name, last_name, and age. In most cases, class properties are accessed via instantiated objects.

These properties are private, which means they can only be accessed from within the class. This is the safest access level for properties. We’ll discuss the different access levels for class properties and methods later in this article.

Constructors for PHP Classes

A constructor is a special class method which is called automatically when you instantiate an object. We’ll see how to instantiate objects in the next couple of sections, but for now you just have to know that a constructor is used to initialize object properties when the object is being created.

You can define a constructor by defining the __construct method.
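In the Employee class above, for example, the constructor receives the initial values and assigns them to the properties:

public function __construct($first_name, $last_name, $age)
{
    $this->first_name = $first_name;
    $this->last_name = $last_name;
    $this->age = $age;
}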

Methods for PHP Classes

We can think of class methods as functions that perform specific actions associated with objects. In most cases, they are used to access and manipulate object properties and perform related operations.

In the above example, we’ve defined the getLastName method, which returns the last name associated with the object. 

So that’s a brief introduction to the class structure in PHP. In the next section, we’ll see how to instantiate objects of the Employee class.

What Is an Object in PHP?

In the previous section, we discussed the basic structure of a class in PHP. Now, when you want to use a class, you need to instantiate it, and the end result is an object. So we could think of a class as a blueprint, and an object as an actual thing that you can work with.

In the context of the Employee class which we’ve just created in the previous section, let’s see how to instantiate an object of that class.
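A sketch of how that could look (the argument values here are illustrative):

$objEmployee = new Employee('Sally', 'Jones', 30);

echo $objEmployee->getFirstName(); // prints 'Sally'
echo $objEmployee->getLastName();  // prints 'Jones'
echo $objEmployee->getAge();       // prints 30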

To instantiate an object of a class, you use the new keyword along with the class name, and you get back a new object instance of that class.

If a class has defined the __construct method and it requires arguments, you need to pass those arguments when you instantiate an object. In our case, the Employee class constructor requires three arguments, and thus we’ve passed these when we created the $objEmployee object. As we discussed earlier, the __construct method is called automatically when the object is instantiated.

Next, we’ve called class methods on the $objEmployee object to print the information which was initialized during object creation. Of course, you can create multiple objects of the same class, as shown in the following snippet.
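// the names and values here are illustrative
$employeeOne = new Employee('Sally', 'Jones', 30);
$employeeTwo = new Employee('Amit', 'Singh', 42);

echo $employeeOne->getLastName(); // prints 'Jones'
echo $employeeTwo->getLastName(); // prints 'Singh'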

The following image is a graphical representation of the Employee class and some of its instances.

Object Instantiation

Simply put, a class is a blueprint which you can use to create structured objects.

Encapsulation

In the previous section, we discussed how to instantiate objects of the Employee class. It’s interesting to note that the $objEmployee object itself wraps together properties and methods of the class. In other words, it hides those details from the rest of the program. In the world of OOP, this is called data encapsulation.

Encapsulation is an important aspect of OOP that allows you to restrict access to certain properties or methods of the object. And that brings us to another topic for discussion: access levels.

Access Levels

When you define a property or a method in a class, you can declare it to have one of these three access levels: public, private, or protected.

Public Access

When you declare a property or a method as public, it can be accessed from anywhere outside the class. The value of a public property can be modified from anywhere in your code.

Let’s look at an example to understand the public access level.
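A minimal sketch, using an illustrative Person class:

class Person
{
    public $name;
}

$person = new Person();
$person->name = 'John Smith'; // public properties can be set from outside the class
echo $person->name;           // prints 'John Smith'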

As you can see in the above example, we’ve declared the name property to be public. Hence, you can set it from anywhere outside the class, as we’ve done here.

Private Access

When you declare a property or a method as private, it can only be accessed from within the class. This means that you need to define getter and setter methods to get and set the value of that property.

Again, let’s revise the previous example to understand the private access level.
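Here’s a sketch of that revision, with name now private and a pair of accessor methods added:

class Person
{
    private $name;

    public function getName()
    {
        return $this->name;
    }

    public function setName($name)
    {
        $this->name = $name;
    }
}

$person = new Person();
// $person->name = 'John Smith'; // fatal error: Cannot access private property Person::$name
$person->setName('John Smith');
echo $person->getName(); // prints 'John Smith'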

If you try accessing a private property from outside the class, it’ll throw the fatal error Cannot access private property Person::$name. Thus, you need to set the value of the private property using the setter method, as we did using the setName method.

There are good reasons why you might want to make a property private. For example, perhaps some action should be taken (updating a database, say, or re-rendering a template) if that property changes. In that case, you can define a setter method and handle any special logic when the property is changed.

Protected Access

Finally, when you declare a property or a method as protected, it can be accessed by the same class that has defined it and classes that inherit the class in question. We’ll discuss inheritance in the very next section, so we’ll get back to the protected access level a bit later.

Inheritance

Inheritance is an important aspect of the object-oriented programming paradigm which allows you to inherit properties and methods of other classes by extending them. The class which is being inherited is called the parent class, and the class which inherits the other class is called the child class. When you instantiate an object of the child class, it inherits the properties and methods of the parent class as well.

Let’s have a look at the following screenshot to understand the concept of inheritance.

Inheritance

In the above example, the Person class is the parent class, and the Employee class extends or inherits the Person class and so is called a child class.

Let’s try to go through a real-world example to understand how it works.
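Here’s a sketch of such an example. (The class and method names follow the discussion below; the method bodies are illustrative.)

class Person
{
    private $name;
    protected $age;

    public function getName()
    {
        return $this->name;
    }

    public function setName($name)
    {
        $this->name = $name;
    }

    protected function callToProtectedNameAndAge()
    {
        return $this->name . ', ' . $this->age;
    }

    private function callToPrivateNameAndAge()
    {
        return $this->name . ', ' . $this->age;
    }
}

class Employee extends Person
{
    public function setAge($age)
    {
        // the protected age property of the parent class is accessible here
        $this->age = $age;
    }

    public function getNameAndAge()
    {
        // the protected method of the parent class is accessible here
        return $this->callToProtectedNameAndAge();
    }
}

$employee = new Employee();
$employee->setName('John Smith'); // public method inherited from Person
$employee->setAge(30);
echo $employee->getName();        // prints 'John Smith'
echo $employee->getNameAndAge();  // prints 'John Smith, 30'
// $employee->callToPrivateNameAndAge(); // fatal error: the method is private to Person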

The important thing to note here is that the Employee class has used the extends keyword to inherit the Person class. Now, the Employee class can access all properties and methods of the Person class that are declared as public or protected. (It can’t access members that are declared as private.)

In the above example, the $employee object can access getName and setName methods that are defined in the Person class since they are declared as public.

Next, we’ve accessed the callToProtectedNameAndAge method using the getNameAndAge method defined in the Employee class, since it’s declared as protected. Finally, the $employee object can’t access the callToPrivateNameAndAge method of the Person class since it’s declared as private.

On the other hand, you can use the $employee object to set the age property of the Person class, as we did in the setAge method which is defined in the Employee class, since the age property is declared as protected.

So that was a brief introduction to inheritance. It helps you to reduce code duplication, and thus encourages code reusability.

Polymorphism

Polymorphism is another important concept in the world of object-oriented programming which refers to the ability to process objects differently based on their data types.

For example, in the context of inheritance, if the child class wants to change the behavior of the parent class method, it can override that method. This is called method overriding. Let’s quickly go through a real-world example to understand the concept of method overriding.
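Here’s a sketch of how that could look. (The formatting logic is illustrative.)

class Message
{
    protected $message;

    public function __construct($message)
    {
        $this->message = $message;
    }

    public function formatMessage()
    {
        return $this->message;
    }
}

class BoldMessage extends Message
{
    // override the parent method to change how the message is formatted
    public function formatMessage()
    {
        return '<b>' . $this->message . '</b>';
    }
}

$message = new Message('Hello World');
echo $message->formatMessage();     // prints 'Hello World'

$boldMessage = new BoldMessage('Hello World');
echo $boldMessage->formatMessage(); // prints '<b>Hello World</b>'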

As you can see, we’ve changed the behavior of the formatMessage method by overriding it in the BoldMessage class. The important thing is that a message is formatted differently based on the object type, whether it’s an instance of the parent class or the child class.

(Some object-oriented languages also have a kind of method overloading that lets you define multiple class methods with the same name but a different number of arguments. This isn’t directly supported in PHP, but there are a couple of workarounds to achieve similar functionality.)

Conclusion

Object-oriented programming is a vast subject, and we’ve only scratched the surface. I do hope that this tutorial helped you get started with the basics of OOP and that it motivates you to go on and learn more advanced OOP topics.

Object-oriented programming is an important aspect in application development, irrespective of the technology you’re working with. Today, in the context of PHP, we discussed a couple of basic concepts of OOP, and we also took the opportunity to introduce a few real-world examples.

Feel free to post your queries in the comments below!


Source: Nettuts Web Development

Web Design And Development Advent Roundup For 2018

Rachel Andrew

2018-12-03T16:30:30+02:00

In the run-up to Christmas, there is a tradition across the web design and development community to produce advent calendars, typically with a new article or resource for each day of December. In this article, I have rounded up all those that I have found to be running this year, along with RSS feeds where they can be located, and Twitter accounts to make it easier to follow along.

It is a lot of work to publish an article every day — as we well know here at Smashing — and we have a whole team here to help. However, the majority of the calendars published here are true community efforts, often with the bulk of the work falling to an individual or tiny team, with no budget to pay authors and editors. So, please join us in supporting these efforts, share the articles that you enjoyed reading, and join the discussions respectfully. Whether you celebrate Christmas or not, you can certainly learn a lot of new things over the next 24 days.

Perl Advent

Perl Advent has been running since 2000 and is the longest-running web advent calendar that I know of. Now old enough to enjoy a festive sherry (in Europe at least), they are back for 2018 for 24 merry days of Perl.

24 Ways

I have to admit a little bias in this subject as 24 Ways is the calendar started by my husband, business partner and fellow Smashing Magazine editor, Drew McLellan. 24 Ways has been running since 2005, when Drew had the idea to post a new article each day of December until Christmas. I don’t think either of us expected that on the 30th November 2018, Drew would still be staying up until midnight to check that the first article went live!

PerfPlanet Calendar

Ever since 2009, the Performance Calendar has been helping us to speed up the web. It is up again for the 10th year in a row and is maintained by Sergey Chernyshev.

24 Jours De Web

Next on my list is the French language calendar, 24 Jours De Web. Their first year of publication was 2012, and they are supporting the charity L’Auberge des Migrants, asking readers to donate to help refugees in Calais and the North of France, providing material and food aid, support and advocacy.

Christmas XP

The Christmas XP calendar is a little different. Rather than an article each day, they bring us a WebGL experiment, as they have every year since 2012.

AWS Advent

If you would like to brush up on your AWS skills then the AWS Advent calendar promises “a yearly exploration of AWS in 24 parts”. You’ll find a wide range of articles on security, deployment strategy and general tips and techniques to be aware of when using Amazon Web Services.

24 Days In December

If PHP is your language of choice then you can look forward to 24 days of PHP articles over at 24 Days in December. They began publishing in 2015, when Andreas Heigl realized that he missed Web Advent, which had stopped publishing in 2012.

In his wrap-up post after the initial season Andreas wrote:

“I came to realize that I missed the phpadvent/webadvent the last years. And wouldn’t it be great to have a read about something PHP-related for the first 24 Days in December? Even for those people that are not Christian, the time before Christmas has to be a special time. And if it’s only due to the commercials and ads what to present.

So I would wait and see whether someone would arrange something.

But wait! When everyone thinks someone should do something, no-one will do anything.”

I’m happy that the community effort started by Andreas, when he realized that he would need to be the person who did something, is back for another year.

24 Pull Requests

The 24 Pull Requests calendar is a little different. It began in 2012 with the idea that contributors should, “Send 24 pull requests between December 1st and December 24th.” The idea is that these would be 24 gifts to the open-source community from every contributor. Since the project began, 19,859 people have contributed (at the time of writing).

It’s a brilliant idea, and I really love their updated policy for 2018. Recognizing that making Pull Requests is only one way that people contribute to open source, the site has widened the scope of contributions. You can make a Pull Request as before if this is how you contribute. However, you can also complete a form to track your non-PR contribution. Perhaps you wrote a tutorial, ran a meetup, mentored someone — these things are all important contributions too. Read the updated contribution policy for 2018 and join in!

UXmas

UXmas have been publishing their Advent Calendar for UX folk since 2012. I am very impressed that they already have their call for submissions for 2019 up and running. So, you can read some useful articles this year and submit your idea for next year at the same time.

24 Accessibility

24 Accessibility are back for a second year of accessibility posts in the run-up to Christmas. The project was started by Dennis Deacon, an Accessibility Engineer with The Paciello Group, and will be well worth following this year.

Fronteers Advent Calendar

Fronteers are running 24 posts in Dutch on their blog. A lovely touch is that each writer chooses a charity, and the Fronteers organization will donate 75 euros on their behalf.

Advent Of Code

If you would prefer a puzzle over an article, take a look at Advent of Code. Created by Eric Wastl, this is an Advent calendar of small programming puzzles for a variety of skill sets and skill levels that can be solved in any programming language you like.

24 Days In Umbraco

24 Days In Umbraco is a calendar of articles relating to the Umbraco CMS. They have been publishing every December since 2012. Their About page says that the calendar started when:

“… we asked a bunch of Umbraco people if they had a favorite feature, a story or something else that they’d be willing to write a short article about. Then we’d post a new one here every day through December.”

Advent Speaker Tips

New this year is a daily public speaking tip on the Notist blog.

Data-Driven Advent Calendar

The Data-Driven Advent Calendar will contain an article about data journalism each day of Advent. The project is from Journocode, who are computer scientists and journalists working in newsrooms across Germany.

A Computer Of One’s Own

Posting to Medium, A Computer Of One’s Own will highlight the lives of the pioneers of the computer age. The project is by Alvaro Videla, with illustrations by Sebastian Navas.

React, JavaScript, Security And Elm Christmas

A set of calendars sponsored by Norwegian company Bekk. They are supporting React Christmas, JavaScript Christmas, Security Christmas and Elm Christmas. That’s a whole lot of articles to curate!

#devAdvent

Not a website but instead a hashtag: Sarah Drasner tweeted that she would be highlighting a person and project every day of Advent using the hashtag #devAdvent. Follow along to get to know some new folks and the work they do.

Share The Ones I Missed!

If you know of a calendar related to web design and development that I haven’t mentioned here, post a comment. Enjoy your seasonal reading!



Source: Smashing Magazine

Best Practices for ARIA Implementation

Today, to mark the International Day of Persons With Disabilities, we are highlighting accessibility here at Envato Tuts+.

ARIA is an important feature that web developers can use to help make their sites more accessible. In previous pieces, we talked about how you can implement ARIA, whether you’re doing so on an eCommerce site or in more niche places.

So far, the focus of this series has been on how to implement ARIA—for example, how to add a role to an element or how to code more accessible forms. With this piece, the focus is going to shift a bit towards other aspects of online accessibility, like why and when we should use ARIA. We’ll also cover some common questions asked on previous posts throughout the series.

Alright, let’s begin!

What Is ARIA?

At its base, ARIA is an extension to current web development languages (mainly HTML) that allows for enhanced accessibility to end users.

What exactly does that mean, though?

Extending HTML With ARIA

HTML has some shortcomings when it comes to how elements are defined and how elements can be related to one another.

In many cases, HTML elements (such as the <div> tag) are too broadly defined to be useful to someone navigating a site with a screen reader, or an element may have too many possible meanings to be interpreted (e.g. an image could also be used as a button). ARIA adds additional attributes to HTML elements to allow for definitions that can be layered on top of the already existing markup language, adding clarity as needed.

The second major benefit is the relation of elements. With HTML, every element exists as a child and/or a parent of another element. But this structure doesn’t capture all semantic relationships. This can lead to scenarios like a controller and the element it controls not being clearly associated if they are placed in separate div containers. This becomes increasingly important in complex site structures or when altering the DOM using JavaScript.

Beyond those two benefits, there are a host of others that tend to get less attention, but provide excellent functionality nonetheless.

ARIA for Advanced Accessibility

Although much of the web is pushing towards easier-to-use UX, ARIA fills an important role for others who may not be able to simplify their structure. There are cases where a website might require tree controls, such as those commonly used by filesystems. By providing additional structure, ARIA makes these more advanced controls available to people who might not be able to access them otherwise—especially for users seeking to create a site with drag-and-drop capabilities.

Another key capability is that ARIA doesn’t just extend HTML—it can also be used to make other, less accessibility-friendly technologies available to more users. This is often the case for people using AJAX or DHTML for their web applications.

And really, those are just scratching the surface. If you’re interested in finding out more about the capabilities, attributes, and other useful parameters, take a look at the WAI-ARIA overview.

Why Should We Use ARIA?

As far as accessibility is concerned, ARIA has become one of the most widely adopted standards, with more than 25% of the top million websites using it to some extent.

The biggest boon of using WAI-ARIA is that it increases accessibility on your site. Users that rely on screen readers, have low vision, or use alternative interfaces for the web benefit greatly from the implementation—and in some cases, they may not be able to use a website to its full extent without it. For many, accessibility is seen as a core value of the web, and as developers, we should strive to provide it wherever possible.

Beyond the best practices aspect, there is also a business incentive to implement ARIA. With 2-3% of Americans having some form of low vision, there is a significant portion of most markets that would benefit from use of the standard. In addition, having an implementation in place now will be beneficial as adoption of ARIA grows among non-standard web interfaces and creeps into devices such as smart speakers.

Best Practices for ARIA Implementation

Until now, we’ve focused on the actual methods of implementation in this series. With the basics of how to add ARIA to your site in place now, let’s take a look at some key guidelines for putting together your own implementation.

Using a Light Touch Approach

A good implementation of ARIA doesn’t need additional attributes and roles at every opportunity.

When possible, using the least amount of additional code to convey the necessary information is ideal. As an example, the overuse of ARIA is similar to making your site’s homepage a full sitemap for a visual user. People using screen readers will be overwhelmed by the amount of notifications and additional markup on a page that overutilizes ARIA, achieving the opposite of what we were seeking.

Providing enough additional markup to make your important elements’ context clear is the goal, and anything beyond that is probably not necessary.

Avoiding Redundancy (Especially When Using HTML5)

Throughout the previous posts in this series, we talked about the role attribute quite a bit. We placed it in every place that we needed the user to be aware of. There’s an important exception to this, however, that we didn’t talk about before, and that’s the need to avoid redundancy.

Whenever an element that you want to add ARIA to already clearly defines what it is, then you can skip adding ARIA to it. Why is this? Because ARIA is meant to extend existing code to make it more readable, but in some cases, code is already clear, structured, and easy to understand.

This happens frequently for people using elements introduced in HTML5. For example, when using the <button> element, you no longer need to add in the attribute role="button", since the role is already explicitly defined by the HTML code.
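For example (a small illustrative snippet):

<!-- redundant: the role is already implied by the element itself -->
<button role="button">Save</button>

<!-- sufficient on its own -->
<button>Save</button>

<!-- a role still helps when a generic element is pressed into service as a control -->
<div role="button" tabindex="0">Save</div>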

Testing With a Screen Reader

Another key to creating a good ARIA implementation is to make sure that you are testing your site with a screen reader or two. You’re likely to be surprised at the details of how they function and how easy it is to make your website annoying to use by accident.

Many ARIA attributes can create notifications or alerts for the end user, and if you are utilizing an aria-live content area that changes every 10 seconds, it’s possible that the implementation is making it more difficult to use your website.
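For instance, a live region like the following (an illustrative snippet) announces its updates to screen reader users, so updating it too often quickly becomes noisy:

<div aria-live="polite">
  3 new notifications
</div>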

Iterative Accessibility for the Web

When adding additional accessibility measures to your site, it’s important to remember that it isn’t an all-or-nothing task. You don’t have to completely deck out your entire website in one attempt, and iterative additions are probably the best way to go.

Starting with major content areas and navigations and then spreading throughout the site slowly is a solid strategy, completing more as time goes on. Even if you schedule 15 or 30 minutes each month to add just a bit of accessibility to your site, it’s a step in the right direction.

Good UX Is Good Accessibility (And Vice Versa)

ARIA isn’t a cure-all for accessibility issues. It is crucial to place a heavy focus on good UX design, especially when it comes to text readability, as another tool in your accessibility toolkit.

If you’d like to delve deeper into the specifics of how UX and development (outside of ARIA) can be wielded to improve accessibility, take a look at the Web Content Accessibility Guidelines.

You can also take a look at our Complete Learning Guide to Web Accessibility.

Frequently Asked Questions About ARIA

This series on ARIA resonated with a lot of readers and sparked discussion in the community. Let’s keep it up! Increasing awareness and improving these tutorials is the best way to bring accessibility to the forefront.  

Here are some of the most common questions and commentary that popped up around the web:

What Browsers Support WAI-ARIA?

A bunch! Most modern browsers support the features of ARIA to some extent, though this slightly changes from browser to browser. If you want to see specifics for each browser, you can use a tool like Can I Use?

Can ARIA Be Used With WordPress?

Yep! A few WordPress themes already have ARIA integrated, but you can add it to any theme that you can edit the source code for. In addition, you can also use ARIA with almost any Content Management System!

What Happens If ARIA Isn’t Supported?

Since ARIA doesn’t affect rendering, nothing! In most cases, if a device doesn’t support ARIA, it’ll just be ignored entirely.

Make the Web a Better Place!

Putting all of the articles from this series together, I hope you now have all of the tools needed to improve the accessibility of your site!

If you have any questions, feedback, or have a correction for something I’ve said, please let me know in the comments!


Source: Nettuts Web Development

Accessible Apps: Barriers to Access and Getting Started With Accessibility

Today, we are highlighting accessibility—something we strive to think about every day here at Envato Tuts+. December 3 is International Day of Persons With Disabilities. Created by the United Nations in 1992, this day seeks to promote the rights and well-being of persons with disabilities in all spheres of society. More than one billion people worldwide live with some form of disability.

In the context of web and app development, the goal of accessibility is that your tool works for all people, regardless of the hardware or software they are using, and wherever they fall on the spectrum of hearing, movement, visual, and cognitive ability. With rapid changes in digital and assistive technology, meeting this goal requires thought, testing, and an overall understanding of the way online tools are used by different people with diverse needs.

There are tools here at Envato Tuts+ and across the web that can help you learn how to design and code for accessibility, and I will link to some in this article. But in this post, I would like to look at the role of developers in web accessibility and talk about why the best time to think about accessibility is at the beginning of a project. I’ll also introduce some emerging issues around developing for accessibility, raise considerations around barriers to access, and advocate for the importance of engaging with users at different stages of the development process.

Thinking About Barriers to Access

The UN’s Convention on the Rights of Persons with Disabilities points out that the existence of barriers is a central feature of disability—that disability is an evolving relationship between people with impairments and the social and environment barriers with which they interact. Barriers are in themselves disabling, and exclusion is a structural problem that lives in our systems, rather than in the bodies of those with impairments. Because of this, removing barriers is a prerequisite to social inclusion for all people.

Let’s think a bit about accommodations and barriers to access.

Last week, a friend pointed out that light bulbs are an assistive device for people who rely on their vision to get around. For sighted people, darkness is a barrier: they need light to navigate the world, and using light bulbs in a building is a way to mitigate that barrier. People without sight do not need this accommodation, as they navigate using other strategies. What we consider a “normal” accommodation is socially conditioned, rather than an objective truth.

I bring up this example to disrupt the idea of “normal” and move away from thinking of accessible design as a special accommodation. If we prioritize making our technology barrier-free, rather than thinking about accessibility as an exception or afterthought, we can shift the concept of “normal” to accommodate all people and exclude none.

The strength of web technology is that, by its nature, it removes barriers that exist in the physical world. When building web and mobile app technology, it is vital that we not add barriers back in to the technology through the way we design our tools. In order to do this, we have to understand how different people use our tools and what their needs are. And just like when considering whether to build a ramp or a set of stairs, the best time to think about this is before we start building.

Accessibility: Getting Started

When it comes to building accessible web tools, there are two main considerations: take advantage of existing accessibility infrastructure, and stay out of the way of assistive devices and other accessibility strategies.

Using alt text for all non-text elements (images, graphs, charts, etc.) is an example of how you can take advantage of existing infrastructure. Screen readers rely on alt text to parse web content for visitors with visual impairments. This is not a complicated fix—alt text is simply good design. By designing for accessibility, you will improve the functionality of your website. In this case, search engines rely on alt text to better “read” websites. According to the W3, case studies show that accessible websites have better search results and increased audience reach.
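For example (the file name and description here are illustrative):

<img src="sales-chart.png" alt="Bar chart showing sales rising 20% from 2017 to 2018">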

Ensuring that keyboard input works with your tool is an example of staying out of the way of users’ accessibility strategies. Using a mouse requires a degree of fine motor control that many people do not have, so they rely on keyboard input to navigate websites and apps. If your web tool can be navigated with keyboard input, it also allows assistive technologies that mimic the keyboard, such as speech input, to function properly. If you build a tool that cannot be navigated with keyboard input, you are unnecessarily creating an inaccessible environment for users.

Knowing how different people interact with your web tool gives you the ability to make choices that support their accessibility strategies. It is so much better to do this at the beginning of a project than to try to address accessibility as an afterthought. An illustrative example: UC Berkeley ran into trouble when they made thousands of uncaptioned videos—inaccessible to people with hearing impairments—available online. The university was legally required to caption the content, but did not want to pay for the expensive captioning project, and eventually removed the content outright.

By making websites more accessible, you make your web tools work better, more of the time, for more people. The power you have as a developer allows you to address accessibility issues at the earliest stages, when it is the easiest and least expensive time to do so.

Tools and Guides on Accessibility

Here at Envato Tuts+, we have a variety of information on how to build better, more accessible, web tools. Here are a few to get you started.

ARIA (the Accessible Rich Internet Applications Suite) is a tool to make web content and apps more accessible, especially for people who use screen readers or cannot use a mouse. The main purpose of ARIA is to allow for advanced semantic structure within HTML as a counterpart to HTML’s syntax-heavy nature. In other words, HTML tells the browser where things go, and ARIA tells it how they interact. In this series, you’ll learn how to use ARIA to make your web apps more accessible.

In this post, you’ll learn some tips to make your web page more keyboard accessible, using only basic HTML and CSS.

It’s important to make sure your checkboxes and radio buttons remain accessible to assistive technology (AT) and keyboard users. In this post, you’ll learn how to make sure your custom checkboxes and radio buttons are accessible.

Complete Learning Guide to Web Accessibility

This is a collection of tutorials, articles, courses, and quick tips to walk you through the basics of accessibility.

I encourage you to also check out some source guides on accessibility from the W3. The W3 article on accessibility principles is a good place to start. They also have an article about strategies internet users can employ to make the web more accessible—this will get you started thinking about how different people have different accessibility needs and how you can code to support accessibility for everyone. And this article goes over ways to optimize your computer for more accessible web browsing, which will give you an idea of some of the strategies people employ to make the internet more accessible for themselves.

User Testing Is Everything

You’ve thought about accessibility. You are committed to removing barriers to access in your web and mobile apps. You’ve built a tool with up-to-date accessibility guidelines in mind. Is your app accessible?

There are guides on Tools and Tips for Testing for Accessibility, and that is an important part of the design process. Reading the guide and testing your web tool is a great idea. But go further: the gold standard in designing for accessibility is user testing. Involve users early in your project and throughout the development process.

Some accessibility requirements are easy to meet, and some are more challenging. Understanding how different people use your tools will give you so much insight into how to build for accessibility. Everyone has different needs, different browsers, different assistive devices. No guide or checklist is going to be able to fully capture the breadth of experience of the people using your tool.

Like learning that users are receiving error reports or a 404 page, be grateful if and when you receive feedback that your tool is not currently meeting a user’s accessibility needs. Solicit this kind of feedback. Keep an open dialog with users and find solutions to the issues they bring to your attention. 

Anything you build will evolve—nothing is static in web and mobile technology. The real-life experience of your users is the most valuable input you can receive, so if you hear something is not working, say thank you. And then find a way to make it work.

Conclusion

Thank you for spending some time with me on this International Day of Persons With Disabilities. My intention with this article is to support the dialog around web and mobile accessibility and to give you a starting point for your own thoughts, research, and testing. Whether you are just getting started with programming or are an experienced programmer, it is so vital that you are prioritizing accessibility in your development projects, so that your tools can work well for each person who uses them.


Source: Nettuts Web Development

Build a Basic CRUD App with Angular and Node

This article was originally published on the Okta developer blog. Thank you for supporting the partners who make SitePoint possible.

In recent years, single page applications (SPAs) have become more and more popular. A SPA is a website that consists of just one page. That lone page acts as a container for a JavaScript application. The JavaScript is responsible for obtaining the content and rendering it within the container. The content is typically obtained from a web service and RESTful APIs have become the go-to choice in many situations. The part of the application making up the SPA is commonly known as the client or front-end, while the part responsible for the REST API is known as the server or back-end. In this tutorial, you will be developing a simple Angular single page app with a REST backend, based on Node and Express.

You’ll be using Angular as it follows the MVC pattern and cleanly separates the View from the Models. It is straightforward to create HTML templates that are dynamically filled with data and automatically updated whenever the data changes. I have come to love this framework because it is very powerful, has a huge community and excellent documentation.

For the server, you will be using Node with Express. Express is a framework that makes it easy to create REST APIs by allowing you to define code that runs for different requests on the server. Additional services can be plugged in globally, or depending on the request. There are a number of frameworks that build on top of Express and automate the task of turning your database models into an API. This tutorial will not make use of any of these in order to keep things focused.

Angular encourages the use of TypeScript. TypeScript adds typing information to JavaScript and, in my opinion, is the future of developing large scale applications in JavaScript. For this reason, you will be developing both client and server using TypeScript.

Here are the libraries you’ll be using for the client and the server:

  • Angular: The framework used to build the client application
  • Okta for Authorisation: A plugin that manages single sign-on authorization using Okta, both on the client and the server
  • Angular Material: An angular plugin that provides out-of-the-box Material Design
  • Node: The actual server running the JavaScript code
  • Express: A routing library for responding to server requests and building REST APIs
  • TypeORM: A database ORM library for TypeScript

Start Your Basic Angular Client Application

Let’s get started by implementing a basic client using Angular. The goal is to develop a product catalog which lets you manage products, their prices, and their stock levels. At the end of this section, you will have a simple application consisting of a top bar and two views, Home and Products. The Products view will not yet have any content and nothing will be password protected. This will be covered in the following sections.

To start you will need to install Angular. I will assume that you already have Node installed on your system and you can use the npm command. Type the following command into a terminal.

npm install -g @angular/cli@7.0.2

Depending on your system, you might need to run this command using sudo because it will install the package globally. The angular-cli package provides the ng command that is used to manage Angular applications. Once installed, go to a directory of your choice and create your first Angular application using the following command.

ng new MyAngularClient

Using Angular 7, this will prompt you with two queries. The first asks you if you want to include routing. Answer yes to this. The second query relates to the type of style sheets you want to use. Leave this at the default CSS.

ng new will create a new directory called MyAngularClient and populate it with an application skeleton. Let’s take a bit of time to look at some of the files that the previous command created. In the src directory of the app, you will find the file index.html, which is the main page of the application. It doesn’t contain much and simply plays the role of a container. You will also see a styles.css file. This contains the global style sheet that is applied throughout the application. If you browse through the folders you might notice a directory src/app containing six files.

app-routing.module.ts
app.component.css
app.component.html
app.component.ts
app.component.spec.ts
app.module.ts

These files define the main application component that will be inserted into the index.html. Here is a short description of each of the files:

  • app.component.css file contains the style sheets of the main app component. Styles can be defined locally for each component
  • app.component.html contains the HTML template of the component
  • app.component.ts file contains the code controlling the view
  • app.module.ts defines which modules your app will use
  • app-routing.module.ts is set up to define the routes for your application
  • app.component.spec.ts contains a skeleton for unit testing the app component

I will not be covering testing in this tutorial, but in real-life applications, you should make use of this feature. Before you can get started, you will need to install a few more packages. These will help you to quickly create a nicely designed, responsive layout. Navigate to the base directory of the client, MyAngularClient, and type the following command.

npm i @angular/material@7.0.2 @angular/cdk@7.0.2 @angular/animations@7.0.1 @angular/flex-layout@7.0.0-beta.19

The @angular/material and @angular/cdk libraries provide components based on Google’s Material Design, @angular/animations is used to provide smooth transitions, and @angular/flex-layout gives you the tools to make your design responsive.

Next, create the HTML template for the app component. Open src/app/app.component.html and replace the content with the following.

<mat-toolbar color="primary" class="expanded-toolbar">
  <button mat-button routerLink="/">{{title}}</button>

  <div fxLayout="row" fxShow="false" fxShow.gt-sm>
    <button mat-button routerLink="/"><mat-icon>home</mat-icon></button>
    <button mat-button routerLink="/products">Products</button>
    <button mat-button *ngIf="!isAuthenticated" (click)="login()"> Login </button>
    <button mat-button *ngIf="isAuthenticated" (click)="logout()"> Logout </button>
  </div>
  <button mat-button [mat-menu-trigger-for]="menu" fxHide="false" fxHide.gt-sm>
    <mat-icon>menu</mat-icon>
  </button>
</mat-toolbar>
<mat-menu x-position="before" #menu="matMenu">
  <button mat-menu-item routerLink="/"><mat-icon>home</mat-icon> Home</button>
  <button mat-menu-item routerLink="/products">Products</button>
  <button mat-menu-item *ngIf="!isAuthenticated" (click)="login()"> Login </button>
  <button mat-menu-item *ngIf="isAuthenticated" (click)="logout()"> Logout </button>
</mat-menu>
<router-outlet></router-outlet>

The mat-toolbar contains the material design toolbar, whereas router-outlet is the container that will be filled by the router. The app.component.ts file should be edited to contain the following.

import { Component } from '@angular/core';

@Component({
  selector: 'app-root',
  templateUrl: './app.component.html',
  styleUrls: ['./app.component.css']
})
export class AppComponent {
  public title = 'My Angular App';
  public isAuthenticated: boolean;

  constructor() {
    this.isAuthenticated = false;
  }

  login() {
  }

  logout() {
  }
}

This is the controller for the app component. You can see that it contains a property called isAuthenticated together with two methods login and logout. At the moment these don’t do anything. They will be implemented in the next section which covers user authentication with Okta. Now define all the modules you will be using. Replace the contents of app.module.ts with the code below:

import { BrowserModule } from '@angular/platform-browser';
import { NgModule } from '@angular/core';
import { BrowserAnimationsModule } from '@angular/platform-browser/animations';
import { FlexLayoutModule } from '@angular/flex-layout';
import {
  MatButtonModule,
  MatDividerModule,
  MatIconModule,
  MatMenuModule,
  MatProgressSpinnerModule,
  MatTableModule,
  MatToolbarModule
} from '@angular/material';
import { HttpClientModule } from '@angular/common/http';
import { FormsModule } from '@angular/forms';

import { AppRoutingModule } from './app-routing.module';
import { AppComponent } from './app.component';

@NgModule({
  declarations: [
    AppComponent
  ],
  imports: [
    AppRoutingModule,
    BrowserModule,
    BrowserAnimationsModule,
    HttpClientModule,
    FlexLayoutModule,
    MatToolbarModule,
    MatMenuModule,
    MatIconModule,
    MatButtonModule,
    MatTableModule,
    MatDividerModule,
    MatProgressSpinnerModule,
    FormsModule,
  ],
  providers: [],
  bootstrap: [AppComponent]
})
export class AppModule { }

Notice all the material design modules. The @angular/material library requires you to import a module for each type of component you wish to use in your app. Starting with Angular 7, the default application skeleton contains a separate file called app-routing.module.ts. Edit this to declare the following routes.

import { NgModule } from '@angular/core';
import { Routes, RouterModule } from '@angular/router';
import { ProductsComponent } from './products/products.component';
import { HomeComponent } from './home/home.component';

const routes: Routes = [
  {
    path: '',
    component: HomeComponent
  },
  {
    path: 'products',
    component: ProductsComponent
  }
];

@NgModule({
  imports: [RouterModule.forRoot(routes)],
  exports: [RouterModule]
})
export class AppRoutingModule { }

This defines two routes corresponding to the root path and to the products path. It also attaches the HomeComponent and the ProductsComponent to these routes. Create these components now. In the base directory of the Angular client, type the following commands.

ng generate component Products
ng generate component Home

This creates html, css, ts, and spec.ts files for each component. It also updates app.module.ts to declare the new components. Open up home.component.html in the src/app/home directory and paste the following content.

<div class="hero">
  <div>
    <h1>Hello World</h1>
    <p class="lead">This is the homepage of your Angular app</p>
  </div>
</div>

Include some styling in the home.component.css file too.

.hero {
  text-align: center;
  height: 90vh;
  display: flex;
  flex-direction: column;
  justify-content: center;
  font-family: sans-serif;
}

Leave the ProductsComponent empty for now. This will be implemented once you have created the back-end REST server and are able to fill it with some data. To make everything look beautiful, only two little tasks remain. Copy the following styles into src/styles.css

@import "~@angular/material/prebuilt-themes/deeppurple-amber.css";

body {
  margin: 0;
  font-family: sans-serif;
}

.expanded-toolbar {
  justify-content: space-between;
}

h1 {
  text-align: center;
}

Finally, in order to render the Material Design Icons, add one line inside the <head> tags of the index.html file.

<link href="https://fonts.googleapis.com/icon?family=Material+Icons" rel="stylesheet">

You are now ready to fire up the Angular server and see what you have achieved so far. In the base directory of the client app, type the following command.

ng serve

Then open your browser and navigate to http://localhost:4200.

Add Authentication to Your Node + Angular App

If you have ever developed web applications from scratch you will know how much work is involved just to allow users to register, verify, log on and log out of your application. Using Okta this process can be greatly simplified. To start off, you will need a developer account with Okta.

developer.okta.com

In your browser, navigate to developer.okta.com and click on Create Free Account and enter your details.

Start building on Okta

Once you are done you will be taken to your developer dashboard. Click on the Add Application button to create a new application.

Add Application

Start by creating a new single page application. Choose Single Page App and click Next.

Create new Single Page App

On the next page, you will need to edit the default settings. Make sure that the port number is 4200. This is the default port for Angular applications.

My Angular App

That’s it. You should now see a Client ID which you will need to paste into your TypeScript code.

To implement authentication into the client, install the Okta library for Angular.

npm install @okta/okta-angular@1.0.7 --save-exact

In app.module.ts import the OktaAuthModule.

import { OktaAuthModule } from '@okta/okta-angular';

In the list of imports of the app module, add:

OktaAuthModule.initAuth({
  issuer: 'https://{yourOktaDomain}/oauth2/default',
  redirectUri: 'http://localhost:4200/implicit/callback',
  clientId: '{YourClientId}'
})

Here yourOktaDomain should be replaced by the development domain you see in your browser when you navigate to your Okta dashboard. YourClientId has to be replaced by the client ID that you obtained when registering your application. The code above makes the Okta Authentication Module available in your application. Use it in app.component.ts, and import the service.

import { OktaAuthService } from '@okta/okta-angular';

Modify the constructor to inject the service and subscribe to it.

constructor(public oktaAuth: OktaAuthService) {
  this.oktaAuth.$authenticationState.subscribe(
    (isAuthenticated: boolean) => this.isAuthenticated = isAuthenticated
  );
}

Now, any changes in the authentication status will be reflected in the isAuthenticated property. You will still need to initialize it when the component is loaded. Create an ngOnInit method and add implements OnInit to your class definition.

import { Component, OnInit } from '@angular/core';
...
export class AppComponent implements OnInit {
  ...
  async ngOnInit() {
    this.isAuthenticated = await this.oktaAuth.isAuthenticated();
  }
}

Finally, implement the login and logout method to react to the user interface and log the user in or out.

login() {
  this.oktaAuth.loginRedirect();
}

logout() {
  this.oktaAuth.logout('/');
}

In the routing module, you need to register the route that will be used for the login request. Open app-routing.module.ts and import OktaCallbackComponent and OktaAuthGuard.

import { OktaCallbackComponent, OktaAuthGuard } from '@okta/okta-angular';

Add another route to the routes array.

{
  path: 'implicit/callback',
  component: OktaCallbackComponent
}

This will allow the user to log in using the Login button. To protect the Products route from unauthorized access, add the following line to the products route.

{
  path: 'products',
  component: ProductsComponent,
  canActivate: [OktaAuthGuard]
}

That’s all there is to it. Now, when a user tries to access the Products view, they will be redirected to the Okta login page. Once logged on, the user will be redirected back to the Products view.

Implement a Node REST API

The next step is to implement a server based on Node and Express that will store product information. This will use a number of smaller libraries to make your life easier. To develop in TypeScript, you’ll need typescript and tsc. For the database abstraction layer, you will be using TypeORM. This is a convenient library that injects behavior into TypeScript classes and turns them into database models. Create a new directory to contain your server application, then run the following command in it.

npm init

Answer all the questions, then run:

npm install --save-exact express@4.16.4 @types/express@4.16.0 @okta/jwt-verifier@0.0.14 express-bearer-token@2.2.0 tsc@1.20150623.0 typescript@3.1.3 typeorm@0.2.8 sqlite3@4.0.3 cors@2.8.4 @types/cors@2.8.4

I will not cover all these libraries in detail, but you will see that @okta/jwt-verifier is used to verify JSON Web Tokens and authenticate them.

In order to make TypeScript work, create a file tsconfig.json and paste in the following content.


Source: Sitepoint

Strategies For Headless Projects With Structured Content Management Systems

Knut Melvær

2018-11-29T13:35:41+01:00

This is the guide I wish I had the last couple of years when running projects with headless Content Management Systems (CMSs). I’ve been a developer, a user-experience and technology consultant, a project manager, information architect, and an author. The different hats have made me realize that even if we’ve had so-called “headless” CMSs for a while now, there’s still a way to go about thinking how to use them best.

We are now at a place where many of us rely on JavaScript frameworks for frontend work, using design systems made of components and compositions, rather than just implementing flat page layouts. There’s a lot of traction towards the JAMstack and isomorphic/universal apps that run both on the server and the client. The final piece of the puzzle, then, is how we manage all the content.

Traditional CMSs are adding APIs to serve content through network requests and the JSON format. In addition, “headless” CMSs have emerged to serve content exclusively through APIs. My argument in this article, though, is that we should spend less time talking about “headless” and more about “structured content”, because that is the essential quality of these systems. These systems have lots of implications for our craft, and we still have a way to go in terms of figuring out good patterns for how we should deal with these technologies.

Coming to technology consulting from a background in humanities, I have learned a lot about how to organize and work with web projects that take a content-centric approach — both with the newer API-based as well as the traditional CMSs. I have come to appreciate how getting started early with actual live content from a CMS, in a cross-disciplinary setting, not only makes it possible to uncover complexities at an earlier stage, but also lends agency to everyone involved and gives opportunities to reflect on the challenges and possibilities of technology and design in its broadest sense.

In this article, I’ll suggest some overarching strategies, with some concrete, real-world examples on how to think about working with structured content. At the time of writing, I have just started working for a SaaS company that provides such a content management service, for hosting content delivered over APIs. I will make references to it, both because of my past experience with it in projects I was involved in as a consultant, but also because I think it aptly illustrates the points I want to make. So consider this a disclaimer of sorts.

That being said, I have been thinking about writing this article for a couple of years, and I have strived to make it applicable to whatever platform you choose to go with. So without further ado, let’s jump twenty years back in time in order to understand a bit more where we are today.

First Moves With Web Standards

In the early 2000s, the Web Standards movement inspired a field to change their ways of working. From a “layout-first” approach, they directed our attention towards how content on a page should be marked up semantically using HTML: A website’s menu isn’t a <table>, it’s a <nav>; A heading is not a <b>, it’s an <h1>. It was a significant step towards thinking about the different roles web content plays in helping users find, identify, and take it in.

The Web Standards movement introduced the argument that semantic markup improved accessibility, which also improved a page’s ranking in Google’s search results. It also marked a shift in how we thought about web content. Your website was no longer the only place your content was represented. You also had to think about how your web pages were presented in other visual contexts, like search results or screen readers. This was later fueled by social media and embedded previews of shared links. The mindset shifted from how the content should look, to what it should mean. This also happens to be the key to working with structured content.

With the adoption of pocket-size devices connected to the Internet, the web suddenly got a serious contender in apps. The competition, however, was mostly for the eyeballs of the end user. Many organizations still needed to distribute information about their products and services in both their apps and their different web presences. Concurrently, the web matured, and JavaScript and AJAX made it easier to connect different sources of content through APIs. Today, we have GraphQL and tooling that make content fetching and state management simpler. And so the bits of the technological puzzle began to fall into place.

“Create Once, Publish Everywhere”

Though it’s mostly described as a “technological shift”, the embedding of content in JSON payloads (traveling along HTTP tubes) is bound to have an outsized impact on how we think about digital content and the workflows surrounding it. In some ways, it already has. Almost ten years ago, National Public Radio’s (NPR) Daniel Jacobson guest blogged at programmableweb.com about their approach, summed up in the acronym COPE, which stands for “Create Once, Publish Everywhere”. In the article, he introduces a content management system providing content to multiple digital interfaces through an API — not through an HTML rendering machine — as most CMSs at the time (and arguably now) did.


Illustration of NPR’s COPE system: from left to right, a data entry layer, a normalized data management layer, a flattened data management layer, a layer for APIs and filtering/rights, and the presentation layer. Published originally on programmableweb.com (Oct 13, 2009). (Large preview)

NPR’s COPE “data management layer” is what would become the notion of “a headless CMS”. In the early days of COPE, it was achieved by structuring the content in XML. Today, JSON has become the dominant data format for transferring data over APIs, including to Internet of Things devices and other systems outside the web. If you want to exchange content with chatbots, voice interfaces, and even software for visual prototyping, you very often talk HTTP with a JSON accent.

“Uncoining” The Term “Headless CMS”

According to Google Trends, searches for “headless CMS” only gained popularity as late as 2015, i.e. six years after NPR’s COPE article. The term “headless” (at least in relation to digital technology, and not late 18th-century French aristocracy) has been used for a good while longer to talk about systems that run without a graphical user interface.

Note: One could argue that a command line interface is, in a sense, “graphical” too — as is software on servers or in testing environments — but let’s save that for another article.

I’m of two minds about calling these new CMSs “headless”. We might as well call them “polycephalic” — that which has many heads. They are the Hydras and Cerberuses of CMSs. “Headless” also defines these systems by the capability they lack (i.e., a template engine for rendering web pages), instead of defining them by their true strength: making it possible to structure content without the constraints of the web. That being said, as of today, many of the solutions in this category could also be called “Nearly Headless Nick”, because the editing interface is still tightly coupled to the system. Their “headlessness” arises from their lack of a templating engine, that is, the machinery producing markup from content.

Note: I would almost definitely use a CMS called “Mimsy-Porpington” (known from the Harry Potter universe) though.

Instead, they make content available through an API, hence giving you more flexibility for how, what, and where you want to display and use this content. This makes them perfect companions to popular JavaScript frontend frameworks such as React, Angular, and Vue. And despite the claim of being able to deliver content to “websites, apps, and devices”, most of them are still limited by how web content works. This is most noticeable in the way most of them handle rich text — storing it either as HTML or Markdown.

Traditional CMSs have also started adding somewhat generic APIs in addition to their template rendering systems, and call this “decoupled” as a way to distinguish themselves from their fresh competitors. “All this, and APIs, too!” is the claim. Some of these CMSs are also pretty agnostic when it comes to content modeling. For example, Craft CMS makes almost no assumptions about your content model when you first install it. WordPress is also moving towards using APIs for content delivery. I suspect the gap between the old players in the CMS field and the new ones will get narrower as we go along.

Nonetheless, putting content management behind APIs (instead of an HTML renderer) is an important step towards more sophisticated ways of working in an age where an organization’s text, images, videos, and media are digitized and exposed to internal and external users and customers. It’s time, though, to stop defining these systems by the frontend rendering capabilities they lack, and to focus on what they really can do for us: give us a way to work with structured content. So, should we be calling them “Structured Content Management Systems”? As in, “No Bob, this isn’t your usual CMS. This is an SCMS. Trust me, it’s going to be a thing.”

It’s Not About The Heads, It’s About Structured Content

The most radical change that Structured Content Management Systems (SCMSs) impose is the move away from arranging content according to a page hierarchy, towards structuring content for whatever purpose you see fit. Avoiding duplicate content is a clear advantage, because it increases reliability and decreases administrative burden (you don’t have to cope with duplicated content across multiple channels). In other words: Create Once, Publish Everywhere. If you just have to update your product description once — in one system — and it updates wherever your product is exposed to the user, that’s clearly an advantage.

While SCMS vendors frequently use “your website and an app” to justify thinking differently about page structure, you don’t have to cross the river to draw benefits from structured content. With the popularity of JavaScript frameworks, it’s more and more common to build websites as a composition of individual components that can be “filled” with different content depending on state and context. You may have a product card that appears in many different contexts throughout your web application. Modern web development is moving away from laying out documents and pages, towards composing components according to a mixture of user input, algorithms, and customization.

These trends in how design systems are made, and in how we are encouraged to work in teams through processes of testing, learning, and iteration, make the field of content management ripe for some new ways of thinking. Some patterns have emerged, but we still have a way to go. Therefore, based on my experience from working in teams and projects that have put content front and center, and now as part of a team that builds a service for it (I urge you to be aware of any bias here), I want to put forth some strategies that I believe can be helpful and that create points for further discussion.

1. Approach Content In Multi-Disciplinary Teams

I believe it is a thing of the past that a graphic designer hands over stale, pixel-perfect pages to a frontend developer whose responsibility is to “implement” the design. We now make design systems consisting of smaller components, laid out in compositions that come with multiple possible states out of the box. More often than not, these components have to be resilient to user-generated input, which means that the sooner you introduce live content into the process, the better. A frontend developer’s responsibility isn’t to reproduce a graphic designer’s vision; it’s to maneuver the complex field of how browsers render HTML, CSS, and JavaScript, making sure that the user interfaces are responsive, accessible, and performant.

When working as a technology consultant at Netlife (a consultancy specializing in user experience), I saw great steps being made towards collaboration between developers, designers, and user researchers. Even though our content editors were always involved in the project from the get-go, their contributions didn’t enter the design workflow, mainly because of technical friction.

The bottleneck was often a legacy CMS we couldn’t touch, or the time it took to build the content structure because it was dependent on the design layout. This often resulted in work being doubled: We made an HTML prototype, often based on content parsed from Markdown files, which had to be re-implemented in the CMS stack once the user testing was done and everyone was pixel-perfect happy. This was often an expensive process, as limitations in the CMS were discovered late. It also created pressure on all parties to “get it right the first time” and left less space for the kind of experimentation you would want in a design project.

Multi-Disciplinary Work Requires Nimble Systems

Moving to an SCMS in which it took minutes to code up a content model (where fields and API were ready instantly) turned our process upside down — and for the better. I remember sitting with the content editor of the new u4.no in the project’s first days, talking through how they worked and how they would like to work with their content. Rather quickly, we translated our conclusions into simple JavaScript objects that were instantly transformed into an editing environment in the browser, figuring out helpful titles and descriptions for the fields as we went. We talked about how they wanted text snippets they could reuse across different pages and contexts — which they called “nuggets” in-house — and we created them then and there.
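To give a sense of how lightweight such a content model can be, here is a minimal sketch of what a “nugget” type could look like as a Sanity schema definition. The field names and descriptions are hypothetical, not taken from the actual u4.no project:

// A hypothetical schema for the reusable “nuggets” mentioned above.
// In Sanity, a content type is just a JavaScript object like this one.
export default {
  name: 'nugget',
  title: 'Nugget',
  type: 'document',
  fields: [
    {
      name: 'title',
      title: 'Title',
      type: 'string',
      description: 'An internal name that makes the nugget easy to find'
    },
    {
      name: 'text',
      title: 'Text',
      type: 'text',
      description: 'A snippet that can be reused across pages and contexts'
    }
  ]
}

The point is less the specific syntax and more the turnaround time: a conversation with an editor can be turned into a working editing interface before the meeting is over.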

Allowing for this kind of exploration early in the project development — a content editor and a developer talking together while the interface was being made in front of us — felt powerful. We knew that we could continue designing the frontend in React while she and her colleagues began working with the content, without worrying about painting ourselves into a corner, like we often did with CMSs in which the structure was tightly coupled to how you had to code up the frontend.


U4.no’s content editor with a publication document open
Example from u4.no’s custom editor environment in Sanity, where the style guide is carefully and contextually integrated with the fields. (Large preview)

A Content System Should Allow For Experimentation And Iteration

Creative redesign projects aside, a system for structured content should also allow you to continue improving, testing, and iterating on your content as part of your whole design system. UX designers should be able to quickly prototype with real content using tools like Sketch or Framer X. You should be able to augment content management with quantitative measurements, be it readability scales or how the content performs where it’s used.

Note: I used the term “UX designers” above despite having the opinion that we all should — in some way — relate to the process of making good user experiences. We’re all UX designers in our different strands of design.

Animated screendump of a rich text editor in Sanity
Example of quantitative readability analysis in a rich text editor. (Large preview)

Working with structured content requires a bit of getting used to if you’re accustomed to just WYSIWYG-ing content directly on your web page layout. Yet, it lends itself to a conversation that is more in line with how the digital design field is moving. Structured content lets a team of designers, developers, content editors, user researchers, and project managers collectively think about how a system should work to support users’ needs and strategic goals. It also requires you to think differently about how content is structured, which takes us to the next strategy.

2. You Might Not Need A Pecking Order

One of the most notable changes for many is that systems for structured content are geared towards collections and lists of documents, and not folder-like hierarchies that reflect website navigation structures. These structures stop making sense as soon as some of the content is to be used in other contexts — be it chatbots, print media, or other websites. Traditional CMSs have tried to mitigate this by allowing for reusable content blocks, but those still need to be placed on page layouts and are cumbersome to reason about through APIs.


Folder-based content management in Episerver.
Folder-based content management in Episerver. This screenshot isn’t old, by the way. Published on episerver.com. (Large preview)

Each Page To Its Own

As laid out in The Core Model, when one of your main referrers is either Google or sharing on social media, you should consider every page a landing page. And if you look at the distribution of page views, you will notice that some of your pages are way more popular than others. Unless you are a news website, those tend not to be the news, but the pages that let users achieve whatever they hoped to achieve on your website. They are where business actually happens.

Your digital content should be in service of the intersection between your own strategic goals and the individual goals of your users. When the digital agency Bengler (sanity.io’s predecessor) made the new website for oma.eu, they didn’t structure the content after an elaborate hierarchy of pages. They made content types that reflected the organization’s everyday reality, i.e. projects, persons, and publications. In fact, the OMA website is almost completely flat in terms of content hierarchy, and the front page is generated from a mix of algorithmic and editorial rules.


The Sanity Studio with a “plan feature” document titled “Premium Support” open with an editor
How sanity.io structures their content (Large preview)

So, how should you go about it? I believe in a mix of thinking about your content as a reflection of your organization’s mental model, and of what it needs to be in order to be useful for whatever your users need it for.

Here’s a basic example: When building a page of employees, you should probably start with a content type called person. A person can have a name, contact info, an image, different organizational roles, and a short biography. A person document can be reused in contact lists, article author bylines, chat support interfaces, and building access badges. Perhaps you already have an in-house system that knows who these people are and that comes with an API? Great, then synchronize with that.
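As a rough sketch — again in a Sanity-style schema, with illustrative field names — such a person type could look like this:

// A hypothetical person type that can be reused across channels
export default {
  name: 'person',
  title: 'Person',
  type: 'document',
  fields: [
    { name: 'name', title: 'Name', type: 'string' },
    { name: 'email', title: 'Email', type: 'string' },
    { name: 'image', title: 'Image', type: 'image' },
    // Roles let the same person appear in different organizational contexts
    { name: 'roles', title: 'Roles', type: 'array', of: [{ type: 'string' }] },
    { name: 'biography', title: 'Biography', type: 'text' }
  ]
}

Notice that nothing in this model says anything about where or how a person should be displayed — and that’s exactly the point.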

Don’t Get Lost In An Ontological Rabbit Hole

It’s useful to return to Google’s way of indexing web pages and how it’s trying to index the world’s information. That’s why Google is expending time and effort on linked data (RDFa, microformats, JSON-LD). If you annotate your web pages with JSON-LD elements, you will appear more prominently in search results. It’s also relevant when your information should be spoken by voice assistants and displayed in an assistant UI. If your content is already structured and easily available in an API, it will be relatively easy for you to implement these microformats.
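For instance, a person annotated with the schema.org vocabulary can be serialized as JSON-LD and embedded in a page in a script tag of type application/ld+json. A minimal sketch, with made-up values:

// Structured content maps almost one-to-one onto linked data
const personJsonLd = {
  '@context': 'https://schema.org',
  '@type': 'Person',
  name: 'Ada Lovelace',
  jobTitle: 'Consultant',
  email: 'mailto:ada@example.com'
}

If your person documents already live in an API, generating this kind of annotation is mostly a matter of mapping fields.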

I’m not sure I would recommend going all in on the ontologies of schema.org and various linked data resources though, at least not for editor purposes. You can quickly get lost in a rabbit hole of trying to make perfect platonic structures where it all fits.

Newsflash: It never will, because the world is a messy place, and because people think about stuff differently.

It’s more important to structure your content in a system that makes intuitive sense and lends itself to be adapted as needs change. This is why it’s important to start with content modelling early on in the design and development process — you need to learn about how it needs to be used.

Abstract From Reality, Not From CMS Conventions

It can be tempting to just follow whatever conventions your CMS comes with. Remember how WordPress gives you “Posts” and “Pages”, and suddenly everything needs to be fitted into those boxes? A WYSIWYG rich text field is flexible in that it allows you to put in whatever you want, but the content will not be structured and easily adaptable — it’s only flexible once. But you need some place to begin your mapping of a content model. My suggestion is to begin by talking to people, i.e. the authors and readers.

How do people talk about the content internally? What do people call different things? You could run a free-listing exercise, a method used by ethnographers to map folk-taxonomies. For example, you could ask:

“Name the different types of content in our organization.”

Or, on a more specific level:

“Can you name the different types of reports we have in this organization?”

The point of this exercise is to tease out the internalized taxonomies people carry, and not their opinions or feelings about things (something that often tends to derail design processes). You don’t have to ask many people before you have a pretty exhaustive list to work from. You’ll probably find that parts of your list come from conventions in your current CMS (that’s good to know if you are about to do some remodelling). Now you should talk with your editors and try to pin down what they need the content to do.

Some questions you can ask could be the following:

  • Do you need to use this content in more than one place? Where?
  • What are the different relationships between the content types?
  • Where do we need the content to be displayed today, and tomorrow?
  • In which ways do we need content to be sorted? Can the ordering be done algorithmically or by the user, or does it have to be done manually?
  • Are there systems or databases in other systems that we can synchronize with in order to prevent duplication?
  • Where do we want the canonical content to live? Should the SCMS be the source for it, or just augment existing content, e.g. marketing copy for products living in a product management system?

This doesn’t mean that you have to throw the traditional information architecture out with the now lukewarm bathwater. It still makes sense to have articles as a content type, if articles are a part of your organization’s content reality. But perhaps you don’t really need the abstract convention of categories, because these articles have references to the types of services or products they deal with. These relations allow for querying the articles in circumstances where it makes sense, without requiring someone to have “article category management” as part of their job description.

The article is also what makes it hard to decouple content completely from the presentation layer. We are so used to thinking about the layout and styling of the article, but in an age where you are expected to host your own content on your own domain and then syndicate it to platforms such as medium.com, you have already given up control over its visual presentation. This takes us to the next strategy.

3. Presentation Contexts Are Also Content Types

Be Redesign Ready

You want to be able to adapt and quickly change the navigation structure of your website as well, without having to either rebuild your whole content architecture or fight against a stringent folder-like interface. You also want to be able to have some content hierarchy, because it sometimes makes sense, and sometimes it goes deeper than two levels, where most interfaces in the API-first CMS department fail to deliver much help.


The structure feature in Craft CMS
Interface for arranging content in a hierarchy (called “Structure”) in Craft CMS. Content defined by their place in one hierarchy may make sense in some cases, but it’s a legacy from menu navigation that stops making sense when the content is reused across channels or placed by software like targeting algorithms. Published on craftcms.com (Large preview)

Interestingly, content management systems for chatbots tend to use similar hierarchical structures for arranging intent trees and dialog flows. This goes to show that content hierarchies play different roles in different channels, but often they provide ways of navigating through content. A way to approach this is to make types for navigation, where you can arrange content by references, and either build routes for web pages, menus, or paths for conversational interfaces.
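As a rough sketch (again assuming a Sanity-like schema format, with hypothetical names), a navigation type can be as simple as a slug paired with a reference to the content it should present:

// A hypothetical navigation type: the route owns the URL,
// the referenced page owns the content composition
export default {
  name: 'route',
  title: 'Route',
  type: 'document',
  fields: [
    // The URL path this route should answer to
    { name: 'slug', title: 'Slug', type: 'slug' },
    // A reference to the page shown at this route
    { name: 'page', title: 'Page', type: 'reference', to: [{ type: 'page' }] }
  ]
}

This mirrors the “route” and “page” types shown in the screenshots below.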

Relationship Advice

References (or relationships) are what make a system for structured content possible, and they’re really the core of everything we’re dealing with when it comes to content on the web (they’re the reason it’s metaphorically called the web in the first place). Being able to make references between bits of content is a very powerful thing, but it can also be costly in terms of how the backends write and retrieve such data. So you may have to think differently if you have multitudes of documents, since scale seldom comes for free.

It’s also worth considering that you don’t always need an explicit reference to join data; often it can be done by criteria that have to do with the content itself, e.g. “give me all persons and all buildings within this geolocation”. The buildings and persons don’t need an explicit reference to each other, as long as the relation is implied by a location field on both content types.
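In a query language like Sanity’s GROQ, such an implicit join could look something like this — a sketch only, assuming both content types carry a hypothetical campus field with matching values:

// Every person and building whose campus field matches the parameter —
// no explicit reference between the two types required
*[_type in ["person", "building"] && campus == $campus]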


Sanity Studio with a “route” document and an editor open
Example of a simple routing type for sanity.io. Notice that we have a “page” type, too. (Large preview)

Sanity Studio with a “page” document and an editor open
The page type is just a series of web page specific compositions where it’s possible to reuse other content types. (Large preview)

References between presentation types and other content types are useful when you can’t leave it to an algorithm in the presentation layer to join data. It may seem a bit cumbersome to explicitly draw up these presentation types and make compositions of referenced content, but it’s a solution to a problem you’ll often meet with SCMSs: It’s hard to know where content is being used. By including navigation types, you explicitly tie content to presentation — but not to just one presentation. This makes it possible to work with navigational structures independently of the content they lead to.

For example, in the screenshots above, we have tied Google Experiments to the route type, allowing us to add multiple pages composed of references to content, which means that we can run A/B tests with next to no content duplication. Since we also get a warning if we try to delete content that is referenced by other documents, this way of structuring will keep us from deleting something we shouldn’t.

Relationships across content types are a double-edged sword. They increase sustainability and are key to avoiding duplication. On the other hand, you can easily cut yourself, because references create dependencies between content, which (if not made transparent) can lead to unintended changes across the channels where your data is displayed. It would, for example, be bad if we could remove a “page” used by a “route” without warning.

This leads us to the next strategy, which (granted!) is partly beyond the power of the normal user as of today since it has to do with how different systems are architected. Still, it’s worth thinking about.

4. Don’t Put Rich Text In A Corner

Rich Text Is More Than HTML

I can understand why HTML is given such prevalence in digital content, but know that it also comes from somewhere: it’s an application of SGML, a generalized way of structuring machine-readable documents. As Claire L. Evans points out in the wonderful book “Broad Band: The Untold Story of the Women Who Made the Internet” (2018), there was already a vibrant community of people thinking about linked documents when HTML was introduced. Tim Berners-Lee’s proposal was a lot simpler than many of the other systems at the time, but that’s probably why it caught on and made the — as of now — open, free web possible.

When you’re in a browser on the world wide web, HTML is great. If you’re a writer who wants to publish something that ends up in simple HTML, Markdown is great. If you want your rich text content to be easily integrated into something that isn’t a browser, or into a popular JavaScript framework that lets you augment HTML with JavaScript in complex components (yes, we’re talking about React and Vue.js), having HTML in your API responses begins to be a bit of a hassle — especially if you need to parse it.

Almost everyone does it, though — even the new kids on the block. I went through all the vendors on headlesscms.org, browsed through the documentation, and signed up for those that didn’t mention it. With two exceptions, they all stored rich text either as HTML or as Markdown. That’s fine if all you do is use Jekyll to render a website, or if you enjoy using dangerouslySetInnerHTML in React. But what if you want to reuse your content in interfaces that aren’t on the web? Or want more control and functionality in your rich text editor? Or just want it to be easier to render your rich text in one of the popular frontend frameworks, and have your components take care of different parts of your rich text content? Well, you’ll either have to find a smart way to parse that Markdown or HTML into what you need, or, more conveniently, just have it stored more sensibly in the first place.

For example, what if you want to output your rich text to a voice interface? We know that voice assistants are increasing in popularity. The most popular platforms for these assistants have the capability to get the text for spoken content through APIs. Then you want to take advantage of something like Speech Synthesis Markup Language (SSML). A system for portable text takes a more agnostic approach to rich text, which lets you adapt the same content for different kinds of interfaces.
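As a sketch of what that could look like, here is a snippet of spoken content marked up with SSML, held in a string you might send to a text-to-speech API. The pause and the emphasis are structured data that would be hard to recover from plain HTML:

// A made-up example of SSML output generated from structured rich text
const ssml = `
  <speak>
    Structured content <emphasis level="strong">matters</emphasis>.
    <break time="500ms"/>
    Especially when it has to be spoken aloud.
  </speak>
`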

Example of a rich text editor with speech synthesis capabilities, compatible with (but not restricted to) SSML. (Large preview)

Recommended reading: Experimenting With The SpeechSynthesis Interface

Portable Text As An Agnostic Rich Text Model

Portable text is also useful when you’re primarily producing content for the web. What if you want the possibility to nest and augment your text with data structures, such as a rich text footnote, or an inline editorial comment? Or an alternative phrase or wording for A/B-testing cases? Markdown and HTML quickly fall short, and you’ll have to rely on adding something like special shortcode tags, the way WordPress has solved it. With portable text, you have an agnostic representation of content structures, without having to marry a certain implementation. Your content ends up being more sustainable and flexible for new redesigns and implementations.
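To make this concrete, here is a minimal sketch of a paragraph with an inline editorial comment represented as portable text — an array of typed blocks and spans rather than a string of markup (the comment mark is a made-up example):

[
  {
    _type: 'block',
    style: 'normal',
    // Mark definitions carry structured data for annotated spans
    markDefs: [
      { _key: 'c1', _type: 'comment', text: 'Can we find a stronger word?' }
    ],
    children: [
      { _type: 'span', text: 'Structured content is ' },
      { _type: 'span', text: 'useful', marks: ['c1'] }
    ]
  }
]

A renderer for the web can turn this into HTML, while a renderer for a voice interface can ignore the comment entirely — the same content, adapted per channel.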

There are also other advantages to portable text, especially if you want to be able to edit content collaboratively and in real time (as you do in Google Docs); you need to store rich text in another structure than HTML. If you do, you’ll also be able to take advantage of microservices and bots, such as spaCy, in order to annotate and augment your content without locking the document.

As for now, portable text isn’t widely adopted, but we’re seeing movements towards it. The specification isn’t very complex and can be explored at portabletext.org.

5. Make Sure Your SCMS Is In Service Of Your Editors, And Not The Other Way Around

Digital content isn’t just used for your organization’s web page leaflets anymore. For most of us, it encapsulates and defines how an organization is understood by the world, both from within and from outside: from product copy and microcopy to blog posts, chatbot responses, and strategy documents. Millions of us have to log into some CMS every day and navigate interfaces that were imagined twenty years ago, built on the assumptions of people who never made much effort to user test or challenge their interfaces. Countless hours have been wasted trying to fit a modern frontend experience into a page layout machine. Fortunately, this will soon be a thing of the past.

As a technology consultant, I had to read through pages of technical specifications whenever someone thought it was time to acquire a new CMS. There were demands ranging from which server architecture it should run on (Windows servers, of course) to its ability to render “carousels” and support “editing web pages in place”, despite the same documents also requesting a “modular redesign”. When editors had been allowed to contribute to these specifications, the requirements were often dated to what the editors had grown used to. They seemed unaware that they could demand better user experiences — as if enterprise software simply has to be big, lumpy, and boring.

This is partly the fault of those of us making these systems. We tend to communicate technology features and specifications, and say less about what the everyday situation of working with these systems looks like. Sure, for a frontend designer, something supporting GraphQL is shorthand for how conveniently she is able to work against the backend, but on a higher level, it’s about the system’s ability to accommodate emerging workflows, where a content model should survive visual redesigns and design systems should be resilient to changes in content.

Questions To Ask Of Your (S)CMS

If we are to embrace design processes, we can’t know prior to solving the problem whether the user tasks are best solved by making carousels (newsflash: most probably not), or whether A/B-testing makes sense for your case, even though it sounds cool.

Instead, ask questions like this:

  • Is it possible for multi-disciplinary teams to work with this system, and how exactly will they do so?
  • How easy is it to change and migrate the content model?
  • How does it deal with file and image assets?
  • Has the editorial interface been user tested?
  • To what extent can the system be configured and customized to special workflows and needs of the editorial team?
  • How easy is it to export the content in a moveable format?
  • How does the system accommodate for collaboration?
  • Can content models be version controlled?
  • How easy is it to integrate the system with a larger ecosystem of flowing information?

The goal of these questions is to explore to what degree a content management system allows a cross-disciplinary team to work effortlessly together, without too many bottlenecks or long deployment cycles. They also push the focus towards what the content should be doing, and away from how things should look in a given context. Leave that to the design processes, where user testing will probably challenge the assumptions one may have had when looking into getting a new content system.

There are, of course, many factors in addition to these that may have to be taken into consideration. The easiest thing to assess is the fiscal cost of software licenses and API-related costs if you are on a hosted service. The invisible cost (in time and attention spent by the team working with the system) is harder to estimate. From my experience, many of the SCMSs, in combination with one of the popular frontend frameworks, can significantly cut development time and allow for an agile (there’s my coin for the swear jar) design process — with the caveat that your team must be prepared to solve some of the problems that traditional CMSs solve out of the box.

Towards Structured Content

The ways we work with digital content have changed dramatically since the World Wide Web made working with interconnected documents mainstream. Organizations, businesses, and corporations have amassed gigabytes of this content, which is now stuck in rigid page hierarchies, HTML markup, and clunky user interfaces.

Using a Structured Content Management System can be a great way to free your content from a paradigm that is beginning to feel its age. But it isn’t a trivial exercise, and success comes from being able to work in multi-disciplinary teams and to put your content model to the test. You need to let go of some conventions you have grown used to from dealing with CMSs designed to output hierarchical websites. That means thinking differently about ordering content, making presentation types in order to orchestrate content across multiple channels more easily, and considering how you structure rich text so that it can be used outside of HTML contexts.

This article deals with some of the high-level concerns of working with SCMSs. There are, of course, loads of exciting challenges when you start working with this in your team. You have to rethink things we’ve taken for granted for many years, but that’s probably a good thing — because it forces us to evaluate our content not only by its place on a digital page, but by its role in a larger system that works towards whatever goals your organization and your users may have.

I believe that we can achieve content models that are more meaningful and easier to sustain in the long run, and that means saving time and expenses. It means more flexibility in terms of inventing new outputs and services, and less lock-in with software vendors. Because a well-made Structured Content Management System will make it easy for you to take your content and go elsewhere. And that makes for some interesting competition. Hopefully, all in favor of the users.



Source: Smashing Magazine

A Complete Guide To Routing In Angular

Ahmed Bouchefra

2018-11-28T15:00:49+01:00

In case you’re still not quite familiar with Angular 7, I’d like to bring you closer to everything this impressive front-end framework has to offer. I’ll walk you through an Angular demo app that shows different concepts related to the Router, such as:

  • The router outlet,
  • Routes and paths,
  • Navigation.

I’ll also show you how to use Angular CLI v7 to generate a demo project where we’ll use the Angular router to implement routing and navigation. But first, allow me to introduce you to Angular and go over some of the important new features in its latest version.

Introducing Angular 7

Angular is one of the most popular front-end frameworks for building client-side web applications for the mobile and desktop web. It follows a component-based architecture where each component is an isolated and re-usable piece of code that controls a part of the app’s UI.

A component in Angular is a TypeScript class decorated with the @Component decorator. It has an attached template and CSS stylesheets that form the component’s view.
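As a quick illustration, a bare-bones component could look like this (a hypothetical HelloComponent, not part of the demo we’ll build below):

import { Component } from '@angular/core';

@Component({
  selector: 'app-hello',
  // The inline template and styles below form the component's view
  template: '<h1>Hello, {{ name }}!</h1>',
  styles: ['h1 { font-weight: normal; }']
})
export class HelloComponent {
  name = 'Angular 7';
}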

Angular 7, the latest version of Angular, has recently been released with new features, particularly in CLI tooling and performance, such as:

  • CLI Prompts: Common commands like ng add and ng new can now prompt the user to choose the functionalities to add to a project, like routing and the stylesheet format.
  • Adding scrolling to Angular Material CDK (Component DevKit).
  • Adding drag and drop support to Angular Material CDK.
  • Projects now also default to using budgets, which will warn developers when their apps pass size limits. By default, warnings are thrown when the app size exceeds 2MB, and errors at 5MB. You can change these limits in your angular.json file, as sketched below.
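For reference, these budgets live in angular.json under the production build configuration and look something like this (the values shown are the defaults mentioned above):

"budgets": [
  {
    "type": "initial",
    "maximumWarning": "2mb",
    "maximumError": "5mb"
  }
]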

Introducing Angular Router

Angular Router is a powerful JavaScript router built and maintained by the Angular core team that can be installed from the @angular/router package. It provides a complete routing library with the possibility to have multiple router outlets, different path matching strategies, easy access to route parameters and route guards to protect components from unauthorized access.

The Angular router is a core part of the Angular platform. It enables developers to build Single Page Applications with multiple views and allow navigation between these views.

Let’s now see the essential Router concepts in more detail.

The Router-Outlet

The Router-Outlet is a directive that’s available from the router library where the Router inserts the component that gets matched based on the current browser’s URL. You can add multiple outlets in your Angular application which enables you to implement advanced routing scenarios.

<router-outlet></router-outlet>

The Router will render any component it matches as a sibling of the router outlet.

Routes And Paths

Routes are definitions (objects) comprising at least a path and a component (or a redirectTo) attribute. The path refers to the part of the URL that determines a unique view that should be displayed, and component refers to the Angular component that needs to be associated with the path. Based on the route definitions that we provide (via the static RouterModule.forRoot(routes) method), the Router is able to navigate the user to a specific view.

Each Route maps a URL path to a component.

The path can be empty, which denotes the default path of an application — usually the start of the application.

The path can take a wildcard string (**). The router will select this route if the requested URL doesn’t match any paths for the defined routes. This can be used for displaying a “Not Found” view or redirecting to a specific view if no match is found.
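For example, a wildcard route for a “Not Found” view could look like this (PageNotFoundComponent being a hypothetical component you would create yourself):

{ path: '**', component: PageNotFoundComponent }

Note that the router matches routes in order, so the wildcard route should be the last entry in the routes array.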

This is an example of a route:

{ path:  'contacts', component:  ContactListComponent}

If this route definition is provided to the Router configuration, the router will render ContactListComponent when the browser URL for the web application becomes /contacts.

Route Matching Strategies

The Angular Router provides different route matching strategies. The default strategy is simply checking if the current browser’s URL is prefixed with the path.

For example our previous route:

{ path:  'contacts', component:  ContactListComponent}

Could be also written as:

{ path: 'contacts', pathMatch: 'prefix', component: ContactListComponent }

The pathMatch attribute specifies the matching strategy. In this case, it’s prefix, which is the default.

The second matching strategy is full. When it’s specified for a route, the router will check if the path is exactly equal to the path of the current browser’s URL:

{ path: 'contacts', pathMatch: 'full', component: ContactListComponent }

Route Params

Creating routes with parameters is a common feature in web apps. Angular Router allows you to access parameters in different ways:

You can create a route parameter using the colon syntax. This is an example route with an id parameter:

{ path:  'contacts/:id', component:  ContactDetailComponent}
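Inside the matched component, you can then read the parameter from the ActivatedRoute service, for example via the route snapshot (later in the demo, we’ll use the observable paramMap API instead):

// Assumes ActivatedRoute is injected as `route` in the component's constructor
const contactId = this.route.snapshot.paramMap.get('id');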

Route Guards

A route guard is a feature of the Angular Router that allows developers to run some logic when a route is requested and, based on that logic, allow or deny the user access to the route. It’s commonly used to check if a user is logged in and has the authorization to access a page.

You can add a route guard by implementing the CanActivate interface, available from the @angular/router package, and implementing its canActivate() method, which holds the logic to allow or deny access to the route. For example, the following guard will always allow access to a route:

class MyGuard implements CanActivate {
  canActivate() {
    return true;
  }
}

You can then protect a route with the guard using the canActivate attribute:

{ path: 'contacts/:id', canActivate: [MyGuard], component: ContactDetailComponent }

The Angular Router provides the routerLink directive to create navigation links. This directive takes the path associated with the component to navigate to. For example:

<a [routerLink]="'/contacts'">Contacts</a>

Multiple Outlets And Auxiliary Routes

Angular Router supports multiple outlets in the same application.

A component has one associated primary route and can have auxiliary routes. Auxiliary routes enable developers to navigate multiple routes at the same time.

To create an auxiliary route, you’ll need a named router outlet where the component associated with the auxiliary route will be displayed.

<router-outlet></router-outlet>  
<router-outlet  name="outlet1"></router-outlet> 
  • The outlet with no name is the primary outlet.
  • All outlets should have a name except for the primary outlet.

You can then specify the outlet where you want to render your component using the outlet attribute:

{ path: "contacts", component: ContactListComponent, outlet: "outlet1" }

Creating An Angular 7 Demo Project

In this section, we’ll see a practical example of how to set up and work with the Angular Router. You can see the live demo we’ll be creating and the GitHub repository for the project.

Installing Angular CLI v7

Angular CLI requires Node 8.9+, with NPM 5.5.1+. You need to make sure you have these requirements installed on your system then run the following command to install the latest version of Angular CLI:

$ npm install -g @angular/cli

This will install the Angular CLI globally.


Installing Angular CLI v7
Installing Angular CLI v7 (Large preview)

Note: You may want to use sudo to install packages globally, depending on your npm configuration.

Creating An Angular 7 Project

Creating a new project is one command away; you simply need to run the following command:

$ ng new angular7-router-demo

The CLI will ask you if you would like to add routing (type N for No, because we’ll see how we can add routing manually) and which stylesheet format you would like to use; choose CSS, the first option, then hit Enter. The CLI will create a folder structure with the necessary files and install the project’s required dependencies.

Creating A Fake Back-End Service

Since we don’t have a real back-end to interact with, we’ll create a fake back-end using the angular-in-memory-web-api library which is an in-memory web API for Angular demos and tests that emulates CRUD operations over a REST API.

It works by intercepting the HttpClient requests sent to the remote server and redirects them to a local in-memory data store that we need to create.

To create a fake back-end, we need to follow these steps:

  1. First, we install the angular-in-memory-web-api module,
  2. Next, we create a service which returns fake data,
  3. Finally, configure the application to use the fake back-end.

In your terminal run the following command to install the angular-in-memory-web-api module from npm:

$ npm install --save angular-in-memory-web-api

Next, generate a back-end service using:

$ ng g s backend

Open the src/app/backend.service.ts file and import InMemoryDbService from the angular-in-memory-web-api module:

import {InMemoryDbService} from 'angular-in-memory-web-api'

The service class needs to implement InMemoryDbService and then override the createDb() method:

import { Injectable } from '@angular/core';
import { InMemoryDbService } from 'angular-in-memory-web-api';

@Injectable({
  providedIn: 'root'
})
export class BackendService implements InMemoryDbService {

  constructor() { }

  // Returns the in-memory "database" used to answer intercepted requests
  createDb() {
    let contacts = [
      { id: 1, name: 'Contact 1', email: 'contact1@email.com' },
      { id: 2, name: 'Contact 2', email: 'contact2@email.com' },
      { id: 3, name: 'Contact 3', email: 'contact3@email.com' },
      { id: 4, name: 'Contact 4', email: 'contact4@email.com' }
    ];

    return { contacts };
  }
}

We simply create an array of contacts and return them. Each contact should have an id.

Finally, we simply need to import InMemoryWebApiModule into the app.module.ts file, and provide our fake back-end service.

import { InMemoryWebApiModule } from 'angular-in-memory-web-api';
import { BackendService } from './backend.service';
/* ... */

@NgModule({
  declarations: [
    /*...*/
  ],
  imports: [
    /*...*/
    InMemoryWebApiModule.forRoot(BackendService)
  ],
  providers: [],
  bootstrap: [AppComponent]
})
export class AppModule { }
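Note: The fake back-end intercepts requests made with HttpClient, so make sure HttpClientModule from @angular/common/http is also added to the imports array of the app module.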

Next, create a ContactService which encapsulates the code for working with contacts:

$ ng g s contact

Open the src/app/contact.service.ts file and update it to look similar to the following code:

import { Injectable } from '@angular/core';
import { HttpClient } from '@angular/common/http';

@Injectable({
  providedIn: 'root'
})
export class ContactService {

  API_URL: string = "/api/";

  constructor(private http: HttpClient) { }

  // Fetches the whole contact collection
  getContacts(){
    return this.http.get(this.API_URL + 'contacts');
  }

  // Fetches a single contact by its id
  getContact(contactId){
    return this.http.get(`${this.API_URL}contacts/${contactId}`);
  }
}

We added two methods:

  • getContacts()
    For getting all contacts.
  • getContact()
    For getting a contact by id.

You can set API_URL to whatever URL you like, since we are not going to use a real back-end. All requests will be intercepted and sent to the in-memory back-end.

Creating Our Angular Components

Before we can see how to use the different Router features, let’s first create a bunch of components in our project.

Head over to your terminal and run the following commands:

$ ng g c contact-list
$ ng g c contact-detail

This will generate the ContactListComponent and ContactDetailComponent components and add them to the main app module.

Setting Up Routing

In most cases, you’ll use the Angular CLI to create projects with routing set up, but in this case, we’ll add it manually so we can get a better idea of how routing works in Angular.

Adding The Routing Module

We need to add AppRoutingModule, which will contain our application routes, and a router outlet where Angular will insert the currently matched component, depending on the browser’s current URL.

We’ll see:

  • How to create an Angular Module for routing and import it;
  • How to add routes to different components;
  • How to add the router outlet.

First, let’s start by creating a routing module in an app-routing.module.ts file. Inside src/app, create the file using:

$ cd angular7-router-demo/src/app
$ touch app-routing.module.ts

Open the file and add the following code:

import { NgModule } from '@angular/core';
import { Routes, RouterModule } from '@angular/router';

const routes: Routes = [];

@NgModule({
  imports: [RouterModule.forRoot(routes)],
  exports: [RouterModule]
})
export class AppRoutingModule { }

We start by importing NgModule from the @angular/core package; it’s a TypeScript decorator used to create an Angular module.

We also import the RouterModule and Routes classes from the @angular/router package. RouterModule provides static methods like RouterModule.forRoot() for passing a configuration object to the Router.

Next, we define a constant routes array of type Routes which will be used to hold information for each route.

Finally, we create and export a module called AppRoutingModule (you can call it whatever you want), which is simply a TypeScript class decorated with the @NgModule decorator that takes some meta-information as an object. In the imports attribute of this object, we call the static RouterModule.forRoot(routes) method with the routes array as a parameter. In the exports array, we add the RouterModule.

Importing The Routing Module

Next, we need to import this routing module into the main app module, which lives in the src/app/app.module.ts file:

import { BrowserModule } from '@angular/platform-browser';
import { NgModule } from '@angular/core';

import { AppRoutingModule } from './app-routing.module';
import { AppComponent } from './app.component';

@NgModule({
  declarations: [
    AppComponent
  ],
  imports: [
    BrowserModule,
    AppRoutingModule
  ],
  providers: [],
  bootstrap: [AppComponent]
})
export class AppModule { }

We import the AppRoutingModule from ./app-routing.module and we add it in the imports array of the main module.

Adding The Router Outlet

Finally, we need to add the router outlet. Open the src/app/app.component.html file which contains the main app template and add the <router-outlet> component:

<router-outlet></router-outlet>

This is where the Angular Router will render the component that corresponds to the current browser path.

Those are all the steps we need to follow in order to manually set up routing inside an Angular project.

Creating Routes

Now, let’s add routes to our two components. Open the src/app/app-routing.module.ts file and add the following routes to the routes array:

const routes: Routes = [
    {path: 'contacts' , component: ContactListComponent},
    {path: 'contact/:id' , component: ContactDetailComponent}
];

Make sure to import the two components in the routing module:

import { ContactListComponent } from './contact-list/contact-list.component';
import { ContactDetailComponent } from './contact-detail/contact-detail.component';

Now we can access the two components from the /contacts and /contact/:id paths.

Next let’s add navigation links to our app template using the routerLink directive. Open the src/app/app.component.html and add the following code on top of the router outlet:

<h2><a [routerLink] = "'/contacts'">Contacts</a></h2>

Next, we need to display the list of contacts in ContactListComponent. Open the src/app/contact-list/contact-list.component.ts file, then add the following code:

import { Component, OnInit } from '@angular/core';
import { ContactService } from '../contact.service';

@Component({
  selector: 'app-contact-list',
  templateUrl: './contact-list.component.html',
  styleUrls: ['./contact-list.component.css']
})
export class ContactListComponent implements OnInit {

  contacts: any[] = [];

  constructor(private contactService: ContactService) { }

  ngOnInit() {
    this.contactService.getContacts().subscribe((data: any[]) => {
      console.log(data);
      this.contacts = data;
    });
  }
}

We create a contacts array to hold the contacts. Next, we inject ContactService and we call the getContacts() method of the instance (on the ngOnInit life-cycle event) to get contacts and assign them to the contacts array.

Next open the src/app/contact-list/contact-list.component.html file and add:

<table style="width:100%">
  <tr>
    <th>Name</th>
    <th>Email</th>
    <th>Actions</th>
  </tr>
  <tr *ngFor="let contact of contacts" >
    <td>{{ contact.name }}</td>
    <td>{{ contact.email }}</td> 
    <td>
    <a [routerLink]="['/contact', contact.id]">Go to details</a>
    </td>
  </tr>
</table>

We loop through the contacts and display each contact’s name and email. We also create a link to each contact’s details component using the routerLink directive.

This is a screen shot of the component:


Contact list
Contact list (Large preview)

When we click on the Go to details link, it will take us to ContactDetailComponent. The route has an id parameter; let’s see how we can access it from our component.

Open the src/app/contact-detail/contact-detail.component.ts file and change the code to look similar to the following code:

import { Component, OnInit } from '@angular/core';
import { ActivatedRoute } from '@angular/router';
import { ContactService } from '../contact.service';

@Component({
  selector: 'app-contact-detail',
  templateUrl: './contact-detail.component.html',
  styleUrls: ['./contact-detail.component.css']
})
export class ContactDetailComponent implements OnInit {
  
  contact: any;
  constructor(private contactService: ContactService, private route: ActivatedRoute) { }

  ngOnInit() {
    this.route.paramMap.subscribe(params => {
      console.log(params.get('id'));
      this.contactService.getContact(params.get('id')).subscribe(c => {
        console.log(c);
        this.contact = c;
      });
    });
  }
}

We inject ContactService and ActivatedRoute into the component. In the ngOnInit() life-cycle event, we retrieve the id parameter passed in the route and use it to get the contact’s details, which we assign to a contact object.

Open the src/app/contact-detail/contact-detail.component.html file and add:

<!-- The safe navigation operator (?.) guards against contact being undefined while the data is still loading -->
<h1>Contact #{{ contact?.id }}</h1>
<p>
  Name: {{ contact?.name }}
</p>
<p>
  Email: {{ contact?.email }}
</p>

Contact Details
Contact details (Large preview)

When we first visit our application at 127.0.0.1:4200/, the outlet doesn’t render any component, so let’s redirect the empty path to the contacts path by adding the following route to the routes array:

{path: '', pathMatch: 'full', redirectTo: 'contacts'}  

We want to match the exact empty path, which is why we specify the full match strategy.

Conclusion

In this tutorial, we’ve seen how to use the Angular Router to add routing and navigation into our application. We’ve seen different concepts like the Router outlet, routes, and paths and we created a demo to practically show the different concepts. You can access the code from this repository.



Source: Smashing Magazine

An Extensive Guide To Progressive Web Applications

Ankita Masand

2018-11-27T14:00:29+01:00

It was my dad’s birthday, and I wanted to order a chocolate cake and a shirt for him. I headed over to Google to search for chocolate cakes and clicked on the first link in the search results. There was a blank screen for a few seconds; I didn’t understand what was happening. After a few seconds of staring patiently, my mobile screen filled with delicious-looking cakes. As soon as I clicked on one of them to check its details, I got an ugly fat popup, asking me to install an Android application so that I could get a silky smooth experience while ordering a cake.

That was disappointing. My conscience didn’t allow me to click on the “Install” button. All I wanted to do was order a small cake and be on my way.

I clicked on the cross icon at the very right of the popup to get out of it as soon as I could. But then the installation popup sat at the bottom of the screen, occupying one-fourth of the space. And with the flaky UI, scrolling down was a challenge. I somehow managed to order a Dutch cake.

After this terrible experience, my next challenge was to order a shirt for my dad. As before, I searched Google for shirts. I clicked on the first link, and in a blink, the entire content was right in front of me. Scrolling was smooth. No installation banner. I felt as if I were browsing a native application. There was a moment when my terrible internet connection gave up, but I was still able to see the content instead of a dinosaur game. Even with my janky internet, I managed to order a shirt and jeans for my dad. Most surprising of all, I was getting notifications about my order.

I would call this a silky smooth experience. These people were doing something right. Every website should do it for their users. It’s called a progressive web app.

As Alex Russell states in one of his blog posts:

“It happens on the web from time to time that powerful technologies come to exist without the benefit of marketing departments or slick packaging. They linger and grow at the peripheries, becoming old-hat to a tiny group while remaining nearly invisible to everyone else. Until someone names them.”

A Silky Smooth Experience On The Web, Sometimes Known As A Progressive Web Application

Progressive web applications (PWAs) are more of a methodology that involves a combination of technologies to make powerful web applications. With an improved user experience, people will spend more time on websites and see more advertisements. They tend to buy more, and with notification updates, they are more likely to visit often. The Financial Times abandoned its native apps in 2011 and built a web app using the best technologies available at the time. Now, the product has grown into a full-fledged PWA.

But why, after all this time, would you build a web app when a native app does the job well enough?

Let’s look into some of the metrics shared in Google IO 17.

Five billion devices are connected to the web, making the web the biggest platform in the history of computing. On the mobile web, 11.4 million monthly unique visitors go to the top 1000 web properties, and 4 million go to the top thousand apps. The mobile web garners around four times as many users as native applications. But this number drops sharply when it comes to engagement.

A user spends an average of 188.6 minutes in native apps and only 9.3 minutes on the mobile web. Native applications leverage the power of operating systems to send push notifications to give users important updates. They deliver a better user experience and boot more quickly than websites in a browser. Instead of typing a URL in the web browser, users just have to tap an app’s icon on the home screen.

Most visitors on the web are unlikely to come back, so developers came up with the workaround of showing them banners to install native applications, in an attempt to keep them deeply engaged. But then, users would have to go through the tiresome procedure of installing the binary of a native application. Forcing users to install an application is annoying and further reduces the chance that they will install it at all. The opportunity for the web is clear.

Recommended reading: Native And PWA: Choices, Not Challengers!

If web applications come with a rich user experience, push notifications, offline support and instant loading, they can conquer the world. This is what a progressive web application does.

A PWA delivers a rich user experience because it has several strengths:

  • Fast
    The UI is not flaky. Scrolling is smooth. And the app responds quickly to user interaction.

  • Reliable
    A normal website forces users to wait, doing nothing, while it is busy making round trips to the server. A PWA, meanwhile, loads data instantaneously from the cache. A PWA works seamlessly, even on a 2G connection. Every network request to fetch an asset or piece of data goes through a service worker (more on that later), which first verifies whether the response for a particular request is already in the cache. When users get real content almost instantly, even on a poor connection, they trust the app more and view it as more reliable.

  • Engaging
    A PWA can earn a place on the user’s home screen. It offers a native app-like experience by providing a full-screen work area. It makes use of push notifications to keep users engaged.

Now that we know what PWAs bring to the table, let’s get into the details of what gives PWAs an edge over native applications. PWAs are built with technologies such as service workers, web app manifests, push notifications and IndexedDB (a local data store) for caching. Let’s look into each in detail.

Service Workers

A service worker is a JavaScript file that runs in the background without interfering with the user’s interactions. All GET requests to the server go through a service worker. It acts like a client-side proxy. By intercepting network requests, it takes complete control over the response being sent back to the client. A PWA loads instantly because service workers eliminate the dependency on the network by responding with data from the cache.

A service worker can only intercept a network request that is in its scope. For example, a root-scoped service worker can intercept all of the fetch requests coming from a web page. A service worker operates as an event-driven system. It goes into a dormant state when it is not needed, thereby conserving memory. To use a service worker in a web application, we first have to register it on the page with JavaScript.

(function main () {

    /* navigator is a Web API that allows scripts to register themselves and carry out their activities. */
    if ('serviceWorker' in navigator) {
        console.log('Service Worker is supported in your browser')
        /* The register method takes in the path of the service worker file and returns a promise, which resolves to the registration object */
        navigator.serviceWorker.register('./service-worker.js').then(registration => {
            console.log('Service Worker is registered!')
        })
    } else {
        console.log('Service Worker is not supported in your browser')
    }

})()

We first check whether the browser supports service workers. To register a service worker in a web application, we provide its URL as a parameter to the register function, available on navigator.serviceWorker (navigator is a web API that allows scripts to register themselves and carry out their activities). Calling register on every page load is harmless: the browser downloads the service worker file (./service-worker.js) again only if there is a byte difference between the existing activated service worker and the newer one, or if its URL has changed.

The above service worker will intercept all requests coming from the root (/). To limit the scope of a service worker, we pass an optional second parameter, an object with a scope key.

if ('serviceWorker' in navigator) {
    /* register method takes in an optional second parameter as an object. To restrict the scope of a service worker, the scope should be provided.
        scope: '/books' will intercept requests with '/books' in the url. */
    navigator.serviceWorker.register('./service-worker.js', { scope: '/books' }).then(registration => {
        console.log('Service Worker for scope /books is registered', registration)
    })
}

The service worker above will intercept requests that have /books in the URL. For example, it will not intercept requests with /products, but it could very well intercept requests with /books/products.

As mentioned, a service worker operates as an event-driven system. It listens for events (install, activate, fetch, push) and accordingly calls the respective event handler. Some of these events are a part of the life cycle of a service worker, which goes through these events in sequence to get activated.

Installation

Once a service worker has been registered successfully, an install event is fired. This is a good place to do the initialization work, like setting up the cache or creating object stores in IndexedDB. (IndexedDB will make more sense to you once we get into its details. For now, we can just say that it’s a key-value structure.)

self.addEventListener('install', (event) => {
    let CACHE_NAME = 'xyz-cache'
    let urlsToCache = [
        '/',
        '/styles/main.css',
        '/scripts/bundle.js'
    ]
    event.waitUntil(
        /* The open method on caches takes in the name of the cache as its first parameter. It returns a promise that resolves to an instance of the cache.
        All of the URLs above can be added to the cache using the addAll method. */
        caches.open(CACHE_NAME)
            .then(cache => cache.addAll(urlsToCache))
    )
})

Here, we’re caching some of the files so that the next load is instant. self refers to the service worker instance. event.waitUntil keeps the service worker in the installing phase until the promise passed to it has settled.

Activation

Once a service worker has been installed, it cannot yet listen for fetch requests. Rather, an activate event is fired. If no active service worker is operating on the website in the same scope, then the installed service worker gets activated immediately. However, if a website already has an active service worker, then the activation of a new service worker is delayed until all of the tabs operating on the old service worker are closed. This makes sense because the old service worker might be using the instance of the cache that is now modified in the newer one. So, the activation step is a good place to get rid of old caches.

self.addEventListener('activate', (event) => {
    let cacheWhitelist = ['products-v2'] // products-v2 is the name of the new cache

    event.waitUntil(
        caches.keys().then(cacheNames => {
            return Promise.all(
                cacheNames.map(cacheName => {
                    /* Delete all of the caches except the ones in the cacheWhitelist array */
                    if (cacheWhitelist.indexOf(cacheName) === -1) {
                        return caches.delete(cacheName)
                    }
                })
            )
        })
    )
})

In the code above, we’re deleting the old caches. If the name of a cache doesn’t appear in the cacheWhitelist array, it is deleted. To skip the waiting phase and immediately activate the service worker, we use self.skipWaiting().

self.addEventListener('activate', (event) => {
    self.skipWaiting()
    // The usual stuff
})

Once a service worker is activated, it can listen for fetch requests and push events.

Fetch Event Handler

Whenever a web page fires a fetch request for a resource over the network, the fetch event of the service worker is called. The fetch event handler first looks for the requested resource in the cache. If it is present, it returns the response with the cached resource. Otherwise, it initiates a fetch request to the server, and when the server sends back the response with the requested resource, it puts it in the cache for subsequent requests.

/* Fetch event handler for responding to GET requests with the cached assets */
self.addEventListener('fetch', (event) => {
    event.respondWith(
        caches.open('products-v2')
            .then(cache => {
                /* Check whether the request is already present in the cache. If it is, send it directly to the client. */
                return cache.match(event.request).then(response => {
                    if (response) {
                        console.log('Cache hit! Fetching response from cache', event.request.url)
                        return response
                    }
                    /* If the request is not present in the cache, fetch it from the server and put it in the cache for subsequent requests.
                    Note the return here; without it, the promise would resolve to undefined on a cache miss. */
                    return fetch(event.request).then(response => {
                        cache.put(event.request, response.clone())
                        return response
                    })
                })
            })
    )
})

event.respondWith lets the service worker send a customized response to the client.
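It isn’t limited to cached responses, either; respondWith accepts any Response object. As a standalone sketch (the /ping route is made up for illustration):

self.addEventListener('fetch', (event) => {
    // Answer a hypothetical /ping route with a synthetic response,
    // without touching the network or the cache
    if (event.request.url.endsWith('/ping')) {
        event.respondWith(
            new Response('pong', { headers: { 'Content-Type': 'text/plain' } })
        )
    }
})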

Offline-first is now a thing. For any non-critical request, we must serve the response from the cache, instead of making a round trip to the server. If an asset is not present in the cache, we get it from the server and then cache it for subsequent requests.

Service workers only work on HTTPS websites, because they have the power to manipulate the response of any fetch request; someone with malicious intent could tamper with responses on an HTTP website. So, hosting a PWA on HTTPS is mandatory. Service workers do not interrupt the normal functioning of the DOM; they cannot communicate directly with the web page. To send any message to a web page, they make use of the postMessage API.
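A minimal sketch of that messaging channel (the message shape here is made up for illustration):

/* In the service worker: broadcast a message to every page under its control */
self.clients.matchAll().then(clients => {
    clients.forEach(client => {
        client.postMessage({ type: 'CACHE_UPDATED' }) // Hypothetical message shape
    })
})

/* In the web page: listen for messages coming from the service worker */
navigator.serviceWorker.addEventListener('message', (event) => {
    console.log('Message from service worker:', event.data)
})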

Web Push Notifications

Let’s suppose you’re busy playing a game on your mobile, and a notification pops up telling you of a 30% discount on your favorite brand. Without any further ado, you click on the notification and shop your heart out. Getting live updates on, say, a cricket or football match, or getting important emails and reminders as notifications, is a big deal when it comes to engaging users with a product. This feature was only available in native applications until PWAs came along. A PWA makes use of web push notifications to compete with this powerful feature that native apps provide out of the box. A user will still receive a web push notification even if the PWA is not open in any browser tab, and even if the browser is not open.

A web application has to ask permission of the user to send them push notifications.


[Image: Browser prompt asking the user’s permission for web push notifications.]

Once the user confirms by clicking the “Allow” button, a unique subscription token is generated by the browser. This token is unique for this device. The format of the subscription token generated by Chrome is as follows:

{
     "endpoint": "https://fcm.googleapis.com/fcm/send/c7Veb8VpyM0:APA91bGnMFx8GIxf__UVy6vJ-n9i728CUJSR1UHBPAKOCE_SrwgyP2N8jL4MBXf8NxIqW6NCCBg01u8c5fcY0kIZvxpDjSBA75sVz64OocQ-DisAWoW7PpTge3SwvQAx5zl_45aAXuvS",
     "expirationTime": null,
     "keys": {
          "p256dh": "BJsj63kz8RPZe8Lv1uu-6VSzT12RjxtWyWCzfa18RZ0-8sc5j80pmSF1YXAj0HnnrkyIimRgLo8ohhkzNA7lX4w",
          "auth": "TJXqKozSJxcWvtQasEUZpQ"
     }
}

The endpoint contained in the token above will be unique for every subscription. On an average website, thousands of users would agree to receive push notifications, and for each of them, this endpoint would be unique. So, with the help of this endpoint, the application is able to target these users in the future by sending them push notifications. The expirationTime is the amount of time that the subscription is valid for a particular device. If the expirationTime is 20 days, it means that the push subscription of the user will expire after 20 days and the user won’t be able to receive push notifications on the older subscription. In this case, the browser will generate a new subscription token for that device. The auth and p256dh keys are used for encryption.

Now, to send push notifications to these thousands of users in the future, we first have to save their respective subscription tokens. It’s the job of the application server (the back-end server, maybe a Node.js script) to send push notifications to these users. This might sound as simple as making a POST request to the endpoint URL with the notification data in the request payload. However, it should be noted that if a user is not online when a push notification intended for them is triggered by the server, they should still get that notification once they come back online. The server would have to take care of such scenarios, along with sending thousands of requests to the users. A server keeping track of the user’s connection sounds complicated. So, something in the middle would be responsible for routing web push notifications from the server to the client. This is called a push service, and every browser has its own implementation of a push service. The browser has to tell the following information to the push service in order to send any notification:

  1. The time to live
    This is how long a message should be queued, in case it is not delivered to the user. Once this time has elapsed, the message will be removed from the queue.
  2. Urgency of the message
    This is so that the push service preserves the user’s battery by sending only high-priority messages.

The push service routes the messages to the client. Because push has to be received by the client even if its respective web application is not open in the browser, push events have to be listened to by something that continuously monitors in the background. You guessed it: That’s the job of the service worker. The service worker listens for push events and does the job of showing notifications to the user.

So, now we know that the browser, push service, service worker and application server work in harmony to send push notifications to the user. Let’s look into the implementation details.

Web Push Client

Asking permission of the user is a one-time thing. If a user has already granted permission to receive push notifications, we shouldn’t ask again. The permission value is saved in Notification.permission.

/* Notification.permission can have one of these three values: default, granted or denied. */
if (Notification.permission === 'default') {
    /* The Notification.requestPermission() method shows a notification permission prompt to the user. It returns a promise that resolves to the value of the permission. */
    Notification.requestPermission().then(result => {
        if (result === 'denied') {
            console.log('Permission denied')
            return
        }

        if (result === 'granted') {
            console.log('Permission granted')
            /* This means the user has clicked the Allow button. We now get the subscription token generated by the browser and store it in our database.

            The subscription token can be fetched using the getSubscription method available on the pushManager of the serviceWorkerRegistration object. If a subscription is not available, we subscribe using the subscribe method available on the pushManager. The subscribe method takes in an object.
            */

            serviceWorkerRegistration.pushManager.getSubscription()
                .then(subscription => {
                    if (!subscription) {
                        const applicationServerKey = ''
                        serviceWorkerRegistration.pushManager.subscribe({
                            userVisibleOnly: true, // All push notifications from the server should be displayed to the user
                            applicationServerKey // VAPID public key
                        }).then(newSubscription => {
                            saveSubscriptionInDB(newSubscription, userId) // Save the freshly created subscription token as well
                        })
                    } else {
                        saveSubscriptionInDB(subscription, userId) // A method to save the subscription token in the database
                    }
                })
        }
    })
}

In the subscribe method above, we’re passing userVisibleOnly and applicationServerKey to generate a subscription token. The userVisibleOnly property should always be true because it tells the browser that any push notification sent by the server will be shown to the client. To understand the purpose of applicationServerKey, let’s consider a scenario.

If some person gets ahold of your thousands of subscription tokens, they could very well send notifications to the endpoints contained in these subscriptions. There is no way for the endpoint to be linked to your unique identity. To provide a unique identity to the subscription tokens generated on your web application, we make use of the VAPID protocol. With VAPID, the application server voluntarily identifies itself to the push service while sending push notifications. We generate two keys like so:

const webpush = require('web-push')
const vapidKeys = webpush.generateVAPIDKeys()

web-push is an npm module. vapidKeys will have one public key and one private key. The application server key used above is the public key.
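As a quick sketch of how these two keys are used (where you store them is up to you):

console.log(vapidKeys.publicKey)  // Shipped to the client as the applicationServerKey
console.log(vapidKeys.privateKey) // Stays on the server; never send it to the client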

Web Push Server

The job of the web push server (application server) is straightforward. It sends a notification payload to the subscription tokens.

const options = {
    TTL: 24*60*60, //TTL is the time to live, the time that the notification will be queued in the push service
    vapidDetails: {
        subject: 'email@example.com',
        publicKey: '',
        privateKey: ''
    }
}
const data = JSON.stringify({
    title: 'Update',
    body: 'Notification sent by the server'
})
// The payload must be a string or a Buffer; subscription is a token we saved earlier in the database
webpush.sendNotification(subscription, data, options)

It uses the sendNotification method from the web-push library.

Service Workers

The service worker shows the notification to the user like so:

self.addEventListener('push', (event) => {
    /* event.data is a PushMessageData object; its json method parses the payload sent by the server */
    let data = event.data.json()
    let options = {
        body: data.body,
        icon: 'images/example.png',
    }
    event.waitUntil(
        /* The showNotification method is available on the registration object of the service worker.
        The first parameter to the showNotification method is the title of the notification, and the second parameter is an object. */
        self.registration.showNotification(data.title, options)
    )
})

So far, we’ve seen how a service worker makes use of the cache to store requests and makes a PWA fast and reliable, and we’ve seen how web push notifications keep users engaged.

To store a bunch of data on the client side for offline support, we need a giant data structure. Let’s look at the Financial Times PWA. You’ve got to witness the power of this data structure for yourself. Load the URL in your browser, and then switch off your internet connection. Reload the page. Gah! Is it still working? It is. (Like I said, offline-first is now a thing.) Data is not coming over the wire; it is being served from the house. Head over to the “Application” tab of Chrome Developer Tools. Under “Storage”, you’ll find “IndexedDB”.


[Image: IndexedDB storing article data in the Financial Times PWA.]

Check out the “Articles” object store, and expand any of the items to see the magic for yourself. The Financial Times has stored this data for offline support. This data structure that lets us store a massive amount of data is called IndexedDB. IndexedDB is a JavaScript-based object-oriented database for storing structured data. We can create different object stores in this database for various purposes. For example, in the image above, “Resources”, “ArticleImages” and “Articles” are object stores. Each record in an object store is uniquely identified with a key. IndexedDB can even be used to store files and blobs.

Let’s try to understand IndexedDB by creating a database for storing books.

let openIdbRequest = window.indexedDB.open('booksdb', 1)

If the database booksdb doesn’t already exist, the code above will create a booksdb database. The second parameter to the open method is the version of the database. Specifying a version takes care of the schema-related changes that might happen in future. For example, booksdb now has only one table, but when the application grows, we intend to add two more tables to it. To make sure our database is in sync with the updated schema, we’ll specify a higher version than the previous one.

Calling the open method doesn’t open the database right away. It’s an asynchronous request that returns an IDBOpenDBRequest object. This object fires success and error events; we’ll have to write appropriate handlers for them to manage the state of our connection.

let dbInstance
openIdbRequest.onsuccess = (event) => {
    dbInstance = event.target.result
    console.log('booksdb is opened successfully')
}

openIdbRequest.onerror = (event) => {
    console.log('There was an error in opening booksdb database')
}

openIdbRequest.onupgradeneeded = (event) => {
    let db = event.target.result
    let objectstore = db.createObjectStore('books', { keyPath: 'id' })
}

To manage the creation or modification of object stores (object stores are analogous to SQL-based tables; they have a key-value structure), the onupgradeneeded handler is called on the openIdbRequest object. It is invoked whenever the version of the database changes. In the code snippet above, we’re creating a books object store with id as its unique key path.

Let’s say that, after deploying this piece of code, we have to create one more object store, called users. So, now the version of our database will be 2.

let openIdbRequest = window.indexedDB.open('booksdb', 2) // New Version - 2

/* Success and error event handlers remain the same.
The onupgradeneeded method gets called when the version of the database changes. */
openIdbRequest.onupgradeneeded = (event) => {
    let db = event.target.result
    if (!db.objectStoreNames.contains('books')) {
        let objectstore = db.createObjectStore('books', { keyPath: 'id' })
    }

    let oldVersion = event.oldVersion
    let newVersion = event.newVersion

    /* The users object store should be added in version 2. If the existing version is 1, it will be upgraded to 2, and the users object store will be created. */
    if (oldVersion === 1) {
        db.createObjectStore('users', { keyPath: 'id' })
    }
}

We’ve cached dbInstance in the success event handler of the open request. To retrieve or add data in IndexedDB, we’ll make use of dbInstance. Let’s add some book records to our books object store.

let transaction = dbInstance.transaction('books', 'readwrite') // Writing records requires a readwrite transaction
let objectstore = transaction.objectStore('books') // objectStore is a method on the transaction, not on the database instance

let bookRecord = {
    id: '1',
    name: 'The Alchemist',
    author: 'Paulo Coelho'
}
let addBookRequest = objectstore.add(bookRecord)

addBookRequest.onsuccess = (event) => {
    console.log('Book record added successfully')
}

addBookRequest.onerror = (event) => {
    console.log('There was an error in adding book record')
}

We make use of transactions, especially while writing records on object stores. A transaction is simply a wrapper around an operation to ensure data integrity. If any of the actions in a transaction fails, then no action is performed on the database.
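As a small illustration with the transaction created above, a transaction also fires its own events, which tell us when all of its requests have settled:

/* oncomplete fires only after every request in the transaction has succeeded */
transaction.oncomplete = (event) => {
    console.log('Transaction completed; all changes are committed')
}

/* If a request fails and its error is not handled, the transaction aborts and its changes are rolled back */
transaction.onabort = (event) => {
    console.log('Transaction aborted; no changes were written')
}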

Let’s modify a book record with the put method:

let modifyBookRequest = objectstore.put(bookRecord) // put method takes in an object as the parameter
modifyBookRequest.onsuccess = (event) => {
    console.log('Book record updated successfully')
}

Let’s retrieve a book record with the get method:

let transaction = dbInstance.transaction('books')
let objectstore = transaction.objectStore('books') // A read-only transaction (the default) is enough for retrieval

/* The get method takes in the key of the record; we stored the id as the string '1' */
let getBookRequest = objectstore.get('1')

getBookRequest.onsuccess = (event) => {
    /* event.target.result contains the matched record */
    console.log('Book record', event.target.result)
}

getBookRequest.onerror = (event) => {
    console.log('Error while retrieving the book record.')
}
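Deleting a record follows the same request pattern. A quick sketch, assuming a readwrite transaction like the one we used to add records:

/* The delete method takes in the key of the record */
let deleteBookRequest = objectstore.delete('1')

deleteBookRequest.onsuccess = (event) => {
    console.log('Book record deleted successfully')
}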

Adding An Icon To The Home Screen

Now that there is hardly any distinction between a PWA and a native application, it makes sense to offer the PWA a prime position. If your website fulfills the basic criteria of a PWA (it is hosted on HTTPS, integrates a service worker and has a manifest.json), then once the user has spent some time on the page, the browser will show a prompt at the bottom, asking the user to add the app to their home screen, as shown below:


[Image: Prompt to add the Financial Times PWA to the home screen.]

When a user clicks on “Add FT to Home screen”, the PWA gets to set its foot on the home screen, as well as in the app drawer. When a user searches for any application on their phone, any PWAs that match the search query will be listed. They will also be seen in the system settings, which makes it easy for users to manage them. In this sense, a PWA behaves like a native application.

PWAs make use of manifest.json to provide this feature. Let’s look into a simple manifest.json file.

{
    "name": "Demo PWA",
    "short_name": "Demo",
    "start_url": "/?standalone",
    "background_color": "#9F0C3F",
    "theme_color": "#fff1e0",
    "display": "standalone",
    "icons": [{
        "src": "/lib/img/icons/xxhdpi.png?v2",
        "sizes": "192x192"
    }]
}

The short_name appears on the user’s home screen and in the system settings. The name appears in the Chrome prompt and on the splash screen. The splash screen is what the user sees while the app is getting ready to launch. The start_url is the main screen of your app; it’s what users get when they tap its icon on the home screen. The background_color is used on the splash screen. The theme_color sets the color of the toolbar. The standalone value for display mode says that the app is to be opened in full-screen mode (hiding the browser’s toolbar). When a user installs a PWA, its size is merely in kilobytes, rather than the megabytes of native applications.
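One detail worth noting: for the browser to pick up the manifest at all, it has to be referenced from the page, typically with a <link rel="manifest" href="/manifest.json"> tag in the document’s head.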

Service workers, web push notifications, IndexedDB and a home-screen presence add up to offline support, reliability and engagement. It should be noted that a service worker doesn’t come to life and start doing its work on the very first load; the first load will still be slow until all of the static assets and other resources have been cached. We can implement some strategies to optimize the first load.

Bundling Assets

All of the resources, including the HTML, style sheets, images and JavaScript, have to be fetched from the server. The more files there are, the more HTTP requests are needed to fetch them. We can use bundlers like webpack to bundle our static assets, reducing the number of HTTP requests to the server. webpack does a great job of further optimizing the bundle with techniques such as code-splitting (i.e. bundling only those files that are required for the current page load, instead of bundling all of them together) and tree shaking (i.e. removing code that is imported but never actually used).
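As a minimal sketch of such a setup (assuming webpack 4 or later, where production mode turns on minification and tree shaking for ES modules; the entry path is hypothetical):

// webpack.config.js
const path = require('path')

module.exports = {
    mode: 'production', // Enables minification and tree shaking of unused ES module exports
    entry: './src/index.js', // Hypothetical entry point
    output: {
        filename: '[name].[contenthash].js', // Content hashes let bundles be cached aggressively
        path: path.resolve(__dirname, 'dist')
    },
    optimization: {
        splitChunks: { chunks: 'all' } // Code-splitting: shared modules get their own bundles
    }
}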

Reducing Round Trips

One of the main reasons for slowness on the web is network latency. The time it takes for a byte to travel from A to B varies with the network connection: a round trip that takes 50 milliseconds on Wi-Fi might take 500 milliseconds on a 3G connection and 2500 milliseconds on 2G. Over HTTP/1.1, while a connection is busy with a request, it cannot be used for any other request until the previous response has been served, and a browser typically opens only six connections per origin, so a website can have six HTTP requests in flight at a time. An average website makes roughly 100 requests; with six connections, that works out to about 100 ÷ 6 ≈ 17 sequential rounds of requests, and at 50 milliseconds per round trip, a user might end up spending around 17 × 50 ≈ 833 milliseconds on latency alone. With HTTP/2 in place, the turnaround time is drastically reduced: HTTP/2 multiplexes many requests over a single connection, so they no longer queue behind one another.

Most HTTP responses contain Last-Modified and ETag headers. The Last-Modified header holds the date when the file was last modified, and an ETag is a unique value based on the contents of the file; it changes only when the file’s contents change. Both of these headers can be used to avoid downloading a file again when a cached version is already available locally. If the browser has a local copy of the file, it can add either of these headers to the request, like so:


[Image: A request carrying ETag and Last-Modified headers, so that valid cached assets are not downloaded again.]

The server can check whether the contents of the file have changed. If the contents of the file have not changed, then it responds with a status code of 304 (not modified).


[Image: A request with an If-None-Match header, answered with 304 Not Modified.]

This indicates to the browser to use the locally available cached version of the file. By doing all of this, we’ve prevented the file from being downloaded.
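To make this concrete, here is a minimal sketch of server-side ETag handling using Node’s built-in http module (the hashing scheme and the port are illustrative choices):

const http = require('http')
const crypto = require('crypto')

const body = 'Hello, world'
// An ETag derived from the content: it changes only when the content changes
const etag = crypto.createHash('md5').update(body).digest('hex')

http.createServer((req, res) => {
    // The client's cached copy is still valid: answer 304 with no body
    if (req.headers['if-none-match'] === etag) {
        res.writeHead(304)
        return res.end()
    }
    res.writeHead(200, { 'ETag': etag, 'Content-Type': 'text/plain' })
    res.end(body)
}).listen(3000)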

Faster responses are now in place, but our job is not done yet. We still have to parse the HTML, load the style sheets and make the web page interactive. It makes sense to show some empty boxes with a loader to the user, instead of a blank screen. While the HTML document is being parsed, whenever it comes across <script src='asset.js'></script>, it makes a synchronous HTTP request to the server to fetch asset.js, and the whole parsing process is paused until the response comes back. Imagine having a dozen synchronous script references. These can be managed just by adding the async keyword to the script references, like <script src='asset.js' async></script>. With the async keyword, the browser fetches asset.js without hindering the parsing of the HTML. If a script is needed only at a later stage, we can defer its execution until the entire HTML has been parsed by using the defer keyword, like <script src='asset.js' defer></script> (the file still downloads in parallel, but runs only after parsing).

Conclusion

We’ve learned a lot of new things that make for a cool web application. Here’s a summary of all of the things we’ve explored in this article:

  1. How service workers make good use of the cache to speed up the loading of assets.
  2. How web push notifications work under the hood.
  3. How IndexedDB stores a massive amount of data.
  4. Some optimizations for an instant first load, like using HTTP/2 and adding headers such as ETag, Last-Modified and If-None-Match to prevent the re-downloading of valid cached assets.

That’s all, folks!



Source: Smashing Magazine

Cyber Monday: Elegant Themes Offers 25% OFF in Biggest Discount Ever

This sponsored article was created by our content partner, BAW Media. Thank you for supporting the partners who make SitePoint possible.

Little gems hidden in an array of Cyber Monday details are easy to miss. If you’re looking for the kind of top value deal that doesn’t come your way all that often, you won’t want to miss this one.

Perhaps you’ve already found something that made you squeal with excitement. If so, it’s a good thing you didn’t stop right there, because you’ve just come across this year’s #1 Cyber Monday deal.

This offer comes from Elegant Themes, the creator of Divi, the world’s most popular premium WordPress theme.

They’ve gone all out with their biggest ever discount: 25% off on their Developer and Lifetime accounts, and they’ve added in free Divi layouts.

About Elegant Themes

Here’s what you need to know about Elegant Themes’ wildly popular WordPress toolkit, if you didn’t already:

1. It’s the ultimate WordPress toolkit

An Elegant Themes membership gives you access to 87 themes and 3 plugins, one of which is Divi, the ultimate WordPress Theme and Visual Page Builder. Divi will change your approach to website building forever.

2. You’ll get unlimited use

The one-time fee gives you unlimited use, so you needn’t concern yourself about per-website pricing. There’s a 25% Cyber Monday discount on the fee by the way, which means your membership gives you access to the most value-packed collection of WordPress tools on the market.

3. The pricing plan is SIMPLE

There’s none of the “this plan gives you this, that plan gives you that” nonsense. It’s a single membership, a one-time fee, and you get the entire collection of themes and plugins. Period.

4. You’ll get products you can trust

Elegant Themes didn’t establish itself as a leader in WordPress theme and plugin development without some very good reasons. They’ve been at it for the past decade, during which time constant improvement of each and every product has been the norm.

What You Get with the Cyber Monday Deal

Let’s get into some of the details that make this Cyber Monday deal a not-to-be-missed opportunity. We strongly suspect that you’ll like what you see.

Divi – The World’s Most Popular Premium WordPress Theme

Divi is Elegant Themes’ flagship theme. If BuiltWith.com’s stats are an indication, Divi is the most widely used premium WordPress theme in the world. Calling Divi a theme is somewhat of an oversimplification, however.

A website-building framework would be a more accurate description; a framework that allows you to design beautiful websites without coding, and without requiring assistance from a collection of disjointed plugins.

To date, 500,591 users (and counting) are building or have built websites with Divi. They make up one of the most empowered WordPress communities on the web.

The Versatile Divi Builder

The Divi Builder is a visual drag and drop builder that can be used with any theme. This page-building plugin uses the same visual page-building technology that helped make the Divi theme such a roaring success. The only difference is, it’s a standalone product so it can be used with any theme.

As is the case with the Divi theme, you can use the Divi Builder’s visual design interface to build anything and customize everything.

Extra – The Ultimate Magazine WordPress Theme Powered by the Divi Builder

Extra is a magazine theme that takes the Divi Builder framework and adds a newly-designed set of 40+ post-based modules to extend its power even further. Extra is ideal for creating blogs and online publications. The content modules serve as page-building content blocks (the easy way to do it).

Choose the content elements you need, customize them, arrange them, and you’re good to go!

Bloom – Email Opt-In and Lead Generation for WordPress

Bloom provides an easy way to gather leads and build a mailing list. It provides six different customizable opt-in types and a sophisticated set of visitor-targeting methods.

Email reigns supreme as a marketing tool. This easy-to-work-with plugin does your list building for you and gives you the means to convert a website’s visitors into followers and customers.

Monarch – The Premier Social Media Sharing Plugin for WordPress

Social media has become the internet’s lifeblood, and social sharing uses it as a positive force for businesses. Elegant Themes’ Monarch plugin enables its users to engage and empower online communities.

Monarch will help you get more shares and followers, and it will do so without negatively impacting website performance.

Wrapping Up

Given all that it offers, Elegant Themes’ Cyber Monday deal gives you a positively insane amount of value for your investment. After all, Elegant Themes’ products are so popular (Divi being the prime example) that there’s never been a need to offer discounts like this one to attract new customers.

Divi, Divi Builder, Extra, Bloom, and Monarch. You’ve seen what they can do, which makes this no-brainer of an offer a very wise investment.

The post Cyber Monday: Elegant Themes Offers 25% OFF in Biggest Discount Ever appeared first on SitePoint.


Source: Sitepoint

jQuery setTimeout() Function Examples

The JavaScript setTimeout function calls a function or executes a code snippet after a specified delay (in milliseconds). This might be useful if, for example, you wished to display a popup after a visitor has been browsing your page for a certain amount of time, or you want a short delay before removing a hover effect from an element (in case the user accidentally moused out).

Basic setTimeout Example

To demonstrate the concept, the following demo displays a popup, two seconds after the button is clicked.
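Under the hood, the demo boils down to something like the following sketch (the button’s id is hypothetical, and alert() stands in for the popup library):

// Show a "popup" two seconds after the button is clicked
$('#show-popup').on('click', function () {
    setTimeout(function () {
        alert('This appears two seconds after the click');
    }, 2000);
});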

See the Pen CSS3 animation effects for Magnific Popup by SitePoint (@SitePoint) on CodePen.

Syntax

From the MDN documentation, the syntax for setTimeout is as follows:

var timeoutID = window.setTimeout(func, [delay, param1, param2, …]);
var timeoutID = window.setTimeout(code, [delay]);

where:

  • timeoutID is a numerical id, which can be used in conjunction with clearTimeout() to cancel the timer.
  • func is the function to be executed.
  • code (in the alternate syntax) is a string of code to be executed.
  • delay is the number of milliseconds by which the function call should be delayed. If omitted, this defaults to 0.
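
Putting those pieces together, here’s a small example that passes an extra argument through to the callback and cancels a second timer before it ever fires:

// Arguments after the delay are forwarded to the callback
var greetTimeoutID = window.setTimeout(function (name) {
    console.log('Hello, ' + name);
}, 1000, 'World');

// This timer is cancelled before it fires, so its callback never runs
var doomedTimeoutID = window.setTimeout(function () {
    console.log('You will never see this');
}, 5000);
window.clearTimeout(doomedTimeoutID);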

setTimeout vs window.setTimeout

You’ll notice that the syntax above uses window.setTimeout. Why is this?

Well, setTimeout and window.setTimeout are essentially the same, the only difference being that in the second statement we are referencing the setTimeout method as a property of the global window object.

In my opinion this adds complexity, for little or no benefit—if you’ve defined an alternative setTimeout method which would be found and returned in priority in the scope chain, then you’ve probably got bigger issues.
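To make that shadowing point concrete, here’s a contrived sketch (something you shouldn’t do in real code):

function demo() {
    // A local variable that shadows the global setTimeout inside this function
    var setTimeout = null;

    // Calling setTimeout(...) here would throw a TypeError,
    // but window.setTimeout still reaches the real timer API
    window.setTimeout(function () {
        console.log('Still ticking');
    }, 0);
}
demo();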

For the purposes of this tutorial, I’ll omit window, but ultimately which syntax you choose is up to you.

The post jQuery setTimeout() Function Examples appeared first on SitePoint.


Source: Sitepoint