Why More Web Designers Should Give Pre-built Websites a Try

This sponsored article was created by our content partner, BAW Media. Thank you for supporting the partners who make SitePoint possible. There’s a heated and seemingly never-ending debate in the web design industry about whether web designers should always start their design work from scratch or not. Another option comprises taking advantage of what pre-built […]



Source: SitePoint

Introducing Float.com: A Better Alternative To Spreadsheets


Nick Babich

2018-12-11T12:50:00+01:00

(This is a sponsored post.) In today’s highly competitive market, it’s vital to move fast. After all, we all know how the famous saying goes: “Time is money.” The faster your product team moves when creating a product, the higher the chance it’ll succeed in the market. If you are a project manager, you need a tool that helps you get the most out of each of your team member’s time.

Though creative teams typically work on new and innovative products, many still use legacy tools to manage their work. Spreadsheets are one of the most common tools in the project manager’s toolbox. While this might be adequate for a team of two or three members, as a team grows, managing the team’s time becomes a demanding job.

Whenever project managers try to manage their team using spreadsheets alone, they usually face the following problems:

1. Lack Of Glanceability

Understanding what’s really happening on a project takes a lot of work. It’s hard to visually grasp who’s busy and who’s not. As a result, some team members might end up overloaded, while others will have too little to do. You also won’t get a clear breakdown of how much time is being devoted to particular work and particular clients, which is crucial not just for billing purposes but also to inform your agency’s future decisions, like who to hire next.

2. Hard To Report To Stakeholders

With spreadsheets as the home of project management, translating data about people and time into tangible insights is a challenge. Data visualization is also virtually impossible with the limited range of chart-building functions that spreadsheet tools provide. As a result, reporting with spreadsheets becomes a time-consuming task. The more people and activities a project has, the more of a project manager’s time will be consumed by reporting.

3. Spreadsheets Manage Tasks, Not People

Managing projects with individual spreadsheets is a disaster waiting to happen. Though a single spreadsheet may give a clear breakdown of a single project, it has no way of indicating overload and underload for particular team members across all projects.

4. Lack Of High-Level Overview Of A Project

It’s well known in the industry that many designers suffer from tunnel vision: Without the big picture of a project in mind, their focus turns to solving ongoing tasks. But the problem of tunnel vision isn’t limited to designers; it also exists in project management.

When a project manager uses a tool like a spreadsheet, it is usually hard (or even impossible) to create a high-level overview of a project. Even when the project manager invests a lot of time in creating this overview, the picture becomes outdated as soon as the company’s direction shifts.

5. The Risk Of Outdated Information

Product development moves quickly, making it a struggle for project managers to continually monitor and introduce all required changes into the spreadsheet. Not only is this time-consuming, but it’s also not much fun — and, as a result, often doesn’t get done.

How Technology Makes Our Life Better: Introducing Float

The purpose of technology has always been to reduce mechanical work in order to focus on innovation. In a creative or product agency, the ultimate goal is to automate as much of the product design and human management process as possible, so that the team can focus on the deep work of creativity and execution.

With this in mind, let’s explore Float, a resource management app for creative agencies. Float makes it easier to understand who’s working on what, becoming a single source of truth for your entire team.

Helping Product Managers Overcome The Challenges Of Time And People Management

Float facilitates team collaboration and makes work more effective. Here’s how:

1. Visual Team Planner: See Team And Tasks At A Glance

Float allows you to plan tasks visually using the “Schedule” tab. In this view, you can allocate and update project assignments. A drag-and-drop interface makes scheduling your team simple.

Here’s an example of the Schedule View:


The schedule is not only a bird’s-eye view of who’s working on what and when, but also a dynamic canvas. Click on any empty space to create a task. Each task can be easily modified, extended or split.

But that’s not all. You can do more:

  • Prioritize tasks
    You can do this by simply moving them on top of others.
  • Duplicate tasks
    Simply press “Shift” and drag a selected task to a new location.

  • Extend a task
    If someone on your team needs extra time to finish a task, you can extend the time in one click using the visual interface.
  • Split a task
    When it becomes evident that someone on your team needs help with a task, it’s easy to split the task into parts and assign each part to another member.

2. Built-In Reporting And Statistics

Float’s built-in reporting and statistics feature (“utilization” reports) can save project managers hours of manual work at the end of the week, month or quarter.


Float helps you keep track of all of a project’s hours.

It’s relatively easy to customize the format of the report. You can:

  • Choose the roles of team members (employees, contractors, all) in “People”;
  • Choose the type of tasks (tentative, confirmed, completed) in “Task”.

It’s vital to mention that Float allows you to define different roles for team members. For example, if someone works part-time, you can define this in the “Profile” settings and specify that their work is part-time only. This feature also comes in handy when you need to calculate a budget.


You can email team members their individual hours for the week.

3. Search And Filter: All The Information You Need, At Your Fingertips

All of your team’s information sits in one place. You don’t need to use different spreadsheets for different projects; to add a new project, simply click on “Projects” and add a new object.

You can also use the filtering tools to highlight specific projects. A simple way to see relevant tasks at a glance is to filter by tags, leaving only particular types of projects (for example, projects that have contractors).


Customizing the format of the report

4. Getting A High-Level Overview Of All Your Projects

With Float, you’ll always have a record of what happened and when.
  • Set milestones in a project’s settings, and keep an eye on upcoming milestones in your Activity Feed.


With Float, you can set milestones for your project.
  • Drill down into a user or a team and review their schedule. When you see that someone is overscheduled (red will tell you that), you can move other team members to that activity and regroup them.

Float helps you keep track of all of a project’s hours.

You can view by day, week or month, making it easy to zoom out and see the big picture of your project.


You can zoom in for a detailed day view, or zoom out to view a monthly forecast.

5. Reducing The Risk Of Outdated Information

You don’t need to switch between multiple tools to find out everything you need to know. This means your information will be up to date (it’s much easier to manage your project using a single tool, rather than many).

Float can be easily connected to all the services you use.


“Schedule”, “People” and “Projects Tools” work together to create context.

Conclusion

If your team’s project management leaves a lot to be desired, Float is a no-brainer. Float makes it easy to see everything you need to know about your team’s projects in a single place, giving project managers the information and functionality they need to handle the fast-paced world of digital design and development. Get started with Float today!



Source: Smashing Magazine

How To Build A Real-Time App With GraphQL Subscriptions On Postgres


Sandip Devarkonda

2018-12-10T14:00:23+01:00

In this article, we’ll take a look at the challenges involved in building real-time applications and how emerging tooling is addressing them with elegant solutions that are easy to reason about. To do this, we’ll build a real-time polling app (like a Twitter poll with real-time overall stats) just by using Postgres, GraphQL, React and no backend code!

The primary focus will be on setting up the backend (deploying the ready-to-use tools, schema modeling) and on aspects of frontend integration with GraphQL, and less on the UI/UX of the frontend (some knowledge of ReactJS will help). The tutorial section will take a paint-by-numbers approach, so we’ll just clone a GitHub repo for the schema modeling and the UI, and tweak it, instead of building the entire app from scratch.


Before you continue reading this article, I’d like to mention that a working knowledge of the following technologies (or substitutes) is beneficial:

  • ReactJS
    This can be replaced with any frontend framework, or with Android or iOS, by following the client library documentation.
  • Postgres
    You can work with other databases (using different tools); the principles outlined in this post will still apply.

You can also adapt this tutorial context for other real-time apps very easily.

A demonstration of the features in the polling app that we’ll be building.

As illustrated by the accompanying GraphQL payload at the bottom, there are three major features that we need to implement:

  1. Fetch the poll question and a list of options (top left).
  2. Allow a user to vote for a given poll question (the “Vote” button).
  3. Fetch results of the poll in real-time and display them in a bar graph (top right; we can gloss over the feature to fetch a list of currently online users as it’s an exact replica of this use case).

Challenges With Building Real-Time Apps

Building real-time apps (especially as a frontend developer, or someone who’s recently made the transition to becoming a fullstack developer) is a hard engineering problem to solve.

This is generally how contemporary real-time apps work (in the context of our example app):

  1. The frontend updates a database with some information; a user’s vote is sent to the backend, i.e. poll/option and user information (user_id, option_id).
  2. The first update triggers another service that aggregates the poll data to render an output that is relayed back to the app in real-time (every time a new vote is cast by anyone; if this is done efficiently, only the updated poll’s data is processed and only those clients that have subscribed to this poll are updated):
    • Vote data is first processed by a register_vote service (assume that some validation happens here) that triggers a poll_results service.
    • Real-time aggregated poll data is relayed by the poll_results service to the frontend for displaying overall statistics.

A poll app designed traditionally

This model is derived from a traditional API-building approach, and consequently has similar problems:

  1. Any of the sequential steps could go wrong, leaving the UX hanging and affecting other independent operations.
  2. Requires a lot of effort on the API layer, as it’s a single point of contact for the frontend app that interacts with multiple services. It also needs to implement a websockets-based real-time API — there is no universal standard for this, and it therefore sees limited support for automation in tools.
  3. The frontend app is required to add the necessary plumbing to consume the real-time API and may also have to solve the data consistency problem typically seen in real-time apps (less important in our chosen example, but critical in ordering messages in a real-time chat app).
  4. Many implementations resort to using additional non-relational databases on the server-side (Firebase, etc.) for easy real-time API support.

Let’s take a look at how GraphQL and associated tooling address these challenges.

What Is GraphQL?

GraphQL is a specification for a query language for APIs, and a server-side runtime for executing queries. This specification was developed by Facebook to accelerate app development and provide a standardized, database-agnostic data access format. Any specification-compliant GraphQL server must support the following:

  1. Queries for reads
    A request type for requesting nested data from a data source (which can be either one or a combination of a database, a REST API or another GraphQL schema/server).
  2. Mutations for writes
    A request type for writing/relaying data into the aforementioned data sources.
  3. Subscriptions for live-queries
    A request type for clients to subscribe to real-time updates.

GraphQL also uses a typed schema. The ecosystem has plenty of tools that help you identify errors at dev/compile time which results in fewer runtime bugs.

Here’s why GraphQL is great for real-time apps:

  • Live-queries (subscriptions) are an implicit part of the GraphQL specification. Any GraphQL system has to have native real-time API capabilities.
  • A standard spec for real-time queries has consolidated community efforts around client-side tooling, resulting in a very intuitive way of integrating with GraphQL APIs.

GraphQL and a combination of open-source tooling for database events and serverless/cloud functions offer a great substrate for building cloud-native applications with asynchronous business logic and real-time features that are easy to build and manage. This new paradigm also results in great user and developer experience.

In the rest of this article, I will use open-source tools to build an app based on this architecture diagram:


A poll app designed with GraphQL

Building A Real-Time Poll/Voting App

With that introduction to GraphQL, let’s get back to building the polling app as described in the first section.

The three features (or stories highlighted) have been chosen to demonstrate the different GraphQL requests types that our app will make:

  1. Query
    Fetch the poll question and its options.
  2. Mutation
    Let a user cast a vote.
  3. Subscription
    Display a real-time dashboard for poll results.

GraphQL request types in the poll app

Prerequisites

  • A Heroku account (use the free tier, no credit card required)
    To deploy a GraphQL backend (see next point below) and a Postgres instance.
  • Hasura GraphQL Engine (free, open-source)
    A ready-to-use GraphQL server on Postgres.
  • Apollo Client (free, open-source SDK)
    For easily integrating client apps with a GraphQL server.
  • npm (free, open-source package manager)
    To run our React app.

Deploying The Database And A GraphQL Backend

We will deploy an instance each of Postgres and GraphQL Engine on Heroku’s free tier. We can use a nifty Heroku button to do this with a single click.

Heroku button

Note: You can also follow this link or search the documentation for “Hasura GraphQL deployment” for Heroku (or other platforms).


Deploying Postgres and GraphQL Engine to Heroku’s free tier

You will not need any additional configuration, and you can just click on the “Deploy app” button. Once the deployment is complete, make a note of the app URL:

<app-name>.herokuapp.com

For example, in the screenshot above, it would be:

hge-realtime-app-tutorial.herokuapp.com

What we’ve done so far is deploy an instance of Postgres (as an add-on in Heroku parlance) and an instance of GraphQL Engine that is configured to use this Postgres instance. As a result of doing so, we now have a ready-to-use GraphQL API but, since we don’t have any tables or data in our database, this is not useful yet. So, let’s address this immediately.

Modeling the database schema

The following schema diagram captures a simple relational database schema for our poll app:


Schema design for the poll app.

As you can see, the schema is a simple, normalized one that leverages foreign-key constraints. It is these constraints that are interpreted by the GraphQL Engine as 1:1 or 1:many relationships (e.g. poll:options is a 1:many relationship, since each poll will have more than one option, linked by the foreign-key constraint between the id column of the poll table and the poll_id column in the option table). Related data can be modelled as a graph and can thus power a GraphQL API. This is precisely what the GraphQL Engine does.

Based on the above, we’ll have to create the following tables and constraints to model our schema:

  1. Poll
    A table to capture the poll question.
  2. Option
    Options for each poll.
  3. Vote
    To record a user’s vote.
  4. Foreign-key constraint between the following fields (table : column):
    • option : poll_id → poll : id
    • vote : poll_id → poll : id
    • vote : created_by_user_id → user : id

Now that we have our schema design, let’s implement it in our Postgres database. To instantly bring this schema up, here’s what we’ll do:

  1. Download the GraphQL Engine CLI.
  2. Clone this repo:
    $ git clone https://github.com/hasura/graphql-engine
    
    $ cd graphql-engine/community/examples/realtime-poll
  3. Go to hasura/ and edit config.yaml:
    endpoint: https://<app-name>.herokuapp.com
  4. Apply the migrations using the CLI, from inside the project directory (that you just downloaded by cloning):
    $ hasura migrate apply

That’s it for the backend. You can now open the GraphQL Engine console and check that all the tables are present (the console is available at https://<app-name>.herokuapp.com/console).

Note: You could also have used the console to implement the schema by creating individual tables and then adding constraints using a UI. Using the built-in support for migrations in GraphQL Engine is just a convenient option that was available because our sample repo has migrations for bringing up the required tables and configuring relationships/constraints (this is also highly recommended regardless of whether you are building a hobby project or a production-ready app).

Integrating The Frontend React App With The GraphQL Backend

The frontend in this tutorial is a simple app that shows the poll question, the option to vote, and the aggregated poll results in one place. As I mentioned earlier, we’ll first focus on running this app so you get the instant gratification of using our recently deployed GraphQL API, see how the GraphQL concepts we looked at earlier in this article power the different use-cases of such an app, and then explore how the GraphQL integration works under the hood.

NOTE: If you are new to ReactJS, you may want to check out some of these articles. We won’t be getting into the details of the React part of the app, and instead, will focus more on the GraphQL aspects of the app. You can refer to the source code in the repo for any details of how the React app has been built.

Configuring The Frontend App
  1. In the repo cloned in the previous section, edit HASURA_GRAPHQL_ENGINE_HOSTNAME in the src/apollo.js file (inside the /community/examples/realtime-poll folder) and set it to the Heroku app URL from above:
    export const HASURA_GRAPHQL_ENGINE_HOSTNAME = 'random-string-123.herokuapp.com';
  2. Go to the root of the repository/app-folder (/realtime-poll/) and use npm to install the prerequisite modules and then run the app:
    $ npm install
    
    $ npm start
    

Screenshot of the live poll app

You should be able to play around with the app now. Go ahead and vote as many times as you want; you’ll notice the results changing in real time. In fact, if you set up another instance of this UI and point it to the same backend, you’ll be able to see results aggregated across all the instances.

So, how does this app use GraphQL? Read on.

Behind The Scenes: GraphQL

In this section, we’ll explore the GraphQL features powering the app, followed by a demonstration of the ease of integration in the next one.

The Poll Component And The Aggregated Results Graph

The poll component on the top left fetches a poll with all of its options and captures a user’s vote in the database. Both of these operations are done using the GraphQL API. For fetching a poll’s details, we make a query (remember this from the GraphQL introduction?):

query {
  poll {
    id
    question
    options {
      id
      text
    }
  }
}

Using the Mutation component from react-apollo, we can wire up the mutation to an HTML form such that the mutation is executed using variables optionId and userId when the form is submitted:

mutation vote($optionId: uuid!, $userId: uuid!) {
  insert_vote(objects: [{option_id: $optionId, created_by_user_id: $userId}]) {
    returning {
      id
    }
  }
}
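To give a rough idea of that wiring, here’s a sketch of how such a form might look with the Mutation render prop. The component name VoteButton and the optionId/userId props are assumptions for illustration, not the repo’s exact code:

import React from "react";
import gql from "graphql-tag";
import { Mutation } from "react-apollo";

const VOTE_MUTATION = gql`
  mutation vote($optionId: uuid!, $userId: uuid!) {
    insert_vote(objects: [{option_id: $optionId, created_by_user_id: $userId}]) {
      returning {
        id
      }
    }
  }
`;

// optionId and userId are assumed to be passed in by the parent poll component
export const VoteButton = ({ optionId, userId }) => (
  <Mutation mutation={VOTE_MUTATION}>
    {(vote, { loading }) => (
      <form
        onSubmit={(e) => {
          // run the mutation with the current option and user when the form is submitted
          e.preventDefault();
          vote({ variables: { optionId, userId } });
        }}
      >
        <button type="submit" disabled={loading}>Vote</button>
      </form>
    )}
  </Mutation>
);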

To show the poll results, we need to derive the count of votes per option from the data in vote table. We can create a Postgres View and track it using GraphQL Engine to make this derived data available over GraphQL.

CREATE VIEW poll_results AS
  SELECT poll.id AS poll_id, o.option_id, count(*) AS votes
  FROM (
    SELECT vote.option_id, option.poll_id, option.text
    FROM vote
    LEFT JOIN public.option ON (option.id = vote.option_id)
  ) o
  LEFT JOIN poll ON (poll.id = o.poll_id)
  GROUP BY poll.question, o.option_id, poll.id;

The poll_results view joins data from the vote and poll tables to provide an aggregate count of votes for each option.

Using GraphQL Subscriptions over this view, react-google-charts, and the Subscription component from react-apollo, we can wire up a reactive chart that updates in real time whenever a new vote comes in from any client.

subscription getResult($pollId: uuid!) {
  poll_results(where: {poll_id: {_eq: $pollId}}) {
    option {
      id
      text
    }
    votes
  }
}

GraphQL API Integration

As I mentioned earlier, I used Apollo Client, an open-source SDK, to integrate a ReactJS app with the GraphQL backend. Apollo Client is analogous to any HTTP client library, like requests for Python or the built-in http module in Node.js. It encapsulates the details of making an HTTP request (in this case, POST requests). It uses the configuration (specified in src/apollo.js) to make query/mutation/subscription requests (specified in src/GraphQL.jsx, with the option to use variables that can be dynamically substituted in the JavaScript code of your React app) to a GraphQL endpoint. It also leverages the typed schema behind the GraphQL endpoint to provide compile/dev time validation for the aforementioned requests. Let’s see just how easy it is for a client app to make a live-query (subscription) request to the GraphQL API.

Configuring The SDK

The Apollo Client SDK needs to be pointed at a GraphQL server, so it can automatically handle the boilerplate code typically needed for such an integration. So, this is exactly what we did when we modified src/apollo.js when setting up the frontend app.
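For reference, here is a minimal sketch of what src/apollo.js might contain, assuming the apollo-client 2.x packages and a Hasura WebSocket endpoint (the exact package versions and endpoint path are assumptions and may differ from the repo’s actual file):

import ApolloClient from "apollo-client";
import { WebSocketLink } from "apollo-link-ws";
import { InMemoryCache } from "apollo-cache-inmemory";

export const HASURA_GRAPHQL_ENGINE_HOSTNAME = "<app-name>.herokuapp.com";

// a WebSocket link lets the same client handle queries, mutations and subscriptions
const wsLink = new WebSocketLink({
  uri: `wss://${HASURA_GRAPHQL_ENGINE_HOSTNAME}/v1alpha1/graphql`,
  options: { reconnect: true }
});

export const client = new ApolloClient({
  link: wsLink,
  cache: new InMemoryCache()
});

Depending on the repo version, it may instead use a split link that sends queries and mutations over HTTP and only subscriptions over the WebSocket.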

Making A GraphQL Subscription Request (Live-Query)

Define the subscription we looked at in the previous section in the src/GraphQL.jsx file:

const SUBSCRIPTION_RESULT = `
subscription getResult($pollId: uuid!) {
  poll_results (
    order_by: option_id_desc,
    where: { poll_id: {_eq: $pollId} }
  ) {
    option_id
    option { id text }
    votes
  }
}`;

We’ll use this definition to wire up our React component:

export const Result = (pollId) => (
  <Subscription subscription={gql`${SUBSCRIPTION_RESULT}`} variables={pollId}>
    {({ loading, error, data }) => {
      if (loading) return <p>Loading...</p>;
      if (error) return <p>Error :</p>;
      return (
        <div>
          <div>
            {renderChart(data)}
          </div>
        </div>
      );
    }}
  </Subscription>
);

One thing to note here is that the above subscription could also have been a query. Merely replacing one keyword for another gives us a “live-query”, and that’s all it takes for the Apollo Client SDK to hook this real-time API with your app. Every time there’s a new dataset from our live-query, the SDK triggers a re-render of our chart with this updated data (using the renderChart(data) call). That’s it. It really is that simple!
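For context, here is a rough sketch of what that renderChart call could look like with react-google-charts. The actual implementation lives in the repo and may differ; the data shaping below assumes the poll_results fields returned by the subscription above:

import React from "react";
import { Chart } from "react-google-charts";

const renderChart = (data) => {
  // shape the subscription payload into the [header, ...rows] format Google Charts expects
  const rows = data.poll_results.map((result) => [result.option.text, result.votes]);
  const chartData = [["Option", "Votes"], ...rows];

  return (
    <Chart
      chartType="Bar"
      data={chartData}
      options={{ legend: { position: "none" } }}
      width="100%"
      height="300px"
    />
  );
};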

Final Thoughts

In three simple steps (creating a GraphQL backend, modeling the app schema, and integrating the frontend with the GraphQL API), you can quickly wire up a fully functional real-time app, without getting mired in unnecessary details such as setting up a WebSocket connection. That right there is the power of community tooling backing an abstraction like GraphQL.

If you’ve found this interesting and want to explore GraphQL further for your next side project or production app, here are some factors you may want to use for building your GraphQL toolchain:

  • Performance & Scalability
    GraphQL is meant to be consumed directly by frontend apps (used only in the backend, it’s no better than an ORM; the real productivity benefits come from consuming it directly). So your tooling needs to be smart about efficiently using database connections and should be able to scale effortlessly.
  • Security
    It follows from above that a mature role-based access-control system is needed to authorize access to data.
  • Automation
    If you are new to the GraphQL ecosystem, handwriting a GraphQL schema and implementing a GraphQL server may seem like daunting tasks. Maximize the automation from your tooling so you can focus on the important stuff like building user-centric frontend features.
  • Architecture
    As trivial as the above efforts may seem, a production-grade app’s backend architecture may involve advanced GraphQL concepts like schema-stitching, etc. Moreover, the ability to easily generate/consume real-time APIs opens up the possibility of building asynchronous, reactive apps that are resilient and inherently scalable. Therefore, it’s critical to evaluate how GraphQL tooling can streamline your architecture.

Related Resources

  • You can check out a live version of the app over here.
  • The complete source code is available on GitHub.
  • If you’d like to explore the database schema and run test GraphQL queries, you can do so over here.


Source: Smashing Magazine

What Can Be Learned From The Gutenberg Accessibility Situation?


Andy Bell

2018-12-07T11:30:08+01:00

So far, Gutenberg has had a very mixed reception from the WordPress community and that reception has become increasingly negative since a hard deadline was set for the 5.0 release, even though many considered it to be incomplete. A hard release deadline in software is usually fine, but there is a glaring issue with this particular one: what will be the main editor for a platform that powers about 32% of the web isn’t fully accessible. This issue has been raised many times by the community, and it’s been effectively brushed under the carpet by Automattic’s leadership — at least it comes across that way.

Sounds like a messy situation, right? I’m going to dive into what’s happened and how this sort of situation might be avoided by others in the future.

Further Context

For those amongst us who haven’t been following along or don’t know much about WordPress, I’ll give you a bit of context. For those that know what’s gone on, you can skip straight to the main part of the article.

WordPress powers around 32% of the web with both the open-source, self-hosted CMS and the wordpress.com hosted blogs. Although WordPress, the CMS software, is open-source, it is heavily contributed to by Automattic, who run wordpress.com, amongst other products. Automattic’s CEO, Matt Mullenweg, is also the co-founder of the WordPress open source project.

It’s important to understand that WordPress, the CMS, is not a commercial Automattic project — it is open source. Automattic do, however, make lots of decisions about the future of WordPress, including the brand new editor, Gutenberg. The editor has been available as a plugin while it’s been in development, so WordPress users can use it as their main editor and provide feedback — a lot of which has been negative. Gutenberg is shipping as the default editor in the 5.0 major release of WordPress, and it will be the forced default editor, with only the download of the Classic Editor plugin preventing it. This forced change has had a mixed response from the community, to say the least.

I’ve personally been very positive about Gutenberg with my writing, teaching and speaking, as I genuinely think it’ll be a positive step for WordPress in the long run. As the launch of WordPress 5.0 has come ever closer, though, my concerns about accessibility have been growing. The accessibility issues are being “fixed” as I write this, but the handling of the situation has been incredibly poor, from Automattic.

I invite you to read this excellent, ever-updating Twitter thread by Adrian Roselli. He’s done a very good job of collecting information and providing expert commentary. He’s covered all of the events in a very straightforward manner.

Right, you’re up to speed, so let’s crack on.

What Happened?

For as long as the Gutenberg plugin has been available to install, there have been accessibility issues. Even when I very excitedly installed it and started hacking away at custom blocks back in March, I could see there was a tonne of issues with the basics, such as focus management. I kept telling myself, “This editor is very early doors, so it’ll all get fixed before WordPress 5.” The problem is: it didn’t. (Well, mostly, anyway.)

This situation was bad enough as it was, but two key things happened that made it worse. The accessibility lead, Rian Rietveld, resigned in October, citing political and codebase issues. The second thing is that Automattic set a hard deadline for WordPress 5’s release, regardless of whether accessibility issues were fixed or not.

Let me just illustrate how bad this is. As cited in Rian’s article: after an accessibility test round in March, the results indicated so many accessibility issues, most testers refused to look at Gutenberg again. We know that the situation has gotten a lot better since then, but there are still a tonne of open issues, even now.

I’ve got to say it how I see it, too. There’s clearly a cultural issue at Automattic in terms of their attitude towards accessibility, and how they apparently compensate the people who are willing to fix accessibility issues, with a strange culture of free work, even from “outsiders”. Frankly, the attitude of the company’s CEO, Matt Mullenweg, absolutely stinks — especially when he appears to be holding a potential professional engagement hostage over someone’s personal blog decision.

Allow me to double down on the attitude towards accessibility for a moment. When a big company like Automattic decides to prioritize a deadline they plucked out of thin air over enabling people with impairments to use the editor they will be forced to use, it is absolutely shocking. Even more shocking is the message it sends out: that accessibility compliance is not as important as flashy new features. Ironically, there are clearly commercial undertones to this decision for a hard deadline, but as always, free work is expected to sort it out. You’d expect a company like Automattic to fix the situation that they created with their own resources, right?

You’ll probably find it shocking that a crowdfunding campaign has been put together to get an accessibility audit done on Gutenberg. I know I certainly do. You heard me correctly, too. The Gutenberg editor is a product of Automattic’s influence on WordPress, and Automattic, a company that was valued at over $1 billion in 2014, is not paying for a much-needed accessibility audit. They are instead sitting back and waiting for everyone else to pay for it. Well, at least they were, until Matt Mullenweg finally committed to funding an audit on 29 November.

How Could This Mess Be Avoided?

Enough dragging people over the coals (for now); let us instead think about how this could have been avoided. Apart from the cultural issues that seem to de-prioritize accessibility at Automattic, I think the design process is mostly at fault in the context of the Gutenberg editor.

A lot of the issues are based around complexity and cognitive load. Creating blocks, editing the content, and maneuvering between blocks is a nightmare for visually impaired and/or keyboard users. Perhaps if accessibility was considered at the very start of the project, the process of creating, editing and moving blocks would be a lot simpler and thus, not a cognitive overload. The problem now is that accessibility is a fix rather than a core feature. The cognitive issues will continue to exist, albeit improved.

Another very obvious thing that could have been done differently would be to provide help and training on the JS-heavy codebase that was introduced. A lot of the accessibility fixing work seems to have been very difficult because the accessibility team had no React developers within it. There was clearly a big decision to utilize modern JavaScript because Mullenweg told everyone to “Learn JavaScript Deeply”. At that point, it would have made a lot of sense to help people who contribute a lot to WordPress for free to also learn JavaScript deeply so that they could have been involved way earlier in the process. I even saw this as an issue and made learning modern JavaScript and React a core focus in a tutorial series I co-authored with Lara Schenck.

I’m convinced that some foresight and investment in processes, planning, and people would have prevented a tonne of the accessibility issues from existing at all. Again, this points at issues with attitude from Automattic’s leader, in my opinion. He’s had the attitude that ignoring accessibility is fine because Gutenberg is a fantastic, empowering new editor. While this is true, it can’t be labeled as truly empowering if it prevents a huge number of users from managing content — in some cases, even doing their jobs. A responsible CEO in this position would probably write an incredibly apologetic statement that addressed the massive oversights. They would probably also postpone the hard deadline set until every accessibility issue was fixed. At the very least, they wouldn’t force the new editor on every single WordPress user.

Wrapping Up

I’ve got to add to this article that I am a massive WordPress fan and can see some unbelievably good opportunities for managing content that Gutenberg provides. It’s not just a new editor — it is a movement. It’s going to shape WordPress for years to come, and it should allow more designers and front-end developers into the ecosystem. This should be welcomed with open arms. Well, if and when it is fully accessible, anyway.

There are also a lot of incredible people working at Automattic and on the WordPress core team, who I have heaps of respect and love for. I know these people will help this situation come good in the end, and that they will (and do) welcome this sort of critique. I also know that lessons will be learned, and I have faith that a mess like this won’t happen again.

Use this situation as a warning, though. You simply can’t ignore accessibility, and you should study up and integrate it into the entire process of your projects as a priority.



Source: Smashing Magazine

How to Use TypeScript to Build a Node API with Express

Like it or not, JavaScript has been helping developers power the Internet since 1995. In that time, JavaScript usage has grown from small user experience enhancements to complex full-stack applications using Node.js on the server and one of many frameworks on the client such as Angular, React, or Vue.

Today, building JavaScript applications at scale remains a challenge. More and more teams are turning to TypeScript to supplement their JavaScript projects.

Node.js server applications can benefit from using TypeScript, as well. The goal of this tutorial is to show you how to build a new Node.js application using TypeScript and Express.

The Case for TypeScript

As a web developer, I long ago stopped resisting JavaScript, and have grown to appreciate its flexibility and ubiquity. Language features added to ES2015 and beyond have significantly improved its utility and reduced common frustrations of writing applications.

However, larger JavaScript projects demand tools such as ESLint to catch common mistakes, and greater discipline to saturate the code base with useful tests. As with any software project, a healthy team culture that includes a peer review process can improve quality and guard against issues that can creep into a project.

The primary benefits of using TypeScript are to catch more errors before they go into production and make it easier to work with your code base.

TypeScript is not a different language. It’s a flexible superset of JavaScript with ways to describe optional data types. All “standard” and valid JavaScript is also valid TypeScript. You can dial in as much or as little as you desire.

As soon as you add the TypeScript compiler or a TypeScript plugin to your favorite code editor, there are immediate safety and productivity benefits. TypeScript can alert you to misspelled functions and properties, detect passing the wrong types of arguments or the wrong number of arguments to functions, and provide smarter autocomplete suggestions.
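As a small illustration (this snippet is only a demonstration, not part of the project you’ll build below), the compiler flags a missing property before the code ever runs:

interface Guitar {
  brand: string;
  model: string;
  year: number;
}

function describe(guitar: Guitar): string {
  return `${guitar.year} ${guitar.brand} ${guitar.model}`;
}

describe({ brand: "Fender", model: "Stratocaster", year: 1977 }); // OK

// describe({ brand: "Gibson", model: "Les Paul" });
// ^ compile-time error: property 'year' is missing in the argument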

Build a Guitar Inventory Application with TypeScript and Node.js

Among guitar players, there’s a joke everyone should understand.

Q: “How many guitars do you need?”

A: “n + 1. Always one more.”

In this tutorial, you are going to create a new Node.js application to keep track of an inventory of guitars. In a nutshell, this tutorial uses Node.js with Express, EJS, and PostgreSQL on the backend, Vue, Materialize, and Axios on the frontend, Okta for account registration and authorization, and TypeScript to govern the JavaScripts!

Guitar Inventory Demo

Create Your Node.js Project

Open up a terminal (Mac/Linux) or a command prompt (Windows) and type the following command:

node --version

If you get an error, or the version of Node.js you have is less than version 8, you’ll need to install Node.js. On Mac or Linux, I recommend you first install nvm and use nvm to install Node.js. On Windows, I recommend you use Chocolatey.

After ensuring you have a recent version of Node.js installed, create a folder for your project.

mkdir guitar-inventory
cd guitar-inventory

Use npm to initialize a package.json file.

npm init -y

Hello, world!

In this sample application, Express is used to serve web pages and implement an API. Dependencies are installed using npm. Add Express to your project with the following command.

npm install express

Next, open the project in your editor of choice.

If you don’t already have a favorite code editor, I use and recommend Visual Studio Code. VS Code has exceptional support for JavaScript and Node.js, such as smart code completion and debugging, and there’s a vast library of free extensions contributed by the community.

Create a folder named src. In this folder, create a file named index.js. Open the file and add the following JavaScript.

const express = require( "express" );
const app = express();
const port = 8080; // default port to listen

// define a route handler for the default home page
app.get( "/", ( req, res ) => {
    res.send( "Hello world!" );
} );

// start the Express server
app.listen( port, () => {
    console.log( `server started at http://localhost:${ port }` );
} );

Next, update package.json to instruct npm on how to run your application. Change the main property value to point to src/index.js, and add a start script to the scripts object.

  "main": "src/index.js",
  "scripts": {
    "start": "node .",
    "test": "echo "Error: no test specified" && exit 1"
  },

Now, from the terminal or command line, you can launch the application.

npm run start

If all goes well, you should see this message written to the console.

server started at http://localhost:8080

Launch your browser and navigate to http://localhost:8080. You should see the text “Hello world!”

Hello World

Note: To stop the web application, you can go back to the terminal or command prompt and press CTRL+C.

Set Up Your Node.js Project to Use TypeScript

The first step is to add the TypeScript compiler. You can install the compiler as a developer dependency using the --save-dev flag.

npm install --save-dev typescript

The next step is to add a tsconfig.json file. This file instructs TypeScript how to compile (transpile) your TypeScript code into plain JavaScript.

Create a file named tsconfig.json in the root folder of your project, and add the following configuration.

{
    "compilerOptions": {
        "module": "commonjs",
        "esModuleInterop": true,
        "target": "es6",
        "noImplicitAny": true,
        "moduleResolution": "node",
        "sourceMap": true,
        "outDir": "dist",
        "baseUrl": ".",
        "paths": {
            "*": [
                "node_modules/*"
            ]
        }
    },
    "include": [
        "src/**/*"
    ]
}

Based on this tsconfig.json file, the TypeScript compiler will (attempt to) compile any files ending with .ts it finds in the src folder, and store the results in a folder named dist. Node.js uses the CommonJS module system, so the value for the module setting is commonjs. Also, the target version of JavaScript is ES6 (ES2015), which is compatible with modern versions of Node.js.

It’s also a great idea to add tslint and create a tslint.json file that instructs TypeScript how to lint your code. If you’re not familiar with linting, it is a code analysis tool to alert you to potential problems in your code beyond syntax issues.

Install tslint as a developer dependency.

npm install --save-dev typescript tslint

Next, create a new file in the root folder named tslint.json file and add the following configuration.

{
    "defaultSeverity": "error",
    "extends": [
        "tslint:recommended"
    ],
    "jsRules": {},
    "rules": {
        "trailing-comma": [ false ]
    },
    "rulesDirectory": []
}

Next, update your package.json to change main to point to the new dist folder created by the TypeScript compiler. Also, add a couple of scripts to execute TSLint and the TypeScript compiler just before starting the Node.js server.

  "main": "dist/index.js",
  "scripts": {
    "prebuild": "tslint -c tslint.json -p tsconfig.json --fix",
    "build": "tsc",
    "prestart": "npm run build",
    "start": "node .",
    "test": "echo "Error: no test specified" && exit 1"
  },

Finally, change the extension of the src/index.js file from .js to .ts, the TypeScript extension, and run the start script.

npm run start

Note: You can run TSLint and the TypeScript compiler without starting the Node.js server using npm run build.

TypeScript errors

Oh no! Right away, you may see some errors logged to the console like these.

ERROR: /Users/reverentgeek/Projects/guitar-inventory/src/index.ts[12, 5]: Calls to 'console.log' are not allowed.

src/index.ts:1:17 - error TS2580: Cannot find name 'require'. Do you need to install type definitions for node? Try `npm i @types/node`.

1 const express = require( "express" );
                  ~~~~~~~

src/index.ts:6:17 - error TS7006: Parameter 'req' implicitly has an 'any' type.

6 app.get( "/", ( req, res ) => {
                  ~~~

The two most common errors you may see are syntax errors and missing type information. TSLint considers using console.log to be an issue for production code. The best solution is to replace uses of console.log with a logging framework such as winston. For now, add the following comment to src/index.ts to disable the rule.

app.listen( port, () => {
    // tslint:disable-next-line:no-console
    console.log( `server started at http://localhost:${ port }` );
} );
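If you do adopt winston later, a minimal setup could look something like the sketch below. This is only an assumption about how you might wire it into src/index.ts, not a step in this tutorial, and it assumes you have run npm install winston first:

import winston from "winston";

// a basic logger that writes to the console; add file or other transports as needed
const logger = winston.createLogger({
    level: "info",
    transports: [ new winston.transports.Console() ]
});

// replaces the console.log call inside app.listen
logger.info( `server started at http://localhost:${ port }` );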

TypeScript prefers to use the import module syntax over require, so you’ll start by changing the first line in src/index.ts from:

const express = require( "express" );

to:

import express from "express";

Getting the right types

To assist TypeScript developers, library authors and community contributors publish companion libraries called TypeScript declaration files. Declaration files are published to the DefinitelyTyped open source repository, or sometimes found in the original JavaScript library itself.

Update your project so that TypeScript can use the type declarations for Node.js and Express.

npm install --save-dev @types/node @types/express

Next, rerun the start script and verify there are no more errors.

npm run start

Build a Better User Interface with Materialize and EJS

Your Node.js application is off to a great start, but perhaps not the best looking, yet. This step adds Materialize, a modern CSS framework based on Google’s Material Design, and Embedded JavaScript Templates (EJS), an HTML template language for Express. Materialize and EJS are a good foundation for a much better UI.

First, install EJS as a dependency.

npm install ejs

Next, make a new folder under /src named views. In the /src/views folder, create a file named index.ejs. Add the following code to /src/views/index.ejs.

<!DOCTYPE html>
<html>
<head>
    <meta charset="utf-8" />
    <meta http-equiv="X-UA-Compatible" content="IE=edge">
    <title>Guitar Inventory</title>
    <meta name="viewport" content="width=device-width, initial-scale=1">
    <link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/materialize/1.0.0/css/materialize.min.css">
    <link rel="stylesheet" href="https://fonts.googleapis.com/icon?family=Material+Icons">
</head>
<body>
    <div class="container">
        <h1 class="header">Guitar Inventory</h1>
        <a class="btn" href="/guitars"><i class="material-icons right">arrow_forward</i>Get started!</a>
    </div>
</body>
</html>

Update /src/index.ts with the following code.

import express from "express";
import path from "path";
const app = express();
const port = 8080; // default port to listen

// Configure Express to use EJS
app.set( "views", path.join( __dirname, "views" ) );
app.set( "view engine", "ejs" );

// define a route handler for the default home page
app.get( "/", ( req, res ) => {
    // render the index template
    res.render( "index" );
} );

// start the express server
app.listen( port, () => {
    // tslint:disable-next-line:no-console
    console.log( `server started at http://localhost:${ port }` );
} );

Add an asset build script for Typescript

The TypeScript compiler does the work of generating the JavaScript files and copies them to the dist folder. However, it does not copy the other types of files the project needs to run, such as the EJS view templates. To accomplish this, create a build script that copies all the other files to the dist folder.

Install the needed modules and TypeScript declarations using these commands.

npm install --save-dev ts-node shelljs fs-extra nodemon rimraf npm-run-all
npm install --save-dev @types/fs-extra @types/shelljs

Here is a quick overview of the modules you just installed.

  1. ts-node. Use to run TypeScript files directly.
  2. shelljs. Use to execute shell commands such as to copy files and remove directories.
  3. fs-extra. A module that extends the Node.js file system (fs) module with features such as reading and writing JSON files.
  4. rimraf. Use to recursively remove folders.
  5. npm-run-all. Use to execute multiple npm scripts sequentially or in parallel.
  6. nodemon. A handy tool for running Node.js in a development environment. Nodemon watches files for changes and automatically restarts the Node.js application when changes are detected. No more stopping and restarting Node.js!

Make a new folder in the root of the project named tools. Create a file in the tools folder named copyAssets.ts. Copy the following code into this file.

import * as shell from "shelljs";

// Copy all the view templates
shell.cp( "-R", "src/views", "dist/" );

Update npm scripts

Update the scripts in package.json to the following code.

  "scripts": {
    "clean": "rimraf dist/*",
    "copy-assets": "ts-node tools/copyAssets",
    "lint": "tslint -c tslint.json -p tsconfig.json --fix",
    "tsc": "tsc",
    "build": "npm-run-all clean lint tsc copy-assets",
    "dev:start": "npm-run-all build start",
    "dev": "nodemon --watch src -e ts,ejs --exec npm run dev:start",
    "start": "node .",
    "test": "echo "Error: no test specified" && exit 1"
  },

Note: If you are not familiar with using npm scripts, they can be very powerful and useful to any Node.js project. Scripts can be chained together in several ways. One way to chain scripts together is to use the pre and post prefixes. For example, if you have one script labeled start and another labeled prestart, executing npm run start at the terminal will first run prestart, and only after it successfully finishes does start run.

Now run the application and navigate to http://localhost:8080.

npm run dev

Guitar Inventory home page

The home page is starting to look better! Of course, the Get Started button leads to a disappointing error message. No worries! The fix for that is coming soon!

A Better Way to Manage Configuration Settings in Node.js

Node.js applications typically use environment variables for configuration. However, managing environment variables can be a chore. A popular module for managing application configuration data is dotenv.

Install dotenv as a project dependency.

npm install dotenv
npm install --save-dev @types/dotenv

Create a file named .env in the root folder of the project, and add the following code.

# Set to production when deploying to production
NODE_ENV=development

# Node.js server configuration
SERVER_PORT=8080

Note: When using a source control system such as git, do not add the .env file to source control. Each environment requires a custom .env file. It is recommended you document the values expected in the .env file in the project README or a separate .env.sample file.

Now, update src/index.ts to use dotenv to configure the application server port value.

import dotenv from "dotenv";
import express from "express";
import path from "path";

// initialize configuration
dotenv.config();

// port is now available to the Node.js runtime 
// as if it were an environment variable
const port = process.env.SERVER_PORT;

const app = express();

// Configure Express to use EJS
app.set( "views", path.join( __dirname, "views" ) );
app.set( "view engine", "ejs" );

// define a route handler for the default home page
app.get( "/", ( req, res ) => {
    // render the index template
    res.render( "index" );
} );

// start the express server
app.listen( port, () => {
    // tslint:disable-next-line:no-console
    console.log( `server started at http://localhost:${ port }` );
} );

You will use the .env for much more configuration information as the project grows.

Easily Add Authentication to Node and Express

Adding user registration and login (authentication) to any application is not a trivial task. The good news is Okta makes this step very easy. To begin, create a free developer account with Okta. First, navigate to developer.okta.com and click the Create Free Account button, or click the Sign Up button.

Sign up for free account

After creating your account, click the Applications link at the top, and then click Add Application.

Create application

Next, choose a Web Application and click Next.

Create a web application

Enter a name for your application, such as Guitar Inventory. Verify the port number is the same as configured for your local web application. Then, click Done to finish creating the application.

Application settings

Copy and paste the following code into your .env file.

# Okta configuration
OKTA_ORG_URL=https://{yourOktaDomain}
OKTA_CLIENT_ID={yourClientId}
OKTA_CLIENT_SECRET={yourClientSecret}

In the Okta application console, click on your new application’s General tab, and find near the bottom of the page a section titled “Client Credentials.” Copy the Client ID and Client secret values and paste them into your .env file to replace {yourClientId} and {yourClientSecret}, respectively.

Client credentials

Enable self-service registration

One of the great features of Okta is allowing users of your application to sign up for an account. By default, this feature is disabled, but you can easily enable it. First, click on the Users menu and select Registration.

User registration

  1. Click on the Edit button.
  2. Change Self-service registration to Enabled.
  3. Click the Save button at the bottom of the form.

Enable self-registration

Secure your Node.js application

The last step to securing your Node.js application is to configure Express to use the Okta OpenId Connect (OIDC) middleware.

npm install @okta/oidc-middleware express-session
npm install --save-dev @types/express-session

Next, update your .env file to add a HOST_URL and SESSION_SECRET value. You may change the SESSION_SECRET value to any string you wish.

# Node.js server configuration
SERVER_PORT=8080
HOST_URL=http://localhost:8080
SESSION_SECRET=MySuperCoolAndAwesomeSecretForSigningSessionCookies

Create a folder under src named middleware. Add a file to the src/middleware folder named sessionAuth.ts. Add the following code to src/middleware/sessionAuth.ts.

import { ExpressOIDC } from "@okta/oidc-middleware";
import session from "express-session";

export const register = ( app: any ) => {
    // Create the OIDC client
    const oidc = new ExpressOIDC( {
        client_id: process.env.OKTA_CLIENT_ID,
        client_secret: process.env.OKTA_CLIENT_SECRET,
        issuer: `${ process.env.OKTA_ORG_URL }/oauth2/default`,
        redirect_uri: `${ process.env.HOST_URL }/authorization-code/callback`,
        scope: "openid profile"
    } );

    // Configure Express to use authentication sessions
    app.use( session( {
        resave: true,
        saveUninitialized: false,
        secret: process.env.SESSION_SECRET
    } ) );

    // Configure Express to use the OIDC client router
    app.use( oidc.router );

    // add the OIDC client to the app.locals
    app.locals.oidc = oidc;
};

At this point, if you are using a code editor like VS Code, you may see TypeScript complaining about the @okta/oidc-middleware module. At the time of this writing, this module does not yet have an official TypeScript declaration file. For now, create a file in the src folder named global.d.ts and add the following code.

declare module "@okta/oidc-middleware";

Refactor routes

As the application grows, you will add many more routes. It is a good idea to define all the routes in one area of the project. Make a new folder under src named routes. Add a new file to src/routes named index.ts. Then, add the following code to this new file.

import * as express from "express";

export const register = ( app: express.Application ) => {
    const oidc = app.locals.oidc;

    // define a route handler for the default home page
    app.get( "/", ( req: any, res ) => {
        res.render( "index" );
    } );

    // define a secure route handler for the login page that redirects to /guitars
    app.get( "/login", oidc.ensureAuthenticated(), ( req, res ) => {
        res.redirect( "/guitars" );
    } );

    // define a route to handle logout
    app.get( "/logout", ( req: any, res ) => {
        req.logout();
        res.redirect( "/" );
    } );

    // define a secure route handler for the guitars page
    app.get( "/guitars", oidc.ensureAuthenticated(), ( req: any, res ) => {
        res.render( "guitars" );
    } );
};

Next, update src/index.ts to use the sessionAuth and routes modules you created.

import dotenv from "dotenv";
import express from "express";
import path from "path";
import * as sessionAuth from "./middleware/sessionAuth";
import * as routes from "./routes";

// initialize configuration
dotenv.config();

// port is now available to the Node.js runtime
// as if it were an environment variable
const port = process.env.SERVER_PORT;

const app = express();

// Configure Express to use EJS
app.set( "views", path.join( __dirname, "views" ) );
app.set( "view engine", "ejs" );

// Configure session auth
sessionAuth.register( app );

// Configure routes
routes.register( app );

// start the express server
app.listen( port, () => {
    // tslint:disable-next-line:no-console
    console.log( `server started at http://localhost:${ port }` );
} );

Next, create a new file for the guitar list view template at src/views/guitars.ejs and enter the following HTML.

<!DOCTYPE html>
<html>
<head>
    <meta charset="utf-8" />
    <meta http-equiv="X-UA-Compatible" content="IE=edge">
    <title>Guitar Inventory</title>
    <meta name="viewport" content="width=device-width, initial-scale=1">
    <link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/materialize/1.0.0/css/materialize.min.css">
    <link rel="stylesheet" href="https://fonts.googleapis.com/icon?family=Material+Icons">
</head>
<body>
    <div class="container">
        <h1 class="header">Guitar Inventory</h1>
        <p>Your future list of guitars!</p>
    </div>
</body>
</html>

Finally, run the application.

npm run dev

Note: To verify authentication is working as expected, open a new browser or use a private/incognito browser window.

Click the Get Started button. If everything goes well, log in with your Okta account, and Okta should automatically redirect you back to the “Guitar List” page!

Okta login

Add a Navigation Menu to Your Node + TypeScript App

With authentication working, you can take advantage of the user profile information returned from Okta. The OIDC middleware automatically attaches a userContext object and an isAuthenticated() function to every request. This userContext has a userinfo property that contains information that looks like the following object.

{ 
  sub: '00abc12defg3hij4k5l6',
  name: 'First Last',
  locale: 'en-US',
  preferred_username: 'account@company.com',
  given_name: 'First',
  family_name: 'Last',
  zoneinfo: 'America/Los_Angeles',
  updated_at: 1539283620 
}

The first step is to get the user profile object and pass it to the views as data. Update src/routes/index.ts with the following code.

import * as express from "express";

export const register = ( app: express.Application ) => {
    const oidc = app.locals.oidc;

    // define a route handler for the default home page
    app.get( "/", ( req: any, res ) => {
        const user = req.userContext ? req.userContext.userinfo : null;
        res.render( "index", { isAuthenticated: req.isAuthenticated(), user } );
    } );

    // define a secure route handler for the login page that redirects to /guitars
    app.get( "/login", oidc.ensureAuthenticated(), ( req, res ) => {
        res.redirect( "/guitars" );
    } );

    // define a route to handle logout
    app.get( "/logout", ( req: any, res ) => {
        req.logout();
        res.redirect( "/" );
    } );

    // define a secure route handler for the guitars page
    app.get( "/guitars", oidc.ensureAuthenticated(), ( req: any, res ) => {
        const user = req.userContext ? req.userContext.userinfo : null;
        res.render( "guitars", { isAuthenticated: req.isAuthenticated(), user } );
    } );
};

Make a new folder under src/views named partials. Create a new file in this folder named nav.ejs. Add the following code to src/views/partials/nav.ejs.

<nav>
    <div class="nav-wrapper">
        <a href="/" class="brand-logo"><% if ( user ) { %><%= user.name %>'s <% } %>Guitar Inventory</a>
        <ul id="nav-mobile" class="right hide-on-med-and-down">
            <li><a href="/guitars">My Guitars</a></li>
            <% if ( isAuthenticated ) { %>
            <li><a href="/logout">Logout</a></li>
            <% } %>
            <% if ( !isAuthenticated ) { %>
            <li><a href="/login">Login</a></li>
            <% } %>
        </ul>
    </div>
</nav>

Modify the src/views/index.ejs and src/views/guitars.ejs files. Immediately following the <body> tag, insert the following code.

<body>
    <% include partials/nav %>

With these changes in place, your application now has a navigation menu at the top that changes based on the login status of the user.

Navigation

Create an API with Node and PostgreSQL

The next step is to add the API to the Guitar Inventory application. However, before moving on, you need a way to store data.

Create a PostgreSQL database

This tutorial uses PostgreSQL. To make things easier, use Docker to set up an instance of PostgreSQL. If you don’t already have Docker installed, you can follow the install guide.

Once you have Docker installed, run the following command to download the latest PostgreSQL container.

docker pull postgres:latest

Now, run this command to create an instance of a PostgreSQL database server. Feel free to change the administrator password value.

docker run -d --name guitar-db -p 5432:5432 -e 'POSTGRES_PASSWORD=p@ssw0rd42' postgres

Note: If you already have PostgreSQL installed locally, you will need to change the -p parameter to map port 5432 to a different port that does not conflict with your existing instance of PostgreSQL.

Here is a quick explanation of the previous Docker parameters.

  • -d – This launches the container in daemon mode, so it runs in the background.
  • --name – This gives your Docker container a friendly name, which is useful for stopping and starting containers.
  • -p – This maps the host (your computer) port 5432 to the container’s port 5432. PostgreSQL, by default, listens for connections on TCP port 5432.
  • -e – This sets an environment variable in the container. In this example, the administrator password is p@ssw0rd42. You can change this value to any password you desire.
  • postgres – This final parameter tells Docker to use the postgres image.

Note: If you restart your computer, you may need to restart the Docker container. You can do that using the docker start guitar-db command.

Install the PostgreSQL client module and type declarations using the following commands.

npm install pg pg-promise
npm install --save-dev @types/pg

Database configuration settings

Add the following settings to the end of the .env file.

# Postgres configuration
PGHOST=localhost
PGUSER=postgres
PGDATABASE=postgres
PGPASSWORD=p@ssw0rd42
PGPORT=5432

Note: If you changed the database administrator password, be sure to replace the default p@ssw0rd42 with that password in this file.

Add a database build script

You need a build script to initialize the PostgreSQL database. This script should read in a .pgsql file and execute the SQL commands against the local database.

In the tools folder, create two files: initdb.ts and initdb.pgsql. Copy and paste the following code into initdb.ts.
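The script reads the SQL commands from initdb.pgsql and runs them against the database configured in .env. As a rough sketch of what it could look like (illustrative, not necessarily the tutorial’s exact code, and assuming the pg-promise package installed above):

import dotenv from "dotenv";
import fs from "fs";
import path from "path";
import pgPromise from "pg-promise";

// Load the PG* connection settings from .env
dotenv.config();

const pgp = pgPromise();
const db = pgp( {
    host: process.env.PGHOST,
    port: Number( process.env.PGPORT ),
    database: process.env.PGDATABASE,
    user: process.env.PGUSER,
    password: process.env.PGPASSWORD
} );

// Read the schema SQL from tools/initdb.pgsql and execute it
const sql = fs.readFileSync( path.join( __dirname, "initdb.pgsql" ), "utf8" );

db.none( sql )
    .then( () => {
        console.log( "database initialized" );
        pgp.end();
    } )
    .catch( ( err ) => {
        console.error( err );
        pgp.end();
    } );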

The post How to Use TypeScript to Build a Node API with Express appeared first on SitePoint.


Source: Sitepoint

Elements To Ditch Or Repurpose On Mobile

Elements To Ditch Or Repurpose On Mobile

Elements To Ditch Or Repurpose On Mobile

Suzanne Scacca

2018-12-06T14:00:02+01:00
2018-12-06T13:01:57+00:00

With the end of the year quickly approaching, everyone is chiming in with predictions for 2019 web design trends. For the most part, I think these predictions look quite similar to the ones made for 2018 — which is surprising.

As we move deeper into the mobile-first territory, we can’t adhere to the same predictions that made sense for websites viewed on desktop. We, of course, can’t forget about the desktop experience, but it needs to take a backseat to mobile. This is why I wish 2019 predictions (and beyond) would be more practical in nature.

We need to design websites primarily with mobile users in mind, which means having a more efficient system of content delivery. Rather than spend the next year or so adding more design techniques to our repertoire, maybe we should be taking some away?

As the abstract expressionist painter Hans Hofmann said:

“The ability to simplify means to eliminate the unnecessary so that the necessary may speak.”

So, today, I’m going to talk about the mobile design elements we’ve held onto for a little too long and what you should do about them going forward.

Why Do We Need To Get Rid Of Mobile Design Elements In 2019?

Although responsive design and minimalism have inched us closer to the desired effect of mobile first, I don’t think they’ve taken us as far as we can go. And part of that is because we’re reluctant to let go of design elements that have been with us for a long time. They might seem essential, but I suspect that many of them can be removed from websites without harming the experience.

This is why: On desktop, there’s a lot of room to play with. Even if you don’t populate every inch of the screen with content, you find creative ways to use the space. With mobile, you’ve drastically reduced the real estate. One of the biggest side effects of this is the amount of scrolling that mobile visitors have to do.

Why does this matter?

A 2018 study from the Nielsen Norman Group on scrolling and attention demonstrates that many users (57%) don’t mind scrolling past the above-the-fold line. That said, 74% of all viewing time occurs within the first two screenfuls.


Percentage of views per scroll
NNG stats on how much content is consumed on a page (Source: Nielsen Norman Group) (Large preview)

If you try to fit all of those extraneous design elements from the traditional desktop experience into the mobile one, there’s a good chance your visitors won’t ever encounter them.

Although a longer scroll on mobile might be easy enough to execute, you might also find your visitors suffer from scrolling fatigue. My suggestion is to delete design elements on mobile that create excessive scrolls and, consequently, test visitors’ patience.

4 Mobile Design Elements You Should Ditch In 2019

If we’re not going to drastically change web design trends from 2018 to 2019, then I think now is a great time to clean up the mobile web experience. If you’re looking to increase time spent on site as well as your conversion rates, creating a sleeker and more efficient experience will greatly improve your mobile web designs.

In order to explain which mobile design elements you should ditch this year, I’m going to pit the desktop and mobile experiences against one another. This way, you’ll get a sense of why each element needs to go on mobile.

1. Sidebars

A sidebar has been a handy web design element for blogs and other news authorities for a long time. However, with responsive and mobile-first design taking over, the sidebar now tends to just get shoved to the very bottom of blog posts. But is that the best place for it?

The Blonde Abroad is an example of one that puts most of the sidebar content into the bottom of a post.

Here is how a post appears on desktop:


The Blonde Abroad on desktop
The Blonde Abroad blog sidebar on desktop (Source: The Blonde Abroad) (Large preview)

Note that this isn’t the end of the sidebar either. There are a number of other widgets below the ones shown in this screenshot. Which is why the mobile counterpart runs on way too long for this website:


The Blonde Abroad on mobile
The Blonde Abroad blog sidebar on mobile (Source: The Blonde Abroad) (Large preview)

What you’re seeing here isn’t a cool social media-centric page. This is what mobile users find after they scroll past:

  • Ads,
  • A promotion of her web store,
  • Recommended/related posts,
  • A subscriber form,
  • A comment form.

The Instagram feed then shows up, followed by the subscriber form once again! All in all, it takes about half of the page’s scrolls to get to the end of the content. The rest of the page is then filled with self-promotional material. It’s just way too much.

If Instagram is that prominent of a platform for her, she should have a link to it in the header. I would also suggest cutting back on the number of forms on the mobile web pages. Three forms (two of which are duplicates) is excessive. And I’d also probably recommend turning the recommended posts with images and titles into plain text links.

An example of an authority site that handles sidebars well is the MarketingSherpa blog. As you can see here, there is a fairly dense sidebar included in the desktop experience.


MarketingSherpa desktop sidebar
The MarketingSherpa desktop sidebar (Source: MarketingSherpa) (Large preview)

Turn your attention to mobile, however, and the sidebar disappears completely. Instead, you’ll encounter a super lightweight experience:


MarketingSherpa mobile sidebar
The MarketingSherpa mobile sidebar (Source: MarketingSherpa) (Large preview)

Below each post on the blog, you’ll find a succinct list of links recommended by the author. There is also a Previous/Next widget that enables readers to quickly move to the next published post. It’s a great way to keep readers moving through the site without having to make a mobile web page unnecessarily long.

2. Modal Pop-ups

I know that mobile pop-ups aren’t dying, at least so far as Google is concerned. But intrusive pop-ups aside, does the traditional pop-up have a place on mobile anymore? If we’re really thinking about ways to optimize the user experience, wouldn’t it make sense to do away with the modal altogether?

Here’s an example from Akamai that I’m shocked even exists:


A mobile pop-up on Akamai site
The top of a mobile pop-up on the Akamai website (Source: Akamai) (Large preview)

While perusing one of the internal pages of the mobile site, this pop-up appeared on my screen. At first, I thought, “Oh, cool! A pop-up with a graphic and statistic.” But then I read it and realized it was a scrolling pop-up!


A very long mobile pop-up on Akamai site
The bottom of a mobile pop-up on the Akamai website (Source: Akamai) (Large preview)

I’m honestly not sure I’ve seen one of these before, but I think it’s the perfect example of why modal mobile pop-ups are never a great idea. In addition to blocking the content of the site almost completely, the pop-up requires the visitor to do work in order to see the whole message.

I ran into another example of a bad pop-up. This one is on the Paul Mitchell website:


Duplicate pop-up promotion on Paul Mitchell
A Paul Mitchell pop-up matches the main header graphic (Source: Paul Mitchell) (Large preview)

I thought it was an odd choice to place the same promotion in both the pop-up and the scrolling hero image. This one, however, is easy enough to dismiss since it’s clear which is the pop-up and which is the image.

On mobile, it’s not that easy to distinguish:


Duplication of mobile pop-up on Paul Mitchell
A confusing duplication of a mobile ad or pop-up on the Paul Mitchell site (Source: Paul Mitchell) (Large preview)

If I hadn’t seen the matching pop-up on desktop, I likely would’ve thought this web page had an error upon first seeing the duplication. It also doesn’t help that the hero banner now has an arrow icon in a black box, which could easily be confused with the “X” that closes the matching pop-up.

It’s a very odd design choice and one I’d tell everyone else to stay away from. Not only does the pop-up appear instantly on the home page (which is a no-no), but it creates a confusing first impression. It might not be the traditional modal, but it still looks bad.

Switching gears, the Four Seasons website does a very nice job of handling its pop-ups. Here is the desktop pop-up widget:


Interactive pop-up widget on Four Seasons
An interactive pop-up offer for the Four Seasons (Source: Four Seasons) (Large preview)

Click on the pop-up, and it will open a full-screen pop-up offer. This is a nice touch as it gives the visitor full control over whether they want to see the pop-up or not.


Interactive pop-up widget expands on Four Seasons
An interactive pop-up offer is revealed for the Four Seasons (Source: Four Seasons) (Large preview)

The mobile pop-up counterpart does something similar:


Interactive pop-up on mobile Four Seasons
An interactive pop-up offer appears on the Four Seasons mobile site (Source: Four Seasons) (Large preview)

The pop-up offer sits snug against the header, never intruding on the experience of the mobile site.


Interactive pop-up expands on mobile Four Seasons
An interactive pop-up offer expands on the Four Seasons mobile site (Source: Four Seasons) (Large preview)

Even once the pop-up is clicked, it never blocks the mobile website from view. It only pushes the content down further on the page. It’s simply designed, easy to follow and gives all of the control over to the mobile user in terms of engagement. It’s a great design choice and one I’d like to see more mobile designers use when designing pop-up elements going forward.

3. Sticky Side Elements

I think a sticky navigation bar or bottom bar on a mobile website is a brilliant idea. As we’ve already seen, visitors are willing to scroll on a website. But visitors are more likely to scroll further down a page if they have an easy way to go somewhere else — to another page, to check out, to a special discount offer, etc.

That said, I’m not a fan of sticky elements on the side of mobile websites. On desktop, they work well. They’re typically tiny icons or widgets that stick to the side or bottom corner of the site. They’re boldly colored, easy to recognize and give visitors the choice of interacting when they’re ready.

On mobile, however, sticky side elements are a bad idea.

Let’s take a look at the Sofitel website, for example.


Sofitel Feedback widget
Feedback widget sticks to side of Sofitel desktop site (Source: Sofitel) (Large preview)

As you can see, there’s an orange “Feedback” button stuck to the left side of the screen. As you scroll down the page, it stays put, making it convenient for visitors to drop the developer a line if something goes wrong.

Here’s how that same button appears on mobile:


Sofitel Feedback widget on mobile
Feedback widget covers content on Sofitel mobile site (Source: Sofitel) (Large preview)

Although the “Feedback” button is not always blocking content, there are occasions where it overlaps an image or text as a user scrolls. It might seem like a minor inconvenience, but it could easily be what takes a visitor from feeling annoyed or frustrated with a website to feeling completely over it.

Wreaths Across America is another example of a sticky element getting in the way. On desktop, the blue live chat widget is well-placed.


Wreaths Across America live chat
Wreaths Across America includes live chat widget on every page (Source: Wreaths Across America) (Large preview)

Then, move it over to mobile, and the live chat continuously covers a decent amount of content residing in the bottom-right corner.


Wreaths Across America live chat on mobile
Wreaths Across America live chat covers mobile content (Source: Wreaths Across America) (Large preview)

If your visitors aren’t actively engaging with live chat or other sticky side elements on mobile (and your statistics should tell you this), don’t leave them there. Or, at the very least, present an easy way to dismiss them.

One way to get around the sticky overlap issue is the solution BuzzFeed has chosen.

In recent years, many websites utilized floating and sticky social media icons. It was a logical choice as you never knew how long it would take for readers to decide that they just had to share your web page or post with their social media connections.


Sticky social and share icons on BuzzFeed
BuzzFeed sticks social media and sharing icons to the bottom of the screen (Source: BuzzFeed) (Large preview)

As we’ve seen with the live chat and Feedback widgets above, elements that stick to the side of the screen just don’t work on mobile. Instead, we should look to what BuzzFeed has done here and make those icons stick flush with the bottom of the screen.

We already know that sticky navigation and bottom bars stay out of the way of content, so let’s use these key areas of the mobile device to place sticky elements we want people to engage with.

4. Content

It’s not just these extraneous design elements or outliers that you should think about removing on the mobile experience. I believe there are times when content itself doesn’t need to be there.

If you want to get visitors to the crux of your message in just a few scrolls, you can’t be afraid to cut out content that isn’t 100% necessary.

I think ads are one of the worst offenders of this. TechRepublic has a particularly nasty example of this — both for desktop and mobile.


Oversized banner ad on TechRepublic
An oversized banner ad on the TechRepublic desktop site (Source: TechRepublic) (Large preview)

This is what the TechRepublic desktop website looks like when you first visit it. That alone is horrendous. Why does anyone use ad banners above the header anymore? And why does this one have to be so large? Shouldn’t TechRepublic’s logo and navigation be the first thing people see?

It was my hope that, upon visiting the mobile site, the ad would’ve gone away. Sadly, that wasn’t the case.


Oversized banner ad on TechRepublic mobile
An oversized banner ad on the TechRepublic mobile site (Source: TechRepublic) (Large preview)

What we have here is a Best Buy ad that takes up roughly a third of the TechRepublic mobile home page. Sure, once a visitor scrolls down, it will go away. But where do you think visitors’ eyes will go to first? I’m willing to bet some of them will see the logo in the top left and wonder how the heck they ended up on the Best Buy website.

This is one of those times when it’s best to rethink your monetization strategy if it’s going to intrude and confuse the mobile user’s experience.

Now, let’s look at the good.

Kohl’s has a pretty standard product page for an e-commerce website:


Kohl’s desktop products
Kohl’s product page on desktop (Source: Kohl’s) (Large preview)

When displayed on mobile, however, you’ll find that the product views disappear:


Kohl’s mobile product views
Kohl’s product page on mobile (Source: Kohl’s) (Large preview)

Instead of trying to make room for them, the different product views are hidden under a slider. This is a nice choice if you would prefer not to compromise on how much content is displayed — especially if it’s essential to selling the product.

Another great example of picking and choosing your battles when it comes to displaying content on mobile comes from The Blonde Abroad.

Readers of her blog can choose content based on the global destination, as shown here on the desktop website:


The Blonde Abroad desktop search
The Blonde Abroad includes a searchable map on desktop (Source: The Blonde Abroad) (Large preview)

It’s a pretty neat search function, especially since it places the content within the context of an actual map.

Rather than try to force a graphic like this to fit to mobile, The Blonde Abroad includes only the essentials needed to conduct a search:


The Blonde Abroad mobile search
The Blonde Abroad includes only the standard search on mobile (Source: The Blonde Abroad) (Large preview)

While mobile readers might miss out on the mapped content, this provides a much more streamlined experience. Mobile users don’t want to have to scroll left and right, up and down, in order to search for content from an oversized graphic. At its core, this section of the site is about search. And, on mobile, this clean presentation of search options is enough to impress readers and inspire them to read more.

Wrapping Up

In Stephen King’s guide to writing, On Writing, he says something to this effect:

“Create your content. Then, review it and delete 10% of what you created.”

Granted, this applies to writing a story, but I believe the same logic applies to designing a mobile website. In other words: why test your visitors’ patience or, even worse, create such a cumbersome experience that they miss the most important parts of it? Go ahead and translate the idea you had for the traditional desktop landscape into a mobile setting. Then, review it on mobile and gut all of the unnecessary bits of content or design elements.

Smashing Editorial
(ra, yk, il)


Source: Smashing Magazine

What Is The Role Of Creativity In UX Design?

What Is The Role Of Creativity In UX Design?

What Is The Role Of Creativity In UX Design?

Susan Weinschenk

2018-12-06T10:00:41+01:00
2018-12-06T13:01:57+00:00

(This article is kindly sponsored by Adobe.) You are working on a project for your client, designing the interface for a new application. There have been lots of meetings about the new product, and now it’s time for you to start working on sketching and prototyping a design.

The screens, pages, and forms you are about to create have to fit within the desires and constraints of several players — the marketing department, the developers, the business owner. Some questions start to form as you work on the interface:

“How creative should I be/am I expected to be with this design? Is my role to implement the vision that someone else has come up with? Should I be taking the ideas and constraints and creating my own vision? How much can I stray from the ideas I’ve been given?”

You speak to your main contact on the project and she says:

“Go for it, be creative. Let’s see what you come up with.”

You’re excited to be given a free hand, but now you have to figure out what it means to be creative with UX design and how to go about “being creative”. Is creativity something you can just turn on? Is it a process you follow? Are some people just creative and others aren’t? And if so, which one are you?

Let’s explore.

What Is Creativity?

Creativity can mean a lot of things to a lot of people. The definition I find the most useful is:

“Creativity is a process that results in outcomes that are original and of value.”

This definition has several implications:

  • Process
    It’s not just the end result that defines whether someone is creative. In order to be creative the assumption is that you have followed a particular process. We’ll discuss the process a little later in this post.
  • Outcomes
    Although process is important, process alone is not enough. In order to claim creativity, you have to have something at the end of the process. You have to have an outcome.
  • Original and of value
    The outcome that you have at the end of the process has to be unique and be of some value to someone.

So what is this creative process? In order to know what process would result in creativity, you first have to understand how the brain works in terms of solving problems or coming up with new ideas.

The Brain Science Of Creativity

A popular idea about creativity is that creativity happens in the “right brain”. That’s actually not accurate. There is new and interesting research on what happens in the brain when people are being creative.

Both the left and right half of the brain are involved in creativity. In fact, there is no one area of the brain where creativity happens. Instead, there are three brain “networks” that are involved in creativity. A network is a collection of different parts of the brain that work together.


Both the left and right half of the brain are involved in creativity.
(Large preview)

The Executive Attention Network

The Executive Attention Network refers to brain activity when you are focused on identifying and solving a problem, or deciding what you need to be creative about.

“What is the best way to design this form so that people who just want the default selections aren’t distracted, but also so that when someone needs one of the exceptions they can find what they need to fill out the form?”

When you stare at your screen and think of the above, you are using the Executive Attention Network. Once you have focused on the problem you want to solve, or the creative idea you want to work on, the next network that gets to work is called the Imagination Network.

The Imagination Network

The Imagination Network works in a mostly unconscious way. It reviews your knowledge and memories, and then runs simulations of possible ways to create what it is you set your intention to create with the executive attention network.

The Salience Network

Lastly there is the Salience Network. This is also a largely unconscious process. The Salience Network monitors the activity in the Imagination Network, and decides what to pick out and bring to your conscious awareness. This is when you have an “Ah-ha!” moment and say to yourself, “Oh, I know, I could try…”

According to Scott Barry Kaufman, these three networks, all working together, are how our brains normally work to solve problems or create (art, music, screens, writing, and so on).

Putting Your Creativity To Work

Given the way the brain works, there are things you can do to help it be more creative. Here’s a list of four practical ideas that may sound like common sense, but they go further than that: they help you work with the three networks.

1. Clearly Identify Your Intention

Whether it is a problem you want to solve or a creative idea you want to work on, state and/or write down exactly what you are working on. For example:

  • How should I organize the data on this screen so that people can easily find what they need?
  • What would be a good color to use for the hover on this navigation bar?
  • Is there a container object I should consider so that the conceptual model of this design is clear?
  • Is there a way to show this data visually rather than just having it in a table?

By stating clearly what you want to be creative about, you effectively engage the Executive Attention Network.

2. Take A Break

You’ve clearly stated your intention and your Imagination Network is ready to go to work. The next thing you should do is take some time off from that question/issue. In fact, you should take some time off from any intense mental activity.

If your Executive Attention Network is constantly engaged then it is hard for your Imagination Network to do its work. If possible, go do something that doesn’t require much thought. Go to lunch, go for a walk, return some phone calls; anything that will free up your brain is a good idea. Ideally, you would actually take a nap or go to sleep for the night.


Take a break
(Large preview)

3. Record Your Ah-ha! Ideas

Your Salience Network will eventually start sending ideas to your conscious brain, and you need to be ready. Inspiration may arrive at any time, so be sure to be ready. Keep a pen and paper handy everywhere you go, or a phone near you with your voice recording app easy to access.

You can even buy waterproof paper and pencils to keep in your shower (I do). When you get an idea, write it down. Don’t assume that good ideas will come around again. Capture them when they appear.



(Large preview)

4. Switch Between The High Level And The Detail

Research shows that one of the hallmarks of a creative person is their ability to zoom in and out, from the high level to the detail. I have found that to be true of UX Designers as well.

The best and most creative UX Designers that I know are able to think at a conceptual level about a design and then start sketching a detail screen level the next moment. Then they think about the high-level implication of what they just sketched, and then they swoop back down to the detail level.

If you are not used to doing this, then practice. Let’s say you are going to work on a design for an hour. Set a timer for ten minutes and start with some planning and thinking about the problem/solution at hand on a high level.

When the ten minutes are up, do some detail sketching of one detailed part of what you were thinking about. After ten more minutes, stop working on the details and pull back and think about how what you just worked on fits with the larger structure.

Go back and forth every ten minutes. If you are not used to working this way, it may be hard at first, but try it and see if you find that it actually helps your creativity.

I’ve talked to some UX designers who think that working this way means they are disorganized, or that they haven’t done enough high-level work upfront. Upfront high-level work can be very important, but you need to allow yourself the freedom to zoom from high level to detail throughout the creative design process.

Some designers think that the “right” way to design is to go through the design process in a step-by-step way, going from the large picture (macro-design) to the micro-level. But the best designs are the ones that allow you to go back and forth between macro and micro as you need to.

Practice moving back and forth from a high-level macro view to a low-level micro view. The practice will help you to see the relationships between macro and micro and will also help you improve as a designer.

Encouraging A Creative Mindset

Two more ideas to encourage a creative mindset:

  1. Don’t be fooled into thinking that following a process is not being creative.
    A good process helps you be creative. A good process takes care of details, helps you think things through, and helps you set your intention. Most creative UX people follow a process.
  2. Don’t worry about constraints.
    Some people think that having constraints means they can’t be creative. The research shows that people are more creative when there are constraints. There is a lot of research on this topic, but an example study is one by Brent Rosso who concludes from the research that:

“Teams experiencing the right kinds of constraints in the right environments, and which saw opportunity in constraints, benefitted creatively from them. The results of this research challenge the assumption that constraints kill creativity, demonstrating instead that for teams able to accept and embrace them, there is freedom in constraint.”

Source: Creativity and Constraints: Exploring the Role of Constraints in the Creative Processes of Research and Development Teams (Organization Studies, 2014, Vol. 35(4) 551–585)

Of course, being too tightly constrained can stifle creativity, but most of the time the constraints we have (colors we can or can’t use, standards and guidelines we need to follow, fonts we have to use, technology considerations we have to align with, deadlines about when we have to have the prototype done) can actually help us be more creative because they engage the brain networks.

The best thing to do with constraints is to be very clear with them when you are setting your intention with the Executive Attention Network. For example, let’s say you have to come up with a prototype for an app, but you only have three days. When you set your intention with the Executive Attention Network, don’t forget to explicitly remind yourself you only have three days. Other constraints might have to do with the technology you can or can’t use, the size of the screen, the number of pages or screens, and so on.

I hope these ideas have sparked some creativity for you. The next time you start on a new project, set an intention for the next part of your project, then go for a walk, and bring a small pad of paper and a pen with you. You will be amazed at what happens next.

This article is part of the UX design series sponsored by Adobe. Adobe XD tool is made for a fast and fluid UX design process, as it lets you go from idea to prototype faster. Design, prototype and share — all in one app. You can check out more inspiring projects created with Adobe XD on Behance, and also sign up for the Adobe experience design newsletter to stay updated and informed on the latest trends and insights for UX/UI design.

Smashing Editorial
(cm, ms, yk, il)


Source: Smashing Magazine

How to Debug Python Errors

This article was originally published on Sentry. Thank you for supporting the partners who make SitePoint possible.

One of Sentry’s more powerful features comes with languages like Python. In Python, PHP, and the JVM, we’re able to introspect more deeply into the runtime and give you additional data about each frame in your call stack. At a high level, this lets you know things like calling arguments to your functions, allowing you to more easily reproduce and understand an error. Let’s dive into what this looks like, and how it works under the hood.

We’ll start with what a Python error might typically look like in your terminal, or in a standard logging system:

TypeError: expected string or buffer
  File "sentry/stacktraces.py", line 309, in process_single_stacktrace
    processable_frame, processing_task)
  File "sentry/lang/native/plugin.py", line 196, in process_frame
    in_app = (in_app and not self.sym.is_internal_function(raw_frame.get('function')))
  File "sentry/lang/native/symbolizer.py", line 278, in is_internal_function
    return _internal_function_re.search(function) is not None

While this gives us an idea of the type and location of the error, it unfortunately doesn’t help us understand what is truly causing it. It’s possible that it’s passing an integer or a NoneType, but, realistically, it could be any number of things. Guessing at it will only get us so far, and we really need to know what function actually is.

An easy and often very accessible option to do this would be to add some logging. There are a few different entry points where we could put logging, which makes it easier to implement. It also lets us ensure we specifically get the answer we want. For example, we want to figure out what the type of function is:

import logging
# ...
logging.debug("function is of type %s", type(function))

Another benefit of logging like this is that it could carry over to production. The downside is that you generally don’t record DEBUG-level log statements in production, as the volume can be significant and not very useful. It also often requires you to plan ahead to ensure you’re logging the various failure cases that could happen, along with the appropriate context for each.

For the sake of this tutorial, let’s assume we can’t do this in production, didn’t plan ahead for it, and, instead, are trying to debug and reproduce this in development.

The Python Debugger

The Python Debugger (PDB) is a tool that allows you to step through your callstack using breakpoints. The tool itself is inspired by GNU’s Debugger (GDB) and, while powerful, often can be overwhelming if you’re not familiar with it. It’s definitely an experience that will get easier with repetition, and we’re only going to go into a few high level concepts with it for our example.

So the first thing we’d want to do is instrument our code to add a breakpoint. In our example above, we could actually instrument symbolizer.py ourselves. That’s not always the case, as sometimes the exception happens in third party code. No matter where you instrument it, you’ll still be able to jump up and down the stack. Let’s start by changing that code:

def is_internal_function(self, function):
    # add a breakpoint for PDB
    try:
        return _internal_function_re.search(function) is not None
    except Exception:
        import pdb; pdb.set_trace()
        raise

We’re limiting this to the exception, as it’s common you’ll have code that runs successfully most of the time, sometimes in a loop, and you wouldn’t want to pause execution on every single iteration.

Once we’ve hit this breakpoint (which is what set_trace() is registering), we’ll be dropped into a special shell-like environment:

# ...
(Pdb)

This is the PDB console, and it works similar to the Python shell. In addition to being able to run most Python code, we’re also executing under a specific context within our call stack. That location is the entry point. Rather, it’s where you called set_trace(). In the above example, we’re right where we need to be, so we can easily grab the type of function:

(Pdb) type(function)
<type 'NoneType'>

Of course, we could also simply grab all locally scoped variables using one of Python’s builtins:

(Pdb) locals()
{..., 'function': None, ...}

In some cases we may have to navigate up and down the stack to get to the frame where the function is executing. For example, if our set_trace() instrumentation had dropped us higher up in the stack, potentially at the top frame, we would use down to jump into the inner frames until we hit a location that had the information needed:

(Pdb) down
-> in_app = (in_app and not self.sym.is_internal_function(raw_frame.get('function')))
(Pdb) down
-> return _internal_function_re.search(function) is not None
(Pdb) type(function)
<type 'NoneType'>

So we’ve identified the problem: function is a NoneType. While that doesn’t really tell us why it’s that way, it at least gives us valuable information to speed up our test cases.

Debugging in Production

So PDB works great in development, but what about production? Let’s look a bit more deeply at what Python gives us to answer that question.

The great thing about the CPython runtime — that’s the standard runtime most people use — is it allows easy access to the current call stack. While some other runtimes (like PyPy) will provide similar information, it’s not guaranteed. When you hit an exception the stack is exposed via sys.exc_info(). Let’s look at what that gives us for a typical exception:

>>> try:
...   1 / 0
... except:
...   import sys; sys.exc_info()
...
(<type 'exceptions.ZeroDivisionError'>,
    ZeroDivisionError('integer division or modulo by zero',),
    <traceback object at 0x105da1a28>)

We’re going to avoid going too deep into this, but we’ve got a tuple of three pieces of information: the class of the exception, the actual instance of the exception, and a traceback object. The bit we care about here is the traceback object. Of note, you can also grab this information outside of exceptions using the traceback module. There’s some documentation on using these structures, but let’s just dive in ourselves to try and understand them. Within the traceback object we’ve got a bunch of information available to us, though it’s going to take a bit of work — and magic — to access:

>>> exc_type, exc_value, tb = exc_info
>>> tb.tb_frame
<frame object at 0x105dc0e10>

Once we’ve got a frame, CPython exposes ways to get stack locals — that is all scoped variables to that executing frame. For example, look at the following code:

The post How to Debug Python Errors appeared first on SitePoint.


Source: Sitepoint

Caching Smartly In The Age Of Gutenberg

Caching Smartly In The Age Of Gutenberg

Caching Smartly In The Age Of Gutenberg

Leonardo Losoviz

2018-12-05T13:00:15+01:00
2018-12-05T12:32:20+00:00

Caching is needed for speeding up a site: instead of having the server dynamically create the HTML output for each request, it can create the HTML only after it is requested the first time, cache it, and serve the cached version from then on. Caching delivers a faster response, and frees up resources in the server. When optimizing the speed of our sites from the server side, caching ranks among the most critical tasks to get right.

When generating the HTML output for the page, if it contains code with user state, such as printing a welcome message “Hello {{User name}}!” for the logged in user, then the page cannot be cached. Otherwise, if Peter visits the site first, and the HTML output is cached, all users would then be welcomed with “Hello Peter!”

Hence, caching plugins, such as those available for WordPress, will generally offer to disable caching when the user is logged in, as shown below for plugin WP Super Cache:


Disabled caching for known users in WP Super Cache
WP Super Cache recommends to disable caching for logged in users. (Large preview)

Disabling caching for logged in users is undesirable and should be avoided, because even if the amount of HTML code with user state is minimal compared to the static content in the page, still nothing will be cached. The reason is that the entity to be cached is the page, and not the particular pieces of HTML code within the page; if the page includes a single line of code which cannot be cached, then nothing will be cached. It is an all-or-nothing situation.

To address this, we can architect our application to avoid rendering HTML code with user state on the server-side, and render it on the client-side only, after fetching its required data through an API (often based on REST or GraphQL). By removing user state from code rendered on the server, that page can then be cached, even if the user is logged in.

In this article, we will explore the following issues:

  • How do we identify those sections of code that require user state, isolate them from the page, and make them be rendered on the client-side only?
  • How can it be implemented for WordPress sites through Gutenberg?

Gutenberg Is Bringing Components To WordPress

As I explained in my previous article Implications of thinking in blocks instead of blobs, Gutenberg is a JavaScript-based editor for WordPress (more specifically, it is a React-based editor, encapsulating the React libraries behind the global wp object), slated for release in either November 2018 or January 2019. Through its drag-and-drop interface, Gutenberg will utterly transform the experience of creating content for WordPress and, at some later stage in the future, the process of building sites, switching from the current creation of a page through templates (header.php, index.php, sidebar.php, footer.php), and the content of the page through a single blob of HTML code, to creating components to be placed anywhere on the page, which can control their own logic, load their own data, and self-render.

To appreciate the upcoming change visually, WordPress is moving from this:


The page contains templates with HTML code
Currently pages are built through PHP templates. (Large preview)

To this:


The page contains autonomous components
In the near future, pages will be built by placing self-rendering components in them. (Large preview)

Even though Gutenberg as a site builder is not ready yet, we can already think in terms of components when designing the architecture of our site. As for the topic of this article, architecting our application using components as the unit for building the page can help implement an enhanced caching strategy, as we shall see below.

Evaluating The Relationship Between Pages And Components

As mentioned earlier, the entity being cached is the page. Hence, we need to evaluate how components will be placed on the page as to maximize the page’s cacheability. Based on their dependence on user state, we can broadly categorize pages into the following 3 groups:

  1. Pages without any user state, such as “Who we are” page.
  2. Pages with bits and pieces of user state, such as the homepage when welcoming the user (“Welcome Peter!”), or an archive page with a list of posts, showing a “Like” button under each post which is painted blue if the logged in user has liked that post.
  3. Pages naturally with user state, in which content depends directly on the logged in user, such as the “My posts” or “Edit my profile” pages.

Components, on the other hand, can simply be categorized as requiring user state or not. Because the architecture considers the component as the unit for building the page, the component is able to know whether it requires user state. Hence, a <WelcomeUser /> component, which renders “Welcome Peter!”, knows it requires user state, while a <WhoWeAre /> component knows that it does not.

Next, we need to place components on the page, and depending on the combination of page and component requiring user state or not, we can establish a proper strategy for caching the page and for rendering content to the user as soon as possible. We have the following cases:

1. Pages Without Any User State

These can be cached with no issues.

  • Page is cached => It can’t access user state.
  • Components, none of them requiring user state, are rendered in the server.

Page without user state
A page without user state can only contain components without user state. (Large preview)

2. Pages With Bits And Pieces Of User State

We could make the page either require user state or not. If we make the page require user state, then it cannot be cached, which is a wasted opportunity when most of the content in the page is static. Hence, we’d rather make the page not require user state, and those components requiring user state which are placed on the page, such as <WelcomeUser /> on the homepage, are lazy-loaded: the server side renders an empty shell, and the component is rendered on the client side instead, after getting its data from an API.

Following this approach, all static content in the page will be rendered immediately through server-side rendering (SSR), and those bits and pieces with user state after some delay through client-side rendering (CSR).

  • Page is cached => It can’t access user state.
  • Components not requiring user state are rendered in the server.
  • Components requiring user state are rendered in the client.

Page with bits of user state
A page with bits of user state contains CSR components with user state, and SSR components without user state. (Large preview)

3. Pages Naturally With User State

If the library or framework only enables client-side rendering, then we must follow the same approach as with #2: do not make the page require user state, and add a component, such as <MyPosts />, to self-render in the client.

However, since the main objective of the page is to show user content, making the user wait for this content to be loaded on a 2nd stage is not ideal. Let’s see this with an example: a user who has not logged in yet accesses page “Edit my profile”. If the site renders the content in the server, since the user is not logged in the server will immediately redirect to the login page. Instead, if the content is rendered in the client through an API, the user will first be presented a loading message, and only after the response from the API is back will the user be redirected to the login page, making the experience slower.

Hence, we are better off using a library or framework that supports server-side rendering, and we make the page require user state (making it non-cacheable):

  • Page is not cached => It can access user state.
  • Components, both requiring and not requiring user state, are rendered in the server.

Page with user state
A page with user state contains SSR components both with and without user state. (Large preview)

From this strategy and all the combinations it produces, deciding if a component must be rendered server or client-side simply boils down to the following pseudo-code:

if (component requires user state and page can’t access user state) {
    render component in client
}
else {
    render component in server
}

This strategy allows us to attain our objective: implemented for all pages on the site and all components placed on each page, and with the site configured not to cache pages which access user state, we can avoid disabling caching for any page whenever the user is logged in.

Rendering Components Client/Server-Side Through Gutenberg

In Gutenberg, those components which can be embedded on the page are called “blocks” (or also Gutenblocks). Gutenberg supports two types of blocks, static and dynamic:

  • Static blocks produce their HTML code already in the client (when the user is interacting with the editor) and save it inside the post content. Hence, they are client-side JavaScript-based blocks.
  • Dynamic blocks, on the other hand, are those which can change their content dynamically, such as a latest posts block, so they cannot save the HTML output inside the post content. Hence, in addition to creating their HTML code on the client-side, they must also produce it from the server on runtime through a PHP function (which is defined under parameter render_callback when registering the block in the backend through function register_block_type.)

Because HTML code with user state cannot be saved in the post’s content, a block dealing with user state will necessarily be a dynamic block. In summary, through dynamic blocks we can produce the HTML for a component both on the server and the client side, enabling us to implement our optimized caching strategy. The previous pseudo-code, when using Gutenberg, will look like this:

if (block requires user state and page can’t access user state) {
    render block in client through JavaScript
}
else {
    render (dynamic) block in server through PHP code
}
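To make this concrete, here is a minimal sketch of how a dynamic block could be registered through the global wp object (the block name and placeholder markup are hypothetical, not taken from the article). Because save returns null, no HTML is stored in the post content, so the output has to be produced at runtime, either by the PHP render_callback or by client-side JavaScript:

// Minimal sketch of registering a dynamic block via the global wp object
declare const wp: any;

wp.blocks.registerBlockType( "myplugin/welcome-user", {
    title: "Welcome User",
    category: "widgets",
    // What the author sees inside the editor
    edit: () => wp.element.createElement( "p", null, "Welcome {user}!" ),
    // Returning null means no HTML is saved in the post content:
    // the block's output is produced at runtime instead
    save: () => null
} );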

Unfortunately, implementing the dual client/server-side functionality doesn’t come without hardship: Gutenberg’s SSR is not isomorphic, i.e. it does not allow a single codebase to produce the output for both client and server-side code. Hence, developers would need to maintain two codebases, one in PHP and one in JavaScript, which is far from optimal.

Gutenberg also implements a <ServerSideRender /> component; however, the documentation advises against using it: this component was not intended to improve the speed of the site or render an immediate response to the user, but to provide compatibility with legacy code, such as shortcodes.

As it is explained in the documentation:

“ServerSideRender should be regarded as a fallback or legacy mechanism, it is not appropriate for developing new features against.

“New blocks should be built in conjunction with any necessary REST API endpoints, so that JavaScript can be used for rendering client-side in the edit function. This gives the best user experience, instead of relying on using the PHP render_callback. The logic necessary for rendering should be included in the endpoint, so that both the client-side JavaScript and server-side PHP logic should require a minimal amount of differences.”
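Following that advice, the client-side rendering of a user-state block can boil down to a small script that fills a server-rendered placeholder with data fetched from a REST endpoint. Here is a minimal sketch, assuming a hypothetical endpoint /wp-json/myplugin/v1/me that returns the logged in user’s name:

// Minimal sketch: fill an empty, server-rendered placeholder on the client,
// using a hypothetical REST endpoint that returns { name: string }
document.addEventListener( "DOMContentLoaded", async () => {
    const placeholder = document.querySelector<HTMLElement>( "[data-block='myplugin/welcome-user']" );
    if ( !placeholder ) {
        return;
    }
    const response = await fetch( "/wp-json/myplugin/v1/me", { credentials: "same-origin" } );
    if ( !response.ok ) {
        return;
    }
    const user: { name: string } = await response.json();
    placeholder.textContent = `Welcome ${ user.name }!`;
} );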

As a result, when building our sites, we will need to decide whether to implement SSR, which boosts the site’s speed by enabling an optimal caching experience and by providing an immediate response to the user when loading the page, but which comes at the cost of maintaining two codebases. Depending on the context, it may or may not be worth it.

Configuring What Pages Require User State

Pages requiring (or accessing) user state will be made non-cacheable, while all other pages will be cacheable. Hence, we need to identify which pages require user state. Please notice that this applies only to pages, and not to REST endpoints, since the goal is to render the component already on the server when accessing the page, while calling the WP REST API’s endpoints implies getting the data for rendering the component on the client. Hence, from the perspective of our caching strategy, we can assume all REST endpoints will require user state, and so they don’t need to be cached.

To identify which pages require user state, we simply create a function get_pages_with_user_state, like this:

function get_pages_with_user_state() {

    return apply_filters(
        'get_pages_with_user_state',
        array()
    );
}

Upon which we implement hooks with the corresponding pages, like this:

// ID of the pages, retrieved from the WordPress admin
define ('MYPOSTS_PAGEID', 5);
define ('ADDPOST_PAGEID', 8);

add_filter('get_pages_with_user_state', 'get_pages_with_user_state_impl');
function get_pages_with_user_state_impl($pages) {
    
  $pages[] = MYPOSTS_PAGEID;

  // "Add Post" may not require user state!
  // $pages[] = ADDPOST_PAGEID;
    
  return $pages;
}

Please notice how we may not need to add user state for the “Add Post” page (making this page cacheable), even though this page requires validating that the user is logged in when submitting the form to create content on the site. This is because the “Add Post” page may simply display an empty form, requiring no user state whatsoever. Then, submitting the form will be a POST operation, which cannot be cached in any case (only GET requests are cached).

Disabling Caching Of Pages With User State In WP Super Cache

Finally, we configure our application to disable caching for those pages which require user state (and cache everything else). We will do this for the WP Super Cache plugin by blacklisting the URIs of those pages in the plugin’s settings page:


We can disable caching of URLs containing specific strings in WP Super Cache.

What we need to do is create a script that obtains the paths of all pages with user state and saves them in the corresponding input field. This script can then be invoked manually, or automatically as part of the application’s deployment process.

First we obtain all the URIs for the pages with user state:

function get_rejected_strings() {

  $rejected_strings = array();
  $pages_with_user_state = get_pages_with_user_state();
  foreach ($pages_with_user_state as $page) {

      // Strip the home URL from the page's permalink to get its relative path,
      // and add it to the list of rejected strings
      $path = substr(get_permalink($page), strlen(home_url()));
      $rejected_strings[] = $path;
  }

  return $rejected_strings;
}

Then we add the rejected strings to WP Super Cache’s configuration file, located at wp-content/wp-cache-config.php, updating the value of the $cache_rejected_uri entry with our list of paths:

function set_rejected_strings_in_wp_super_cache() {

  if ($rejected_strings = get_rejected_strings()) {

    // Keep the plugin's default rejected strings in the list
    $rejected_strings = array_merge(
      array('wp-.*\.php', 'index\.php'),
      $rejected_strings
    );
      
    global $wp_cache_config_file;
    $cache_rejected_uri = "array('".implode("', '", $rejected_strings)."')";
    // Escape the dollar signs so PHP doesn't interpolate the variable
    // and the pattern matches the literal "$cache_rejected_uri" entry
    wp_cache_replace_line('^ *\$cache_rejected_uri', "\$cache_rejected_uri = " . $cache_rejected_uri . ";", $wp_cache_config_file);
  }
}

Upon executing the function set_rejected_strings_in_wp_super_cache, the settings are updated with the new rejected strings:


Blacklisting the paths of pages accessing user state in WP Super Cache.

Finally, because we can now disable caching for the specific pages that require user state, there is no need to disable caching for logged-in users anymore:


No need to disable caching for logged-in users anymore!

That’s it!

Conclusion

In this article, we explored a way to enhance our site’s caching, mainly aimed at enabling caching even when users are logged in. The strategy relies on disabling caching only for those pages which require user state, and on using components that can decide whether to be rendered on the client or on the server side, depending on whether the page accesses user state.

As a general concept, the strategy can be implemented on any architecture that supports server-side rendering of components. In particular, we analyzed how it can be implemented for WordPress sites through Gutenberg, advising readers to assess whether it is worth the trouble of maintaining two codebases, one in PHP for the server side and one in JavaScript for the client side.

Finally, we explained how the solution can be integrated into a caching plugin through a custom script that automatically produces the list of pages to exclude from caching, and provided the code for the WP Super Cache plugin.

After implementing this strategy on my site, it no longer matters whether visitors are logged in or not: they will always access a cached version of the homepage, getting a faster response and a better user experience.



Source: Smashing Magazine

How to Create and Verify JWTs with Node

This article was originally published on the Okta developer blog. Thank you for supporting the partners who make SitePoint possible.

Authentication on the internet has evolved quite a bit over the years. There are many ways to do it, but what worked well enough in the 90s doesn’t quite cut it today. In this tutorial, I’ll briefly cover some older, simpler forms of authentication, then show you a more modern and more secure approach. By the end of this post, you’ll be able to create and verify JWTs yourself in Node. I’ll also show you how you can leverage Okta to do it all for you behind the scenes.

Traditionally, the simplest way to do authorization is with a username and password. This is called Basic Authorization and is done by just sending username:password as an encoded string that can be decoded by anybody looking. You could think of that string as a “token”. The problem is, you’re sending your password with every request. You could also send your username and password a single time, and let the server create a session ID for you. The client would then send that ID along with every request instead of a username and password. This method works as well, but it can be a hassle for the server to store and maintain all those sessions, especially for large sets of users.
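
To see how little protection that encoding offers, here is a quick Node sketch (not part of the original tutorial) that builds and then decodes a Basic Authorization header, using the demo credentials that appear later in this post:

// Basic auth is just "username:password" run through Base64 -- encoding, not encryption
const encoded = Buffer.from('AzureDiamond:hunter2').toString('base64')
console.log(`Authorization: Basic ${encoded}`)
// Authorization: Basic QXp1cmVEaWFtb25kOmh1bnRlcjI=

// Anyone who sees the header can reverse it just as easily
console.log(Buffer.from(encoded, 'base64').toString('utf8'))
// AzureDiamond:hunter2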

The third method for managing authorization is via JSON Web Tokens, or JWTs. JWTs have become the de facto standard over the last few years. A JWT makes a set of claims (e.g. “I’m Abe Froman, the Sausage King of Chicago”) that can be verified. Like Basic Authorization, the claims can be read by anybody. Unlike Basic Auth, however, you wouldn’t be sharing your password with anyone listening in. Instead, it’s all about trust.

Trust, but Verify… Your JWTs

OK, maybe don’t believe everything you read on the internet. You might be wondering how someone can just make some claims and expect the server to believe them. When you make a claim using a JWT, it’s signed off by a server that holds a secret key. A server reading the claim can easily verify that it really was signed with that key (and, when public/private key pairs are used, it can even do so without knowing the signing secret itself). However, it would be nearly impossible for someone to modify the claims and keep the signature valid without having access to that secret key.
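
To make that last point concrete, here is a small sketch using Node's built-in crypto module (again, not part of the original tutorial). HS256, the signing algorithm you'll see later in this post, is an HMAC-SHA256 over the encoded header and payload, so changing even a single character of the payload produces a completely different signature:

const crypto = require('crypto')

// Sign some data with a secret that only the issuer knows
const sign = (data, secret) =>
  crypto.createHmac('sha256', secret).update(data).digest('base64')

const secret = 'top-secret-phrase'
console.log(sign('header.payload-with-my-claims', secret))   // the signature the issuer produced
console.log(sign('header.payload-with-MY-claims', secret))   // a tampered payload no longer matches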

Why Use a JWT?

Using a JWT allows a server to offload authentication to a 3rd party it trusts. As long as you trust the 3rd party, you can let them ensure that the user is who they say they are. That 3rd party will then create a JWT to be passed to your server, with whatever information is necessary. Typically this includes at least the user’s ID (referred to in the standard as sub, for “subject”), the “issuer” (iss) of the token, and the “expiration time” (exp). There are quite a few standardized claims, but you can really put any JSON you want in a claim. Just remember: the more info you include, the longer the token will be.
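
As an illustration (not code from this tutorial), a set of claims with the standard fields mentioned above, plus one made-up custom field, could look like this:

// Only sub, iss and exp are standard claims here; role is a custom one
const claims = {
  sub: 'AzureDiamond',                      // who the token is about
  iss: 'fun-with-jwts',                     // who issued the token
  exp: Math.floor(Date.now() / 1000) + 60,  // expiration as a Unix timestamp, in seconds
  role: 'sausage-king'                      // any extra JSON you want, at the cost of a longer token
}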

Build a Simple Node App

To create and verify your own JWTs, you’ll first need to set up a Node server (well, you don’t have to, but that’s what I’ll be teaching you today). To get started, run the following commands to set up a new project:

mkdir fun-with-jwts
cd fun-with-jwts
npm init -y
npm install express@4.16.4
npm install -D nodemon@1.18.6

Next, create a new file index.js that will contain a super simple Node server. There are three endpoints in here, which are just stubbed with TODOs as notes for what to implement.

The /create endpoint will require basic authorization to log you in. If you were writing a real OAuth server, you would probably use something other than Basic Auth. You would also need to look up the user in a database and make sure they provided the right password. To keep things simple for the demo, I’ve just hard-coded a single username and password here, so we can focus on the JWT functionality.

The /verify endpoint takes a JWT as a parameter to be decoded.

const express = require('express')
const app = express()
const port = process.env.PORT || 3000

app.get('/create', (req, res) => {
  if (req.headers.authorization !== 'Basic QXp1cmVEaWFtb25kOmh1bnRlcjI=') {
    res.set('WWW-Authenticate', 'Basic realm="401"')
    res.status(401).send('Try user: AzureDiamond, password: hunter2')
    return
  }

  res.send('TODO: create a JWT')
})

app.get('/verify/:token', (req, res) => {
  res.send(`TODO: verify this JWT: ${req.params.token}`)
})

app.get('/', (req, res) => res.send('TODO: use Okta for auth'))

app.listen(port, () => console.log(`JWT server listening on port ${port}!`))

You can now run the server with node_modules/.bin/nodemon . (note the trailing dot). This will start a server on port 3000 and will restart automatically as you make changes to your source code. You can access it by going to http://localhost:3000 in your browser. To hit the different endpoints, you’ll need to change the URL to http://localhost:3000/create or http://localhost:3000/verify/asdf. If you prefer to work in the command line, you can use curl to hit all those endpoints:

$ curl localhost:3000
TODO: use Okta for auth

$ curl localhost:3000/create
Try user: AzureDiamond, password: hunter2

$ curl AzureDiamond:hunter2@localhost:3000/create
TODO: create a JWT

$ curl localhost:3000/verify/asdf
TODO: verify this JWT: asdf

Create JSON Web Tokens in Your Node App

A JSON Web Token has three parts: the header, the payload, and the signature, separated by dots (.).

The header is a base64 encoded JSON object specifying which algorithm to use and the type of the token.

The payload is also a base64 encoded JSON object containing pretty much anything you want. Typically it will at least contain an expiration timestamp and some identifying information.

The signature hashes the header, the payload, and a secret key together using the algorithm specified in the header.
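
Because the first two parts are only encoded, not encrypted, you can peek inside any JWT without knowing the secret. Here is a short sketch (not part of the server you are building) that splits a token and decodes its header and payload; JWTs use the URL-safe Base64 alphabet, which Node's base64 decoder also accepts:

// Decode (but do not verify!) a JWT's header and payload
const decode = (token) => {
  const [header, payload] = token.split('.')
  return {
    header: JSON.parse(Buffer.from(header, 'base64').toString('utf8')),
    payload: JSON.parse(Buffer.from(payload, 'base64').toString('utf8'))
  }
}

// Try it on the token you'll get from the /create endpoint once the code below is in place:
// console.log(decode('eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ...'))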

There are a number of tools out there to create JWTs for various languages. For Node, one simple one is njwt. To add it to your project, run

npm install njwt@0.4.0

Now replace the res.send('TODO: create a JWT') line in index.js with the following:

  const jwt = require('njwt')
  const claims = { iss: 'fun-with-jwts', sub: 'AzureDiamond' }
  const token = jwt.create(claims, 'top-secret-phrase')
  token.setExpiration(new Date().getTime() + 60*1000)
  res.send(token.compact())

Feel free to mess around with the payload. With the setExpiration() function above, the token will expire in one minute, which will let you see what happens when it expires, without having to wait too long.

To test this out and get a token, log in via the /create endpoint. Again, you can go to your browser at http://localhost:3000/create, or use curl:

$ curl AzureDiamond:hunter2@localhost:3000/create
eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJoZWxsbyI6IndvcmxkISIsIm51bWJlciI6MC41MzgyNzE0MTk3Nzg5NDc4LCJpYXQiOjE1NDIxMDQ0NDgsImV4cCI6MTU0MjEwNDUwOCwiaXNzIjoiZnVuLXdpdGgtand0cyIsInN1YiI6IkF6dXJlRGlhbW9uZCJ9.LRVmeIzAYk5WbDoKfSTYwPx5iW0omuB76Qud-xR8We4

Verify JSON Web Tokens in Your Node App

Well, that looks a bit like gibberish. You can see there are two .s in the JWT, separating the header, payload, and signature, but it’s not human readable. The next step is to write something to decode that string into something a little more legible.

Replace the line containing TODO: verify this JWT with the following:

  const jwt = require('njwt')
  const { token } = req.params
  jwt.verify(token, 'top-secret-phrase', (err, verifiedJwt) => {
    if(err){
      res.send(err.message)
    }else{
      res.send(verifiedJwt)
    }
  })

In the route /verify/:token, the :token part tells express that you want to read that section of the URL in as a param, so you can get it on req.params.token. You can then use njwt to try to verify the token. If it fails, that could mean a number of things, like the token was malformed or it has expired.

Back on your website, or in curl, create another token using http://localhost:3000/create. Then copy and paste that into the URL so you have http://localhost:3000/verify/eyJhb...R8We4. You should get something like the following:

{
  "header": { "typ": "JWT", "alg": "HS256" },
  "body": {
    "iss": "fun-with-jwts",
    "sub": "AzureDiamond",
    "jti": "3668a38b-d25d-47ee-8da2-19a36d51e3da",
    "iat": 1542146783,
    "exp": 1542146843
  }
}

If you wait a minute and try again, you’ll instead get jwt expired.

Add OIDC Middleware to Your Node App to Handle JWT Functionality

Well, that wasn’t so bad. But I sure glossed over a lot of details. That top-secret-phrase isn’t really very top secret. How do you make sure you have a secure one and it’s not easy to find? What about all the other JWT options? How do you actually store that in a browser? What’s the optimal expiration time for a token?
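
For the first of those questions, one common approach (not specific to this tutorial) is to generate a long random value once and keep it outside your source code, for example in an environment variable:

// Generate a 512-bit random secret -- run this once and store the result somewhere safe,
// e.g. in an environment variable, never in source control
const crypto = require('crypto')
console.log(crypto.randomBytes(64).toString('hex'))

Even with a strong secret, though, the remaining questions still need answers.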

This is where Okta comes into play. Rather than dealing with all this yourself, you can leverage Okta’s cloud service to handle it all for you. After a couple of minutes of setup, you can stop thinking about how to make your app secure and just focus on what makes it unique.

Why Auth with Okta?

Okta is a cloud service that allows developers to create, edit, and securely store user accounts and user account data, and connect them with one or multiple applications.

If you don’t already have one, sign up for a forever-free developer account.

Create an Okta Server

You’re going to need to save some information to use in your app. Create a new file named .env. In it, enter your Okta organization URL.

The post How to Create and Verify JWTs with Node appeared first on SitePoint.


Source: Sitepoint