Monthly Web Development Update 12/2018: WebP, The State Of UX, And A Low-Stress Experiment

Anselm Hannemann

2018-12-14T12:20:03+01:00

It’s the last edition of this year, and I’m pretty stoked about what 2018 brought for us, what happened, and how the web evolved. Let’s recap that and remind ourselves of what each of us learned this year: What was the most useful feature, API, or library we used? And how have we personally changed?

For this month’s update, I’ve collected yet another bunch of articles for you. If that’s not enough reading material for you yet, you can always find more in the archive or the Evergreen list which contains the most important articles since the beginning of the Web Development Reading List. I hope your days until the end of the year won’t be too stressful and wish you all the best. See you next year!

News

  • Microsoft just announced a change to their Edge strategy: They’re going to use Chromium as the new browser engine for the desktop version instead of EdgeHTML and might even provide Microsoft Edge for macOS. They’ll also contribute to the development of the Blink engine from now on.
  • Chrome 71 is out and brings relative time support via the Internationalization API. Also new is that speech synthesis now requires user activation.
  • Safari Technology Preview 71 is out, bringing supported-color-schemes in CSS and adding Web Authentication as an experimental feature.
  • Firefox will soon offer users a browser setting to block all permission requests automatically. This will affect autoplaying videos, web notifications, geolocation requests, and camera and microphone access requests. The need to auto-block requests shows how badly developers have been misusing these techniques. It’s sad news for those who rely on such requests for their services, like WebRTC calling services, for example.

General

  • We’ve finally come up with amazing technology to access and use websites offline. But one thing we forgot is that for the past thirty years we’ve taught people that the web is online, so most people don’t know that offline usage even exists. A lesson in user experience design, and a reminder of the importance of knowing the history of the medium we’re building for.

UI/UX

  • Matthew Ström wrote about the importance of fixing things later and not trying to be perfect.
  • A somewhat satirical resource about the state of UX in 2019.
  • Erica Hall shows us why most of ‘UX design’ is a myth and why a great product needs not only design but also the right product strategy and business model. The best illustration of why you should read the article comes when Erica writes: “Virgin America. Rdio. Google Reader. Comcast. Which of these offered a good experience? Which of these still exists?” A truth you can’t ignore, and luckily this isn’t a pessimistic piece but a very thought-provoking one, with great tips on how we can use that knowledge to improve our products: with strategy, with design, with a business model that fits.

Illustration of a woman with a tablet running some design software where her face should be.
After curating and sharing 2,239 links with 264,016 designers around the world, the folks at UX Collective have isolated a few trends in what the UX industry is writing, talking, and thinking about. (Image credit; illustration by Camilla Rosa)

JavaScript

  • Google is about to bring us yet another API: the Badging API will allow desktop web apps to indicate new notifications or similar updates. The spec is still in discussion, and they’d be happy to hear your thoughts about it.
  • Hidde de Vries explains how we can use modern JavaScript APIs to scroll an element into the center of the viewport (a short sketch follows after this list).
  • Available behind flags in Chrome 71, the new Background Fetch makes it possible to fetch resources that take a while to load — movies, for example — in the background.
  • Pete LePage explains how we can use the Web Share Target API to register a service as Share Target.
  • Is it still a good idea to use JavaScript for loading web fonts? Zach Leatherman shares why we should decide case by case and why it’s often best to use modern CSS and font-display: swap;.
  • Doka is a new standalone JavaScript image editor worth keeping in mind. While it’s not a free product, it features very handy methods for editing with a pleasant user experience, and by paying an annual fee, you ensure that you get bugfixes and support.
  • “The Power of Web Components” shares the basic concepts, how to start using them, and why using your own HTML elements instead of gluing HTML, the related CSS classes, and a JavaScript trigger together can simplify things so much.
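
For reference, here’s a minimal sketch of the viewport-centering technique Hidde describes, using the standard scrollIntoView() options (the element selector is a hypothetical placeholder):

// Scroll an element to the vertical center of the viewport.
// 'block: center' is part of the standard ScrollIntoViewOptions.
const element = document.querySelector('#section-3'); // hypothetical selector
element.scrollIntoView({
  behavior: 'smooth', // animate the scroll instead of jumping
  block: 'center'     // align the element to the viewport’s vertical center
});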

Privacy

  • Do you have a husband or wife? Kids? Other relatives? Then this essential guide to protecting your family’s data is something you should read and turn into action. The internet is no safe place, and you want to ensure your relatives understand what they’re doing — and it’s you who can protect them by teaching them or setting up better default settings.

Web Performance


Comparison of the quality of JPG and WebP images
WebP offers both performance and features. Ire Aderinokun shares why and how to use it. (Image credit)
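
If you’d like to try WebP while keeping a fallback for browsers that don’t support it yet, one common approach is the picture element with a type hint (the file names here are placeholders):

<!-- Browsers that support WebP pick the first source; others fall back to the JPG -->
<picture>
  <source srcset="photo.webp" type="image/webp">
  <img src="photo.jpg" alt="Description of the photo">
</picture>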

Work & Life

  • Shana Lynch tells us what makes someone an ethical business leader, which values are important, how to stand upright when things get tough, and how to prepare for uncomfortable situations upfront.
  • Ozoemena Nonso tries to explain why we often aren’t happy. The thief of our happiness is not comparing ourselves with others; it’s that we struggle to get the model of comparison right. An incredibly good piece of life advice if you compare yourself with others often and feel that your happiness suffers from it.
  • A rather uncommon piece of advice: Why forcing others to leave their comfort zone might be a bad idea.
  • Sandor Dargo on how he managed to avoid distractions during work time and do his job properly again.
  • Paul Robert Lloyd writes about Cennydd Bowles’ book “Future Ethics” and while explaining what it is about, he also points out the challenges of ethics with a simple example.
  • Jeffrey Silverstein is a teacher and struggled a lot with finding time for side projects while working full-time. Now he has found a solution, which he shares with us in this great article about “How to balance full-time work with creative projects.” An inspiring read that I can totally relate to.
  • Ben Werdmüller shares his thoughts on why lifestyle businesses are massively underrated. But what’s a lifestyle business? He defines them as non-venture-funded businesses that allow their owners to maintain a certain level of income but not more. As a fun sidenote, this article shows how crazy rental prices have become on the U.S. West Coast.
  • Jake Knapp shares how he survived six years with a distraction-free smartphone — no emails, no notifications. And he has some great tips for us and an exercise to try out. I recently moved all my apps into one folder on the second screen so that I have to search for an app before opening it, which usually means I really want to use it and am not just distracting myself.
  • Ryan Avent wrote about why we work so hard. This essay is well-researched and explains why we see work as crucial, why we fall in love with it, and why our lifestyle and society push us to work ever harder.

An illustration of a hand holding a phone. The phone shows a popup saying: Wait seriously? You wanna delete Gmail? Are you nuts?
Jake Knapp spent six years with a distraction-free phone: no email, no social media, no browser. Now he shares what he learned from it and how you can try your own low-stress experiment. (Image credit)



Source: Smashing Magazine

How To Convert An Infographic Into A Gifographic Using Adobe Photoshop

Manish Dudharejia

2018-12-13T14:00:11+01:00

Visuals have played a critical role in the marketing and advertising industry since its inception. For years, marketers have relied on images, videos, and infographics to better sell products and services. The importance of visual media has increased further with the rise of the Internet and, consequently, of social media.

Lately, gifographics (animated infographics) have also joined the list of popular visual media formats. If you are a marketer, a designer, or even a consumer, you must have come across them. What you may not know, however, is how to make gifographics, and why you should try to add them to your marketing mix. This practical tutorial should give you answers to both questions.

In this tutorial, we’ll be taking a closer look at how a static infographic can be animated using Adobe Photoshop, so some Photoshop knowledge (at least the basics) is required.

What Is A Gifographic?

Some History

The word gifographic is a combination of two words: GIF and infographic. The term gifographic was popularized by marketing experts (and among them, by Neil Patel) around 2014. Let’s dive a little bit into history.

CompuServe introduced the GIF (Graphics Interchange Format) on June 15, 1987, and the format became a hit almost instantly. Initially, its use remained somewhat restricted owing to patent disputes in the early years (related to LZW, the compression algorithm used in GIF files), but once most GIF patents expired, the format’s wide support and portability made GIFs so popular that “GIF” was even named “Word of the Year” in 2012. Even today, GIFs are still very popular on the web and on social media (*).

The GIF is a bitmap image format. It supports up to 8 bits per pixel, so a single GIF can use a limited palette of up to 256 different colors (including, optionally, one transparent color). Lempel–Ziv–Welch (LZW) is the lossless data compression technique used to compress GIF images, which reduces the file size without affecting their visual quality. What’s more interesting, though, is that the format also supports animations and allows a separate palette of up to 256 colors for each animation frame.

Tracing back in history to when the first infographic was created is much more difficult, but the definition is easy: the word “infographic” comes from “information” and “graphics,” and, as the name implies, an infographic serves the main purpose of presenting information (data, knowledge, etc.) quickly and clearly, in a graphical way.

In his 1983 book The Visual Display of Quantitative Information, Edward Tufte gives a very detailed definition for “graphical displays” which many consider today to be one of the first definitions of what infographics are, and what they do: to condense large amounts of information into a form where it will be more easily absorbed by the reader.

A Note On GIFs Posted On The Web (*)

Animated GIF images posted to Twitter, Imgur, and other services most often end up as H.264 encoded video files (HTML5 video), and are technically not GIFs anymore when viewed online. The reason behind this is pretty obvious — animated GIFs are perhaps the worst possible format to store video, even for very short clips, as unlike actual video files, GIF cannot use any of the modern video compression techniques. (Also, you can check this article: “Improve Animated GIF Performance With HTML5 Video” which explains how with HTML5 video you can reduce the size of GIF content by up to 98% while still retaining the unique qualities of the GIF format.)
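
As a rough sketch of the technique that article describes, a GIF-like experience can be recreated with a muted, looping HTML5 video (the file name is a placeholder):

<!-- Behaves like an animated GIF: starts automatically, loops, plays inline, no sound -->
<video autoplay loop muted playsinline>
  <source src="animation.mp4" type="video/mp4">
</video>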

On the other hand, it’s worth noting that gifographics most often remain in their original format (as animated GIF files), and are not encoded to video. While this leads to not-so-optimal file sizes (as an example, a single animated GIF in this “How engines work?” popular infographic page is between ~ 500 KB and 5 MB in size), on the plus side, the gifographics remain very easy to share and embed, which is their primary purpose.

Why Use Animated Infographics In Your Digital Marketing Mix?

Infographics are visually compelling media. A well-designed infographic not only helps you present a complex subject in a simple and enticing way, but it can also be a very effective means of increasing your brand awareness as part of your digital marketing campaign.

Remember the popular saying, “A picture is worth a thousand words”? There is a lot of evidence that animated pictures can be even more effective, which is why motion infographics have recently witnessed an increase in popularity.

From Boring To Beautiful

Gifographics can breathe life into sheets of boring facts and mundane numbers with the help of animated charts and graphics. Motion infographics are also the right means to illustrate complex processes or systems with moving parts, making them more palatable and meaningful. Thus, you can easily turn boring topics into visually engaging treats. For example, we created the gifographic “The Most Important Google Search Algorithm Updates Of 2015,” elaborating on the changes Google made to its search algorithm in 2015.

Cost-Effective

Gifographics are perhaps the most cost-effective alternative to video content. You don’t need expensive cameras, video editing, sound mixing software, and a shooting crew to create animated infographics. All it takes is a designer who knows how to make animations by using Photoshop or similar graphic design tools.

Works For Just About Anything

You can use a gifographic to illustrate just about anything in bite-sized sequential chunks. From product explainer videos to numbers and stats, you can share anything through a GIF infographic. Animated infographics can also be interactive. For example, you can adjust a variable to see how it affects the data in an animated chart.

Note: An excellent example of an interactive infographic is “Building An Interactive Infographic With Vue.js” written by Krutie Patel. It was built with the help of Vue.js, SVG and GSAP (GreenSock Animation Platform).

SEO Boost

As a marketer, you are probably aware that infographics can provide a substantial boost to your SEO. People love visual media and are more likely to share a gifographic they liked. The more your animated infographics are shared, the bigger the boost in site traffic. Thus, gifographics can indirectly improve your SEO and, therefore, your search engine rankings.

How To Create A Gifographic From An Infographic In Photoshop

Now that you know the importance of motion in infographics, let’s get practical and see how you can create your first gifographic in Photoshop. And if you already know how to make infographics in Photoshop, it will be even easier for you to convert your existing static infographic into an animated one.

Step 1: Select (Or Prepare) An Infographic

The first thing you need to do is choose the static infographic that you would like to transform into a gifographic. For learning purposes, you can animate any infographic, but I recommend picking an image with elements that lend themselves to animation. Explainers, tutorials, and process overviews are easy to convert into motion infographics.

If you are going to start from scratch, make sure you have finished the static infographic down to the last detail before proceeding to the animation stage, as this will save you a lot of time and resources — if the original infographic keeps changing, you will have to rework your gifographic as well.

Once you have finalized the infographic, the next step is to decide which parts you are going to animate.


Finalize Your Infographic
Infographic finalization (Large preview)

Step 2: Decide What The Animation Story Will Be

You can include some — or all — parts of the infographic in the animation. However, as there are different ways to create animations, you must first decide on the elements you intend to animate, and how. In my opinion, sketching (outlining) various animation case scenarios on paper is the best way to pick your storyline. It will save you a lot of time and confusion down the road.

Start by deciding which “frames” you would like to include in the animation. At this stage, frames will be nothing else but rough sketches made on sheets of paper. The higher the number of frames, the better the quality of your gifographic will be.

You may need to divide the animated infographic into different sections. If you do, be sure to use an equal number of frames for all parts; otherwise, the gifographic will look uneven, with each part moving at a different speed.


Pick Your Animation Storyline
Deciding and picking your animation story (Large preview)

Step 3: Create The Frames In Photoshop

Open Adobe Photoshop to create the different frames for each section of the gifographic. You will need to cut, rotate, and move the images painstakingly, and keep track of the last change you made to each frame; the Photoshop ruler can help you with that.

You will build your animation from layers in Photoshop. In this case, you will be copying all the Photoshop layers together and editing each layer individually.

You can check the frames one by one by hiding/showing different layers. Once you have finished creating all the frames, check them for possible errors.

Create Frames in Photoshop. (Large preview)

You can also create a short frame animation using just the first and the last frame. Select both frames by holding the Ctrl/Cmd key (Windows/Mac), then click on “Tween.” Select the number of frames you want to add in between. Choose “First Frame” if you want to add the new frames between the first and last frames; choosing the “Previous Frame” option will add frames between your current selection and the one before it. Check the “All Layers” option to include all the layers from your selections.


How to create Short Frame Animation
Short frame animation (Large preview)

Step 4: Save PNG (Or JPG) Files Into A New Folder

The next step is to export each animation frame individually into PNG or JPG format. (Note: JPG is a lossy format, so PNG is usually the better choice.)

You should save these PNG files in a separate folder for the sake of convenience. I always number the saved images as per their sequence in the animation. It’s easy for me to remember that “Image-1” will be the first image in the sequence followed by “Image-2,” “Image-3,” and so on. Of course, you can save them in a way suitable for you.


How to Save JPG Files in a New Folder
Saving JPG files in a new folder (Large preview)

Step 5: “Load Files Into Stack”

Next comes loading the saved PNG files to Photoshop.

Go to the Photoshop window and open File > Scripts > Load files into Stack…

A new dialog box will open. Click on the “Browse” button and open the folder where you saved the PNG files. You can select all files at once and click “OK.”

Note: You can check the “Attempt to Automatically Align Source Images” option to avoid alignment issues. However, if your source images are all the same size, this step is not needed. Furthermore, automatic alignment can also cause issues in some cases as Photoshop will move the layers around in an attempt to try to align them. So, use this option based on the specific situation — there is no “one size fits them all” recipe.

It may take a while to load the files, depending on their size and number. While Photoshop is busy loading these files, maybe you can grab a cup of coffee!

Load Files into Stack. (Large preview)

Step 6: Set The Frames

Once the loading is complete, go to Window > Layers (or you can press F7) and you will see all the layers in the Layers panel. The number of Layers should match the number of frames loaded into Photoshop.

Once you have verified this, go to Window > Timeline. You will see the Timeline Panel at the bottom (the default display option for this panel). Choose “Create Frame Animation” option from the panel. Your first PNG file will appear on the Timeline.

Now, select “Make Frames from Layers” from the right-side menu (palette options) of the Animation panel.

Note: Sometimes the PNG files get loaded in reverse order, making “Image-1” appear at the end instead of the beginning. If this happens, select “Reverse Layers” from the Animation panel menu (palette options) to get the desired image sequence.

Set the Frames. (Large preview)

Step 7: Set The Animation Speed

The default display time for each image is 0.00 seconds. Adjusting this time determines the speed of your animation (or GIF file). If you select all the images, you can set the same display time for all of them. Alternatively, you can set a different display time for each image or frame.

I recommend going with the former option, though, as using the same animation time for all frames is easier, and different display times per frame may result in a less smooth animation.

You can also set custom display time if you don’t want to choose from among the available choices. Click the “Other” option to set a customized animation speed.

You can also make the animation play in reverse: copy the frames from the Timeline palette and choose the “Reverse Layers” option. You can drag frames while holding the Ctrl key (on Windows) or the Cmd key (on Mac).

You can set the number of times the animation should loop. The default option is “Once.” However, you can set a custom loop value using the “Other” option. Use the “Forever” option to keep your animation going in a non-stop loop.

To preview your GIF animation, press the Enter key or the “Play” button at the bottom of the Timeline Panel.

Set the Animation Speed. (Large preview)

Step 8: Ready To Save/Export

If everything goes according to plan, the only thing left is to save (export) your GIF infographic.

To Export the animation as a GIF: Go to File > Export > Save for Web (Legacy)

  1. Select “GIF 128 Dithered” from the “Preset” menu.
  2. Select “256” from the “Colors” menu.
  3. If you will be using the GIF online or want to limit the file size of the animation, change Width and Height fields in the “Image Size” options accordingly.
  4. Select “Forever” from the “Looping Options” menu.

Click the “Preview” button in the lower left corner of the Export window to preview your GIF in a web browser. If you are happy with it, click “Save” and select a destination for your animated GIF file.

Note: There are lots of options that control the quality and file size of GIFs — number of colors, amount of dithering, etc. Feel free to experiment until you achieve the optimal GIF size and animation quality.

Your animated infographic is ready!


How to Save Your GIF infographic
Saving your GIF infographic (Large preview)

Step 9 (Optional): Optimization

Gifsicle (a free command-line program for creating, editing, and optimizing animated GIFs) and other similar GIF post-processing tools can help reduce the exported GIF file size beyond Photoshop’s abilities.

ImageOptim is also worth mentioning — dragging files to ImageOptim will directly run Gifsicle on them. (Note: ImageOptim is Mac-only but there are quite a few alternative apps available as well.)
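
As a quick, illustrative example, a typical Gifsicle run might look like the following (the file names are placeholders; see gifsicle --help for the full option list):

# -O3 applies Gifsicle's strongest optimization; --colors shrinks the palette
gifsicle -O3 --colors 128 infographic.gif -o infographic-optimized.gif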

Troubleshooting Tips

You are likely to run into trouble at two crucial stages.

Adding New Layers

Open the “Timeline Toolbar” drop-down menu and select the “New Layers Visible in all Frames” option. It will help tune your animation without any hiccups.


How to Add New Layers Visible in all Frames
Adding new layers (Large preview)

Layers Positioning

Sometimes, you may end up putting layers in the wrong frames. To fix this, select the same layer in a fresh frame and choose the “Match Layer Across Frames” option.


How to Match Layers Across Frames
Positioning layers (Large preview)

Gifographic Examples

Before wrapping this up, I would like to share a few good examples of gifographics. Hopefully, they will inspire you just as they did me.

  1. Google’s Biggest Search Algorithm Updates Of 2016
    This one is my personal favorite. Incorporating Google algorithm updates in a gifographic is difficult owing to the complexity of the subject. But with the right animations and some to-the-point text, you can turn a seemingly complicated subject into an engaging piece of content.
  2. Virtual Reality: A Fresh Perspective For Marketers
    This one turns a seemingly descriptive topic into a smashing gifographic. The gifographic breaks up the Virtual Reality topic into easy-to-understand numbers, graphs, and short paragraphs with perfect use of animation.
  3. How Google Works
    I enjoy reading blog posts by Neil Patel. Just like his posts, this gifographic is comprehensive. The only difference is that Neil conveys the essential message through accurately placed GIFs instead of short paragraphs, using only the colors of Google’s logo.
  4. The Author Rank Building Machine
    This one lists different tips to help you become an authoritative writer. The animation is simple, with a moving backdrop of a content-creation factory. Everything else is broken down into static graphs, images, and short text paragraphs. But the simple design works, resulting in a lucid gifographic.
  5. How Car Engines Work
    Beautifully illustrated examples of how car engines work (petrol internal combustion engines and hybrid gas/electric engines). By the way, it’s worth noting that some Wikipedia articles also use animated GIFs for very similar purposes.

Wrapping Things Up

As you can see, turning your static infographic into an animated one is not very complicated. Armed with Adobe Photoshop and some creative ideas, you can create engaging and entertaining animations, even from scratch.

Of course, your gifographic can have multiple animated parts and you’ll need to work on them individually, which, in turn, will require more planning ahead and more time. (Again, a good example of a rather complex gifographic would be the one shown in “How Car Engines Work?” where different parts of the engine are explained in a series of connected animated images.) But if you plan well, sketch, create, and test, you will succeed and you will be able to make your own cool gifographics.

If you have any questions, ask me in the comments and I’ll be happy to help.



Source: Smashing Magazine

The Importance Of Macro And Micro-Moment Design

Susan Weinschenk

2018-12-13T12:30:10+01:00

(This article is kindly sponsored by Adobe.) When you design the information architecture, the navigation bars of an application, or the overall layout and visual design of a product, you are focusing on macro design. When you design one part of a page, one form, or one single task and interaction, you are focusing on micro-moment design.

In my experience, designers often spend a lot of time on macro design issues, and sometimes less so on critical micro-moment design issues. That might be a mistake.

Here’s an example of how critical micro-moment design can be.

I read a lot of books. We are talking over a hundred books a year. I don’t even know for sure how many books I read, and because I read so many books, I am a committed library patron. Mainly for reading fiction for fun (and even sometimes for reading non-fiction), I rely on my library to keep my Kindle full of interesting things to read.

Luckily for me, the library system in my county and in my state is pretty good in terms of having books available for my Kindle. Unluckily, this statewide library website and app need serious UX improvements.

I was thrilled when my library announced that instead of using a (poorly designed) website (that did not have a mobile responsive design), the library was rolling out a brand new mobile app, designed specifically to optimize the experience on a mobile phone. “Yay!” I thought. “This will be great!”

Perhaps I spoke too soon.

Let me walk you through the experience of signing into the app. First, I downloaded the app and then went to log in:


A screenshot of signing into Wisconsin’s digital library
(Large preview)

I didn’t have my library card with me (I was traveling), and I wasn’t sure what “Sign in with OverDrive” was about, but I figured I could select my library from the list, so I pressed on the down arrow.


Pressing on the down arrow to find out more details on how to log into Wisconsin’s digital library
(Large preview)

“Great,” I thought. Now I can just scroll to get to my library. I know that my library is in Marathon County here in Wisconsin. In fact, I know from using the website that they call my library: “Marathon County, Edgar Branch” or something similar, since I live in a village called Edgar, so I figured that would be what I should look for especially since I could see that the list went from B (Brown County) to F (Fond du Lac Public Library) with no E for Edgar showing. So I proceeded to scroll.

I scrolled for a while, looking for M (in hope of finding Marathon).


Searching for the desired library name in alphabetical order in Wisconsin’s digital library
(Large preview)

Hmmm. I see Lone Rock, and then the next one on the list is McCoy. I know that I am in Marathon County, and that in fact, there are several Marathon County libraries. Yet, we seem to have skipped Marathon in the list.

I keep scrolling.


Scrolling in the list of library names on the site
(Large preview)

Uh oh. We got to the end of the list (to the W’s), but now we seem to be starting with A again. Well, then, perhaps Marathon will now appear if I keep scrolling.

Do you know how many libraries in Wisconsin are on this list? I know, because as I started to document this user experience I decided to count the number of entries (only a crazy UX professional would take the time to do this, I think).

There are 458 libraries on this list, and the list kept getting to the end of the alphabet and then for some reason starting over. I never did figure out why.

Finally, though, I did get to Marathon!


Scrolling further down the list just to find a number of libraries named “Marathon County Public Library”
(Large preview)

And then I discovered I was really in trouble, since several libraries start with “Marathon County Public Library”. Since the app only shows the first 27 or so characters, I couldn’t tell which one was mine.

You know what I did at this point?

I decided to give up. And right after I decided that, I got this screen (as “icing on the cake” so to speak):


Error message on Wisconsin’s digital library
(Large preview)

Did you catch the “ID” that I’m supposed to reference if I contact support? Seriously?

This is a classic case of micro-moment design problems.

I can guess that by now some of you are thinking, “Well, that wouldn’t happen to (me, my team, an experienced UX person).” And you might be right. Especially this particular type of micro-moment design fail.

However, I can tell you that I see micro-moment design failures in all kinds of apps, software, digital products, websites, and from all kinds of companies and teams. I’ve seen micro-moment design failures from organizations with and without experienced UX teams, tech-savvy organizations, customer-centric organizations, large established companies and teams, and new start-ups.

Let’s pause for a moment and contrast micro-moment design with macro design.

Let’s say that you are hired to evaluate the user experience of a product. You gather data about the app, the users, the context, and then you start walking through the app. You notice a lot of issues that you want to raise with the team — some large, some small:

  • There are some inconsistencies from page-to-page/screen-to-screen in the app. You would like to see whether they have laid out pages on a grid and if that can be improved;
  • You have questions about whether the color scheme meets branding guidelines;
  • You suspect there are some information architecture issues. The organization of items in menus and the use of icons seems not quite intuitive;
  • One of the forms that users are supposed to fill out and submit is confusing, and you think people may not be able to complete the form and submit the information because it isn’t clear what the user is supposed to enter.

There are many ways to categorize user experience design factors, issues, and/or problems. Ask any UX professional and you will probably get a similar, but slightly different list. For example, UX people might think about the conceptual model, visual design, information architecture, navigation, content, typography, context of use, and more. Sometimes, though, it might be useful to think about UX factors, issues, and design in terms of just two main categories: macro design and micro-moment design.

In the example above, most of the factors on the list were macro design issues: inconsistencies in layout, color schemes, and information architecture. Some people talk about macro design issues as “high-level design” or “conceptual model design”. These are UX design elements that cross different screens and pages. These are UX design elements that give hints and cues about what the user can do with the app, and where to go next.

Macro design is critical if you want to design a product that people want to use. If the product doesn’t match the user’s mental model, if the product is not “intuitive” — these are often (not always, but often) macro design issues.

Which means, of course, that macro design is very important.

It’s not just micro-moment design problems that cause trouble. Macro design issues can result in massive UX problems, too. But macro design issues are more easily spotted by an experienced UX professional because they can be more obvious, and macro design usually gets time devoted to it relatively early in the design process.

If you want to make sure you don’t have macro design problems then do the following:

  • Do the UX research upfront that you need to do in order to have a good idea of the users’ mental models. What does the user expect to do with this product? What do they expect things to be called? Where do they expect to find information?
  • For each task that the user is going to do, make sure you have picked one or two “objects” and made them obvious. For instance, when the user opens an app for looking for apartments to rent the objects should be apartments, and the views of the objects should be what they expect: List, detail, photo, and map. If the user opens an app for paying an insurance bill, then the objects should be policy, bill, clinic visit, while the views should be a list, detail, history, and so on.
  • The reason you do all the UX-research-related things UXers do (such as personas, scenarios, task analyses, and so on) is so that you can design an effective, intuitive macro design experience.

It’s been my experience, however, that teams can get caught up in designing, evaluating, or fixing macro design problems, and not spend enough time on micro-moment design.

In the example earlier, the last issue is a micro-moment design issue:

  • One of the forms that users are supposed to fill out and submit is confusing, and you think people may not be able to complete the form and submit the information because it isn’t clear what the user is supposed to enter.

And the library example at the start of the article is also an example of micro-moment design gone awry.

Micro-moment design refers to problems with one very specific page/form/task that someone is trying to accomplish. It’s that “make-or-break” moment that decides not just whether someone wants to use the app, but whether they can even use the app at all, or whether they give up and abandon, or end up committing errors that are difficult to correct. Not being able to choose my library is a micro-moment design flaw. It means I can’t continue. I can’t use the app anymore. It’s a make-or-break moment for the app.

When we are designing a new product, we often focus on the macro design. We focus on the overall layout, information architecture, conceptual model, navigation model, and so on. That’s because we haven’t yet designed any micro-moments.

The danger is that we will forget to pay close attention to micro-moment design.

So, going back to our library example, and your possible disbelief that such a micro-moment design fail could happen on your watch. It can. Micro-moment design failures can happen for many reasons.

Here are a few common ones I’ve seen:

  • A technical change (for example, how many characters can be displayed in a field) is made after a prototype has been reviewed and tested. So the prototype worked well and did not have a UX problem, but the technical change occurred later, thereby causing a UX problem without anyone noticing.
  • Patterns and standards that worked well in one form or app are re-used in a different context/form/app, and something about the particular field or form in the new context means there is a UX issue.
  • Features are added later by a different person or team who does not realize the impact that particular feature, field, form has on another micro-moment earlier or later in the process.
  • User testing is not done, or it’s done on only a small part of the app, or it’s done early and not re-done later when changes are made.

If you want to make sure you don’t have micro-moment design problems then do the following:

  • Decide what are the critical make-or-break moments in the interface.
  • At each of these moments, decide what exactly it is that the user wants to do.
  • At each of these moments, decide what exactly it is that the product owner wants users to do.
  • Figure out exactly what you can do with design to make sure both of the above can be satisfied.
  • Make that something the highest priority of the interface.

Takeaways

Both macro and micro-moment design are critical to the user experience success of a product. Make sure you have a process for designing both, and that you are giving equal time and resources to both.

Identify the critical make-or-break micro-design moments when they finally do get designed, and do user testing on those as soon as you can. Re-test when changes are made.

Try talking about micro-moment design and macro design with your team. You may find that this categorization of design issues makes sense to them, perhaps more than whichever categorization scheme you’ve been using.

This article is part of the UX design series sponsored by Adobe. Adobe XD tool is made for a fast and fluid UX design process, as it lets you go from idea to prototype faster. Design, prototype and share — all in one app. You can check out more inspiring projects created with Adobe XD on Behance, and also sign up for the Adobe experience design newsletter to stay updated and informed on the latest trends and insights for UX/UI design.



Source: Smashing Magazine

Protecting Your Site With Feature Policy

Rachel Andrew

2018-12-12T11:30:30+02:00

One of the web platform features highlighted at the recent Chrome Dev Summit was Feature Policy, which aims to “allow site authors to selectively enable and disable use of various browser features and APIs.” In this article, I’ll take a look at what that means for web developers, with some practical examples.

In his introductory article on the Google Developers site, Eric Bidelman describes Feature Policy as the following:

“The feature policies themselves are little opt-in agreements between developer and browser that can help foster our goals of building (and maintaining) high-quality web apps.”

The specification has been developed at Google as part of the Web Platform Incubator Community Group activity. The aim of Feature Policy is for us, as web developers, to be able to state our usage of a web platform feature explicitly to the browser. By doing so, we make an agreement about our use, or non-use, of this particular feature. Based on this, the browser can act to block certain features, or report back to us that a feature it did not expect to see is being used.

Examples might include:

  1. I am embedding an iframe and I do not want the embedded site to be able to access the camera of my visitor;
  2. I want to catch situations where unoptimized images are deployed to my site via the CMS;
  3. There are many developers working on my project, and I would like to know if they use outdated APIs such as document.write.

All of these things can be tracked, blocked or reported on as part of Feature Policy.

How To Use Feature Policy

In order to use Feature Policy, the browser needs to know two things: which feature you are creating a policy for, and how you want that feature to be handled.

Feature-Policy: <directive> <allowlist>

The <directive> is the name of the feature that you are setting the policy on.

The current list of features (sourced from the presentation given at Chrome Dev Summit) is as follows:

  • accelerometer
  • ambient-light-sensor
  • autoplay
  • camera
  • document-write
  • encrypted-media
  • fullscreen
  • geolocation
  • gyroscope
  • layout-animations
  • lazyload
  • legacy-image-formats
  • magnetometer
  • midi
  • oversized-images
  • payment
  • picture-in-picture
  • speaker
  • sync-script
  • sync-xhr
  • unoptimized-images
  • unsized-media
  • usb
  • vertical-scroll
  • vr

The <allowlist> details how the feature can be used — if at all — and takes one or more of the following values.

  • *
    The most liberal policy, stating that the feature will be allowed in this document, and any iframes whether from this domain or elsewhere. May only be used as a single value as it makes no sense to enable everything and also pass in a list of domains, for example.
  • self
    The feature will be available in the document and any iframes, however, the iframes must have the same origin.
  • src
    Only applicable when using an iframe allow attribute. This allows a feature as long as the document loaded into it comes from the same origin as the URL in the iframe’s src attribute.
  • none
    Disables the feature for the document and any nested iframes. May only be used as a single value.
  • <origin(s)>
    The feature is allowed for specific origins; this means that you can specify a list of domains where the feature is allowed. The list of domains is space separated.
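
As an illustrative example (the domain is a placeholder), a policy combining a keyword with an origin list could look like this:

Feature-Policy: geolocation 'self' https://trusted.example.com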

There are two methods by which you can enable feature policies on your site: You can send an HTTP Header, or use the allow attribute on an iframe.

HTTP Header

Sending an HTTP Header means that you can enable a feature policy for the page or entire site setting that header, and also anything embedded in the site. Headers can be set for your entire site at the web server or can be sent from your application.

For example, if I wanted to prevent the use of the geolocation API and I was using the NGINX web server, I could edit the configuration files for my site in NGINX to add the following header, which would prevent any document in my site and any iframe embedded in from using the geolocation API.

add_header Feature-Policy "geolocation none;";

Multiple policies can be set in a single header. To prevent geolocation and vibrate but allow unsized-media from the domain example.com, I could set the following:

add_header Feature-Policy "vibrate 'none'; geolocation 'none'; unsized-media http://example.com;";

The allow Attribute On iFrames

If we are primarily concerned with what happens with the content in an iframe, we can use Feature Policy on the iframe itself; this benefits from slightly better browser support at the time of writing with Chrome and Safari supporting this use.

If I am embedding a site and do not want that site to use geolocation, camera or microphone APIs then my iframe would look like the following example:

<iframe allow="geolocation 'none'; camera 'none'; microphone 'none'">

You may already be familiar with the individual attributes which control the content of iframes: allowfullscreen, allowpaymentrequest, and allowusermedia. These can be replaced by the Feature Policy allow attribute, and for browser compatibility reasons you can use both on an iframe. If you do use both attributes, then the more restrictive one will apply. The Google article shows an example of an iframe that uses allowfullscreen (meaning that the iframe is allowed to enter fullscreen) but then sets a conflicting Feature Policy of fullscreen 'none'. The two conflict, so the more restrictive policy wins, and this iframe would not be allowed to enter fullscreen.

<iframe allowfullscreen allow="fullscreen 'none'" src="...">

The iframe element also has a sandbox attribute designed to manage support for many features. This feature was also added to Content Security Policy with a sandbox value which disables all sandbox features, which can then be opted back into selectively. There is some crossover between sandbox features and those controlled by Feature Policy, and Feature Policy does not seek to duplicate those values already covered by sandbox. It does, however, address some of the limitations of sandbox by taking a more fine-grained approach to managing these policies, rather than one of turning everything off globally as one large policy set.

Feature Policy And Reporting API

Feature Policy violations can be reported via the Reporting API, which means that you could develop a comprehensive set of policies tracking feature usage across your site. This would be completely transparent to your users but give you a huge amount of information about how features were being used.
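
As a rough sketch of what consuming these reports in JavaScript could look like, the Reporting API exposes a ReportingObserver. Note that this integration is experimental at the time of writing, and the exact report type string ('feature-policy-violation' below) is an assumption that may change:

// Observe feature policy violation reports as the browser generates them.
// The report type string is experimental and may differ between versions.
const observer = new ReportingObserver((reports) => {
  for (const report of reports) {
    console.log(report.type, report.body);
  }
}, { types: ['feature-policy-violation'], buffered: true });
observer.observe();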

Browser Support For Feature Policy

Currently, browser support for Feature Policy is limited to Chrome, however, in many cases where you are using Feature Policy during development and when previewing sites this is not necessarily a problem.

Many of the use cases I will outline below are usable right now, without causing any impact to site visitors who are using browsers without support.

When To Use Feature Policy

I really like the idea of being able to use Feature Policy to help back up decisions made when developing the site. Decisions which may well be written up in documents such as a performance budget, or as part of a GDPR audit, but which then become something we have to remember to preserve through the life of the site. This is not always easy when multiple people work on a site; people who perhaps weren’t involved during that initial decision making, or may simply be unaware of the requirements. We think a lot about third parties managing to somehow impact our site, however, sometimes our sites need protecting from ourselves!

Keeping An Eye On Third Parties

You could prevent a third-party site from accessing the camera or microphone using a feature policy on the iframe with the allow attribute. If the reason for embedding that site has nothing to do with those features, then disabling them means that the site can never start to ask for those. This could then be linked with your processes for ensuring GDPR compliance. As you audit the privacy impact of your site, you can build in processes for locking down the access of third parties by way of feature policy — giving you and your visitors additional security and peace of mind.

This usage does rely on browser support for Feature Policy to block the usage. However, you could use Feature Policy reporting mode to inform you of usage of these APIs if the third party changed what they would be doing. This would give you a very quick heads-up — essentially as soon as the first person using Chrome hits the site.

Selectively Enabling Features

We also might want to selectively enable some features which are normally blocked. Perhaps we wish to allow an iframe loading content from another site to use the geolocation feature in the browser. Chrome by default blocks this, but if you are loading content from a trusted site you could enable the cross-origin request using Feature Policy. This means that you can safely turn on features when loading content from another domain that is under your control.
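
A minimal sketch of what that might look like, with a placeholder domain standing in for the trusted site:

<!-- Allow the embedded, trusted origin to use geolocation -->
<iframe src="https://trusted.example.com/map"
        allow="geolocation https://trusted.example.com"></iframe>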

Catching Use Of Outdated APIs And Poorly Performing Features

Feature Policy can be run in a report-only mode. It can then track usage of certain features and let you know when they are found on the site. This can be useful in many scenarios. If you have a very large site with a lot of legacy code, enabling Feature Policy would help you to track down the places that need attention. If you work with a large team (especially if developers often pull in some third party libraries of code), Feature Policy can catch things that you would rather not see on the site.

Dealing With Poorly Optimized Images

While most of the articles I’ve seen about Feature Policy concentrate on the security and privacy aspects, the features around image optimization really appealed to me, as someone who deals with a lot of content generated by technical and non-technical users. Feature Policy can be used to help protect the user experience as well as the performance of your site by preventing overly large — or unoptimized images — being downloaded by visitors.

In an ideal world, your CMS would deal with image management, ensuring that images were sensibly resized, optimized for the web and the context they will be displayed in. Real life is rarely that ideal world, however, and so sometimes the job of resizing and optimizing images is left to content editors to ensure they are not uploading huge images to the web. This is particularly an issue if you are using a static CMS with no content management layer on top of it. Even as a technical person, it is very easy to forget to resize that giant screenshot or camera image you popped into a folder as a placeholder.

Currently behind a flag in Chrome are features which can help. The idea behind these features is to highlight the problematic images so that they can be fixed — without completely breaking the site.

The unsized-media feature policy looks for images or video which do not have a size set in the HTML or CSS. When an unsized media element loads, it can cause the content on the page to reflow.

In order to prevent any unsized media being added to the site, set the following header. Media will then be displayed with a default size of 300×150 pixels. You will see your site loading with small media, and realize you have a problem to fix.

Feature-Policy: unsized-media 'none'

See a demo (needs Chrome Canary with Experimental Web Platform Features on).
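
To comply with the policy, media simply needs explicit dimensions, either as HTML attributes or in CSS. A minimal example (the file name is a placeholder):

<!-- Sized media reserves its space during load and won't trigger the policy -->
<img src="chart.png" width="640" height="360" alt="Monthly traffic chart">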

The oversized-images feature policy checks that images are not much larger than their containers. If they are, a placeholder will be shown instead. This policy is incredibly useful for checking that you are not sending huge desktop images to your mobile users.

Feature-Policy: oversized-images 'none'

See a demo (needs Chrome Canary with Experimental Web Platform Features on).

The unoptimized-images feature policy checks whether the data size of an image in bytes is no more than 0.5x bigger than its rendering area in pixels. If this policy is enabled and images violate it, a placeholder will be shown instead of the image.

Feature-Policy: unoptimized-images 'none'

See a demo (needs Chrome Canary with Experimental Web Platform Features on).

Testing And Reporting On Feature Policy

Chrome DevTools will display a message to inform you that certain features have been blocked or enabled by a Feature Policy. If you have enabled Feature Policy on your site, you can check that this is working.

Support for Feature Policy has also been added to the Security Headers site, which means you can check for these along with headers such as Content Security Policy on your site — or other sites on the web.

There is a Chrome DevTools Extension which lets you toggle on and off different Feature Policies (also a great way to check your pages without needing to configure any headers).

If you would like to integrate your Feature Policies with the Reporting API, there is further information on how to do this here.

Further Reading And Resources

I have found a number of resources, many of which I used when researching this article. These should give you all that you need to begin implementing Feature Policy in your own applications. If you are already using Content Security Policy, this seems an additional logical step towards controlling the way your site works with the browser to help ensure the security and privacy of people using your site. You have the added bonus of being able to use Feature Policy to help you keep on top of performance-damaging elements being added to your site over time.



Source: Smashing Magazine

Why More Web Designers Should Give Pre-built Websites a Try

This sponsored article was created by our content partner, BAW Media. Thank you for supporting the partners who make SitePoint possible. There’s a heated and seemingly never-ending debate in the web design industry about whether web designers should always start their design work from scratch or not. Another option comprises taking advantage of what pre-built […]

The post Why More Web Designers Should Give Pre-built Websites a Try appeared first on SitePoint.


Source: Sitepoint

Introducing Float.com: A Better Alternative To Spreadsheets

Nick Babich

2018-12-11T12:50:00+01:00

(This is a sponsored post.) In today’s highly competitive market, it’s vital to move fast. After all, we all know how the famous saying goes: “Time is money.” The faster your product team moves when creating a product, the higher the chance it’ll succeed in the market. If you are a project manager, you need a tool that helps you get the most out of each of your team member’s time.

Though creative teams typically work on new and innovative products, many still use legacy tools to manage their work. Spreadsheets are one of the most common tools in the project manager’s toolbox. While this might be adequate for a team of two or three members, as a team grows, managing the team’s time becomes a demanding job.

Whenever project managers try to manage their team using spreadsheets alone, they usually face the following problems:

1. Lack Of Glanceability

Understanding what’s really happening on a project takes a lot of work. It’s hard to visually grasp who’s busy and who’s not. As a result, some team members might end up overloaded, while others will have too little to do. You also won’t get a clear breakdown of how much time is being devoted to particular work and particular clients, which is crucial not just for billing purposes but also to inform your agency’s future decisions, like who to hire next.

2. Hard To Report To Stakeholders

With spreadsheets as the home of project management, translating data about people and time into tangible insights is a challenge. Data visualization is also virtually impossible with the limited range of chart-building functions that spreadsheet tools provide. As a result, reporting with spreadsheets becomes a time-consuming task. The more people and activities a project has, the more of a project manager’s time will be consumed by reporting.

3. Spreadsheets Manage Tasks, Not People

Managing projects with individual spreadsheets is a disaster waiting to happen. Though a single spreadsheet may give a clear breakdown of a single project, it has no way of indicating overload and underload for particular team members across all projects.

4. Lack Of High-Level Overview Of A Project

It’s well known in the industry that many designers suffer from tunnel vision: Without the big picture of a project in mind, their focus turns to solving ongoing tasks. But the problem of tunnel vision isn’t limited to designers; it also exists in project management.

When a project manager uses a tool like a spreadsheet, it is usually hard (or even impossible) to create a high-level overview of a project. Even when the project manager invests a lot of time in creating this overview, the picture becomes outdated as soon as the company’s direction shifts.

5. The Risk Of Outdated Information

Product development moves quickly, making it a struggle for project managers to continually monitor and introduce all required changes into the spreadsheet. Not only is this time-consuming, but it’s also not much fun — and, as a result, often doesn’t get done.

How Technology Makes Our Life Better: Introducing Float

The purpose of technology has always been to reduce mechanical work in order to focus on innovation. In a creative or product agency, the ultimate goal is to automate as much of the product design and human management process as possible, so that the team can focus on the deep work of creativity and execution.

With this in mind, let’s explore Float, a resource management app for creative agencies. Float makes it easier to understand who’s working on what, becoming a single source of truth for your entire team.

Helping Product Managers Overcome The Challenges Of Time And People Management

Float facilitates team collaboration and makes work more effective. Here’s how:

1. Visual Team Planner: See Team And Tasks At A Glance

Float allows you to plan tasks visually using the “Schedule” tab. In this view, you can allocate and update project assignments. A drag-and-drop interface makes scheduling your team simple.

Here’s an example of the Schedule View:


The schedule is not only a bird’s-eye view of who’s working on what and when, but also a dynamic canvas. Click on any empty space to create a task. Each task can be easily modified, extended or split.

But that’s not all. You can do more:

  • Prioritize tasks
    You can do this by simply moving them on top of others.
  • Duplicate tasks
    Simply press “Shift” and drag a selected task to a new location.

You can prioritize tasks, duplicate them, extend or even split a task with Float.
  • Extend a task
    If someone on your team needs extra time to finish a task, you can extend the time in one click using the visual interface.
  • Split a task
    When it becomes evident that someone on your team needs help with a task, it’s easy to split the task into parts and assign each part to another member.

2. Built-In Reporting And Statistics

Float’s built-in reporting and statistics feature (“utilization” reports) can save project managers hours of manual work at the end of the week, month or quarter.


Float helps you keep track of all of a project’s hours.

It’s relatively easy to customize the format of the report. You can:

  • Choose the roles of team members (employees, contractors, all) in “People”;
  • Choose the type of tasks (tentative, confirmed, completed) in “Task”.

It’s worth mentioning that Float allows you to define different roles for team members. For example, if someone works part-time, you can define this in the “Profile” settings and specify that their work is part-time only. This feature also comes in handy when you need to calculate a budget.


You can email team members their individual hours for the week.

3. Search And Filter: All The Information You Need, At Your Fingertips

All of your team’s information sits in one place. You don’t need to use different spreadsheets for different projects; to add a new project, simply click on “Projects” and add a new object.

You can also use the filtering tools to highlight specific projects. A simple way to see relevant tasks at a glance is to filter by tags, leaving only particular types of projects (for example, projects that have contractors).


Customizing the format of the report

4. Getting A High-Level Overview Of All Your Projects

With Float, you’ll always have a record of what happened and when.
  • Set milestones in a project’s settings, and keep an eye on upcoming milestones in your Activity Feed.


With Float, you can set milestones for your project.
  • Drill down into a user or a team and review their schedule. When you see that someone is overscheduled (red will tell you that), you can move other team members to that activity and regroup them.


You can view by day, week, or month, making it easy to zoom out and see the big picture of your project.


Zoom in for a detailed day view, or zoom out to view a monthly forecast.

5. Reducing The Risk Of Outdated Information

You don’t need to switch between multiple tools to find out everything you need to know. This means your information will be up to date (it’s much easier to manage your project using a single tool, rather than many).

Float can be easily connected to all the services you use.


“Schedule”, “People” and “Projects Tools” work together to create context.

Conclusion

If your team’s project management leaves a lot to be desired, Float is a no-brainer. Float makes it easy to see everything you need to know about your team’s projects in a single place, giving project managers the information and functionality they need to handle the fast-paced world of digital design and development. Get started with Float today!



Source: Smashing Magazine

How To Build A Real-Time App With GraphQL Subscriptions On Postgres

Sandip Devarkonda

2018-12-10T14:00:23+01:00
2018-12-10T13:04:58+00:00

In this article, we’ll take a look at the challenges involved in building real-time applications and how emerging tooling is addressing them with elegant solutions that are easy to reason about. To do this, we’ll build a real-time polling app (like a Twitter poll with real-time overall stats) just by using Postgres, GraphQL, React and no backend code!

The primary focus will be on setting up the backend (deploying the ready-to-use tools, schema modeling) and on the aspects of frontend integration with GraphQL, and less on the UI/UX of the frontend (some knowledge of ReactJS will help). The tutorial section takes a paint-by-numbers approach, so we’ll just clone a GitHub repo for the schema modeling and the UI, and tweak it, instead of building the entire app from scratch.

All Things GraphQL

Do you know everything you need to know about GraphQL? If you have your doubts, Eric Baer has you covered with a detailed guide on its origins, its drawbacks and the basics of how to work with it. Read article →

Before you continue reading this article, I’d like to mention that a working knowledge of the following technologies (or substitutes) is beneficial:

  • ReactJS
    This can be replaced with any frontend framework, Android, or iOS by following the client library documentation.
  • Postgres
    You can work with other databases but with different tools, the principles outlined in this post will still apply.

You can also adapt this tutorial for other real-time apps very easily.

A demonstration of the features in the polling app that we’ll be building.

As illustrated by the accompanying GraphQL payload at the bottom, there are three major features that we need to implement:

  1. Fetch the poll question and a list of options (top left).
  2. Allow a user to vote for a given poll question (the “Vote” button).
  3. Fetch results of the poll in real-time and display them in a bar graph (top right; we can gloss over the feature to fetch a list of currently online users as it’s an exact replica of this use case).

Challenges With Building Real-Time Apps

Building real-time apps (especially as a frontend developer, or someone who’s recently made the transition to becoming a fullstack developer) is a hard engineering problem to solve.

This is generally how contemporary real-time apps work (in the context of our example app):

  1. The frontend updates a database with some information; a user’s vote is sent to the backend, i.e. poll/option and user information (user_id, option_id).
  2. The first update triggers another service that aggregates the poll data to render an output that is relayed back to the app in real-time (every time a new vote is cast by anyone; if this is done efficiently, only the updated poll’s data is processed and only those clients that have subscribed to this poll are updated):
    • Vote data is first processed by a register_vote service (assume that some validation happens here) that triggers a poll_results service.
    • Real-time aggregated poll data is relayed by the poll_results service to the frontend for displaying overall statistics.

A poll app designed traditionally

This model is derived from a traditional API-building approach, and consequently has similar problems:

  1. Any of the sequential steps could go wrong, leaving the UX hanging and affecting other independent operations.
  2. Requires a lot of effort on the API layer, as it’s a single point of contact for the frontend app that interacts with multiple services. It also needs to implement a websockets-based real-time API, for which there is no universal standard, and which therefore sees limited support for automation in tools.
  3. The frontend app is required to add the necessary plumbing to consume the real-time API and may also have to solve the data consistency problem typically seen in real-time apps (less important in our chosen example, but critical in ordering messages in a real-time chat app).
  4. Many implementations resort to using additional non-relational databases on the server-side (Firebase, etc.) for easy real-time API support.

Let’s take a look at how GraphQL and associated tooling address these challenges.

What Is GraphQL?

GraphQL is a specification for a query language for APIs, and a server-side runtime for executing queries. This specification was developed by Facebook to accelerate app development and provide a standardized, database-agnostic data access format. Any specification-compliant GraphQL server must support the following:

  1. Queries for reads
    A request type for requesting nested data from a data source (which can be either one or a combination of a database, a REST API or another GraphQL schema/server).
  2. Mutations for writes
    A request type for writing/relaying data into the aforementioned data sources.
  3. Subscriptions for live-queries
    A request type for clients to subscribe to real-time updates.

GraphQL also uses a typed schema. The ecosystem has plenty of tools that help you identify errors at dev/compile time, which results in fewer runtime bugs.
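
To make these three request types concrete, here’s a minimal sketch of all of them side by side. The message field and its columns are hypothetical, purely for illustration; the poll app’s real operations appear later in this article.

query {
  # read: fetch all messages
  message { id text }
}

mutation ($text: String!) {
  # write: insert a message
  insert_message(objects: [{text: $text}]) {
    returning { id }
  }
}

subscription {
  # live-query: get pushed a fresh result whenever messages change
  message { id text }
}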

Here’s why GraphQL is great for real-time apps:

  • Live-queries (subscriptions) are an implicit part of the GraphQL specification. Any GraphQL system has to have native real-time API capabilities.
  • A standard spec for real-time queries has consolidated community efforts around client-side tooling, resulting in a very intuitive way of integrating with GraphQL APIs.

GraphQL and a combination of open-source tooling for database events and serverless/cloud functions offer a great substrate for building cloud-native applications with asynchronous business logic and real-time features that are easy to build and manage. This new paradigm also results in great user and developer experience.

In the rest of this article, I will use open-source tools to build an app based on this architecture diagram:


A poll app designed with GraphQL

Building A Real-Time Poll/Voting App

With that introduction to GraphQL, let’s get back to building the polling app as described in the first section.

The three features (or stories highlighted) have been chosen to demonstrate the different GraphQL requests types that our app will make:

  1. Query
    Fetch the poll question and its options.
  2. Mutation
    Let a user cast a vote.
  3. Subscription
    Display a real-time dashboard for poll results.

GraphQL request types in the poll app

Prerequisites

  • A Heroku account (use the free tier, no credit card required)
    To deploy a GraphQL backend (see next point below) and a Postgres instance.
  • Hasura GraphQL Engine (free, open-source)
    A ready-to-use GraphQL server on Postgres.
  • Apollo Client (free, open-source SDK)
    For easily integrating client apps with a GraphQL server.
  • npm (free, open-source package manager)
    To run our React app.

Deploying The Database And A GraphQL Backend

We will deploy an instance each of Postgres and GraphQL Engine on Heroku’s free tier. We can use a nifty Heroku button to do this with a single click.

Heroku button

Note: You can also follow this link or search the documentation for “Hasura GraphQL deployment” for Heroku (or other platforms).


Deploying Postgres and GraphQL Engine to Heroku’s free tier

You will not need any additional configuration, and you can just click on the “Deploy app” button. Once the deployment is complete, make a note of the app URL:

<app-name>.herokuapp.com

For example, in the screenshot above, it would be:

hge-realtime-app-tutorial.herokuapp.com

What we’ve done so far is deploy an instance of Postgres (as an add-on in Heroku parlance) and an instance of GraphQL Engine that is configured to use this Postgres instance. As a result of doing so, we now have a ready-to-use GraphQL API but, since we don’t have any tables or data in our database, this is not useful yet. So, let’s address this immediately.

Modeling the database schema

The following schema diagram captures a simple relational database schema for our poll app:


Schema design for the poll app.

As you can see, the schema is a simple, normalized one that leverages foreign-key constraints. It is these constraints that are interpreted by the GraphQL Engine as 1:1 or 1:many relationships (e.g. poll:options is a 1:many relationship, since each poll can have more than one option, linked by the foreign-key constraint between the id column of the poll table and the poll_id column in the option table). Related data can be modelled as a graph and can thus power a GraphQL API. This is precisely what the GraphQL Engine does.

Based on the above, we’ll have to create the following tables and constraints to model our schema:

  1. Poll
    A table to capture the poll question.
  2. Option
    Options for each poll.
  3. Vote
    To record a user’s vote.
  4. Foreign-key constraint between the following fields (table : column):
    • option : poll_id → poll : id
    • vote : poll_id → poll : id
    • vote : created_by_user_id → user : id

Now that we have our schema design, let’s implement it in our Postgres database. To instantly bring this schema up, here’s what we’ll do:

  1. Download the GraphQL Engine CLI.
  2. Clone this repo:
    $ git clone https://github.com/hasura/graphql-engine
    
    $ cd graphql-engine/community/examples/realtime-poll
  3. Go to hasura/ and edit config.yaml:
    endpoint: https://<app-name>.herokuapp.com
  4. Apply the migrations using the CLI, from inside the project directory (that you just downloaded by cloning):
    $ hasura migrate apply

That’s it for the backend. You can now open the GraphQL Engine console and check that all the tables are present (the console is available at https://<app-name>.herokuapp.com/console).

Note: You could also have used the console to implement the schema by creating individual tables and then adding constraints using a UI. Using the built-in support for migrations in GraphQL Engine is just a convenient option that was available to us, because our sample repo has migrations for bringing up the required tables and configuring relationships/constraints. (Using migrations is highly recommended regardless of whether you are building a hobby project or a production-ready app.)

Integrating The Frontend React App With The GraphQL Backend

The frontend in this tutorial is a simple app that shows the poll question, the option to vote, and the aggregated poll results in one place. As I mentioned earlier, we’ll first focus on running this app so you get the instant gratification of using our recently deployed GraphQL API, then see how the GraphQL concepts we looked at earlier in this article power the different use cases of such an app, and finally explore how the GraphQL integration works under the hood.

NOTE: If you are new to ReactJS, you may want to check out some of these articles. We won’t be getting into the details of the React part of the app, and instead, will focus more on the GraphQL aspects of the app. You can refer to the source code in the repo for any details of how the React app has been built.

Configuring The Frontend App
  1. In the repo cloned in the previous section, edit HASURA_GRAPHQL_ENGINE_HOSTNAME in the src/apollo.js file (inside the /community/examples/realtime-poll folder) and set it to the Heroku app URL from above:
    export const HASURA_GRAPHQL_ENGINE_HOSTNAME = 'random-string-123.herokuapp.com';
  2. Go to the root of the repository/app-folder (/realtime-poll/) and use npm to install the prerequisite modules and then run the app:
    $ npm install
    
    $ npm start
    

Screenshot of the live poll app

You should be able to play around with the app now. Go ahead and vote as many times as you want; you’ll notice the results changing in real time. In fact, if you set up another instance of this UI and point it to the same backend, you’ll be able to see results aggregated across all the instances.

So, how does this app use GraphQL? Read on.

Behind The Scenes: GraphQL

In this section, we’ll explore the GraphQL features powering the app, followed by a demonstration of the ease of integration in the next one.

The Poll Component And The Aggregated Results Graph

The poll component on the top left fetches a poll with all of its options and captures a user’s vote in the database. Both of these operations are done using the GraphQL API. For fetching a poll’s details, we make a query (remember this from the GraphQL introduction?):

query {
  poll {
    id
    question
    options {
      id
      text
    }
  }
}

Using the Mutation component from react-apollo, we can wire up the mutation to an HTML form such that the mutation is executed, using the variables optionId and userId, when the form is submitted:

mutation vote($optionId: uuid!, $userId: uuid!) {
  insert_vote(objects: [{option_id: $optionId, created_by_user_id: $userId}]) {
    returning {
      id
    }
  }
}
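
For illustration, the wiring might look roughly like the following sketch. The component name and the form markup here are hypothetical; see src/GraphQL.jsx in the repo for the app’s actual code.

import gql from "graphql-tag";
import React from "react";
import { Mutation } from "react-apollo";

const VOTE_MUTATION = gql`
  mutation vote($optionId: uuid!, $userId: uuid!) {
    insert_vote(objects: [{option_id: $optionId, created_by_user_id: $userId}]) {
      returning {
        id
      }
    }
  }
`;

// A hypothetical vote button: react-apollo hands us a `vote` function
// that executes the mutation with the given variables when called.
export const VoteButton = ({ optionId, userId }) => (
  <Mutation mutation={VOTE_MUTATION} variables={{ optionId, userId }}>
    {(vote, { loading }) => (
      <form onSubmit={(e) => { e.preventDefault(); vote(); }}>
        <button type="submit" disabled={loading}>Vote</button>
      </form>
    )}
  </Mutation>
);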

To show the poll results, we need to derive the count of votes per option from the data in the vote table. We can create a Postgres View and track it using GraphQL Engine to make this derived data available over GraphQL.

CREATE VIEW poll_results AS
  SELECT poll.id AS poll_id, o.option_id, count(*) AS votes
  FROM (SELECT vote.option_id, option.poll_id, option.text
        FROM vote
        LEFT JOIN public.option ON (option.id = vote.option_id)) o
  LEFT JOIN poll ON (poll.id = o.poll_id)
  GROUP BY poll.question, o.option_id, poll.id;

The poll_results view joins data from the vote and poll tables to provide an aggregate count of votes per option.

Using GraphQL subscriptions over this view, react-google-charts, and the Subscription component from react-apollo, we can wire up a reactive chart that updates in real time whenever a new vote comes in from any client.

subscription getResult($pollId: uuid!) {
  poll_results(where: {poll_id: {_eq: $pollId}}) {
    option {
      id
      text
    }
    votes
  }
}

GraphQL API Integration

As I mentioned earlier, I used Apollo Client, an open-source SDK, to integrate a ReactJS app with the GraphQL backend. Apollo Client is analogous to any HTTP client library, like requests for Python or the standard http module for Node.js. It encapsulates the details of making an HTTP request (in this case, POST requests). It uses the configuration (specified in src/apollo.js) to make query/mutation/subscription requests (specified in src/GraphQL.jsx, with the option to use variables that can be dynamically substituted in the JavaScript code of your React app) to a GraphQL endpoint. It also leverages the typed schema behind the GraphQL endpoint to provide compile/dev-time validation for the aforementioned requests. Let’s see just how easy it is for a client app to make a live-query (subscription) request to the GraphQL API.

Configuring The SDK

The Apollo Client SDK needs to be pointed at a GraphQL server, so it can automatically handle the boilerplate code typically needed for such an integration. So, this is exactly what we did when we modified src/apollo.js when setting up the frontend app.
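
As a rough sketch, such a configuration typically combines an HTTP link for queries and mutations with a WebSocket link for subscriptions. The package names below are from the apollo-link ecosystem current at the time of writing, and the endpoint paths assume GraphQL Engine’s defaults; treat this as an outline rather than the app’s exact code.

import { ApolloClient } from "apollo-client";
import { InMemoryCache } from "apollo-cache-inmemory";
import { HttpLink } from "apollo-link-http";
import { WebSocketLink } from "apollo-link-ws";
import { split } from "apollo-link";
import { getMainDefinition } from "apollo-utilities";

const HOST = "<app-name>.herokuapp.com"; // your GraphQL Engine hostname

// Queries and mutations travel over HTTP...
const httpLink = new HttpLink({ uri: `https://${HOST}/v1alpha1/graphql` });

// ...while subscriptions use a WebSocket connection
const wsLink = new WebSocketLink({
  uri: `wss://${HOST}/v1alpha1/graphql`,
  options: { reconnect: true },
});

// Route each operation to the appropriate link
const link = split(
  ({ query }) => {
    const def = getMainDefinition(query);
    return def.kind === "OperationDefinition" && def.operation === "subscription";
  },
  wsLink,
  httpLink
);

export const client = new ApolloClient({ link, cache: new InMemoryCache() });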

Making A GraphQL Subscription Request (Live-Query)

Define the subscription we looked at in the previous section in the src/GraphQL.jsx file:

const SUBSCRIPTION_RESULT = `
subscription getResult($pollId: uuid!) {
  poll_results (
    order_by: option_id_desc,
    where: { poll_id: {_eq: $pollId} }
  ) {
    option_id
    option { id text }
    votes
  }
}`;

We’ll use this definition to wire up our React component:

export const Result = (pollId) => (
  <Subscription subscription={gql`${SUBSCRIPTION_RESULT}`} variables={pollId}>
    {({ loading, error, data }) => {
      if (loading) return <p>Loading...</p>;
      if (error) return <p>Error :</p>;
      return (
        <div>
          <div>
            {renderChart(data)}
          </div>
        </div>
      );
    }}
  </Subscription>
);

One thing to note here is that the above subscription could also have been a query. Merely replacing one keyword with another gives us a “live-query”, and that’s all it takes for the Apollo Client SDK to hook this real-time API into your app. Every time there’s a new dataset from our live-query, the SDK triggers a re-render of our chart with this updated data (using the renderChart(data) call). That’s it. It really is that simple!
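
To see the keyword swap in action, compare a one-off read with its live equivalent; only the first keyword differs:

query getResult($pollId: uuid!) {
  # fetched exactly once
  poll_results(where: {poll_id: {_eq: $pollId}}) { votes }
}

subscription getResult($pollId: uuid!) {
  # pushed to the client on every change
  poll_results(where: {poll_id: {_eq: $pollId}}) { votes }
}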

Final Thoughts

In three simple steps (creating a GraphQL backend, modeling the app schema, and integrating the frontend with the GraphQL API), you can quickly wire up a fully functional real-time app, without getting mired in unnecessary details such as setting up a websocket connection. That right there is the power of community tooling backing an abstraction like GraphQL.

If you’ve found this interesting and want to explore GraphQL further for your next side project or production app, here are some factors you may want to use for building your GraphQL toolchain:

  • Performance & Scalability
    GraphQL is meant to be consumed directly by frontend apps (used only on the backend, it’s no better than an ORM; the real productivity benefits come from direct consumption). So your tooling needs to be smart about efficiently using database connections and should be able to scale effortlessly.
  • Security
    It follows from above that a mature role-based access-control system is needed to authorize access to data.
  • Automation
    If you are new to the GraphQL ecosystem, handwriting a GraphQL schema and implementing a GraphQL server may seem like daunting tasks. Maximize the automation from your tooling so you can focus on the important stuff like building user-centric frontend features.
  • Architecture
    As trivial as the above efforts may seem, a production-grade app’s backend architecture may involve advanced GraphQL concepts like schema-stitching, etc. Moreover, the ability to easily generate/consume real-time APIs opens up the possibility of building asynchronous, reactive apps that are resilient and inherently scalable. Therefore, it’s critical to evaluate how GraphQL tooling can streamline your architecture.

Related Resources

  • You can check out a live version of the app over here.
  • The complete source code is available on GitHub.
  • If you’d like to explore the database schema and run test GraphQL queries, you can do so over here.


Source: Smashing Magazine

What Can Be Learned From The Gutenberg Accessibility Situation?

Andy Bell

2018-12-07T11:30:08+01:00
2018-12-07T10:36:51+00:00

So far, Gutenberg has had a very mixed reception from the WordPress community, and that reception has become increasingly negative since a hard deadline was set for the 5.0 release, even though many considered it to be incomplete. A hard release deadline in software is usually fine, but there is a glaring issue with this particular one: the editor that will become the default for a platform powering about 32% of the web isn’t fully accessible. This issue has been raised many times by the community, and it’s been effectively brushed under the carpet by Automattic’s leadership — at least, that’s how it comes across.

Sounds like a messy situation, right? I’m going to dive into what’s happened and how this sort of situation might be avoided by others in the future.

Further Context

For those amongst us who haven’t been following along or don’t know much about WordPress, I’ll give you a bit of context. For those that know what’s gone on, you can skip straight to the main part of the article.

WordPress powers around 32% of the web, counting both the open-source, self-hosted CMS and the hosted blogs on wordpress.com. Although WordPress, the CMS software, is open-source, it is heavily contributed to by Automattic, who run wordpress.com, amongst other products. Automattic’s CEO, Matt Mullenweg, is also the co-founder of the WordPress open-source project.

It’s important to understand that WordPress, the CMS, is not a commercial Automattic project — it is open source. Automattic do, however, make lots of decisions about the future of WordPress, including the brand new editor, Gutenberg. The editor has been available as a plugin while it’s been in development, so WordPress users could use it as their main editor and provide feedback — a lot of which has been negative. Gutenberg is shipping as the default editor in the 5.0 major release of WordPress, and it will be the forced default editor, with only the download of the Classic Editor plugin preventing it. This forced change has had a mixed response from the community, to say the least.

I’ve personally been very positive about Gutenberg in my writing, teaching, and speaking, as I genuinely think it’ll be a positive step for WordPress in the long run. As the launch of WordPress 5.0 has come ever closer, though, my concerns about accessibility have been growing. The accessibility issues are being “fixed” as I write this, but Automattic’s handling of the situation has been incredibly poor.

I invite you to read this excellent, ever-updating Twitter thread by Adrian Roselli. He’s done a very good job of collecting information and providing expert commentary. He’s covered all of the events in a very straightforward manner.

Right, you’re up to speed, so let’s crack on.

What Happened?

For as long as the Gutenberg plugin has been available to install, there have been accessibility issues. Even when I very excitedly installed it and started hacking away at custom blocks back in March, I could see there was a tonne of issues with the basics, such as focus management. I kept telling myself, “This editor is very early doors, so it’ll all get fixed before WordPress 5.” The problem is: it didn’t. (Well, mostly, anyway.)

This situation was bad enough as it was, but two key things happened that made it worse. First, the accessibility lead, Rian Rietveld, resigned in October, citing political and codebase issues. Second, Automattic set a hard deadline for WordPress 5’s release, regardless of whether accessibility issues were fixed or not.

Let me just illustrate how bad this is. As cited in Rian’s article: after an accessibility test round in March, the results indicated so many accessibility issues that most testers refused to look at Gutenberg again. We know that the situation has gotten a lot better since then, but there are still a tonne of open issues, even now.

I’ve got to say it how I see it, too. There’s clearly a cultural issue at Automattic in terms of their attitude towards accessibility and how they apparently compensate the people who are willing to fix accessibility issues, with a strange culture of expecting free work, even from “outsiders”. Frankly, the attitude of the company’s CEO, Matt Mullenweg, absolutely stinks — especially when he appears to be holding a potential professional engagement hostage over someone’s personal blog decision:

Allow me to double down on the attitude towards accessibility for a moment. When a big company like Automattic decides to prioritize a deadline plucked out of thin air over enabling people with impairments to use the editor they will be forced to use, it is absolutely shocking. Even more shocking is the message it sends out: that accessibility compliance is not as important as flashy new features. Ironically, there are clearly commercial undertones to this decision for a hard deadline, but, as always, free work is expected to sort it out. You’d expect a company like Automattic to fix the situation that they created with their own resources, right?

You’ll probably find it shocking that a crowdfunding campaign has been put together to get an accessibility audit done on Gutenberg. I know I certainly do. You heard me correctly. Automattic, the company whose influence on WordPress produced the Gutenberg editor, and which was valued at over $1 billion in 2014, is not paying for a much-needed accessibility audit. They are instead sitting back and waiting for everyone else to pay for it. Well, at least they were, until Matt Mullenweg finally committed to funding an audit on 29 November.

How Could This Mess Be Avoided?

Enough dragging people over the coals (for now); let us instead think about how this could have been avoided. Apart from the cultural issues that seem to de-prioritize accessibility at Automattic, I think the design process is mostly at fault in the context of the Gutenberg editor.

A lot of the issues are based around complexity and cognitive load. Creating blocks, editing the content, and maneuvering between blocks is a nightmare for visually impaired and/or keyboard users. Perhaps if accessibility had been considered at the very start of the project, the process of creating, editing, and moving blocks would be a lot simpler and thus not a cognitive overload. The problem now is that accessibility is a fix rather than a core feature. The cognitive issues will continue to exist, albeit improved.

Another very obvious thing that could have been done differently would have been to provide help and training on the JS-heavy codebase that was introduced. A lot of the accessibility fixing work seems to have been very difficult because the accessibility team had no React developers within it. There was clearly a big decision to utilize modern JavaScript, because Mullenweg told everyone to “Learn JavaScript Deeply”. At that point, it would have made a lot of sense to help the people who contribute a lot to WordPress for free to also learn JavaScript deeply, so that they could have been involved much earlier in the process. I even saw this as an issue and made learning modern JavaScript and React a core focus in a tutorial series I co-authored with Lara Schenck.

I’m convinced that some foresight and investment in processes, planning, and people would have prevented a tonne of the accessibility issues from existing at all. Again, this points at issues with attitude from Automattic’s leader, in my opinion. He’s had the attitude that ignoring accessibility is fine because Gutenberg is a fantastic, empowering new editor. While this is true, it can’t be labeled as truly empowering if it prevents a huge number of users from managing content — in some cases, even doing their jobs. A responsible CEO in this position would probably write an incredibly apologetic statement that addressed the massive oversights. They would probably also postpone the hard deadline until every accessibility issue was fixed. At the very least, they wouldn’t force the new editor on every single WordPress user.

Wrapping Up

I’ve got to add to this article that I am a massive WordPress fan and can see some unbelievably good opportunities for managing content that Gutenberg provides. It’s not just a new editor — it is a movement. It’s going to shape WordPress for years to come, and it should allow more designers and front-end developers into the ecosystem. This should be welcomed with open arms. Well, if and when it is fully accessible, anyway.

There are also a lot of incredible people working at Automattic and on the WordPress core team, who I have heaps of respect and love for. I know these people will help this situation come good in the end, and they will, and do, welcome this sort of critique. I also know that lessons will be learned, and I have faith that a mess like this won’t happen again.

Use this situation as a warning, though. You simply can’t ignore accessibility, and you should study up and integrate it into the entire process of your projects as a priority.



Source: Smashing Magazine

How to Use TypeScript to Build a Node API with Express

Like it or not, JavaScript has been helping developers power the Internet since 1995. In that time, JavaScript usage has grown from small user experience enhancements to complex full-stack applications using Node.js on the server and one of many frameworks on the client such as Angular, React, or Vue.

Today, building JavaScript applications at scale remains a challenge. More and more teams are turning to TypeScript to supplement their JavaScript projects.

Node.js server applications can benefit from using TypeScript, as well. The goal of this tutorial is to show you how to build a new Node.js application using TypeScript and Express.

The Case for TypeScript

As a web developer, I long ago stopped resisting JavaScript, and have grown to appreciate its flexibility and ubiquity. Language features added to ES2015 and beyond have significantly improved its utility and reduced common frustrations of writing applications.

However, larger JavaScript projects demand tools such as ESLint to catch common mistakes, and greater discipline to saturate the code base with useful tests. As with any software project, a healthy team culture that includes a peer review process can improve quality and guard against issues that can creep into a project.

The primary benefits of using TypeScript are to catch more errors before they go into production and make it easier to work with your code base.

TypeScript is not a different language. It’s a flexible superset of JavaScript with ways to describe optional data types. All “standard” and valid JavaScript is also valid TypeScript. You can dial in as much or as little as you desire.

As soon as you add the TypeScript compiler or a TypeScript plugin to your favorite code editor, there are immediate safety and productivity benefits. TypeScript can alert you to misspelled functions and properties, detect passing the wrong types of arguments or the wrong number of arguments to functions, and provide smarter autocomplete suggestions.
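
As a small, hypothetical illustration of the kinds of mistakes TypeScript catches at compile time (the Guitar interface here is made up for this example):

interface Guitar {
    brand: string;
    model: string;
    year?: number; // optional property
}

function describe( guitar: Guitar ): string {
    // A typo such as `guitar.modle` would be flagged by the compiler
    return `${ guitar.brand } ${ guitar.model }`;
}

describe( { brand: "Fender", model: "Telecaster" } );  // OK
// describe( { brand: "Gibson" } );  // compile error: property 'model' is missing
// describe( "Telecaster" );         // compile error: a string is not a Guitar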

Build a Guitar Inventory Application with TypeScript and Node.js

Among guitar players, there’s a joke everyone should understand.

Q: “How many guitars do you need?”

A: “n + 1. Always one more.”

In this tutorial, you are going to create a new Node.js application to keep track of an inventory of guitars. In a nutshell, this tutorial uses Node.js with Express, EJS, and PostgreSQL on the backend, Vue, Materialize, and Axios on the frontend, Okta for account registration and authorization, and TypeScript to govern the JavaScripts!

Guitar Inventory Demo

Create Your Node.js Project

Open up a terminal (Mac/Linux) or a command prompt (Windows) and type the following command:

node --version

If you get an error, or the version of Node.js you have is less than version 8, you’ll need to install Node.js. On Mac or Linux, I recommend you first install nvm and use nvm to install Node.js. On Windows, I recommend you use Chocolatey.

After ensuring you have a recent version of Node.js installed, create a folder for your project.

mkdir guitar-inventory
cd guitar-inventory

Use npm to initialize a package.json file.

npm init -y

Hello, world!

In this sample application, Express is used to serve web pages and implement an API. Dependencies are installed using npm. Add Express to your project with the following command.

npm install express

Next, open the project in your editor of choice.

If you don’t already have a favorite code editor, I use and recommend Visual Studio Code. VS Code has exceptional support for JavaScript and Node.js, such as smart code completing and debugging, and there’s a vast library of free extensions contributed by the community.

Create a folder named src. In this folder, create a file named index.js. Open the file and add the following JavaScript.

const express = require( "express" );
const app = express();
const port = 8080; // default port to listen

// define a route handler for the default home page
app.get( "/", ( req, res ) => {
    res.send( "Hello world!" );
} );

// start the Express server
app.listen( port, () => {
    console.log( `server started at http://localhost:${ port }` );
} );

Next, update package.json to instruct npm on how to run your application. Change the main property value to point to src/index.js, and add a start script to the scripts object.

  "main": "src/index.js",
  "scripts": {
    "start": "node .",
    "test": "echo "Error: no test specified" && exit 1"
  },

Now, from the terminal or command line, you can launch the application.

npm run start

If all goes well, you should see this message written to the console.

server started at http://localhost:8080

Launch your browser and navigate to http://localhost:8080. You should see the text “Hello world!”

Hello World

Note: To stop the web application, you can go back to the terminal or command prompt and press CTRL+C.

Set Up Your Node.js Project to Use TypeScript

The first step is to add the TypeScript compiler. You can install the compiler as a developer dependency using the --save-dev flag.

npm install --save-dev typescript

The next step is to add a tsconfig.json file. This file instructs TypeScript how to compile (transpile) your TypeScript code into plain JavaScript.

Create a file named tsconfig.json in the root folder of your project, and add the following configuration.

{
    "compilerOptions": {
        "module": "commonjs",
        "esModuleInterop": true,
        "target": "es6",
        "noImplicitAny": true,
        "moduleResolution": "node",
        "sourceMap": true,
        "outDir": "dist",
        "baseUrl": ".",
        "paths": {
            "*": [
                "node_modules/*"
            ]
        }
    },
    "include": [
        "src/**/*"
    ]
}

Based on this tsconfig.json file, the TypeScript compiler will (attempt to) compile any files ending with .ts it finds in the src folder, and store the results in a folder named dist. Node.js uses the CommonJS module system, so the value for the module setting is commonjs. Also, the target version of JavaScript is ES6 (ES2015), which is compatible with modern versions of Node.js.

It’s also a great idea to add tslint and create a tslint.json file that instructs TypeScript how to lint your code. If you’re not familiar with linting, it is a code analysis tool to alert you to potential problems in your code beyond syntax issues.

Install tslint as a developer dependency.

npm install --save-dev typescript tslint

Next, create a new file in the root folder named tslint.json file and add the following configuration.

{
    "defaultSeverity": "error",
    "extends": [
        "tslint:recommended"
    ],
    "jsRules": {},
    "rules": {
        "trailing-comma": [ false ]
    },
    "rulesDirectory": []
}

Next, update your package.json to change main to point to the new dist folder created by the TypeScript compiler. Also, add a couple of scripts to execute TSLint and the TypeScript compiler just before starting the Node.js server.

  "main": "dist/index.js",
  "scripts": {
    "prebuild": "tslint -c tslint.json -p tsconfig.json --fix",
    "build": "tsc",
    "prestart": "npm run build",
    "start": "node .",
    "test": "echo "Error: no test specified" && exit 1"
  },

Finally, change the extension of the src/index.js file from .js to .ts, the TypeScript extension, and run the start script.

npm run start

Note: You can run TSLint and the TypeScript compiler without starting the Node.js server using npm run build.

TypeScript errors

Oh no! Right away, you may see some errors logged to the console like these.

ERROR: /Users/reverentgeek/Projects/guitar-inventory/src/index.ts[12, 5]: Calls to 'console.log' are not allowed.

src/index.ts:1:17 - error TS2580: Cannot find name 'require'. Do you need to install type definitions for node? Try `npm i @types/node`.

1 const express = require( "express" );
                  ~~~~~~~

src/index.ts:6:17 - error TS7006: Parameter 'req' implicitly has an 'any' type.

6 app.get( "/", ( req, res ) => {
                  ~~~

The two most common errors you may see are syntax errors and missing type information. TSLint considers using console.log to be an issue for production code. The best solution is to replace uses of console.log with a logging framework such as winston. For now, add the following comment to src/index.ts to disable the rule.

app.listen( port, () => {
    // tslint:disable-next-line:no-console
    console.log( `server started at http://localhost:${ port }` );
} );
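
If you’d rather address the rule than suppress it, here is a minimal sketch of what a winston-based replacement could look like. Note that winston is not part of this tutorial’s dependencies; you would install it first with npm install winston.

import winston from "winston";

// A minimal logger that writes to the console; add transports as needed
const logger = winston.createLogger( {
    transports: [ new winston.transports.Console() ]
} );

app.listen( port, () => {
    logger.info( `server started at http://localhost:${ port }` );
} );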

TypeScript prefers to use the import module syntax over require, so you’ll start by changing the first line in src/index.ts from:

const express = require( "express" );

to:

import express from "express";

Getting the right types

To assist TypeScript developers, library authors and community contributors publish companion libraries called TypeScript declaration files. Declaration files are published to the DefinitelyTyped open source repository, or sometimes found in the original JavaScript library itself.

Update your project so that TypeScript can use the type declarations for Node.js and Express.

npm install --save-dev @types/node @types/express

Next, rerun the start script and verify there are no more errors.

npm run start

Build a Better User Interface with Materialize and EJS

Your Node.js application is off to a great start, but perhaps not the best looking, yet. This step adds Materialize, a modern CSS framework based on Google’s Material Design, and Embedded JavaScript Templates (EJS), an HTML template language for Express. Materialize and EJS are a good foundation for a much better UI.

First, install EJS as a dependency.

npm install ejs

Next, make a new folder under /src named views. In the /src/views folder, create a file named index.ejs. Add the following code to /src/views/index.ejs.

<!DOCTYPE html>
<html>
<head>
    <meta charset="utf-8" />
    <meta http-equiv="X-UA-Compatible" content="IE=edge">
    <title>Guitar Inventory</title>
    <meta name="viewport" content="width=device-width, initial-scale=1">
    <link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/materialize/1.0.0/css/materialize.min.css">
    <link rel="stylesheet" href="https://fonts.googleapis.com/icon?family=Material+Icons">
</head>
<body>
    <div class="container">
        <h1 class="header">Guitar Inventory</h1>
        <a class="btn" href="/guitars"><i class="material-icons right">arrow_forward</i>Get started!</a>
    </div>
</body>
</html>

Update /src/index.ts with the following code.

import express from "express";
import path from "path";
const app = express();
const port = 8080; // default port to listen

// Configure Express to use EJS
app.set( "views", path.join( __dirname, "views" ) );
app.set( "view engine", "ejs" );

// define a route handler for the default home page
app.get( "/", ( req, res ) => {
    // render the index template
    res.render( "index" );
} );

// start the express server
app.listen( port, () => {
    // tslint:disable-next-line:no-console
    console.log( `server started at http://localhost:${ port }` );
} );

Add an asset build script for TypeScript

The TypeScript compiler does the work of generating the JavaScript files and copies them to the dist folder. However, it does not copy the other types of files the project needs to run, such as the EJS view templates. To accomplish this, create a build script that copies all the other files to the dist folder.

Install the needed modules and TypeScript declarations using these commands.

npm install --save-dev ts-node shelljs fs-extra nodemon rimraf npm-run-all
npm install --save-dev @types/fs-extra @types/shelljs

Here is a quick overview of the modules you just installed.

  1. ts-node. Use to run TypeScript files directly.
  2. shelljs. Use to execute shell commands such as to copy files and remove directories.
  3. fs-extra. A module that extends the Node.js file system (fs) module with features such as reading and writing JSON files.
  4. rimraf. Use to recursively remove folders.
  5. npm-run-all. Use to execute multiple npm scripts sequentially or in parallel.
  6. nodemon. A handy tool for running Node.js in a development environment. Nodemon watches files for changes and automatically restarts the Node.js application when changes are detected. No more stopping and restarting Node.js!

Make a new folder in the root of the project named tools. Create a file in the tools folder named copyAssets.ts. Copy the following code into this file.

import * as shell from "shelljs";

// Copy all the view templates
shell.cp( "-R", "src/views", "dist/" );

Update npm scripts

Update the scripts in package.json to the following code.

  "scripts": {
    "clean": "rimraf dist/*",
    "copy-assets": "ts-node tools/copyAssets",
    "lint": "tslint -c tslint.json -p tsconfig.json --fix",
    "tsc": "tsc",
    "build": "npm-run-all clean lint tsc copy-assets",
    "dev:start": "npm-run-all build start",
    "dev": "nodemon --watch src -e ts,ejs --exec npm run dev:start",
    "start": "node .",
    "test": "echo "Error: no test specified" && exit 1"
  },

Note: If you are not familiar with using npm scripts, they can be very powerful and useful to any Node.js project. Scripts can be chained together in several ways. One way to chain scripts together is to use the pre and post prefixes. For example, if you have one script labeled start and another labeled prestart, executing npm run start at the terminal will first run prestart, and only after it successfully finishes does start run.
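
For example, with a hypothetical pair of scripts like the following, typing npm run start first executes prestart and then start:

  "scripts": {
    "prestart": "echo About to start the server...",
    "start": "node ."
  },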

Now run the application and navigate to http://localhost:8080.

npm run dev

Guitar Inventory home page

The home page is starting to look better! Of course, the Get Started button leads to a disappointing error message. No worries! The fix for that is coming soon!

A Better Way to Manage Configuration Settings in Node.js

Node.js applications typically use environment variables for configuration. However, managing environment variables can be a chore. A popular module for managing application configuration data is dotenv.

Install dotenv as a project dependency.

npm install dotenv
npm install --save-dev @types/dotenv

Create a file named .env in the root folder of the project, and add the following code.

# Set to production when deploying to production
NODE_ENV=development

# Node.js server configuration
SERVER_PORT=8080

Note: When using a source control system such as git, do not add the .env file to source control. Each environment requires a custom .env file. It is recommended you document the values expected in the .env file in the project README or a separate .env.sample file.

Now, update src/index.ts to use dotenv to configure the application server port value.

import dotenv from "dotenv";
import express from "express";
import path from "path";

// initialize configuration
dotenv.config();

// port is now available to the Node.js runtime 
// as if it were an environment variable
const port = process.env.SERVER_PORT;

const app = express();

// Configure Express to use EJS
app.set( "views", path.join( __dirname, "views" ) );
app.set( "view engine", "ejs" );

// define a route handler for the default home page
app.get( "/", ( req, res ) => {
    // render the index template
    res.render( "index" );
} );

// start the express server
app.listen( port, () => {
    // tslint:disable-next-line:no-console
    console.log( `server started at http://localhost:${ port }` );
} );

You will use the .env for much more configuration information as the project grows.

Easily Add Authentication to Node and Express

Adding user registration and login (authentication) to any application is not a trivial task. The good news is Okta makes this step very easy. To begin, create a free developer account with Okta. First, navigate to developer.okta.com and click the Create Free Account button, or click the Sign Up button.

Sign up for free account

After creating your account, click the Applications link at the top, and then click Add Application.

Create application

Next, choose a Web Application and click Next.

Create a web application

Enter a name for your application, such as Guitar Inventory. Verify the port number is the same as configured for your local web application. Then, click Done to finish creating the application.

Application settings

Copy and paste the following code into your .env file.

# Okta configuration
OKTA_ORG_URL=https://{yourOktaDomain}
OKTA_CLIENT_ID={yourClientId}
OKTA_CLIENT_SECRET={yourClientSecret}

In the Okta application console, click on your new application’s General tab, and find near the bottom of the page a section titled “Client Credentials.” Copy the Client ID and Client secret values and paste them into your .env file to replace {yourClientId} and {yourClientSecret}, respectively.

Client credentials

Enable self-service registration

One of the great features of Okta is allowing users of your application to sign up for an account. By default, this feature is disabled, but you can easily enable it. First, click on the Users menu and select Registration.

User registration

  1. Click on the Edit button.
  2. Change Self-service registration to Enabled.
  3. Click the Save button at the bottom of the form.

Enable self-registration

Secure your Node.js application

The last step to securing your Node.js application is to configure Express to use the Okta OpenId Connect (OIDC) middleware.

npm install @okta/oidc-middleware express-session
npm install --save-dev @types/express-session

Next, update your .env file to add a HOST_URL and SESSION_SECRET value. You may change the SESSION_SECRET value to any string you wish.

# Node.js server configuration
SERVER_PORT=8080
HOST_URL=http://localhost:8080
SESSION_SECRET=MySuperCoolAndAwesomeSecretForSigningSessionCookies

Create a folder under src named middleware. Add a file to the src/middleware folder named sessionAuth.ts. Add the following code to src/middleware/sessionAuth.ts.

import { ExpressOIDC } from "@okta/oidc-middleware";
import session from "express-session";

export const register = ( app: any ) => {
    // Create the OIDC client
    const oidc = new ExpressOIDC( {
        client_id: process.env.OKTA_CLIENT_ID,
        client_secret: process.env.OKTA_CLIENT_SECRET,
        issuer: `${ process.env.OKTA_ORG_URL }/oauth2/default`,
        redirect_uri: `${ process.env.HOST_URL }/authorization-code/callback`,
        scope: "openid profile"
    } );

    // Configure Express to use authentication sessions
    app.use( session( {
        resave: true,
        saveUninitialized: false,
        secret: process.env.SESSION_SECRET
    } ) );

    // Configure Express to use the OIDC client router
    app.use( oidc.router );

    // add the OIDC client to the app.locals
    app.locals.oidc = oidc;
};

At this point, if you are using a code editor like VS Code, you may see TypeScript complaining about the @okta/oidc-middleware module. At the time of this writing, this module does not yet have an official TypeScript declaration file. For now, create a file in the src folder named global.d.ts and add the following code.

declare module "@okta/oidc-middleware";

Refactor routes

As the application grows, you will add many more routes. It is a good idea to define all the routes in one area of the project. Make a new folder under src named routes. Add a new file to src/routes named index.ts. Then, add the following code to this new file.

import * as express from "express";

export const register = ( app: express.Application ) => {
    const oidc = app.locals.oidc;

    // define a route handler for the default home page
    app.get( "/", ( req: any, res ) => {
        res.render( "index" );
    } );

    // define a secure route handler for the login page that redirects to /guitars
    app.get( "/login", oidc.ensureAuthenticated(), ( req, res ) => {
        res.redirect( "/guitars" );
    } );

    // define a route to handle logout
    app.get( "/logout", ( req: any, res ) => {
        req.logout();
        res.redirect( "/" );
    } );

    // define a secure route handler for the guitars page
    app.get( "/guitars", oidc.ensureAuthenticated(), ( req: any, res ) => {
        res.render( "guitars" );
    } );
};

Next, update src/index.ts to use the sessionAuth and routes modules you created.

import dotenv from "dotenv";
import express from "express";
import path from "path";
import * as sessionAuth from "./middleware/sessionAuth";
import * as routes from "./routes";

// initialize configuration
dotenv.config();

// port is now available to the Node.js runtime
// as if it were an environment variable
const port = process.env.SERVER_PORT;

const app = express();

// Configure Express to use EJS
app.set( "views", path.join( __dirname, "views" ) );
app.set( "view engine", "ejs" );

// Configure session auth
sessionAuth.register( app );

// Configure routes
routes.register( app );

// start the express server
app.listen( port, () => {
    // tslint:disable-next-line:no-console
    console.log( `server started at http://localhost:${ port }` );
} );

Next, create a new file for the guitar list view template at src/views/guitars.ejs and enter the following HTML.

<!DOCTYPE html>
<html>
<head>
    <meta charset="utf-8" />
    <meta http-equiv="X-UA-Compatible" content="IE=edge">
    <title>Guitar Inventory</title>
    <meta name="viewport" content="width=device-width, initial-scale=1">
    <link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/materialize/1.0.0/css/materialize.min.css">
    <link rel="stylesheet" href="https://fonts.googleapis.com/icon?family=Material+Icons">
</head>
<body>
    <div class="container">
        <h1 class="header">Guitar Inventory</h1>
        <p>Your future list of guitars!</p>
    </div>
</body>
</html>

Finally, run the application.

npm run dev

Note: To verify authentication is working as expected, open a new browser or use a private/incognito browser window.

Click the Get Started button. If everything goes well, log in with your Okta account, and Okta should automatically redirect you back to the “Guitar List” page!

Okta login

Add a Navigation Menu to Your Node + Typescript App

With authentication working, you can take advantage of the user profile information returned from Okta. The OIDC middleware automatically attaches a userContext object and an isAuthenticated() function to every request. This userContext has a userinfo property that contains information that looks like the following object.

{ 
  sub: '00abc12defg3hij4k5l6',
  name: 'First Last',
  locale: 'en-US',
  preferred_username: 'account@company.com',
  given_name: 'First',
  family_name: 'Last',
  zoneinfo: 'America/Los_Angeles',
  updated_at: 1539283620 
}

The first step is to get the user profile object and pass it to the views as data. Update src/routes/index.ts with the following code.

import * as express from "express";

export const register = ( app: express.Application ) => {
    const oidc = app.locals.oidc;

    // define a route handler for the default home page
    app.get( "/", ( req: any, res ) => {
        const user = req.userContext ? req.userContext.userinfo : null;
        res.render( "index", { isAuthenticated: req.isAuthenticated(), user } );
    } );

    // define a secure route handler for the login page that redirects to /guitars
    app.get( "/login", oidc.ensureAuthenticated(), ( req, res ) => {
        res.redirect( "/guitars" );
    } );

    // define a route to handle logout
    app.get( "/logout", ( req: any, res ) => {
        req.logout();
        res.redirect( "/" );
    } );

    // define a secure route handler for the guitars page
    app.get( "/guitars", oidc.ensureAuthenticated(), ( req: any, res ) => {
        const user = req.userContext ? req.userContext.userinfo : null;
        res.render( "guitars", { isAuthenticated: req.isAuthenticated(), user } );
    } );
};

Make a new folder under src/views named partials. Create a new file in this folder named nav.ejs. Add the following code to src/views/partials/nav.ejs.

<nav>
    <div class="nav-wrapper">
        <a href="/" class="brand-logo"><% if ( user ) { %><%= user.name %>'s <% } %>Guitar Inventory</a>
        <ul id="nav-mobile" class="right hide-on-med-and-down">
            <li><a href="/guitars">My Guitars</a></li>
            <% if ( isAuthenticated ) { %>
            <li><a href="/logout">Logout</a></li>
            <% } %>
            <% if ( !isAuthenticated ) { %>
            <li><a href="/login">Login</a></li>
            <% } %>
        </ul>
    </div>
</nav>

Modify the src/views/index.ejs and src/views/guitars.ejs files. Immediately following the <body> tag, insert the following code.

<body>
    <% include partials/nav %>
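
A small compatibility note: <% include partials/nav %> is the legacy include syntax from EJS 2. If your project ends up on EJS 3 or newer, use the function form <%- include('partials/nav') %> instead.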

With these changes in place, your application now has a navigation menu at the top that changes based on the login status of the user.

(Screenshot: the new navigation menu)

Create an API with Node and PostgreSQL

The next step is to add the API to the Guitar Inventory application. However, before moving on, you need a way to store data.

Create a PostgreSQL database

This tutorial uses PostgreSQL. To make things easier, use Docker to set up an instance of PostgreSQL. If you don’t already have Docker installed, you can follow the install guide.

Once you have Docker installed, run the following command to download the latest PostgreSQL container.

docker pull postgres:latest

Now, run this command to create an instance of a PostgreSQL database server. Feel free to change the administrator password value.

docker run -d --name guitar-db -p 5432:5432 -e 'POSTGRES_PASSWORD=p@ssw0rd42' postgres

Note: If you already have PostgreSQL installed locally, you will need to change the -p parameter to map port 5432 to a different port that does not conflict with your existing instance of PostgreSQL.

Here is a quick explanation of the previous Docker parameters.

  • -d – This launches the container in detached mode, so it runs in the background.
  • --name – This gives your Docker container a friendly name, which is useful for stopping and starting containers.
  • -p – This maps the host (your computer) port 5432 to the container’s port 5432. PostgreSQL, by default, listens for connections on TCP port 5432.
  • -e – This sets an environment variable in the container. In this example, the administrator password is p@ssw0rd42. You can change this value to any password you desire.
  • postgres – This final parameter tells Docker to use the postgres image.

Note: If you restart your computer, you may need to restart the Docker container. You can do that using the docker start guitar-db command.
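
If you want to confirm that the new container is up and accepting connections, one quick sanity check is to open a psql session inside it with docker exec -it guitar-db psql -U postgres (the container name guitar-db comes from the --name flag above; type \q to exit).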

Install the PostgreSQL client modules and type declarations using the following commands.

npm install pg pg-promise
npm install --save-dev @types/pg

Database configuration settings

Add the following settings to the end of the .env file.

# Postgres configuration
PGHOST=localhost
PGUSER=postgres
PGDATABASE=postgres
PGPASSWORD=p@ssw0rd42
PGPORT=5432

Note: If you changed the database administrator password, be sure to replace the default p@ssw0rd42 with that password in this file.

Add a database build script

You need a build script to initialize the PostgreSQL database. This script should read in a .pgsql file and execute the SQL commands against the local database.

In the tools folder, create two files: initdb.ts and initdb.pgsql. Copy and paste the following code into initdb.ts.
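
As a rough sketch of such a script (an assumption on my part, built from the pg-promise and dotenv packages installed above and relying on the pg driver picking up the PG* variables from the environment), initdb.ts might look something like this:

import dotenv from "dotenv";
import fs from "fs";
import path from "path";
import pgPromise from "pg-promise";

// load the PG* settings from the .env file
dotenv.config();

const pgp = pgPromise();

// with no explicit connection details, the underlying pg driver falls back
// to the PGHOST, PGUSER, PGDATABASE, PGPASSWORD, and PGPORT environment variables
const db = pgp( {} );

const initDb = async () => {
    // read the SQL commands from the companion .pgsql file...
    const sql = fs.readFileSync( path.join( __dirname, "initdb.pgsql" ), "utf8" );
    // ...and execute them against the local database
    await db.none( sql );
    pgp.end(); // close the connection pool so the script can exit
};

initDb().then( () => {
    // tslint:disable-next-line:no-console
    console.log( "database initialized" );
} ).catch( ( err ) => {
    // tslint:disable-next-line:no-console
    console.error( err );
    process.exit( 1 );
} );

The companion initdb.pgsql file would then hold the plain SQL to run, such as the CREATE TABLE statement for the guitar inventory.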

The post How to Use TypeScript to Build a Node API with Express appeared first on SitePoint.


Source: SitePoint

Elements To Ditch Or Repurpose On Mobile

Suzanne Scacca

2018-12-06T14:00:02+01:00
2018-12-06T13:01:57+00:00

With the end of the year quickly approaching, everyone is chiming in with predictions for 2019 web design trends. For the most part, I think these predictions look quite similar to the ones made for 2018 — which is surprising.

As we move deeper into the mobile-first territory, we can’t adhere to the same predictions that made sense for websites viewed on desktop. We, of course, can’t forget about the desktop experience, but it needs to take a backseat to mobile. This is why I wish 2019 predictions (and beyond) would be more practical in nature.

We need to design websites primarily with mobile users in mind, which means having a more efficient system of content delivery. Rather than spend the next year or so adding more design techniques to our repertoire, maybe we should be taking some away?

As the abstract expressionist painter Hans Hofmann said:

“The ability to simplify means to eliminate the unnecessary so that the necessary may speak.”

So, today, I’m going to talk about the mobile design elements we’ve held onto for a little too long and what you should do about them going forward.

Why Do We Need To Get Rid Of Mobile Design Elements In 2019?

Although responsive design and minimalism have inched us closer to the desired effect of mobile first, I don’t think they’ve taken us as far as we can go. And part of that is because we’re reluctant to let go of design elements that have been with us for a long time. They might seem essential, but I suspect that many of them can be removed from websites without harming the experience.

This is why: On desktop, there’s a lot of room to play with. Even if you don’t populate every inch of the screen with content, you find creative ways to use the space. With mobile, you’ve drastically reduced the real estate. One of the biggest side effects of this is the amount of scrolling that mobile visitors have to do.

Why does this matter?

A 2018 study from the Nielsen Norman Group on scrolling and attention demonstrates that users do scroll, but not far: 57% of page-viewing time is spent above the fold, and 74% of all viewing time occurs within the first two screenfuls.


Percentage of views per scroll
NNG stats on how much content is consumed on a page (Source: Nielsen Norman Group)

If you try to fit all of those extraneous design elements from the traditional desktop experience into the mobile one, there’s a good chance your visitors won’t ever encounter them.

Although a longer scroll on mobile might be easy enough to execute, you might also find your visitors suffer from scrolling fatigue. My suggestion is to delete design elements on mobile that create excessive scrolls and, consequently, test visitors’ patience.

4 Mobile Design Elements You Should Ditch In 2019

If we’re not going to drastically change web design trends from 2018 to 2019, then now is a great time to clean up the mobile web experience. If you’re looking to increase time spent on site as well as conversion rates, a sleeker and more efficient experience will greatly improve your mobile web designs.

In order to explain which mobile design elements you should ditch this year, I’m going to pit the desktop and mobile experiences against one another. This way, you get a sense of why each element needs to go on mobile.

1. Sidebars

A sidebar has been a handy web design element for blogs and other news authorities for a long time. However, with responsive and mobile-first design taking over, the sidebar now tends to just get shoved to the very bottom of blog posts. But is that the best place for it?

The Blonde Abroad is an example of a blog that moves most of its sidebar content to the bottom of each post.

Here is how a post appears on desktop:


The Blonde Abroad on desktop
The Blonde Abroad blog sidebar on desktop (Source: The Blonde Abroad)

Note that this isn’t the end of the sidebar either; there are a number of other widgets below the ones shown in this screenshot, which is why the mobile counterpart runs on far too long:


The Blonde Abroad on mobile
The Blonde Abroad blog sidebar on mobile (Source: The Blonde Abroad)

What you’re seeing here isn’t a cool social media-centric page. This is what mobile users find after they scroll past:

  • Ads,
  • A promotion of her web store,
  • Recommended/related posts,
  • A subscriber form,
  • A comment form.

The Instagram feed then shows up, followed by the subscriber form once again! All in all, the post’s actual content ends about halfway down the page; the rest is filled with self-promotional material. It’s just way too much.

If Instagram is that prominent a platform for her, she should link to it from the header. I would also suggest cutting back on the number of forms on the mobile pages; three forms (two of which are duplicates) are excessive. And I’d probably recommend turning the recommended posts, with their images and titles, into plain-text links.

An example of an authority site that handles sidebars well is the MarketingSherpa blog. As you can see here, there is a fairly dense sidebar included in the desktop experience.


MarketingSherpa desktop sidebar
The MarketingSherpa desktop sidebar (Source: MarketingSherpa)

Turn your attention to mobile, however, and the sidebar disappears completely. Instead, you’ll encounter a super lightweight experience:


MarketingSherpa mobile sidebar
The MarketingSherpa mobile sidebar (Source: MarketingSherpa)

Below each post on the blog, you’ll find a succinct list of links recommended by the author. There is also a Previous/Next widget that enables readers to quickly move to the next published post. It’s a great way to keep readers moving through the site without having to make a mobile web page unnecessarily long.

2. Modal Pop-ups

I know that mobile pop-ups aren’t dying, at least so far as Google is concerned. But intrusive pop-ups aside, does the traditional pop-up have a place on mobile anymore? If we’re really thinking about ways to optimize the user experience, wouldn’t it make sense to do away with the modal altogether?

Here’s an example from Akamai that I’m shocked even exists:


A mobile pop-up on Akamai site
The top of a mobile pop-up on the Akamai website (Source: Akamai)

While perusing one of the internal pages of the mobile site, this pop-up appeared on my screen. At first, I thought, “Oh, cool! A pop-up with a graphic and statistic.” But then I read it and realized it was a scrolling pop-up!


A very long mobile pop-up on Akamai site
The bottom of a mobile pop-up on the Akamai website (Source: Akamai)

I’m honestly not sure I’ve seen one of these before, but I think it’s the perfect example of why modal mobile pop-ups are never a great idea. In addition to blocking the content of the site almost completely, the pop-up requires the visitor to do work in order to see the whole message.

I ran into another example of a bad pop-up. This one is on the Paul Mitchell website:


Duplicate pop-up promotion on Paul Mitchell
A Paul Mitchell pop-up matches the main header graphic (Source: Paul Mitchell)

I thought it was an odd choice to place the same promotion in both the pop-up and the scrolling hero image. On desktop, however, it’s easy enough to dismiss, since it’s clear which is the pop-up and which is the image.

On mobile, it’s not that easy to distinguish:


Duplication of mobile pop-up on Paul Mitchell
A confusing duplication of a mobile ad or pop-up on the Paul Mitchell site (Source: Paul Mitchell)

If I hadn’t seen the matching pop-up on desktop, I likely would’ve thought this web page had an error upon first seeing the duplication. It also doesn’t help that the hero banner now has an arrow icon in a black box, which could easily be confused for the “X” that closes out the matching pop-up.

It’s a very odd design choice and one I’d tell everyone else to stay away from. Not only does the pop-up appear instantly on the home page (which is a no-no), but it creates a confusing first impression. It might not be the traditional modal, but it still looks bad.

Switching gears, the Four Seasons website does a very nice job of handling its pop-ups. Here is the desktop pop-up widget:


Interactive pop-up widget on Four Seasons
An interactive pop-up offer for the Four Seasons (Source: Four Seasons)

Click on the pop-up, and it will open a full-screen pop-up offer. This is a nice touch as it gives the visitor full control over whether they want to see the pop-up or not.


Interactive pop-up widget expands on Four Seasons
An interactive pop-up offer is revealed for the Four Seasons (Source: Four Seasons)

The mobile pop-up counterpart does something similar:


Interactive pop-up on mobile Four Seasons
An interactive pop-up offer appears on the Four Seasons mobile site (Source: Four Seasons)

The pop-up offer sits snug against the header, never intruding on the experience of the mobile site.


Interactive pop-up expands on mobile Four Seasons
An interactive pop-up offer expands on the Four Seasons mobile site (Source: Four Seasons)

Even once the pop-up is clicked, it never blocks the mobile website from view. It only pushes the content down further on the page. It’s simply designed, easy to follow and gives all of the control over to the mobile user in terms of engagement. It’s a great design choice and one I’d like to see more mobile designers use when designing pop-up elements going forward.

3. Sticky Side Elements

I think a sticky navigation bar or bottom bar on a mobile website is a brilliant idea. As we’ve already seen, visitors are willing to scroll on a website. But visitors are more likely to scroll further down a page if they have an easy way to go somewhere else — to another page, to check out, to a special discount offer, etc.

That said, I’m not a fan of sticky elements on the side of mobile websites. On desktop, they work well. They’re typically tiny icons or widgets that stick to the side or bottom corner of the site. They’re boldly colored, easy to recognize and give visitors the choice of interacting when they’re ready.

On mobile, however, sticky side elements are a bad idea.

Let’s take a look at the Sofitel website, for example.


Sofitel Feedback widget
Feedback widget sticks to side of Sofitel desktop site (Source: Sofitel)

As you can see, there’s an orange “Feedback” button stuck to the left side of the screen. As you scroll down the page, it stays put, making it convenient for visitors to drop the developer a line if something goes wrong.

Here’s how that same button appears on mobile:


Sofitel Feedback widget on mobile
Feedback widget covers content on Sofitel mobile site (Source: Sofitel)

Although the “Feedback” button is not always blocking content, there are occasions where it overlaps an image or text as a user scrolls. It might seem like a minor inconvenience, but it could easily be what takes a visitor from feeling annoyed or frustrated with a website to feeling completely over it.

Wreaths Across America is another example of a sticky element getting in the way. On desktop, the blue live chat widget is well-placed.


Wreaths Across America live chat
Wreaths Across America includes live chat widget on every page (Source: Wreaths Across America)

Then, move over to mobile, and the live chat button sits in the bottom-right corner, where it continuously covers a decent amount of content.


Wreaths Across America live chat on mobile
Wreaths Across America live chat covers mobile content (Source: Wreaths Across America)

If your visitors aren’t actively engaging with live chat or other sticky side elements on mobile (and your statistics should tell you this), don’t leave them there. Or, at the very least, present an easy way to dismiss them.

One way to get around the sticky overlap issue is the solution BuzzFeed has chosen.

In recent years, many websites utilized floating and sticky social media icons. It was a logical choice as you never knew how long it would take for readers to decide that they just had to share your web page or post with their social media connections.


Sticky social and share icons on BuzzFeed
BuzzFeed sticks social media and sharing icons to the bottom of the screen (Source: BuzzFeed)

As we’ve seen with the live chat and Feedback widgets above, elements that stick to the side of the screen just don’t work on mobile. Instead, we should look to what BuzzFeed has done here and make those icons stick flush with the bottom of the screen.

We already know that sticky navigation and bottom bars stay out of the way of content, so let’s use these key areas of the mobile device to place sticky elements we want people to engage with.

4. Content

It’s not just these extraneous design elements or outliers that you should think about removing from the mobile experience. I believe there are times when the content itself doesn’t need to be there.

If you want to get visitors to the crux of your message in just a few scrolls, you can’t be afraid to cut out content that isn’t 100% necessary.

I think ads are among the worst offenders here, and TechRepublic has a particularly nasty example on both desktop and mobile.


Oversized banner ad on TechRepublic
An oversized banner ad on the TechRepublic desktop site (Source: TechRepublic)

This is what the TechRepublic desktop website looks like when you first visit it. That alone is horrendous. Why does anyone place ad banners above the header anymore? And why does this one have to be so large? Shouldn’t TechRepublic’s logo and navigation be the first thing people see?

It was my hope that, upon visiting the mobile site, the ad would’ve gone away. Sadly, that wasn’t the case.


Oversized banner ad on TechRepublic mobile
An oversized banner ad on the TechRepublic mobile site (Source: TechRepublic)

What we have here is a Best Buy ad that takes up roughly a third of the TechRepublic mobile home page. Sure, once a visitor scrolls down, it will go away. But where do you think visitors’ eyes will go to first? I’m willing to bet some of them will see the logo in the top left and wonder how the heck they ended up on the Best Buy website.

This is one of those times when it’s best to rethink your monetization strategy if it’s going to intrude on and confuse the mobile user’s experience.

Now, let’s look at the good.

Kohl’s has a pretty standard product page for an e-commerce website:


Kohl’s desktop products
Kohl’s product page on desktop (Source: Kohl’s)

When displayed on mobile, however, you’ll find that the product views disappear:


Kohl’s mobile product views
Kohl’s product page on mobile (Source: Kohl’s)

Instead of trying to make room for them, the different product views are hidden under a slider. This is a nice choice if you would prefer not to compromise on how much content is displayed — especially if it’s essential to selling the product.

Another great example of picking and choosing your battles when it comes to displaying content on mobile comes from The Blonde Abroad.

Readers of her blog can choose content based on the global destination, as shown here on the desktop website:


The Blonde Abroad desktop search
The Blonde Abroad includes a searchable map on desktop (Source: The Blonde Abroad)

It’s a pretty neat search function, especially since it places the content within the context of an actual map.

Rather than try to force a graphic like this to fit to mobile, The Blonde Abroad includes only the essentials needed to conduct a search:


The Blonde Abroad mobile search
The Blonde Abroad includes only the standard search on mobile (Source: The Blonde Abroad)

While mobile readers might miss out on the mapped content, this provides a much more streamlined experience. Mobile users don’t want to have to scroll left and right, up and down, in order to search for content from an oversized graphic. At its core, this section of the site is about search. And, on mobile, this clean presentation of search options is enough to impress readers and inspire them to read more.

Wrapping Up

In Stephen King’s guide to writing, On Writing, he says something to this effect:

“Create your content. Then, review it and delete 10% of what you created.”

Granted, this applies to writing a story, but the same logic applies to designing a mobile website. In other words: why test your visitors’ patience or, even worse, create such a cumbersome experience that they miss the most important parts of it? Go ahead and translate the idea you had for the traditional desktop landscape into a mobile setting. Then, review it on mobile and gut all of the unnecessary bits of content or design elements.



Source: Smashing Magazine