Amelia: The Next-level WordPress Booking Plugin

This article was created in partnership with BAWMedia. Thank you for supporting the partners who make SitePoint possible.

Booking appointments would seem to be a minor task, but managing appointments takes up more time, energy, and money than you might imagine, especially in very active businesses like consulting firms, health and fitness clubs, and beauty salons.

Manual booking methods often work reasonably well, but they are by no means flawless. Mistakes will be made, appointments will slip through the cracks, and appropriate parties can fail to be properly informed when changes occur.

An excellent way to avoid these pitfalls is to simplify the booking process while at the same time making it totally reliable and error-free. That way has a name. It’s Amelia, and Amelia even works while you sleep.

Introducing Amelia, the Automated Booking Specialist

Its users will tell you that it was high time for a tool like Amelia, a WordPress plugin that makes and manages appointments without fail and does so 24/7. Amelia was created with individuals and businesses like coaches and personal trainers, private clinics and health clubs, consultants, and beauty salons in mind.

Amelia requires no technical expertise whatsoever to install, it’s easy to set up, and once that’s done it’s 100% automated. Clients can make appointments at any time day or night with just a few clicks, and you can set your calendar, view those appointments and manage your business team, flawlessly and from one place.

Install Amelia, and you’ve just replaced sticky notes, scurrying, and meeting attendance goofs with a reliable appointment system that operates flawlessly on autopilot.

Amelia’s Top Benefits


You’ll save more money than you’d think. You don’t pay Amelia hourly, nor is Amelia a salaried employee. You simply pay a one-time fee for what Amelia will do for you forever, and that fee is so low that your ROI is almost instantaneous. From the minute Amelia is set up and running, you’re saving money.

Source: Sitepoint

Sunshine All Day Every Day (August 2018 Wallpapers Edition)

Cosima Mielke

2018-07-31T13:11:56+02:00

Everybody loves a beautiful wallpaper to freshen up their desktops. So to cater for new and unique artworks on a regular basis, we embarked on our monthly wallpapers adventure nine years ago, and since then, countless artists and designers from all over the world have accepted the challenge and submitted their designs to it. It wasn’t any different this time around, of course.

This post features wallpapers created for August 2018. Each of them comes in versions with and without a calendar and can be downloaded for free. A big thank-you to everyone who participated!

Finally, as a little bonus, we also collected some “oldies but goodies” from previous August editions in this collection. Please note that they only come in a non-calendar version. Which one will make it to your desktop this month?

Please note that:

  • All images can be clicked on and lead to the preview of the wallpaper;
  • We respect and carefully consider the ideas and motivation behind each and every artist’s work. This is why we give all artists full freedom to explore their creativity and express their emotions and experience through their works. This is also why the themes of the wallpapers weren’t influenced by us in any way; rather, they were designed from scratch by the artists themselves.

Submit your wallpaper

We are always looking for creative designers and artists to be featured in our wallpapers posts. So if you have an idea for a wallpaper, please don’t hesitate to submit your design. We’d love to see what you’ll come up with. Join in! →

Purple Haze

“Meet Lucy: she lives in California, loves summer and sunbathing at the beach. This is our Jimi Hendrix Experience tribute. Have a lovely summer!” — Designed by PopArt Web Design from Serbia.

Purple Haze

Coffee Break Time

Designed by Ricardo Gimenes from Sweden.

Coffee Break Time

A Midsummer Night’s Dream

“Inspired by William Shakespeare.” — Designed by Sofie Lee from South Korea.

A Midsummer Night’s Dream

This August, Be The Best!

“Here is the August monthly calendar to remind you of your as well as your team’s success in the previous months. Congratulations, you guys deserved all the success that came your way. Hope you continue this success this month and in the coming months.” — Designed by Webandcrafts from India.

This August, Be The Best!

No Drama LLama

“Llamas are showing up everywhere around us, so why not on our desktops too?” — Designed by Melissa Bogemans from Belgium.

No Drama LLama

The Colors Of Life

“The countenance of the clown is a reflection of our own feelings and emotions of life in the most colorful way portrayed with a deeper and stronger expression whether it is a happy clown or a sad clown. The actions of the clown signify your uninhibited nature — the faces of life in its crudest form — larger, louder, and in an undiluted way.” — Designed by Acowebs from India.

The Colors Of Life

Hello August

“August brings me to summer, and summer brings me to fruit. In the hot weather there is nothing better than a fresh piece of fruit.” — Designed by Bram Wieringa from Belgium.

Hello August

Exploring Thoughts

“Thoughts, planning, daydreams are simply what minds do. It’s following the human impulse to explore the unexplored, question what doesn’t ring true, dig beneath the surface of what you think you know to formulate your own reality, and embrace the inherent ‘now’ of life. The main character here has been created blending texture and composition. Thoughts will never have an end.” — Designed by Sweans from London.

Exploring Thoughts

Chilling At The Beach

“In August it’s Relaxation Day on the 15th so that’s why I decided to make a wallpaper in which I showcase my perspective of relaxing. It’s a wallpaper where you’re just chilling at the beach with a nice cocktail and just looking at the sea and looking how the waves move. That is what I find relaxing! I might even dip my feet in the water and go for a swim if I’m feeling adventurous!” — Designed by Senne Mommens from Belgium.

Chilling At The Beach

Let Peace Reign

“The freedom and independence sprouts from unbiased and educated individuals that build the nation for peace, prosperity and happiness to reign in the country for healthy growth.” — Designed by Admission Zone from India.

Let Peace Reign

On The Ricefields Of Batad

“Somebody once told me that I should make the most out of vacation. So there I was, carefully walking on a stone ridge in the ricefields of Batad. This place is hidden high up in the mountains. Also August is harvesting season.” — Designed by Miguel Lammens from Belgium.

On The Ricefields Of Batad

Fantasy

Designed by Ilse van den Boogaart from The Netherlands.

Fantasy

Oldies But Goodies

The past nine years have brought forth lots of inspiring wallpapers, and, well, it’d be a pity to let them gather dust somewhere down in the archives. That’s why we once again dug out some goodies from past August editions that are bound to make a great fit on your desktop still today. Please note that these wallpapers, thus, don’t come with a calendar.

Happiness Happens In August

“Many people find August one of the happiest months of the year because of holidays. You can spend days sunbathing, swimming, birdwatching, listening to their joyful chirping, and indulging in sheer summer bliss. August 8th is also known as the Happiness Happens Day, so make it worthwhile.” — Designed by PopArt Studio from Serbia.

Happiness Happens In August

Psst, It’s Camping Time…

“August is one of my favorite months, when the nights are long and deep and crackling fire makes you think of many things at once and nothing at all at the same time. It’s about these heat and cold which allow you to touch the eternity for a few moments.” — Designed by Igor Izhik from Canada.

Psst, It’s Camping Time...

Bee Happy!

“August means that fall is just around the corner, so I designed this wallpaper to remind everyone to ‘bee happy’ even though summer is almost over. Sweeter things are ahead!” — Designed by Emily Haines from the United States.

Bee Happy!

Hello Again

“In Melbourne it is the last month of quite a cool winter so we are looking forward to some warmer days to come.” — Designed by Tazi from Australia.

Hello Again

A Bloom Of Jellyfish

“I love going to aquariums – the colours, patterns and array of blue hues attract the nature lover in me while still appeasing my design eye. One of the highlights is always the jellyfish tanks. They usually have some kind of light show in them, which makes the jellyfish fade from an intense magenta to a deep purple – and it literally tickles me pink. On a recent trip to uShaka Marine World, we discovered that the collective noun for jellyfish is a bloom and, well, it was love-at-first-collective-noun all over again. I’ve used some intense colours to warm up your desktop and hopefully transport you into the depths of your own aquarium.” — Designed by Wonderland Collective from South Africa.

A Bloom Of Jellyfish

Let Us Save The Tigers

“Let us take a pledge to save these endangered species and create a world that is safe for them to live and perish just like all creatures.” — Designed by Acodez IT Solutions from India.

Let Us Save The Tigers

Shades

“It’s sunny outside (at least in the Northern Hemisphere!), so don’t forget your shades!” — Designed by James Mitchell from the United Kingdom.

Shades

Ahoy

Designed by Webshift 2.0 from South Africa.

Monthly Quality Desktop Wallpaper - August 2012

About Everything

“I know what you’ll do this August. 🙂 Because August is about holiday. It’s about exploring, hiking, biking, swimming, partying, feeling and laughing. August is about making awesome memories and enjoying the summer. August is about everything. An amazing August to all of you!” — Designed by Ioana Bitin from Bucharest, Romania.

About Everything

Shrimp Party

“A nice summer shrimp party!” — Designed by Pedro Rolo from Portugal.

Shrimp Party

The Ocean Is Waiting

“In August, make sure you swim a lot. Be cautious though.” — Designed by Igor Izhik from Canada.

The Ocean Is Waiting

Oh La La… Paris Night

“I like the Paris night! All is very bright!” — Designed by Verónica Valenzuela from Spain.

Oh la la.... Paris night

World Alpinism Day

“International Day of Alpinism and Climbing.” Designed by cheloveche.ru from Russia.

World Alpinism Day

Estonian Summer Sun

“This is a moment from Southern Estonia that shows amazing summer nights.” Designed by Erkki Pung / Sviiter from Estonia.

Estonian Summer Sun

Aunt Toula At The Beach

“A memory from my childhood summer vacations.” — Designed by Poppie Papanastasiou from Greece.

Aunt Toula At The Beach

Flowing Creativity

Designed by Creacill, Carole Meyer from Luxembourg.

Flowing creativity

Searching for Higgs Boson

Designed by Vlad Gerasimov from Russia.

Monthly Quality Desktop Wallpaper - August 2012

Unforgettable Summer Night

Designed by BootstrapDash from India.

Unforgettable Summer Night

Join In Next Month!

Thank you to all designers for their participation. Join in next month!


Source: Smashing Magazine

5 Great HTML5 Video Players

There has been increasing demand from creators to develop their own custom video platforms that they can use to advance their advertising, marketing, or branding goals. Although YouTube and similar platforms are generally more convenient, hosting videos yourself and using a video player of your choice offers more control over how your videos are used.

Whether you are a YouTube video creator or a social media influencer, we’ve compiled a list of 5 of the best HTML5 video players. This list was compiled after taking a few important needs into consideration, such as:

  1. Fast and responsive
  2. Easy to install and use
  3. Compatibility across browsers
  4. Robust, all-round playlist options
  5. Ability to include an advertisement at various stages of the video – before, during or after
  6. Ability to integrate self-hosted videos with those from channels like YouTube, Dailymotion, Vimeo, etc.

Now that we have an idea what to look for, here is a list of the top 5 HTML5 Video Players.

VideoJS

VideoJS, an open-source HTML5 video player, is built using JavaScript and CSS, with optional support for Flash. Having Flash as a fallback option is especially helpful on browsers that do not support HTML5. It can also play Vimeo and YouTube sources.

Launched in the year 2010, VideoJS currently serves more than 400,000 websites across the internet. VideoJS is equally compatible on mobile devices as well as desktops.

Some of the top features of VideoJS include:

  1. Plugin Support: VideoJS supports multiple plugins for analytics, advertising and playlists, as well as advanced formats such as HLS and DASH. A full list of supported plugins can be found on the VideoJS plugin page.
  2. Skinning: Everything about VideoJS is customizable. You can easily customize the way it looks by editing the CSS style. Steve Heffernan has a codepen demo for customizing VideoJS skin that should help you get started.
  3. Ready adaptability to various plugins makes this player much more useful. Some sample plugins include:
    1. Analytics: Ability to track Google Analytics events from the VideoJS player
    2. Brand: You can add the logo of your brand to the player
    3. Playlist: Support for playlists
    4. Chromecast: Ability to cast a video to a device using a Chromecast device
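
To see how little is involved, here is a minimal sketch of initializing VideoJS. It assumes the Video.js script and stylesheet are already loaded and that the page contains a <video id="my-video" class="video-js"> element; the element id and the /media/intro.mp4 URL are illustrative only.

var player = videojs('my-video', {
    controls: true,
    autoplay: false,
    preload: 'auto'
});

// Point the player at a source and react to playback events
player.src({ type: 'video/mp4', src: '/media/intro.mp4' });
player.on('ended', function () {
    console.log('Playback finished');
});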

JW Player

JW Player has been around for ages and was one of the most popular Flash video players for the web. Later on, it extended its support to HTML5 video playback. JW Player is completely customizable, offers responsive HTML5 video, and has a wide variety of features, from analytics support to accessibility and full HTML5 video controls.

It is perhaps the best website video player thanks to its wide array of supported video solutions. JW Player also works very well as an HTML5 video player for WordPress websites, and it can be used as an alternative to YouTube’s video player. Interestingly, before Google purchased YouTube, the original YouTube video player was based on JW Player.

One of the key reasons JW Player stands above its peers in this category is the sheer number of features it provides via a number of different add-ons. These range from advertising partnerships to closed captions, as well as popular social networking tools.

As mentioned earlier, the player is completely customizable and supports a number of custom user-defined themes. It also comes with an integrated API. It has a number of different plugins to support the more popular CMSs, which makes integration fairly simple.
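
As a minimal setup sketch, and under the assumption that the JW Player library is loaded on the page and an empty <div id="player"></div> exists, a basic embed looks roughly like this; the file URL is illustrative and a real deployment also needs a valid license key.

var playerInstance = jwplayer('player').setup({
    file: '/media/intro.mp4',
    width: '100%',
    aspectratio: '16:9'
});

// The instance exposes an event API, e.g. for tracking completed plays
playerInstance.on('complete', function () {
    console.log('Playback finished');
});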

Kaltura HTML5 Video Player

Kaltura Player is a free-to-use, open-source HTML5 video player that can be used to create custom cross-browser, cross-device skins that match or complement the design of your website. The Kaltura player comes with numerous player templates to choose from.

Some of the key features include:

  1. Robust, all-round Performance
  2. Multi-platform support
  3. Advertising & Analytics: It supports most ad formats including VAST v. 3.0 as well as integrated plugins that can be used across a wide range of video ad networks. These include Google’s Doubleclick Ad Platform, FreeWheel, Eye Wonder, Ad Tech, Tremor Video, AdapTV and many more.

Flowplayer

Flowplayer is an extremely simple video player for creators who wish to include video playback on their websites. Integration and markup for Flowplayer are decidedly straightforward, which is one of its major benefits.

At the outset, it is important to note that Flowplayer is primarily aimed at creators who host video files independently. If creators are using a streaming service such as Vimeo or YouTube, those services already provide code that can be used to embed their own players onto a website or landing page.

Flowplayer is 100% customizable as well as skinnable, and comes with support for subtitles, playback speed control, video analytics, and monetization opportunities.

Source: Sitepoint

Speed Up Your WordPress Website with YOOtheme Pro

This article was created in partnership with YOOtheme. Thank you for supporting the partners who make SitePoint possible.

In July 2018, Google started ranking mobile search results according to mobile page speed. This makes PageSpeed optimization even more important than before. Fortunately, there are tools that can help you speed up your website. YOOtheme Pro, a new WordPress theme and page builder, ensures a high page speed ranking for your website thanks to its small, clean code base and the use of the latest web technologies. Here is how it works.

What is YOOtheme Pro?

For those of you who are not familiar with YOOtheme Pro, it is a powerful theme and page builder for WordPress and Joomla. These are its main features:

  • Intuitive drag & drop page builder
  • Element Library with 30+ elements (including Slider, Slideshow, etc.)
  • Layout Library with 100+ pre-built premium layouts
  • Style Library with 70+ handcrafted styles
  • Integrated Unsplash Library
  • WooCommerce support
  • Footer Builder
  • Fast and lightweight code base

How does YOOtheme Pro speed up your website?

YOOtheme Pro is built with the JavaScript library Vue.js and the front-end framework UIkit. Thanks to these two libraries, YOOtheme Pro is extremely modular and extendable, and its fast and lightweight JavaScript provides a great user experience. The page builder generates small, clean and semantic markup, which is optimized for fast loading times. So let’s take a closer look at the technologies YOOtheme Pro uses to speed up your browser’s rendering time.

Lazy Loading Images

Images make up most of your website’s total size, which can significantly impact loading times. To improve page speed and decrease server traffic, YOOtheme Pro uses lazy loading. This means that initially only above-the-fold images are fully loaded. Other images are loaded as they enter the viewport. To prevent content from jumping, an empty placeholder image is generated instantly. Your visitors will not even notice that images are lazy loaded, and the first meaningful paint will appear on screen faster. You can also lazy load video elements.
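
YOOtheme Pro handles all of this internally, but as a rough illustration of the general technique (not YOOtheme Pro’s actual code), lazy loading can be implemented with an IntersectionObserver: images start with a lightweight placeholder and a data-src attribute, and the real source is swapped in shortly before the image scrolls into view.

var lazyObserver = new IntersectionObserver(function (entries, observer) {
    entries.forEach(function (entry) {
        if (!entry.isIntersecting) return;
        var img = entry.target;
        img.src = img.dataset.src;   // swap in the real image
        observer.unobserve(img);     // stop watching once it has loaded
    });
}, { rootMargin: '200px' });         // start loading slightly before it becomes visible

document.querySelectorAll('img[data-src]').forEach(function (img) {
    lazyObserver.observe(img);
});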

YooTheme Pro Lazy Loading

Auto-generated srcsets

To make sure you get the best resolution on every device, YOOtheme Pro auto-generates srcsets. These are multiple versions of the same image in different sizes, each used for a specific device. When you upload an image in YOOtheme Pro’s page builder and set a width or a height value, YOOtheme Pro generates seven srcset images. The first two are 100% and 200% of the target size. The other five images match the most frequently used device widths: 768, 1024, 1366, 1600 and 1920 pixels. Of course, these are only generated if the image size allows it, which is why you should always upload images with the highest resolution possible. This feature will guarantee the best user experience from mobiles to retina displays.

While srcsets really improve performance, they are usually not served for background images used as section or column backgrounds. These images extend to the full width and are often quite large, so loading them on mobiles takes a lot of bandwidth. To solve this problem, YOOtheme Pro also generates srcsets for background images, which is great for your mobile page speed rank.

Next-gen Image Format

To save bandwidth, Google also recommends using next-generation image formats like WebP. This format has superior compression and quality characteristics compared to the most commonly used image formats, JPEG and PNG. Using WebP saves size and consumes less cellular data. YOOtheme Pro automatically generates and serves images in WebP format for Chrome browsers. If a visitor uses another browser, the original JPEG or PNG images will be served.
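
YOOtheme Pro makes the WebP-or-fallback decision for you when serving images. Purely for illustration (this is a common client-side check, not necessarily how YOOtheme Pro does it), WebP support can be detected in the browser like this:

function supportsWebP() {
    var canvas = document.createElement('canvas');
    if (!canvas.getContext || !canvas.getContext('2d')) return false;
    // Browsers that can encode WebP return a WebP data URL here
    return canvas.toDataURL('image/webp').indexOf('data:image/webp') === 0;
}

console.log('WebP supported:', supportsWebP());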

YooTheme Pro WebP Images

Local Google Fonts

YOOtheme Pro stores Google Fonts locally. When you select a Google font in YOOtheme Pro, the `woff` and `woff2` font files are downloaded to your server and included in the CSS. This not only helps with GDPR compliance, but it also largely improves the speed of your website, since there is no round trip to Google’s servers anymore. And if your web server supports HTTP/2, there is no round trip at all.

YooTheme Pro Local Google Fonts

Other Advanced Features

Apart from providing a fast user experience, YOOtheme Pro is also very developer-friendly. It allows you to override and extend everything, add custom elements, CSS and JavaScript, and even create new theme settings. YOOtheme offers extensive documentation on YOOtheme Pro, which includes a section specifically written for developers. There you will find information, tutorials and examples on custom assets, child themes, custom elements and much more.

Conclusion

As you can see, YOOtheme Pro is a very versatile theme and page builder for WordPress that really cares about speed. It integrates more optimizations for your Google PageSpeed rank than any other page builder on the WordPress market right now. It is a great tool for designers, giving them the power to create without writing any code, but it was also built with developers in mind, giving them the possibility to customize everything. YOOtheme Pro regularly releases theme packages on particular topics, each including a well-thought-out content structure. Beautiful layouts, various styles and free-to-use images, along with regular feature updates, make YOOtheme Pro the next WordPress theme and page builder to watch. So get YOOtheme Pro and try it out for yourself.

Source: Sitepoint

What Do You Need To Know When Converting A Flash Game Into HTML5?

Tomasz Grajewski

2018-07-30T14:00:26+02:00

With the rise of HTML5 usage, many companies have started redoing their most popular titles to get rid of outdated Flash and bring their products in line with the latest industry standards. This change is especially visible in the Gambling/Casino and Entertainment industries and has been happening for several years now, so a decent selection of titles has already been converted.

Unfortunately, when browsing the Internet, you can quite often stumble upon examples of a seemingly hasty job, which results in a lower-quality final product. That’s why it’s a good idea for game developers to dedicate some of their time to getting familiar with the subject of Flash to HTML5 conversion and to learning which mistakes to avoid before getting down to work.

Among the reasons for choosing JavaScript instead of Flash, apart from the obvious technical issues, is also the fact that changing your game design from SWF to JavaScript can yield a better user experience, which in turn gives the game a modern look. But how do you do it? Do you need a dedicated JavaScript game converter to get rid of this outdated technology? Well, Flash to HTML5 conversion can be a piece of cake — here’s how to take care of it.

Recommended reading: Principles Of HTML5 Game Design

How To Improve HTML5 Game Experience

Converting a game to another platform is an excellent opportunity to improve it, fix its issues, and increase the audience. Below are a few things that can easily be done and are worth considering:

  • Supporting mobile devices
    Converting from Flash to JavaScript lets you reach a broader audience (users of mobile devices); support for touchscreen controls usually needs to be implemented into the game, too. Luckily, both Android and iOS devices now also support WebGL, so 30 or 60 FPS rendering can usually be achieved with ease. In many cases 60 FPS won’t cause any problems, and this will only improve with time as mobile devices become more and more performant.

  • Improving performance
    When it comes to comparing ActionScript and JavaScript, the latter is faster than the former. Other than that, converting a game is a good occasion to revisit the algorithms used in the game code. With JavaScript game development you can optimize them or completely strip unused code left behind by the original developers.
  • Fixing bugs and making improvements to the gameplay
    Having new developers look into the game’s source code can help fix known bugs or discover new and very rare ones. This makes playing the game less irritating for players, which in turn makes them spend more time on your site and encourages them to try your other games.
  • Adding web analytics
    In addition to tracking the traffic, web analytics can also be used to gather knowledge on how players behave in a game and where they get stuck during gameplay.
  • Adding localization
    This would increase the audience and is important for kids from other countries playing your game. Or maybe your game is not in English and you want to add support for English?

Why Skipping HTML And CSS For In-Game UI Will Improve Game Performance

When it comes to JavaScript game development, it may be tempting to leverage HTML and CSS for in-game buttons, widgets, and other GUI elements. My advice is to be careful here. It’s counterintuitive, but leveraging DOM elements is actually less performant in complex games, and this matters even more on mobile. If you want to achieve a constant 60 FPS on all platforms, then giving up HTML and CSS may be required.

Non-interactive GUI elements, such as health bars, ammo bars, or score counters can be easily implemented in Phaser by using regular images (the Phaser.Image class), leveraging the .crop property for trimming and the Phaser.Text class for simple text labels.

Such interactive elements as buttons and checkboxes can be implemented by using the built-in Phaser.Button class. Other, more complex elements can be composed of different simple types, like groups, images, buttons and text labels.
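
Here is a minimal Phaser v2 sketch of the ideas above. The 'healthbar' and 'mute-button' asset keys are assumptions (they would have to be loaded in preload()), and the code is only meant to show the pattern, not a finished HUD.

function create() {
    // Non-interactive GUI: a health bar cropped to the current health percentage
    var bar = this.add.image(20, 20, 'healthbar');
    var health = 0.65;
    bar.crop(new Phaser.Rectangle(0, 0, bar.width * health, bar.height));

    // Non-interactive GUI: a simple score label
    this.add.text(20, 60, 'Score: 0', { font: '24px Arial', fill: '#ffffff' });

    // Interactive GUI: a button toggling the sound, built from Phaser.Button
    this.add.button(20, 100, 'mute-button', function () {
        this.game.sound.mute = !this.game.sound.mute;
    }, this);
}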

Note: Each time you instantiate a Phaser.Text or PIXI.Text object, a new texture is created to render text onto. This additional texture breaks vertex batching, so be careful not to have too many of them.

How To Ensure That Custom Fonts Have Loaded

If you want to render text with a custom vector font (e.g. TTF or OTF), then you need to ensure that the font has already been loaded by the browser before rendering any text. Phaser v2 doesn’t provide a solution for this purpose, but another library can be used: Web Font Loader.

Assuming that you have a font file and include the Web Font Loader in your page, then below is a simple example of how to load a font:

Make a simple CSS file that will be loaded by Web Font Loader (you don’t need to include it in your HTML):

@font-face {
    /* This name you will use in JS */
    font-family: 'Gunplay';
    /* URL to the font file, can be relative or absolute */
    src: url('../fonts/gunplay.ttf') format('truetype');
    font-weight: 400;
}

Now define a global variable named WebFontConfig. Something as simple as this will usually suffice:

var WebFontConfig = {
   'classes': false,
   'timeout': 0,
   'active': function() {
       // The font has successfully loaded...
   },
   'custom': {
       'families': ['Gunplay'],
       // URL to the previously mentioned CSS
       'urls': ['styles/fonts.css']
   }
};

In the end, remember to put your code in the ‘active’ callback shown above. And that’s it!
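
As a sketch of what that might look like, here is the same WebFontConfig with the ‘active’ callback filled in, creating a label in the custom font. The global game variable is an assumption here and stands for your running Phaser.Game instance with an active state.

var WebFontConfig = {
   'classes': false,
   'timeout': 0,
   'custom': {
       'families': ['Gunplay'],
       'urls': ['styles/fonts.css']
   },
   'active': function() {
       // From this point on it is safe to render text with the custom font
       game.add.text(20, 20, 'Font loaded!', {
           font: '32px Gunplay',
           fill: '#ffffff'
       });
   }
};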

How To Make It Easier For Users To Save The Game

To persistently store local data in ActionScript you would use the SharedObject class. In JavaScript, the simple replacement is localStorage API, which allows storing strings for later retrieval, surviving page reloads.

Saving data is very simple:

var progress = 15;
localStorage.setItem('myGame.progress', progress);

Note that in the above example the progress variable, which is a number, will be converted to a string.

Loading is simple too, but remember that retrieved values will be strings, or null if they don’t exist.

var progress = parseInt(localStorage.getItem('myGame.progress')) || 0;

Here we’re ensuring that the return value is a number. If it doesn’t exist, then 0 will be assigned to the progress variable.

You can also store and retrieve more complex structures, for example, JSON:

var stats = {'goals': 13, 'wins': 7, 'losses': 3, 'draws': 1};
localStorage.setItem('myGame.stats', JSON.stringify(stats));
…
var stats = JSON.parse(localStorage.getItem('myGame.stats')) || {};

There are some cases when the localStorage object won’t be available, for example, when using the file:// protocol or when a page is loaded in a private window. You can use a try...catch statement to ensure your code will continue working and fall back to default values, as shown in the example below:

try {
    var progress = localStorage.getItem('myGame.progress');
} catch (exception) {
    // localStorage not available, use default values
}

Another thing to remember is that the stored data is saved per domain, not per URL. So if there is a risk that many games are hosted on a single domain, then it’s better to use a prefix (namespace) when saving. In the example above 'myGame.' is such a prefix and you usually want to replace it with the name of the game.

Note: If your game is embedded in an iframe, then localStorage won’t persist on iOS. In this case, you would need to store the data in the parent frame instead.
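
The article doesn’t prescribe how to do that, but one possible approach (a sketch with illustrative origins, not a definitive implementation) is to postMessage the data to the embedding page and let that page write to its own localStorage:

// Inside the game iframe: delegate saving to the embedding page.
// 'https://parent.example.com' is an illustrative origin for the embedding site.
function saveProgress(progress) {
    window.parent.postMessage({
        type: 'myGame.saveProgress',
        value: progress
    }, 'https://parent.example.com');
}

// On the embedding page, a matching listener performs the actual storage:
// window.addEventListener('message', function (event) {
//     if (event.origin !== 'https://game.example.com') return;
//     if (event.data && event.data.type === 'myGame.saveProgress') {
//         localStorage.setItem('myGame.progress', event.data.value);
//     }
// });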

How To Leverage Replacing Default Fragment Shader

When Phaser and PixiJS render your sprites, they use a simple internal fragment shader. It doesn’t have many features because it’s tailored for speed. However, you can replace that shader for your own purposes. For example, you can leverage it to inspect overdraw or to support more rendering features.

Below is an example of how to supply your own default fragment shader to Phaser v2:

function preload() {
    this.load.shader('filename.frag', 'shaders/filename.frag');
}

function create() {
    var renderer = this.renderer;
    var batch = renderer.spriteBatch;
    batch.defaultShader = 
        new PIXI.AbstractFilter(this.cache.getShader('filename.frag'));
    batch.setContext(renderer.gl);
}

Note: It’s important to remember that the default shader is used for ALL sprites as well as when rendering to a texture. Also, keep in mind that using complex shaders for all in-game sprites will greatly reduce rendering performance.

How To Change Tinting Method With A Default Shader

A custom default shader can be used to replace the default tinting method in Phaser and PixiJS.

Tinting in Phaser and PixiJS works by multiplying texture pixels by a given color. Multiplication always darkens colors, which obviously is not a problem; it’s simply different from the Flash tinting. For one of our games, we needed to implement tinting similar to Flash and decided that a custom default shader could be used. Below is an example of such fragment shader:

// Specific tint variant, similar to the Flash tinting that adds
// to the color and does not multiply. A negative of a color
// must be supplied for this shader to work properly, i.e. set
// sprite.tint to 0 to turn whole sprite to white.
precision lowp float;

varying vec2 vTextureCoord;
varying vec4 vColor;

uniform sampler2D uSampler;

void main(void) {
    vec4 f = texture2D(uSampler, vTextureCoord);
    float a = clamp(vColor.a, 0.00001, 1.0);
    gl_FragColor.rgb = f.rgb * vColor.a + clamp(1.0 - vColor.rgb/a, 0.0, 1.0) * vColor.a * f.a;
    gl_FragColor.a = f.a * vColor.a;
}

This shader lightens pixels by adding a color on top of the base one instead of multiplying. For this to work, you need to supply the negative of the color you want. Therefore, in order to get white, you need to set:

sprite.tint = 0x000000;  // This colors the sprite white
sprite.tint = 0x00ffff;  // This gives red

The result in our game looks like this (notice how tanks flash white when hit):

Custom default shader (tanks flashing white).

How To Inspect Overdraw To Detect Fill Rate Issues

Replacing default shader can also be leveraged to help with debugging. Below I’ve explained how overdraw can be detected with such a shader.

Overdrawing happens when many or all pixels on the screen are rendered multiple times, for example, when many objects occupy the same place and are rendered one over another. The number of pixels a GPU can render per second is described as its fill rate. Modern desktop GPUs have more than enough fill rate for typical 2D purposes, but mobile ones are a lot slower.

There is a simple method of finding out how many times each pixel on the screen is written by replacing the default global fragment shader in PixiJS and Phaser with this one:

void main(void) {
    gl_FragColor.rgb += 1.0 / 7.0;
}

This shader lightens pixels that are being processed. The number 7.0 indicates how many writes are needed to turn pixel white; you can tune this number to your liking. In other words, lighter pixels on screen were written several times, and white pixels were written at least 7 times.

This shader also helps to find both "invisible" objects that for some reason are still rendered and sprites that have excessive transparent areas around them that should be trimmed (the GPU still needs to process transparent pixels in your textures).


Overdraw shader in action. (Large preview)

The picture on the left shows how a player sees the game, while the one on the right displays the effect of applying the overdraw shader to the same scene.

Why Physics Engines Are Your Friends

A physics engine is a middleware that’s responsible for simulating physics bodies (usually rigid body dynamics) and their collisions. Physics engines simulate 2D or 3D spaces, but not both. A typical physics engine will provide:

  • object movement by setting velocities, accelerations, joints, and motors;
  • detecting collisions between various shape types;
  • calculating collision responses, i.e. how two objects should react when they collide.

At Merixstudio, we’re big fans of the Box2D physics engine and used it on a few occasions. There is a Phaser plugin that works well for this purpose. Box2D is also used in the Unity game engine and GameMaker Studio 2.

While a physics engine will speed up your development, there is a price you’ll have to pay: reduced runtime performance. Detecting collisions and calculating responses is a CPU-intensive task. On mobile phones you may be limited to several dozen dynamic objects in a scene, or face degraded performance with the frame rate dropping well below 60 FPS.
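
To make the responsibilities listed above more concrete, here is a minimal sketch using Phaser v2’s built-in Arcade physics rather than the Box2D plugin mentioned earlier; the 'ball' and 'wall' asset keys are assumptions that would have to be loaded in preload().

var ball, wall;

function create() {
    this.physics.startSystem(Phaser.Physics.ARCADE);

    ball = this.add.sprite(50, 100, 'ball');
    wall = this.add.sprite(400, 100, 'wall');
    this.physics.arcade.enable([ball, wall]);

    // Movement: set a velocity and let the engine integrate it every frame
    ball.body.velocity.x = 200;
    ball.body.bounce.setTo(0.8, 0.8);
    wall.body.immovable = true;
}

function update() {
    // Collision detection and response are handled by the engine
    this.physics.arcade.collide(ball, wall, function () {
        console.log('Ball hit the wall');
    });
}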


Phaser’s physics debug overlay. (Large preview)

The left part of the image is a scene from a game, while the right side shows the same scene with Phaser physics debug overlay displayed on top.

How To Export Sounds From A .fla File

If you have Flash game sound effects inside a .fla file, then exporting them from the GUI is not possible (at least not in Adobe Animate CC 2017) due to the lack of a menu option serving this purpose. But there is another solution — a dedicated script that does just that:

function normalizeFilename(name) {
   // Converts a camelCase name to snake_case name
   return name.replace(/([A-Z])/g, '_$1').replace(/^_/, '').toLowerCase();
}

function displayPath(path) {
   // Makes the file path more readable
   return unescape(path).replace('file:///', '').replace('|', ':');
}


fl.outputPanel.clear();

if (fl.getDocumentDOM().library.getSelectedItems().length > 0)
   // Get only selected items
   var library = fl.getDocumentDOM().library.getSelectedItems();
else
   // Get all items
   var library = fl.getDocumentDOM().library.items;

// Ask user for the export destination directory
var root = fl.browseForFolderURL('Select a folder.');
var errors = 0;

for (var i = 0; i < library.length; i++) {
   var item = library[i];
   if (item.itemType !== 'sound')
       continue;

   var path = root + '/';

   if (item.originalCompressionType === 'RAW')
       path += normalizeFilename(item.name.split('.')[0]) + '.wav';
   else
       path += normalizeFilename(item.name);

   var success = item.exportToFile(path);
   if (!success)
       errors += 1;
   fl.trace(displayPath(path) + ': ' + (success ? 'OK' : 'Error'));
}

fl.trace(errors + ' error(s)');

How to use the script to export sound files:

  1. Save the code above as a .jsfl file on your computer;
  2. Open a .fla file with Adobe Animate;
  3. Select ‘Commands’ → ‘Run Command’ from the top menu and select the script in the dialogue that opens;
  4. Another dialogue then pops up for selecting the export destination directory.

And done! You should now have WAV files in the specified directory. What’s left to do is convert them to, for example, MP3, OGG, or AAC.

How To Use MP3s In Flash To HTML5 Conversions

The good old MP3 format is back, as some patents have expired and now every browser can decode and play MP3s. This makes development a bit easier, since finally there’s no need to prepare two separate audio formats. Previously you needed, for instance, OGG and AAC files, while now MP3 will suffice.

Nonetheless, there are two important things you need to remember about MP3:

  • MP3s need to be decoded after loading, which can be time-consuming, especially on mobile devices. If you see a pause after all your assets have loaded, it probably means that the MP3s are being decoded;
  • playing looped MP3s gaplessly is a little problematic. The solution is to use mp3loop, about which you can read in the article posted by Compu Phase.

So, Why Should You Convert Flash To JavaScript?

As you can see, Flash to JavaScript conversion is not impossible if you know what to do. With knowledge and skill, you can stop struggling with Flash and enjoy the smooth, entertaining games created in JavaScript. Don’t try to fix Flash — get rid of it before everyone is forced to do so!

Want To Learn More?

In this article, I was focused mainly on Phaser v2. However, a newer version of Phaser is now available, and I strongly encourage you to check it out, as it introduced a plethora of fresh, cool features, such as multiple cameras, scenes, tilemaps, or Matter.js physics engine.

If you are brave enough and want to create truly remarkable things in browsers, then WebGL is the right thing to learn from the ground up. It’s a lower level of abstraction than the various game-building frameworks and tools, but it allows you to achieve greater performance and quality even if you work on 2D games or demos. Among the many websites you may find useful when learning the basics of WebGL is WebGL Fundamentals (which uses interactive demos). In addition, to find out more about WebGL feature adoption rates, check WebGL Stats.

Always remember that there’s no such thing as too much knowledge — especially when it comes to game development!

Smashing Editorial
(rb, ra, yk, il)


Source: Smashing Magazine

Logging Activity With The Web Beacon API

Drew McLellan

2018-07-27T13:40:14+02:00

The Beacon API is a JavaScript-based Web API for sending small amounts of data from the browser to the web server without waiting for a response. In this article, we’ll look at what that can be useful for, what makes it different from familiar techniques like XMLHTTPRequest (‘Ajax’), and how you can get started using it.

If you know why you want to use Beacon already, feel free to jump directly to the Getting Started section.

What Is The Beacon API For?

The Beacon API is used for sending small amounts of data to a server without waiting for a response. That last part is critical and is the key to why Beacon is so useful — our code never even gets to see a response, even if the server sends one. Beacons are specifically for sending data and then forgetting about it. We don’t expect a response and we don’t get a response.

Think of it like a postcard sent home when on vacation. You put a small amount of data on it (a bit of “Wish you were here” and “The weather’s been lovely”), put it in the mailbox, and you don’t expect a response. No one sends a return postcard saying “Yes, I do wish I was there actually, thank you very much!”

For modern websites and applications, there’s a number of use cases that fall very neatly into this pattern of send-and-forget.

Tracking Stats And Analytics Data

The first use case that comes to mind for most people is analytics. Big solutions like Google Analytics might give a good overview of things like page visits, but what if we wanted something more customized? We could write some JavaScript to track what’s happening in a page (maybe how a user interacts with a component, how far they’ve scrolled to, or which articles have been displayed before they follow a CTA) but we then need to send that data to the server when the user leaves the page. Beacon is perfect for this, as we’re just logging the data and don’t need a response.

There’s no reason we couldn’t also cover the sort of mundane tasks often handled by Google Analytics, reporting on the user themselves and the capability of their device and browser. If the user has a logged in session, you could even tie those stats back to a known individual. Whatever data you gather, you can send it back to the server with Beacon.

Debugging And Logging

Another useful application for this behavior is logging information from your JavaScript code. Imagine you have a complex interactive component on your page that works perfectly for all your tests, but occasionally fails in production. You know it’s failing, but you can’t see the error in order to begin debugging it. If you can detect a failure in the code itself, you could then gather up diagnostics and use Beacon to send it all back for logging.

In fact, any logging task can usefully be performed using Beacon, be that creating save-points in a game, collecting information on feature use, or recording results from a multivariate test. If it’s something that happens in the browser that you want the server to know about, then Beacon is likely a contender.

Can’t We Already Do This?

I know what you’re thinking. None of this is new, is it? We’ve been able to communicate from the browser to the server using XMLHTTPRequest for more than a decade. More recently we also have the Fetch API which does much the same thing with a more modern promise-based interface. Given that, why do we need the Beacon API at all?

The key here is that because we don’t get a response, the browser can queue up the request and send it without blocking execution of any other code. As far as the browser is concerned, it doesn’t matter if our code is still running or not, or where the script execution has got to; as there’s nothing to return, it can just background the sending of the HTTP request until it’s convenient to send it.

That might mean waiting until CPU load is lower, or until the network is free, or even just sending it right away if it can. The important thing is that the browser queues the beacon and returns control immediately. It does not hold things up while the beacon sends.

To understand why this is a big deal, we need to look at how and when these sorts of requests are issued from our code. Take our example of an analytics logging script. Our code may be timing how long the users spend on a page, so it becomes critical that the data is sent back to the server at the last possible moment. When the user goes to leave a page, we want to stop timing and send the data back home.

Typically, you’d use either the unload or beforeunload event to execute the logging. These are fired when the user does something like following a link on the page to navigate away. The trouble here is that code running on one of the unload events can block execution and delay the unloading of the page. If unloading of the page is delayed, then loading the next page is also delayed, and so the experience feels really sluggish.

Keep in mind how slow HTTP requests can be. If you’re thinking about performance, typically one of the main factors you try to cut down on is extra HTTP requests because going out to the network and getting a response can be super slow. The very last thing you want to do is put that slowness between the activation of a link and the start of the request for the next page.

Beacon gets around this by queuing the request without blocking, returning control immediately back to your script. The browser then takes care of sending that request in the background without blocking. This makes everything much faster, which makes users happier and lets us all keep our jobs.

Getting Started

So we understand what Beacon is, and why we might use it, so let’s get started with some code. The basics couldn’t be simpler:

let result = navigator.sendBeacon(url, data);

The result is boolean, true if the browser accepted and queued the request, and false if there was a problem in doing so.

Using navigator.sendBeacon()

navigator.sendBeacon takes two parameters. The first is the URL to make the request to. The request is performed as an HTTP POST, sending any data provided in the second parameter.

The data parameter can be in one of several formats, all of which are taken directly from the Fetch API. This can be a Blob, a BufferSource, FormData or URLSearchParams — basically any of the body types used when making a request with Fetch.

I like using FormData for basic key-value data as it’s uncomplicated and easy to read back.

// URL to send the data to
let url = '/api/my-endpoint';
    
// Create a new FormData and add a key/value pair
let data = new FormData();
data.append('hello', 'world');
    
let result = navigator.sendBeacon(url, data);
    
if (result) { 
  console.log('Successfully queued!');
} else {
  console.log('Failure.');
}

Browser Support

Support in browsers for Beacon is very good, with the only notable exceptions being Internet Explorer (works in Edge) and Opera Mini. For most uses, that should be fine, but it’s worth testing for support before trying to use navigator.sendBeacon.

That’s easy to do:

if (navigator.sendBeacon) {
  // Beacon code
} else {
  // No Beacon. Maybe fall back to XHR?
}

If Beacon isn’t available and your request is important, you could fall back to a blocking method such as XHR. Depending on your audience and purpose, you might equally choose to not bother.
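
If you do decide to fall back, a sketch of that combined check might look like the following. Note that a synchronous XMLHttpRequest blocks the page while it completes (exactly the cost Beacon avoids) and is deprecated in some browsers, so treat it as a last resort.

let sendLog = function(url, data) {
  if (navigator.sendBeacon) {
    return navigator.sendBeacon(url, data);
  }

  // Fallback: synchronous XHR so the request finishes before the page unloads
  let xhr = new XMLHttpRequest();
  xhr.open('POST', url, false);  // false = synchronous
  xhr.send(data);
  return xhr.status >= 200 && xhr.status < 300;
};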

An Example: Logging Time On A Page

To see this in practice, let’s create a basic system to time how long a user stays on a page. When the page loads we’ll note the time, and when the user leaves the page we’ll send the start time and current time to the server.

As we only care about time spent (not the actual time of day) we can use performance.now() to get a basic timestamp as the page loads:

let startTime = performance.now();

If we wrap up our logging into a function, we can call it when the page unloads.

let logVisit = function() {
  // Test that we have support
  if (!navigator.sendBeacon) return true;
      
  // URL to send the data to, e.g.
  let url = '/api/log-visit';
      
  // Data to send
  let data = new FormData();
  data.append('start', startTime);
  data.append('end', performance.now());
  data.append('url', document.URL);
      
  // Let's go!
  navigator.sendBeacon(url, data);
};

Finally, we need to call this function when the user leaves the page. My first instinct was to use the unload event, but Safari on a Mac seems to block the request with a security warning, so beforeunload works just fine for us here.

window.addEventListener('beforeunload', logVisit);

When the page unloads (or just before it does), our logVisit() function will be called and, provided the browser supports the Beacon API, our beacon will be sent.

(Note that if there is no Beacon support, we return true and pretend it all worked great. Returning false would cancel the event and stop the page unloading. That would be unfortunate.)

Considerations When Tracking

As so many of the potential uses for Beacon revolve around tracking of activity, I think it would be remiss not to mention the social and legal responsibilities we have as developers when logging and tracking activity that could be tied back to users.

GDPR

We may think of the recent European GDPR laws as they relate to email, but of course, the legislation covers storing any type of personal data. If you know who your users are and can identify their sessions, then you should check what activity you are logging and how it relates to your stated policies.

Often we don’t need to track as much data as our instincts as developers tell us we should. It can be better to deliberately not store information that would identify a user, and then you reduce your likelihood of getting things wrong.

DNT: Do Not Track

In addition to legal requirements, most browsers have a setting to enable the user to express a desire not to be tracked. Do Not Track sends an HTTP header with the request that looks like this:

DNT: 1

If you’re logging data that can track a specific user and the user sends a positive DNT header, then it would be best to follow the user’s wishes and anonymize that data or not track it at all.

In PHP, for example, you can very easily test for this header like so:

if (!empty($_SERVER['HTTP_DNT'])) { 
  // User does not wish to be tracked ... 
}
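
You can make a similar check on the client before even queuing a beacon. Browser support for the property varies a little, so this is just a sketch; url and data are assumed to be the values from the earlier examples.

let dnt = navigator.doNotTrack || window.doNotTrack;

if (dnt !== '1') {
  // Only send the beacon if the user has not opted out of tracking
  navigator.sendBeacon(url, data);
}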

In Conclusion

The Beacon API is a really useful way to send data from a page back to the server, particularly in a logging context. Browser support is very broad, and it enables you to seamlessly log data without negatively impacting the user’s browsing experience and the performance of your site. The non-blocking nature of the requests means that the performance is much faster than alternatives such as XHR and Fetch.

If you’d like to read more about the Beacon API, the following sites are worth a look.

Smashing Editorial
(ra, il)


Source: Smashing Magazine

Build a Basic CRUD App with Node and React

This article was originally published on the Okta developer blog. Thank you for supporting the partners who make SitePoint possible.

There are a lot of JavaScript frameworks out there today. It seems like I hear about a new one every month or so. They all have their advantages and are usually there to solve some sort of problem with an existing framework. My favorite to work with so far has been React. One of the best things about it is how many open source components and libraries there are in the React ecosystem, so you have a lot to choose from. This can be really difficult if you’re indecisive, but if you like the freedom to do things your way then React may be the best option for you.

In this tutorial, I’ll walk you through creating both a frontend web app in React and a backend REST API server in Node. The frontend will have a home page and a posts manager, with the posts manager hidden behind secure user authentication. As an added security measure, the backend will also not let you create or edit posts unless you’re properly authenticated.

The tutorial will use Okta’s OpenID Connect (OIDC) to handle authentication. On the frontend, the Okta React SDK will be used to request a token and provide it in requests to the server. On the backend, the Okta JWT Verifier will ensure that the user is properly authenticated, and throw an error otherwise.

The backend will be written with Express as a server, with Sequelize for modeling and storing data, and Epilogue for quickly creating a REST API without a lot of boilerplate.

Why React?

React has been one of the most popular JavaScript libraries for the past few years. One of the biggest concepts behind it, and what makes it so fast, is to use a virtual DOM (the Document Object Model, or DOM, is what describes the layout of a web page) and make small updates in batches to the real DOM. React isn’t the first library to do this, and there are quite a few now, but it certainly made the idea popular. The idea is that the DOM is slow, but JavaScript is fast, so you just say what you want the final output to look like and React will make those changes to the DOM behind the scenes. If no changes need to be made, then it doesn’t affect the DOM. If only a small text field changes, it will just patch that one element.

React is also most commonly associated with JSX, even though it’s possible to use React without JSX. JSX lets you mix HTML in with your JavaScript. Rather than using templates to define the HTML and binding those values to a view model, you can just write everything in JavaScript. Values can be plain JavaScript objects, instead of strings that need to be interpreted. You can also write reusable React components that then end up looking like any other HTML element in your code.

Here’s an example of some JSX code, that should be fairly simple to follow:

const Form = () => (
  <form>
    <label>
      Name
      <input value="Arthur Dent" />
    </label>
    <label>
      Answer to life, the universe, and everything
      <input type="number" value={42} />
    </label>
  </form>
);

const App = () => (
  <main>
    <h1>Welcome, Hitchhiker!</h1>
    <Form />
  </main>
);

…and here’s what the same code would look like if you wrote it in plain JavaScript, without using JSX:

const Form = () => React.createElement(
  "form",
  null,
  React.createElement(
    "label",
    null,
    "Name",
    React.createElement("input", { value: "Arthur Dent" })
  ),
  React.createElement(
    "label",
    null,
    "Answer to life, the universe, and everything",
    React.createElement("input", { type: "number", value: 42 })
  )
);

const App = () => React.createElement(
  "main",
  null,
  React.createElement(
    "h1",
    null,
    "Welcome, Hitchhiker!"
  ),
  React.createElement(Form, null)
);

I find the JSX form much easier to read, but that’s just like, you know, my opinion, man.

Create Your React App

The quickest way to get started with React is to use Create React App, a tool that generates a progressive web app (PWA) with all the scripts and boilerplate tucked away neatly behind something called react-scripts, so you can just focus on writing code. It has all kinds of nice dev features as well, like updating the code whenever you make changes, and scripts to compile it down for production. You can use npm or yarn, but I’ll be using yarn in this tutorial.

To install create-react-app and yarn, simply run:

npm i -g create-react-app@1.5.2 yarn@1.7.0

NOTE: I’ll be adding version numbers to help future-proof this post. In general though, you’d be fine leaving out the version numbers (e.g. npm i -g create-react-app).

Now bootstrap your application with the following commands:

create-react-app my-react-app
cd my-react-app
yarn start

The default app should now be running on port 3000. Check it out at http://localhost:3000.

Create React App default homepage

Create a Basic Homepage in React with Material UI

To keep things looking nice without writing a lot of extra CSS, you can use a UI framework. Material UI is a great framework for React that implements Google’s Material Design principles.

Add the dependency with:

yarn add @material-ui/core@1.3.1

Material recommends the Roboto font. You can add it to your project by editing public/index.html and adding the following line inside the head tag:

<link rel="stylesheet" href="https://fonts.googleapis.com/css?family=Roboto:300,400,500">

You can split components into separate files to help keep things organized. First, create a couple of new folders in your src directory: components and pages:

mkdir src/components
mkdir src/pages

Now create an AppHeader component. This will serve as the navbar with links to pages, as well as show the title and whether you’re logged in.

src/components/AppHeader.js

import React from 'react';
import {
  AppBar,
  Toolbar,
  Typography,
} from '@material-ui/core';

const AppHeader = () => (
  <AppBar position="static">
    <Toolbar>
      <Typography variant="title" color="inherit">
        My React App
      </Typography>
    </Toolbar>
  </AppBar>
);

export default AppHeader;

Also create a homepage:

src/pages/Home.js

import React from 'react';
import {
  Typography,
} from '@material-ui/core';

export default () => (
  <Typography variant="display1">Welcome Home!</Typography>
);

Now go ahead and actually just gut the sample app, replacing src/App.js with the following:

src/App.js

import React, { Fragment } from 'react';
import {
  CssBaseline,
  withStyles,
} from '@material-ui/core';

import AppHeader from './components/AppHeader';
import Home from './pages/Home';

const styles = theme => ({
  main: {
    padding: 3 * theme.spacing.unit,
    [theme.breakpoints.down('xs')]: {
      padding: 2 * theme.spacing.unit,
    },
  },
});

const App = ({ classes }) => (
  <Fragment>
    <CssBaseline />
    <AppHeader />
    <main className={classes.main}>
      <Home />
    </main>
  </Fragment>
);

export default withStyles(styles)(App);

Material UI uses JSS (one of many flavors of the increasingly popular CSS-in-JavaScript approach), which is what withStyles provides.

The CssBaseline component will add some nice CSS defaults to the page (e.g. removing margins from the body), so we no longer need src/index.css. You can get rid of a couple other files too, now that we’ve gotten rid of most of the Hello World demo app.

rm src/index.css src/App.css src/logo.svg

In src/index.js, remove the reference to index.css (the line that says import './index.css';). While you’re at it, add the following as the very last line of src/index.js to turn on hot module reloading, which will make it so that changes you make automatically update in the app without needing to refresh the whole page:

if (module.hot) module.hot.accept();

At this point, your app should look like this:

Blank homepage

Add Authentication to Your Node + React App with Okta

You would never ship your new app out to the Internet without secure identity management, right? Well, Okta makes that a lot easier and more scalable than what you’re probably used to. Okta is a cloud service that allows developers to create, edit, and securely store user accounts and user account data, and connect them with one or multiple applications.

If you don’t already have one, sign up for a forever-free developer account. Log in to your developer console, navigate to Applications, then click Add Application. Select Single-Page App, then click Next.

Since Create React App runs on port 3000 by default, you should add that as a Base URI and Login Redirect URI. Your settings should look like the following:

Create new application settings

Click Done to save your app, then copy your Client ID and paste it as a variable into a file called .env.local in the root of your project. This will allow you to access the file in your code without needing to store credentials in source control. You’ll also need to add your organization URL (without the -admin suffix). Environment variables (other than NODE_ENV) need to start with REACT_APP_ in order for Create React App to read them, so the file should end up looking like this:

.env.local

REACT_APP_OKTA_CLIENT_ID={yourClientId}
REACT_APP_OKTA_ORG_URL=https://{yourOktaDomain}

The easiest way to add Authentication with Okta to a React app is to use Okta’s React SDK. You’ll also need to add routes, which can be done using React Router. I’ll also have you start adding icons to the app (for now as an avatar icon to show you’re logged in). Material UI provides Material Icons, but in another package, so you’ll need to add that too. Run the following command to add these new dependencies:

yarn add @okta/okta-react@1.0.2 react-router-dom@4.3.1 @material-ui/icons@1.1.0

For routes to work properly in React, you need to wrap your whole application in a Router. Similarly, to allow access to authentication anywhere in the app, you need to wrap the app in a Security component provided by Okta. Okta also needs access to the router, so the Security component should be nested inside the router. You should modify your src/index.js file to look like the following:

src/index.js

import React from 'react';
import ReactDOM from 'react-dom';
import { BrowserRouter } from 'react-router-dom';
import { Security } from '@okta/okta-react';

import App from './App';
import registerServiceWorker from './registerServiceWorker';

const oktaConfig = {
  issuer: `${process.env.REACT_APP_OKTA_ORG_URL}/oauth2/default`,
  redirect_uri: `${window.location.origin}/implicit/callback`,
  client_id: process.env.REACT_APP_OKTA_CLIENT_ID,
};

ReactDOM.render(
  <BrowserRouter>
    <Security {...oktaConfig}>
      <App />
    </Security>
  </BrowserRouter>,
  document.getElementById('root'),
);
registerServiceWorker();

if (module.hot) module.hot.accept();

Now in src/App.js you can use Routes. These tell the app to only render a certain component if the current URL matches the given path. Replace your Home component with a route that only renders the component when pointing at the root URL (/), and renders Okta’s ImplicitCallback component for the /implicit/callback path.

src/App.js

--- a/src/App.js
+++ b/src/App.js
@@ -1,4 +1,6 @@
 import React, { Fragment } from 'react';
+import { Route } from 'react-router-dom';
+import { ImplicitCallback } from '@okta/okta-react';
 import {
   CssBaseline,
   withStyles,
@@ -21,7 +23,8 @@ const App = ({ classes }) => (
     <CssBaseline />
     <AppHeader />
     <main className={classes.main}>
-      <Home />
+      <Route exact path="/" component={Home} />
+      <Route path="/implicit/callback" component={ImplicitCallback} />
     </main>
   </Fragment>
 );

Next, you need a login button. This file is a bit bigger because it contains some logic to check if the user is authenticated. I’ll show you the whole component first, then walk through what each section is doing:

src/components/LoginButton.js

import React, { Component } from 'react';
import {
  Button,
  IconButton,
  Menu,
  MenuItem,
  ListItemText,
} from '@material-ui/core';
import { AccountCircle } from '@material-ui/icons';
import { withAuth } from '@okta/okta-react';

class LoginButton extends Component {
  state = {
    authenticated: null,
    user: null,
    menuAnchorEl: null,
  };

  componentDidUpdate() {
    this.checkAuthentication();
  }

  componentDidMount() {
    this.checkAuthentication();
  }

  async checkAuthentication() {
    const authenticated = await this.props.auth.isAuthenticated();
    if (authenticated !== this.state.authenticated) {
      const user = await this.props.auth.getUser();
      this.setState({ authenticated, user });
    }
  }

  login = () => this.props.auth.login();
  logout = () => {
    this.handleMenuClose();
    this.props.auth.logout();
  };

  handleMenuOpen = event => this.setState({ menuAnchorEl: event.currentTarget });
  handleMenuClose = () => this.setState({ menuAnchorEl: null });

  render() {
    const { authenticated, user, menuAnchorEl } = this.state;

    if (authenticated == null) return null;
    if (!authenticated) return <Button color="inherit" onClick={this.login}>Login</Button>;

    const menuPosition = {
      vertical: 'top',
      horizontal: 'right',
    };

    return (
      <div>
        <IconButton onClick={this.handleMenuOpen} color="inherit">
          <AccountCircle />
        </IconButton>
        <Menu
          anchorEl={menuAnchorEl}
          anchorOrigin={menuPosition}
          transformOrigin={menuPosition}
          open={!!menuAnchorEl}
          onClose={this.handleMenuClose}
        >
          <MenuItem onClick={this.logout}>
            <ListItemText
              primary="Logout"
              secondary={user && user.name}
            />
          </MenuItem>
        </Menu>
      </div>
    );
  }
}

export default withAuth(LoginButton);

React components have a concept of state management. Each component can be passed props (in a component like <input type="number" value={3} />, type and value would be considered props). They can also maintain their own state, which has some initial values and can be changed with a function called setState. Any time the props or state change, the component will rerender, and if changes need to be made to the DOM, they will happen then. In a component, you can access these with this.props or this.state, respectively.
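If that feels abstract, here’s a minimal, standalone sketch of props and state in action (it’s not part of the app we’re building): a counter whose label arrives as a prop and whose count lives in state.

class Counter extends React.Component {
  // `count` is state owned by this component; `label` is a prop passed in by the parent.
  state = { count: 0 };

  // Calling setState triggers a rerender with the new value.
  increment = () => this.setState(prevState => ({ count: prevState.count + 1 }));

  render() {
    return (
      <button onClick={this.increment}>
        {this.props.label}: {this.state.count}
      </button>
    );
  }
}

// Usage: <Counter label="Clicks" />

With that in mind, back to the LoginButton component.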

Here, you’re creating a new React component and setting the initial state values. Until you query the auth prop, you don’t know whether there’s a user or not, so you set authenticated and user to null. Material UI will use menuAnchorEl to know where to anchor the menu that lets you log the user out.

class LoginButton extends Component {
  state = {
    authenticated: null,
    user: null,
    menuAnchorEl: null,
  };

  // ...
}

React components also have their own lifecycle methods, which are hooks you can use to trigger actions at certain stages of the component lifecycle. Here, when the component is first mounted you’ll check to see whether or not the user has been authenticated, and if so get some more details about the user, such as their name and email address. You also want to rerun this check whenever the component updates, but you need to be careful to only update the state when something is different, otherwise you’ll get yourself into an infinite loop (the component updates, so you give the component new values, which updates the component, you give it new values, etc.). The withAuth function is a Higher Order Component (HOC) which wraps the original component and returns another one containing the auth prop.

class LoginButton extends Component {
  // ...

  componentDidUpdate() {
    this.checkAuthentication();
  }

  componentDidMount() {
    this.checkAuthentication();
  }

  async checkAuthentication() {
    const authenticated = await this.props.auth.isAuthenticated();
    if (authenticated !== this.state.authenticated) {
      const user = await this.props.auth.getUser();
      this.setState({ authenticated, user });
    }
  }

  // ...
}

export default withAuth(LoginButton);

The following functions are helper functions used later to log the user in or out, and open or close the menu. Writing the function as an arrow function ensures that this is referring to the instantiation of the component. Without this, if a function is called somewhere outside of the component (e.g. in an onClick event), you would lose access to the component and wouldn’t be able to execute functions on it or access props or state.

class LoginButton extends Component {
  // ...

  login = () => this.props.auth.login();
  logout = () => {
    this.handleMenuClose();
    this.props.auth.logout();
  };

  handleMenuOpen = event => this.setState({ menuAnchorEl: event.currentTarget });
  handleMenuClose = () => this.setState({ menuAnchorEl: null });
}

All React components must have a render() function. This is what tells React what to display on the screen, even if it shouldn’t display anything (in which case you can return null).

When you’re not sure of the authentication state yet, you can just return null so the button isn’t rendered at all. Once Okta’s this.props.auth.isAuthenticated() call returns, the value will be either true or false. If it’s false, you’ll want to provide a Login button. If the user is logged in, you can instead display an avatar icon that has a dropdown menu with a Logout button.

class LoginButton extends Component {
  // ...

  render() {
    const { authenticated, user, menuAnchorEl } = this.state;

    if (authenticated == null) return null;
    if (!authenticated) return <Button color="inherit" onClick={this.login}>Login</Button>;

    const menuPosition = {
      vertical: 'top',
      horizontal: 'right',
    };

    return (
      <div>
        <IconButton onClick={this.handleMenuOpen} color="inherit">
          <AccountCircle />
        </IconButton>
        <Menu
          anchorEl={menuAnchorEl}
          anchorOrigin={menuPosition}
          transformOrigin={menuPosition}
          open={!!menuAnchorEl}
          onClose={this.handleMenuClose}
        >
          <MenuItem onClick={this.logout}>
            <ListItemText
              primary="Logout"
              secondary={user && user.name}
            />
          </MenuItem>
        </Menu>
      </div>
    );
  }
}

The next piece of the puzzle is to add this LoginButton component to your header. In order to display it on the right-hand side of the page, you can put an empty spacer div that has a flex value of 1. Since the other objects aren’t told to flex, the spacer will take up as much space as it can. Modify your src/components/AppHeader.js file like so:

src/components/AppHeader.js

--- a/src/components/AppHeader.js
+++ b/src/components/AppHeader.js
@@ -3,16 +3,27 @@ import {
   AppBar,
   Toolbar,
   Typography,
+  withStyles,
 } from '@material-ui/core';

-const AppHeader = () => (
+import LoginButton from './LoginButton';
+
+const styles = {
+  flex: {
+    flex: 1,
+  },
+};
+
+const AppHeader = ({ classes }) => (
   <AppBar position="static">
     <Toolbar>
       <Typography variant="title" color="inherit">
         My React App
       </Typography>
+      <div className={classes.flex} />
+      <LoginButton />
     </Toolbar>
   </AppBar>
 );

-export default AppHeader;
+export default withStyles(styles)(AppHeader);

You should now be able to log in and out of your app using the button in the top right.

homepage with login button

When you click the Login button, you’ll be redirected to your Okta organization URL to handle authentication. You can log in with the same credentials you use in your developer console.

Okta sign in

Once successfully signed in, you’re returned back to your app and should now see an icon showing that you’re logged in. If you click on the icon, you’ll see your name in a logout button. Clicking the button keeps you on the homepage but logs you out again.

homepage, logged in

homepage without logout button

Add a Node REST API Server

Now that users can securely authenticate, you can build the REST API server to perform CRUD operations on a post model. You’ll need to add quite a few dependencies to your project at this point:

yarn add @okta/jwt-verifier@0.0.12 body-parser@1.18.3 cors@2.8.4 dotenv@6.0.0 epilogue@0.7.1 express@4.16.3 sequelize@4.38.0 sqlite@2.9.2
yarn add -D npm-run-all@4.1.3

Create a new folder for the server under the src directory:

mkdir src/server

Now create a new file src/server/index.js. To keep this simple we will just use a single file, but you could have a whole subtree of files in this folder. Keeping it in a separate folder lets you watch for changes just in this subdirectory and reload the server only when making changes to this file, instead of anytime any file in src changes. Again, I’ll post the whole file and then explain some key sections below.

src/server/index.js

require('dotenv').config({ path: '.env.local' });

const express = require('express');
const cors = require('cors');
const bodyParser = require('body-parser');
const Sequelize = require('sequelize');
const epilogue = require('epilogue');
const OktaJwtVerifier = require('@okta/jwt-verifier');

const oktaJwtVerifier = new OktaJwtVerifier({
  clientId: process.env.REACT_APP_OKTA_CLIENT_ID,
  issuer: `${process.env.REACT_APP_OKTA_ORG_URL}/oauth2/default`,
});

const app = express();
app.use(cors());
app.use(bodyParser.json());

app.use(async (req, res, next) => {
  try {
    if (!req.headers.authorization) throw new Error('Authorization header is required');

    const accessToken = req.headers.authorization.trim().split(' ')[1];
    await oktaJwtVerifier.verifyAccessToken(accessToken);
    next();
  } catch (error) {
    next(error.message);
  }
});

const database = new Sequelize({
  dialect: 'sqlite',
  storage: './test.sqlite',
});

const Post = database.define('posts', {
  title: Sequelize.STRING,
  body: Sequelize.TEXT,
});

epilogue.initialize({ app, sequelize: database });

epilogue.resource({
  model: Post,
  endpoints: ['/posts', '/posts/:id'],
});

const port = process.env.SERVER_PORT || 3001;

database.sync().then(() => {
  app.listen(port, () => {
    console.log(`Listening on port ${port}`);
  });
});

The following loads the environment variables we used in the React app. This way we can use the same env variables, and only have to set them in one place.

require('dotenv').config({ path: '.env.local' });

This sets up the HTTP server, adds some settings to allow for Cross-Origin Resource Sharing (CORS), and automatically parses incoming JSON request bodies.

const app = express();
app.use(cors());
app.use(bodyParser.json());

Here is where you check that a user is properly authenticated. First, throw an error if there is no Authorization header, which is how you will send the authorization token. The token will actually look like Bearer aLongBase64String. You want to pass the Base 64 string to the Okta JWT Verifier to check that the user is properly authenticated. The verifier will initially send a request to the issuer to get a list of valid signatures, and will then check locally that the token is valid. On subsequent requests, this can be done locally unless it finds a claim that it doesn’t have signatures for yet.

If everything looks good, the call to next() tells Express to go ahead and continue processing the request. If however, the claim is invalid, an error will be thrown. The error is then passed into next to tell Express that something went wrong. Express will then send an error back to the client instead of proceeding.

app.use(async (req, res, next) => {
  try {
    if (!req.headers.authorization) throw new Error('Authorization header is required');

    const accessToken = req.headers.authorization.trim().split(' ')[1];
    await oktaJwtVerifier.verifyAccessToken(accessToken);
    next();
  } catch (error) {
    next(error.message);
  }
});

Here is where you set up Sequelize. This is a quick way of creating database models. You can use Sequelize with a wide variety of databases, but here you can just use SQLite to get up and running quickly without any other dependencies.

const database = new Sequelize({
  dialect: 'sqlite',
  storage: './test.sqlite',
});

const Post = database.define('posts', {
  title: Sequelize.STRING,
  body: Sequelize.TEXT,
});

Epilogue works well with Sequelize and Express. It binds the two together like glue, creating a set of CRUD endpoints with just a couple lines of code. First, you initialize Epilogue with the Express app and the Sequelize database model. Next, you tell it to create your endpoints for the Post model: one for a list of posts, which will have POST and GET methods; and one for individual posts, which will have GET, PUT, and DELETE methods.

epilogue.initialize({ app, sequelize: database });

epilogue.resource({
  model: Post,
  endpoints: ['/posts', '/posts/:id'],
});

The last part of the server is where you tell Express to start listening for HTTP requests. You need to tell Sequelize to initialize the database, and once that’s done, it’s OK for Express to start listening on the port you choose. By default, since the React app is using port 3000, we’ll just add one and use port 3001.

const port = process.env.SERVER_PORT || 3001;

database.sync().then(() => {
  app.listen(port, () => {
    console.log(`Listening on port ${port}`);
  });
});

Now you can make a couple small changes to package.json to make it easier to run both the frontend and backend at the same time. Replace the default start script and add a couple others, so your scripts section looks like this:
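The exact scripts block isn’t reproduced in this excerpt, but given the npm-run-all dev dependency added earlier, a sketch of what it might look like is below. The script names are assumptions rather than the article’s verbatim code; you could also add a file watcher such as nodemon for the src/server folder if you want the API to restart on changes.

"scripts": {
  "start": "npm-run-all --parallel start:server start:web",
  "start:web": "react-scripts start",
  "start:server": "node src/server",
  "build": "react-scripts build",
  "test": "react-scripts test --env=jsdom",
  "eject": "react-scripts eject"
}

With something like this in place, yarn start launches the Express API and the React dev server in parallel.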

The post Build a Basic CRUD App with Node and React appeared first on SitePoint.


Source: Sitepoint

Building an Ethereum DApp: Launching the StoryDao

In part 7 of this tutorial series on building DApps with Ethereum, we showed how to build the app’s front end, setting up and deploying the UI for this story we’ve been working on.

It’s time to do some deploying and write a few final functions.

Suicide

Something could go very, very wrong and the whole DAO somehow get destroyed, either through hacks exploiting bad and hastily written code, or through the inability to run long loops once there are too many participants. (Too many voters on a proposal might well break the system; we really didn’t put any precaution in place for that!) Just in case this happens, it might be useful to have the equivalent of a “big red button”. First, let’s upgrade our StoryDao:

function bigRedButton() onlyOwner external {
    active = false;
    withdrawToOwner();
    token.unlockForAll();
}

Then, let’s make it possible to unlock all the tokens at once in our Token contract:

/**
@dev unlocks the tokens of every user who ever had any tokens locked for them
*/
function unlockForAll() public onlyOwner {
    uint256 len = touchedByLock.length;
    for (uint256 i = 0; i < len; i++) {
        locked[touchedByLock[i]] = 0;
    }
}

Naturally, we need to add this new list of addresses into the contract:

address[] touchedByLock;

And we need to upgrade our increaseLockedAmount function to add addresses to this list:

/**
@dev _owner will be prevented from sending _amount of tokens. Anything
beyond this amount will be spendable.
*/
function increaseLockedAmount(address _owner, uint256 _amount) public onlyOwner returns (uint256) {
    uint256 lockingAmount = locked[_owner].add(_amount);
    require(balanceOf(_owner) >= lockingAmount, "Locking amount must not exceed balance");
    locked[_owner] = lockingAmount;
    touchedByLock.push(_owner);
    emit Locked(_owner, lockingAmount);
    return lockingAmount;
}

We should also update the required interface of the token inside the StoryDao contract to include this new function’s signature:

// ...
    function getUnlockedAmount(address _owner) view public returns (uint256);
    function unlockForAll() public;
}

With the active-story block we added before (inability to run certain functions unless the story’s active flag is true), this should do the trick. No one else will be able to waste money by sending it to the contract, and everyone’s tokens will get unlocked.

The owner doesn’t get the ether people submitted. Instead, the withdrawal function becomes available so people can take their ether back, and everyone’s taken care of.

Now our contracts are finally ready for deployment.

What about selfdestruct?

There’s a function called selfdestruct which makes it possible to destroy a contract. It looks like this:

selfdestruct(address);

Calling it will disable the contract in question, removing its code from the blockchain’s state and disabling all functions, all while sending the ether in that address to the address provided. This is not a good idea in our case: we still want people to be able to withdraw their ether; we don’t want to take it from them. Besides, any ether sent straight to the address of a suicided contract will get lost forever (burned) because there’s no way to get it back.

Deploying the Contract

To deploy the smart contracts fully, we need to do the following:

  1. deploy to mainnet
  2. send tokens to StoryDAO address
  3. transfer ownership of Token contract to StoryDao.

Let’s go.

Mainnet Deployment

To deploy to mainnet, we need to add a new network into our truffle.js file:

mainnet: {
  provider: function() {
    return new WalletProvider(
      Wallet.fromPrivateKey(
        Buffer.from(PRIVKEY, "hex")), "https://mainnet.infura.io/"+INFURAKEY
    );
  },
  gasPrice: w3.utils.toWei("20", "gwei"),
  network_id: "1",
},

Luckily, this is very simple. It’s virtually identical to the Rinkeby deployment; we just need to remove the gas amount (let it calculate it on its own) and change the gas price. We should also change the network ID to 1 since that’s the mainnet ID.

We use this like so:

truffle migrate --network mainnet

There’s one caveat to note here. If you’re deploying on a network you previously deployed on (even if you just deployed the token onto the mainnet and wanted to deploy the StoryDao later) you might get this error:

Attempting to run transaction which calls a contract function, but recipient address XXX is not a contract address

This happens because Truffle remembers where it deployed already-deployed contracts so that it can reuse them in subsequent migrations, avoiding the need to re-deploy. But if your network restarted (e.g. Ganache) or you made some incompatible changes, it can happen that the address it has saved doesn’t actually contain this code any more, so it will complain. You can get around this by resetting migrations:

truffle migrate --network mainnet --reset
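The remaining two items on the checklist (sending tokens to the StoryDao and transferring ownership of the Token contract) aren’t covered in this excerpt. Assuming the token exposes the standard ERC20 transfer and Ownable transferOwnership methods (an assumption here, not code from the series), they might look roughly like this in a truffle console --network mainnet session:

// Rough sketch only; the amount is an example value in base token units.
Token.deployed().then(function (token) {
  return StoryDao.deployed().then(function (dao) {
    // 2. Send a chunk of tokens to the StoryDao address.
    return token.transfer(dao.address, "1000000000000000000000000").then(function () {
      // 3. Hand ownership of the Token contract to the StoryDao so it can call
      // onlyOwner functions such as increaseLockedAmount and unlockForAll.
      return token.transferOwnership(dao.address);
    });
  });
});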

The post Building an Ethereum DApp: Launching the StoryDao appeared first on SitePoint.


Source: Sitepoint

WordPress Notifications Made Easy

Jakub Mikita

2018-07-26T14:20:27+02:00
2018-07-26T13:44:26+00:00

WordPress doesn’t offer any kind of notification system. All you can use is the wp_mail() function, but all of the settings have to be hardcoded, or else you have to create a separate settings screen to allow the user to tweak the options. It takes many hours to write a system that is reliable, configurable and easy to use. But not anymore. I’ll show you how to create your own notification system within minutes with the free Notification plugin. By notification, I mean any kind of notification. Most of the time, it will be email, but with the plugin we’ll be using, you can also send webhooks and other kinds of notifications.

While creating a project for one of my clients, I encountered this problem I’ve described. The requirement was to have multiple custom email alerts with configurable content. Instead of hardcoding each and every alert, I decided to build a system. I wanted it to be very flexible, and the aim was to be able to code new scenarios as quickly as possible.

The code I wrote was the start of a great development journey. It turned out that the system I created was flexible enough that it could work as a separate package. This is how the Notification plugin was born.

Suppose you want to send an email about a user profile being updated by one of your website’s members. WordPress doesn’t provide that functionality, but with the Notification plugin, you can create such an email in minutes. Or suppose you want to synchronize your WooCommerce products with third-party software by sending a webhook to a separate URL every time a new product is published. That’s easy to do with the plugin, too.


In this article, you’ll learn how to integrate the plugin in your own application and how to create an advanced WordPress notification system more quickly and easily than ever.

In this article, we’ll cover:

  1. how to install the plugin,
  2. the idea behind the plugin and its architecture,
  3. creating a custom scenario for notifications,
  4. creating the action (step 1 of the process),
  5. creating the trigger (step 2 of the process),
  6. creating the custom notification type,
  7. how to enable white-label mode and bundle the plugin in your package.

Installing The Plugin

To create your own scenarios, you are going to need the Notification plugin. Just install it from the WordPress.org repository in your WordPress dashboard, or download it from the GitHub repository.


Later in the article, you’ll learn how to hide this plugin from your clients and make it work as an integrated part of your plugin or theme.

The Idea Of The Plugin

Before going into your code editor, you’ll need to know what the plugin’s architecture looks like. The plugin contains many different components, but its core is really just a few abstract classes.

The main components are:

  • The notification
    This could be an email, webhook, push notification or SMS.
  • The trigger
    This is what sends the notification. It’s effectively the WordPress action.
  • The merge tag
    This is a small portion of the dynamic content, like {post_title}.

To give you a better idea of how it all plays together, you can watch this short video:

The core of the Notification plugin is really just an API. All of the default triggers, like Post published and User registered, are built on top of that API.

Because the plugin was created for developers, adding your own triggers is very easy. All that’s required is a WordPress action, which is just a single line of code and a class declaration.

Custom Scenario

Let’s devise a simple scenario. We’ll add a text area and button to the bottom of each post, allowing bugs in the article to be reported. Then, we’ll trigger the notification upon submission of the form.

This scenario was covered in another article, “Submitting Forms Without Reloading the Page: AJAX Implementation in WordPress”.

For simplicity, let’s make it a static form, but there’s no problem putting the action in an AJAX handler, instead of in the wp_mail() function.

Let’s create the form.

The Form

add_filter( 'the_content', 'report_a_bug_form' );
function report_a_bug_form( $content ) {

    // Display the form only on posts.
    if ( ! is_single() ) {
        return $content;
    }

    // Add the form to the bottom of the content.
    $content .= '<form action="' . admin_url( 'admin-post.php' ) . '" method="POST">
        <input type="hidden" name="post_id" value="' . get_the_ID() . '">
        <input type="hidden" name="action" value="report_a_bug">
        <textarea name="message" placeholder="' . __( 'Describe what\'s wrong...', 'reportabug' ) . '"></textarea>
        <button>' . __( 'Report a bug', 'reportabug' ) . '</button>
    </form>';

    return $content;

}

Please note that many components are missing, like WordPress nonces, error-handling and display of the action’s result, but these are not the subject of this article. To better understand how to handle these actions, please read the article mentioned above.

Preparing The Action

To trigger the notification, we are going to need just a single action. This doesn’t have to be a custom action like the one below. You can use any of the actions already registered in WordPress core or another plugin.

The Form Handler And Action

add_action( 'admin_post_report_a_bug', 'report_a_bug_handler' );
add_action( 'admin_post_nopriv_report_a_bug', 'report_a_bug_handler' );
function report_a_bug_handler() {

    do_action( 'report_a_bug', $_POST['post_id'], $_POST['message'] );

    // Redirect back to the article.
    wp_safe_redirect( get_permalink( $_POST['post_id'] ) );
    exit;

}

You can read more on how to use the admin-post.php file in the WordPress Codex.

This is all we need to create a custom, configurable notification. Let’s create the trigger.

Registering The Custom Trigger

The trigger is just a simple class that extends the abstract trigger. The abstract class does all of the work for you. It puts the trigger in the list, and it handles the notifications and merge tags.

Let’s start with the trigger declaration.

Minimal Trigger Definition

class ReportBug extends BracketSpace\Notification\Abstracts\Trigger {

    public function __construct() {

        // Add slug and the title.
        parent::__construct(
            'reportabug',
            __( 'Bug report sent', 'reportabug' )
        );

        // Hook to the action.
        $this->add_action( 'report_a_bug', 10, 2 );

    }

    public function merge_tags() {}

}

All you need to do is call the parent constructor and pass the trigger slug and nice name.

Then, we can hook into our custom action. The add_action method is very similar to the add_action() function: the second parameter is the priority, and the last one is the number of arguments. Only the callback parameter is missing, because the abstract class handles that for us.

With the class in place, we can register it as our new trigger.

register_trigger( new ReportBug() );

This is now a fully working trigger. You can select it from the list when composing a new notification.




Although the trigger is working and we can already send the notification we want, it’s not very useful. We don’t have any way to show the recipient which post has a bug and what the message is.

This would be the time, then, to register some merge tags and set up the trigger context with the action parameters we have: the post ID and the message.

To do this, we can add another method to the trigger class. This is the action callback, where we can catch the action arguments.

Handling Action Arguments

public function action( $post_ID, $message ) {

    // If the message is empty, don't send any notifications.
    if ( empty( $message ) ) {
        return false;
    }

    // Set the trigger properties.
    $this->post    = get_post( $post_ID );
    $this->message = $message;

}

Note the return false; statement. If you return false from this method, the trigger will be stopped, and no notification will be sent. In our case, we don’t want a notification to be submitted with an empty message. In the real world, you’d want to validate that before the form is sent.

Then, we just set the trigger class’ properties, the complete post object and the message. Now, we can use them to add some merge tags to our trigger. We can just fill the content of the merge_tags method we declared earlier.

Defining Merge Tags

public function merge_tags() {

    $this->add_merge_tag( new BracketSpace\Notification\Defaults\MergeTag\UrlTag( array(
        'slug'        => 'post_url',
        'name'        => __( 'Post URL', 'reportabug' ),
        'resolver'    => function( $trigger ) {
            return get_permalink( $trigger->post->ID );
        },
    ) ) );

    $this->add_merge_tag( new BracketSpace\Notification\Defaults\MergeTag\StringTag( array(
        'slug'        => 'post_title',
        'name'        => __( 'Post title', 'reportabug' ),
        'resolver'    => function( $trigger ) {
            return $trigger->post->post_title;
        },
    ) ) );

    $this->add_merge_tag( new BracketSpace\Notification\Defaults\MergeTag\HtmlTag( array(
        'slug'        => 'message',
        'name'        => __( 'Message', 'reportabug' ),
        'resolver'    => function( $trigger ) {
            return nl2br( $trigger->message );
        },
    ) ) );

    $this->add_merge_tag( new BracketSpace\Notification\Defaults\MergeTag\EmailTag( array(
        'slug'        => 'post_author_email',
        'name'        => __( 'Post author email', 'reportabug' ),
        'resolver'    => function( $trigger ) {
            $author = get_userdata( $trigger->post->post_author );
            return $author->user_email;
        },
    ) ) );

}

This will add four merge tags, all ready to use while a notification is being composed.

The merge tag is an instance of a special class. You can see that there are many types of these tags, and we are using them depending on the value that is returned from the resolver. You can see all merge tags in the GitHub repository.

All merge tags are added via the add_merge_tag method, and they require the config array with three keys:

  • slug
    The static value that will be used in the notification (i.e. {post_url}).
  • name
    The translated label for the merge tag.
  • resolver
    The function that replaces the merge tag with the actual value.

The resolver doesn’t have to be the closure, as in our case, but using it is convenient. You can pass a function name as a string or an array if this is a method in another class.

In the resolver function, only one argument is available: the trigger class instance. Thus, we can access the properties we just set in the action method and return the value we need.

And that’s all! The merge tags are now available to use with our trigger, and we can set up as many notifications of the bug report as we want.




Creating The Custom Notification Type

The Notification plugin offers not only custom triggers, but also custom notification types. The plugin ships with two types, email and webhook, but it has a simple API to register your own notifications.

It works very similarly to the custom trigger: You also need a class and a call to one simple function to register it.

I’m showing only an example; the implementation will vary according to the system you wish to integrate. You might need to include a third-party library and call its API or operate in WordPress’ file system, but the guide below will set you up with the basic process.

Let’s start with a class declaration:

class CustomNotification extends BracketSpace\Notification\Abstracts\Notification {

    public function __construct() {

        // Add slug and the title.
        parent::__construct( 
            'custom_notification',
            __( 'Custom Notification', 'textdomain' )
        );

    }

    public function form_fields() {}

    public function send( BracketSpace\Notification\Interfaces\Triggerable $trigger ) {}

}

In the constructor, you must call the parent’s class constructor and pass the slug and nice name of the notification.

The form_fields method is used to create a configuration form for notifications. (For example, the email notification would have a subject, body, etc.)

The send method is called by the trigger, and it’s where you can call the third-party API that you wish to integrate with.

Next, you have to register it with the register_notification function.

register_notification( new CustomNotification() );

The Notification Form

There might be a case in which you have a notification with no configuration fields. That’s fine, but most likely you’ll want to give the WordPress administrator a way to configure the notification content with the merge tags.

That’s why we’ll register two fields, the title and the message, in the form_fields method. It looks like this:

public function form_fields() {

    $this->add_form_field( new BracketSpace\Notification\Defaults\Field\InputField( array(
        'label'       => __( 'Title', 'textdomain' ),
        'name'        => 'title',
        'resolvable'  => true,
        'description' => __( 'You can use merge tags', 'textdomain' ),
    ) ) );

    $this->add_form_field( new BracketSpace\Notification\Defaults\Field\TextareaField( array(
        'label'       => __( 'Message', 'textdomain' ),
        'name'        => 'message',
        'resolvable'  => true,
        'description' => __( 'You can use merge tags', 'textdomain' ),
    ) ) );

}

As you can see, each field is an object and is registered with the add_form_field method. For the list of all available field types, please visit the GitHub repository.

Each field has a translatable label, a unique name and a set of other properties. You can define whether the field should be resolved with the merge tags via the resolvable key. This means that when someone uses the {post_title} merge tag in this field, it will be replaced with the post’s actual title. You can also provide the description field for a better user experience.

At this point, your custom notification type can be used in the plugin’s interface with any available trigger type.




Sending The Custom Notification

In order to make it really work, we have to use the send method in our notification class declaration. This is the place where you can write an API call or use WordPress’ file system or any WordPress API, and do whatever you like with the notification data.

This is how you can access it:

public function send( BracketSpace\Notification\Interfaces\Triggerable $trigger ) {

    $title   = $this->data['title'];
    $message = $this->data['message'];

    // @todo Write the integration here.

}

At this point, all of the fields are resolved with the merge tags, which means the variables are ready to be shipped.

That gives you endless possibilities to integrate WordPress with any service, whether it’s your local SMS provider, another WordPress installation or any external API you wish to communicate with.

White Labeling And Bundling The Plugin

It’s not ideal to depend on a plugin that can be easily deactivated and uninstalled. If you are building a system that really requires the Notification plugin to always be available, you can bundle the plugin in your own code.

If you’ve used the Advanced Custom Fields plugin before, then you are probably familiar with the bundling procedure. Just copy the plugin’s files to your plugin or theme, and invoke the plugin manually.

The Notification plugin works very similarly, but invoking the plugin is much simpler than with Advanced Custom Fields.

Just copy the plugin’s files, and require one file to make it work.

require_once( 'path/to/plugin/notification/load.php' );

The plugin will figure out its location and the URLs.

But bundling the plugin might not be enough. Perhaps you need to completely hide that you are using this third-party solution. This is why the Notification plugin comes with a white-label mode, which you can activate at any time.

It’s also enabled with a single call to a function:

notification_whitelabel( array(
    // Admin page hook under which the Notifications will be displayed.
    'page_hook'       => 'edit.php?post_type=page',
    // If display extensions page.
    'extensions'      => false,
    // If display settings page.
    'settings'        => false,
    // Limit settings access to user IDs.
    // This works only if settings are enabled.
    'settings_access' => array( 123, 456 ),
) );

By default, calling this function will hide all of the default triggers.

Using both techniques, white labeling and bundling, will completely hide any references to the plugin’s origin, and the solution will behave as a fully integrated part of your system.

Conclusion

The Notification plugin is an all-in-one solution for any custom WordPress notification system. It’s extremely easy to configure, and it works out of the box. All of the triggers that are registered will work with any notification type, and if you have any advanced requirements, you can save some time by using an existing extension.

If you’d like to learn more details and advanced techniques, go to the documentation website.

I’m always open to new ideas, so if you have any, you can reach out to me here in the comments, via the GitHub issues or on Twitter.

Download the plugin from the repository, and give it a try!



Source: Smashing Magazine

How to Use Feature Flags in Continuous Integration

A lot has been written about the benefits of achieving true Continuous Integration (CI) for production systems. This tutorial will demonstrate a simple workflow that achieves CI. We’ll be using Feature Flags and Remote Config to avoid the need for feature branches in Git, as well as any sort of test or staging environments.

What We Mean by CI and Feature Flags

Continuous Integration is a method of development in which developers are able to constantly release their code. Developers push their new features as they finish development, at which point they are automatically tested and released immediately into a live environment.

Feature flags provide a way of gating these new features from a remote control panel, allowing you to turn them off and on at will without making any changes to your code. This allows you to develop a feature and release it into production without actually changing anything from a user’s point of view.
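In code, the gate itself is usually nothing more than a conditional around the new path. Here’s a generic sketch (the flag name and the way the flags arrive are placeholders, not the tooling we’ll use later in this tutorial):

// `flags` would come from your flag service, fetched at startup or pushed on change.
const flags = { newCheckout: false }; // toggled remotely, not by editing code

function checkoutLabel() {
  // The new feature ships dark: the code is deployed, but users only see it
  // once the flag is switched on from the control panel.
  return flags.newCheckout ? 'Try our new checkout' : 'Checkout';
}

console.log(checkoutLabel()); // "Checkout" until the flag is flipped on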

Why CI Is So Powerful

Integrating code in smaller chunks and in a more frequent manner allows development teams to pick up issues a lot quicker, as well as getting new features to users as quickly as possible. CI also removes the need for any large releases where a team of engineers need to stay awake until the wee hours of the night to minimise disruption to their users.

How Feature Flags Aid Continuous Integration

Feature flags provide an additional layer of confidence when releasing new features. By wrapping new features in a feature flag, the developers and product team are able to quickly enable or disable their features as required. Let’s say we introduce a new feature into production but we can immediately see that there’s a bug in the new code, because of something specific in the production environment that wasn’t evident in testing (let’s face it, it’s happened to everyone… more than once).

Previously, this would have meant a (probably) lengthy and (definitely) painful rollback procedure and a rescheduling of the release for another ungodly hour on another day when the bug has been fixed. Instead, with a toggle of a switch the feature can be turned off for all or a subset of users and the pain is gone. Once the bug is identified and fixed, a patch release can be deployed, and the feature can be re-enabled.

Outline of Our Sample Project

To demonstrate integrating feature flags and remote config, we’re going to base our initial codebase on the de facto React JS tutorial of a tic-tac-toe game. This is a great tutorial for learning the basics of React, so be sure to check it out if you haven’t already.

Don’t worry if you don’t know React or JavaScript well. The concepts we’ll be going over are all about the process and tooling, and not about the code.

Rather than repeat the tutorial from the start, we’re going to start at a point where we’ve got a basic tic-tac-toe game up and running.

At this point we’ll use feature flags and remote configuration to continuously configure, push and deploy new features. To take this concept to an extreme we’ll continuously be pushing to master; no other branches will be used. We’ll introduce a bug (intentionally of course!) and push the fix to master, to demonstrate how dealing with this scenario doesn’t require a full rollback or additional branching.

If you wish to follow along writing code during this tutorial, you can fork the repository here.

Achieving CI

The most common method of automating continuous integration is to have the build process trigger when you push changes to your git repository. Different build tools achieve this in different ways.

We’re going to use Netlify for our project. Netlify lets you connect a Git repository and automatically deploy builds every time you push to a branch.

To use Netlify, simply sign up for a free account and select the ‘New site from Git’ option in the top right of the dashboard. Once you’ve connected your GitHub account (or otherwise, if you want to use Bitbucket or GitLab) you should then be presented with the options shown below.

In the next step, configure Netlify to run the application as follows.

Netlify will now go ahead and build your application for you. It will take a few minutes, but once it’s done you should see something like the following:

Browsing to that URL should show your Tic Tac Toe game in all its glory.

Setting Up Feature Flags for Our Project

We’re going to use feature flags to control the declaration of a winner in the tic-tac-toe game. To create and manage our feature flags, we’re going to use Bullet Train as it’s currently free, but there are many other feature flag products. We’ll let you pick the one you feel is right.

To continue with us, go ahead and create a free account on Bullet Train. Click on the ‘Create Project’ button and create your project. We named ours FF Tutorial.

Next, we need to create a new feature for declaring a winner. Click on the ‘Create your first Feature’ button at the bottom of the next screen to bring up the following form and add in the details.

Note that we’ve kept the feature disabled to begin with.

Take note of the two code snippets available beneath the feature, which will help us in the next step.

Implement the Feature Flag

Firstly, let’s get this project running in our development environment. From the command line, navigate to the project folder and run the command npm i to install the necessary dependencies.

Next run npm run dev and head to http://localhost:8080 in your browser. You should see the same tic-tac-toe game that you saw on the Netlify URL.

We are now going to implement our new feature flag into the existing code. Let’s start by installing the Bullet Train client for JavaScript, by opening up another command line and running the following in the project directory:

npm i bullet-train-client --save

Open up the project in your preferred editor and edit ./web/App.js.

Find the calculateWinner(squares) function. This function determines whether there has been a winner, based on whether it can find a line of the same shape. We are going to have this function return null based on the value of our feature flag, so that we can test filling up the board without it declaring a winner.

At the very top of App.js, add the following line:

var declareWinner = true;

Now, let’s initialise our Bullet Train client that we installed earlier. Copy all of code example 2 from the Features page in the Bullet Train interface and paste it just beneath the line you just added.

The environment ID in your code snippet will be the correct environment ID associated with the Development environment in your Bullet Train project. You can check this by browsing to the ‘Environment Settings’ page if you want.

We now need to edit the onChange() function in the bulletTrain.init() function in the code we just pasted in to suit our needs. Replace all of the code in there with one line:

declareWinner = bulletTrain.hasFeature("declare-winner");

You should now have this at the top of your App.js

var declareWinner = true;
import bulletTrain from "bullet-train-client"; //Add this line if you're using bulletTrain via npm

bulletTrain.init({
    environmentID:"<your-environment-id>",
    onChange: (oldFlags,params)=>{ //Occurs whenever flags are changed
        declareWinner = bulletTrain.hasFeature("declare-winner");
    }
});

Scroll down to the calculateWinner(squares) function and at the top, just above the declaration of the lines constant, let’s add the line:

if (!declareWinner) return null;

And that’s it! Our feature flag will now determine whether or not the winner is calculated on every render of the game. Refresh your browser and play the game. You can no longer win, and instead the entire board can now be filled with Xs and Os.

Go back to the Bullet Train admin and toggle the feature using the switch on the right.

Refresh your browser and the game becomes winnable again. Check out the code for the end of this part here.

Commit and push your code (yes, all on master) and Netlify will automatically deploy your code. Browse to your assigned Netlify URL again and toggle the feature flag to see it working in a production environment. Sweet!

Work on a Bug

We are now going to purposefully introduce a bug into the tic-tac-toe game and show how feature flags can be used to drop a feature that is causing issues.

The feature we’re going to add is selection of who goes first at the start of the game. For that we will add a couple of buttons that only appear at the beginning of the game and prevent clicks on the board from adding a shape.

Firstly, let’s set up our feature flag to wrap the new feature. In your Bullet Train project, create a new feature called select-who-goes-first as follows. Let’s leave it disabled to begin with.

Now, let’s add our new feature. In the render() function we’re going to render the buttons, instead of the status, if the game hasn’t started yet. At the top of the return of the render() function, replace the line:

<div className="status">{status}</div>

… with the following code:

{!this.state.selected ? (
    <div>
        Who goes first?
        <button onClick={() => this.setState({selected: true})}>X</button>
        <button onClick={() => this.setState({selected: true, xIsNext: false})}>O</button>
    </div>
) : (
    <div className="status">{status}</div>
)}

Now we want to write the code to control our new feature with the feature flag we created. As before, this needs to go in the bulletTrain.init({...}) function.

First, let’s add the lifecycle function componentDidMount() to the Board component and then move the entire bulletTrain.init({...}) function inside of it, so that we can update the state of the component after the flag is retrieved:
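The article’s code for this step isn’t included in this excerpt, but based on the init() call shown earlier, the general shape would be something like the sketch below (the state keys are assumptions):

componentDidMount() {
  bulletTrain.init({
    environmentID: "<your-environment-id>",
    onChange: (oldFlags, params) => { // Occurs whenever flags are changed
      this.setState({
        declareWinner: bulletTrain.hasFeature("declare-winner"),
        selectWhoGoesFirst: bulletTrain.hasFeature("select-who-goes-first"),
      });
    },
  });
}

The render logic would then read these values from this.state rather than from the module-level variables.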

The post How to Use Feature Flags in Continuous Integration appeared first on SitePoint.


Source: Sitepoint