How to Optimize SQL Queries for Faster Sites

This article was originally published on the Delicious Brains blog, and is republished here with permission.

You know that a fast site == happier users, improved ranking from Google, and increased conversions. Maybe you even think your WordPress site is as fast as it can be: you’ve looked at site performance, from the best practices of setting up a server, to troubleshooting slow code, and offloading your images to a CDN, but is that everything?

With dynamic, database-driven websites like WordPress, you might still have one problem on your hands: database queries slowing down your site.

In this post, I’ll take you through how to identify the queries causing bottlenecks, how to understand the problems with them, along with quick fixes and other approaches to speed things up. I’ll be using an actual query we recently tackled that was slowing things down on the customer portal of Delicious Brains.


Finding Slow Queries

The first step in fixing slow SQL queries is to find them. Ashley has sung the praises of the debugging plugin Query Monitor on the blog before, and it’s the database queries feature of the plugin that really makes it an invaluable tool for identifying slow SQL queries. The plugin reports on all the database queries executed during the page request. It allows you to filter them by the code or component (the plugin, theme, or WordPress core) calling them, and highlights duplicate and slow queries:

Query Monitor results

If you don’t want to install a debugging plugin on a production site (maybe you’re worried about adding some performance overhead), you can opt to turn on the MySQL Slow Query Log, which logs all queries that take longer than a certain amount of time to execute. This is relatively simple to configure, including where the queries are logged to. As this is a server-level tweak, the performance hit will be less than that of a debugging plugin on the site, but it should be turned off when you’re not using it.
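As a minimal sketch, the slow query log can be enabled at runtime with a few system variables (the one-second threshold and the log path here are examples to adapt to your server):

```sql
-- Enable the slow query log for the running server
SET GLOBAL slow_query_log = 'ON';

-- Log any query that takes longer than one second to execute
SET GLOBAL long_query_time = 1;

-- Where to write the log (path is an example; adjust for your setup)
SET GLOBAL slow_query_log_file = '/var/log/mysql/slow-query.log';
```

These settings only last until the server restarts; to make them permanent, set the equivalent options in your MySQL configuration file.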


Understanding the Query

Once you have found an expensive query that you want to improve, the next step is to try to understand what is making it slow. Recently, during development on our site, we found a query that was taking around 8 seconds to execute!

SELECT
    pm2.post_id AS 'product_id',
    pm.meta_value AS 'user_id'
FROM
    oiz6q8a_woocommerce_software_licences l
        INNER JOIN
    oiz6q8a_woocommerce_software_subscriptions s ON s.key_id = l.key_id
        INNER JOIN
    oiz6q8a_posts p ON p.ID = l.order_id
        INNER JOIN
    oiz6q8a_postmeta pm ON pm.post_id = p.ID
        AND pm.meta_key = '_customer_user'
        INNER JOIN
    oiz6q8a_postmeta pm2 ON pm2.meta_key = '_software_product_id'
        AND pm2.meta_value = l.software_product_id
WHERE
    p.post_type = 'shop_order'
        AND pm.meta_value = 279
ORDER BY s.next_payment_date

We use WooCommerce and a customized version of the WooCommerce Software Subscriptions plugin to run our plugins store. The purpose of this query is to get all subscriptions for a customer where we know their customer number. WooCommerce has a somewhat complex data model, in that even though an order is stored as a custom post type, the id of the customer (for stores where each customer gets a WordPress user created for them) is not stored as the post_author, but instead as a piece of post meta data. There are also a couple of joins to custom tables created by the software subscriptions plugin. Let’s dive in to understand the query more.

MySQL is your Friend

MySQL has a handy DESCRIBE statement, which can be used to output information about a table’s structure, such as its columns, data types, and defaults. So if you execute DESCRIBE wp_postmeta; you will see the following results:

Field    Type                 Null  Key  Default  Extra
meta_id  bigint(20) unsigned  NO    PRI  NULL     auto_increment
post_id  bigint(20) unsigned  NO    MUL  0

That’s cool, but you may already know about it. Did you know that the DESCRIBE statement prefix can actually be used on SELECT, INSERT, UPDATE, REPLACE, and DELETE statements as well? This is more commonly known by its synonym, EXPLAIN, and it will give us detailed information about how the statement will be executed.
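For example, prefixing any SELECT with EXPLAIN returns the execution plan instead of the result set:

```sql
-- Shows how MySQL will execute the statement, not the results themselves
EXPLAIN SELECT *
FROM wp_postmeta
WHERE meta_key = '_customer_user';
```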

Here are the results for our slow query:

id  select_type  table  type    possible_keys     key       key_len  ref    rows   Extra
1   SIMPLE       pm2    ref     meta_key          meta_key  576      const  28     Using where; Using temporary; Using filesort
1   SIMPLE       pm     ref     post_id,meta_key  meta_key  576      const  37456  Using where
1   SIMPLE       p      eq_ref  PRIMARY           …         …        …      …      Using where
1   SIMPLE       l      ref     …                 …         …        …      …      Using index condition; Using where

At first glance, this isn’t very easy to interpret. Luckily the folks over at SitePoint have put together a comprehensive guide to understanding the statement.

The most important column is type, which describes how the tables are joined. If you see ALL, that means MySQL is reading the whole table from disk, increasing I/O rates and putting load on the CPU. This is known as a “full table scan” (more on that later).

The rows column is also a good indication of what MySQL is having to do, as this shows how many rows it has looked through to find a result.

EXPLAIN also gives us more information we can use to optimize. For example, for the pm2 table (wp_postmeta), it tells us we are Using filesort, because we are asking for the results to be sorted using an ORDER BY clause on the statement. If we were also grouping the query, we would be adding even more overhead to the execution.

Visual Investigation

MySQL Workbench is another handy, free tool for this type of investigation. For databases running on MySQL 5.6 and above, the results of EXPLAIN can be outputted as JSON, and MySQL Workbench turns that JSON into a visual execution plan of the statement:

MySQL Workbench visual results

It automatically draws your attention to issues by coloring parts of the query by cost. We can see straight away that the join to the wp_woocommerce_software_licences (alias l) table has a serious issue.

Continue reading How to Optimize SQL Queries for Faster Sites on SitePoint.

Source: Sitepoint

6 Cutting-Edge React Courses

React is a JavaScript library for building user interfaces that has taken the web development world by storm. React is known for its blazing-fast performance and has spawned an ecosystem of thousands of related modules on NPM. 

However, in a community that favours choice and flexibility, it can be hard to know where to start! So here are six courses that will get you fully up to speed with the latest in React development. 

Whether you want to master React animation, learn how to work with React Native or Redux, or get your hands dirty building some practical React applications, these courses have you covered.

1. Modern Web Apps With React and Redux

In this course, Envato Tuts+ instructor Andrew Burgess will get you started building modern web apps with React and Redux. 

Starting from nothing, you’ll use these two libraries to build a complete web application. You’ll start with the simplest possible architecture and slowly build up the app, feature by feature. By the end, you’ll have created a complete flashcards app for learning by spaced repetition.

Along the way, you’ll get a chance to sharpen your ES6 (ECMAScript 2015) skills and learn the patterns and modules that work best with React and Redux!


2. Five Practical Examples to Learn React

Sometimes, the best way to learn is just to dive in and do something practical. In this course by Jeremy McPeak, you’re going to learn React by writing components that you could incorporate into your own applications.

Along the way, you’ll learn all the basics of coding React components. You’ll learn about JSX, events, managing state, and passing props. You’ll also learn about some other key concepts like higher-order components, lifecycle methods, and using third-party libraries.


3. Get Started With React Native

Mobile app users expect the performance and features that can only be supplied by native app development. But going native often means that you have to develop your app for multiple platforms. React Native bridges this gap by letting you write your user interface in modern JavaScript and automatically transforming it into native platform-specific views.

In this course, Envato Tuts+ instructor Markus Mühlberger will teach you how to write mobile apps in React Native. You will learn how to create, lay out and style components, provide user interaction, and integrate third-party components into your app. Along the way, you’ll build a cool cross-platform fitness app!


4. Build a Social App With React Native

Once you’ve gotten started with React Native in the course above, you’ll want to put your knowledge to good use. So try this course on building a social app with React Native.

You’re with Markus Mühlberger again for this one, and he’ll show you how to build a social app easily with a Firebase back-end. You’ll also learn some more advanced topics like sophisticated view routing, camera and photo library access, and how to use the device’s address book.


5. How to Animate Your React App

If you want to add some life and engagement to your React app, animation is a great way to do it.

In this course, you’ll learn how to add some sparkle to your web app with simple animations. Follow along with Stuart Memo and you’ll build a basic to-do app, and then enhance it with UI animation. 

To start, you’ll learn how to use React’s built-in animation hooks. After you’ve become proficient with that, you’ll move on to react-motion, a very popular and powerful animation library.


6. Code a Universal React App

Coding a full-stack app has always been hard. Developers have to know completely different sets of languages, tools, libraries, and frameworks for the client and server side. But with React and Node, you can use the same JavaScript code on both the client and server.

In this course, Jeremy McPeak will show you how to write a universal (isomorphic) React app—one that can render on the server or the client. This will let us reuse the same code on the server and client, and it will make it easier for search engines to index our app. Follow along as Jeremy builds a simple app in React with React Router and then upgrades it with server-side routing.


Watch Any Course Now

You can take any of our React courses straight away with a subscription to Envato Elements. For a single low monthly fee, you get access not only to these courses, but also to our growing library of over 1,000 video courses and industry-leading eBooks on Envato Tuts+. 

Plus you now get unlimited downloads from the huge Envato Elements library of 400,000+ creative assets. Create with unique fonts, photos, graphics and templates, and deliver better projects faster.

Source: Nettuts Web Development

Performant Animations Using KUTE.js: Part 1, Getting Started

KUTE.js is a JavaScript-based animation engine which focuses on performance and memory efficiency while animating different elements on a webpage. I have already written a series on using Anime.js to create JavaScript-based animations. This time we will learn about KUTE.js and how we can use it to animate CSS properties, SVG, and text elements, among other things.


Before we dive into some examples, let’s install the library first. KUTE.js has a core engine, and then there are plugins for animating the value of different CSS properties, SVG attributes, or text. You can directly link to the library from popular CDNs like cdnjs and jsDelivr.
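For instance, a script tag pulling the library from jsDelivr might look like the following (the exact path and version are assumptions; check the KUTE.js docs for the current ones):

```html
<!-- Example CDN include; verify the path and version against the KUTE.js docs -->
<script src="https://cdn.jsdelivr.net/npm/kute.js/dist/kute.min.js"></script>
```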

You can also install KUTE.js using either NPM or Bower with the help of the following commands:
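Assuming the library is published under the package name kute.js, those commands would be:

```shell
# Install KUTE.js with npm
npm install --save kute.js

# ...or with Bower
bower install --save kute.js
```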

Once you have included the library in your projects, you can start creating your own animation sequences.

Tween Objects

When creating your animation using KUTE.js, you need to define tween objects. These tween objects provide all the animation-related information for a given element or elements. This includes the element itself, the properties that you want to animate, the duration of the animation, and other attributes like the repeat count, delay, or offset.

You can use the .to() method or the .fromTo() method in order to animate a set of CSS properties from one value to another. The .to() method animates the properties from their default value or their computed/current value to a final provided value. In the case of the .fromTo() method, you have to provide both the starting and ending animation values.

The .to() method is useful when you don’t know the current or default value for the property that you want to animate. One major disadvantage of this method is that the library has to compute the current value of all the properties by itself. This results in a delay of a few milliseconds after you call .start() to start the animation.

The .fromTo() method allows you to specify the starting and ending animation values yourself. This can marginally improve the performance of the animations. You can now also specify the units for starting and ending values yourself and avoid any surprises during the course of the animation. One disadvantage of using .fromTo() is that you won’t be able to stack multiple transform properties on chained tweens. In such cases, you will have to use the .to() method.

Remember that both .fromTo() and .to() are meant to be used when you are animating individual elements. If you want to animate multiple elements at once, you will have to use either .allTo() or .allFromTo(). These methods work just like their single element counterparts and inherit all their attributes. They also get an extra offset attribute that determines the delay between the start of the animation for different elements. This offset is defined in milliseconds.

Here is an example that animates the opacity of three different boxes in sequence.

The following JavaScript is used to create the above animation sequence:
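A minimal sketch of such a tween, assuming three elements with a box class and the KUTE global provided by the library:

```javascript
// Select every element with the "box" class
var boxes = document.querySelectorAll('.box');

// Animate the opacity of all boxes from 1 to 0.1,
// starting each box's animation 700ms after the previous one
var animateOpacity = KUTE.allFromTo(
  boxes,
  { opacity: 1 },
  { opacity: 0.1 },
  { offset: 700 }
);

// The tween doesn't play until start() is called
animateOpacity.start();
```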

All the boxes above have a box class which has been used to select them all using the querySelectorAll() method. The allFromTo() method in KUTE.js is used to animate the opacity of these boxes from 1 to 0.1 with an offset of 700 milliseconds. As you can see, the tween object does not start the animation by itself. You have to call the start() method in order to start the animation.

Controlling the Animation Playback

In the previous section, we used the start() method in order to start our animations. The KUTE.js library also provides a few other methods that can be used to control the animation playback. 

For example, you can stop any animation that is currently in progress with the help of the stop() method. Keep in mind that you can use this method to stop the animation of only those tween objects that have been stored in a variable. The animation for any tween object that was created on the fly cannot be stopped with this method.

You also have the option to just pause an animation with the help of the pause() method. This is helpful when you want to resume the animation again at a later time. You can either use resume() or play() to resume any animation that was paused.

The following example is an updated version of the previous demo with all four methods added to it.

Here is the JavaScript code needed to add the start, stop, play, and pause functionality.
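A sketch of how those buttons could be wired up (the button ids are assumptions; animateOpacity is the tween from the earlier demo, stored in a variable so that stop() can find it):

```javascript
// Button ids below are assumptions for this sketch
document.querySelector('#start').addEventListener('click', function () {
  animateOpacity.start();
});

document.querySelector('#stop').addEventListener('click', function () {
  // stop() only works on tweens that have been stored in a variable
  animateOpacity.stop();
});

document.querySelector('#pause').addEventListener('click', function () {
  animateOpacity.pause();
});

document.querySelector('#play').addEventListener('click', function () {
  // play() (or resume()) continues a paused animation
  animateOpacity.play();
});
```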

I have changed the animation duration to 2,000 milliseconds. This gives us enough time to press different buttons and see how they affect the animation playback.

Chaining Tweens Together

You can use the chain() method to chain different tweens together. Once different tweens have been chained, they call the start() method on other tweens after their own animation has finished. 

This way, you get to play different animations in a sequence. You can chain different tweens with each other in order to play them in a loop. The following example should make it clear:
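A sketch of the chaining described below, assuming the opacity tween from earlier plus a second rotation tween (the rotation value and button ids are illustrative):

```javascript
var boxes = document.querySelectorAll('.box');

// A second tween that rotates the boxes
var animateRotation = KUTE.allTo(boxes, { rotate: 360 }, { offset: 700 });

// Chain the rotation tween to the opacity tween: when the opacity
// animation finishes, it calls start() on the rotation tween
document.querySelector('#chain').addEventListener('click', function () {
  animateOpacity.chain(animateRotation);
  animateOpacity.start();
});

// Chain both tweens to each other so they play in an indefinite loop
document.querySelector('#loop').addEventListener('click', function () {
  animateOpacity.chain(animateRotation);
  animateRotation.chain(animateOpacity);
  animateOpacity.start();
});
```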

We already had one tween to animate the opacity. We have now added another one that animates the rotation of our boxes. The first two buttons animate the opacity and the rotation one at a time. The third button triggers the chaining of animateOpacity with animateRotation.

The chaining itself doesn’t start the animation, so we also use the start() method to start the opacity animation. The last button is used to chain both the tweens with each other. This time, the animations keep playing indefinitely once they have been started. Here is a CodePen demo that shows all the above code in action:

To fully understand how chaining works, you will have to press the buttons in a specific sequence. Click on the Animate Opacity button first and you will see that the opacity animation is played only once and then nothing else happens. Now, press the Animate Rotation button and you will see that the boxes rotate once and then nothing else happens.

After that, press the Chain Animations button and you will see that the opacity animation plays first and, once it completes its iteration, the rotation animation starts playing all by itself. This happened because the rotation animation is now chained to the opacity animation. 

Now, press the Animate Opacity button again and you will see that both opacity and rotation are animated in sequence. This is because they had already been chained after we clicked on Chain Animations.

At this point, pressing the Animate Rotation button will only animate the rotation. The reason for this behavior is that we have only chained the rotation animation to the opacity animation. This means that the boxes will be rotated every time the opacity is animated, but a rotation animation doesn’t mean that the opacity will be animated as well. 

Finally, you can click on the Play in a Loop button. This will chain both the animations with each other, and once that happens, the animations will keep playing in an indefinite loop. This is because the end of one animation triggers the start of the other animation.

Final Thoughts

In this introductory KUTE.js tutorial, you learned about the basics of the library. We started with the installation and then moved on to different methods that can be used to create tween objects. 

You also learned how to control the playback of an animation and how to chain different tweens together. Once you fully understand chaining, you will be able to create some interesting animations using this library.

In the next tutorial of the series, you will learn how to animate different kinds of CSS properties using KUTE.js.

Source: Nettuts Web Development

Monthly Web Development Update 11/2017: Browser News, KRACK and Calligraphy With AR



Editor’s Note: Our dear friend Anselm Hannemann summarizes what happened in the web community in the past few weeks in one handy list, so that you can catch up on everything new and important. Enjoy!

Monthly Web Development Update November 2017

Welcome back to our monthly reading list. Before we dive right into all the amazing content I stumbled upon — admittedly, this one is going to be quite a long update — I want to make a personal announcement. This week I launched a personal project called Colloq, a new conference and event service for users and organizers. If you’re going to events or are organizing one or if you’re interested in the recorded content of conferences, this could be for you. So if you like, go ahead and check it out.

The post Monthly Web Development Update 11/2017: Browser News, KRACK and Calligraphy With AR appeared first on Smashing Magazine.

Source: Smashing Magazine

How to Read Big Files with PHP (Without Killing Your Server)

It’s not often that we, as PHP developers, need to worry about memory management. The PHP engine does a stellar job of cleaning up after us, and the web server model of short-lived execution contexts means even the sloppiest code has no long-lasting effects.

There are rare times when we may need to step outside of this comfortable boundary — like when we’re trying to run Composer for a large project on the smallest VPS we can create, or when we need to read large files on an equally small server.

Fragmented terrain

It’s the latter problem we’ll look at in this tutorial.

The code for this tutorial can be found on GitHub.

Measuring Success

The only way to be sure we’re making any improvement to our code is to measure a bad situation and then compare that measurement to another after we’ve applied our fix. In other words, unless we know how much a “solution” helps us (if at all), we can’t know if it really is a solution or not.

There are two metrics we can care about. The first is CPU usage. How fast or slow is the process we want to work on? The second is memory usage. How much memory does the script take to execute? These are often inversely proportional — meaning that we can offload memory usage at the cost of CPU usage, and vice versa.

In an asynchronous execution model (like with multi-process or multi-threaded PHP applications), both CPU and memory usage are important considerations. In traditional PHP architecture, these generally become a problem when either one reaches the limits of the server.

It’s impractical to measure CPU usage inside PHP. If that’s the area you want to focus on, consider using something like top on Ubuntu or macOS. For Windows, consider using the Linux Subsystem, so you can use top in Ubuntu.

For the purposes of this tutorial, we’re going to measure memory usage. We’ll look at how much memory is used in “traditional” scripts. We’ll implement a couple of optimization strategies and measure those too. In the end, I want you to be able to make an educated choice.

The methods we’ll use to see how much memory is used are:

// formatBytes is taken from the documentation

memory_get_peak_usage();

function formatBytes($bytes, $precision = 2) {
    $units = array("b", "kb", "mb", "gb", "tb");

    $bytes = max($bytes, 0);
    $pow = floor(($bytes ? log($bytes) : 0) / log(1024));
    $pow = min($pow, count($units) - 1);

    $bytes /= (1 << (10 * $pow));

    return round($bytes, $precision) . " " . $units[$pow];
}

We’ll use these functions at the end of our scripts, so we can see which script uses the most memory at one time.

What Are Our Options?

There are many approaches we could take to read files efficiently. But there are also two likely scenarios in which we could use them. We could want to read and process data all at the same time, outputting the processed data or performing other actions based on what we read. We could also want to transform a stream of data without ever really needing access to the data.

Let’s imagine, for the first scenario, that we want to be able to read a file and create separate queued processing jobs every 10,000 lines. We’d need to keep at least 10,000 lines in memory, and pass them along to the queued job manager (whatever form that may take).

For the second scenario, let’s imagine we want to compress the contents of a particularly large API response. We don’t care what it says, but we need to make sure it’s backed up in a compressed form.

In both scenarios, we need to read large files. In the first, we need to know what the data is. In the second, we don’t care what the data is. Let’s explore these options…

Reading Files, Line By Line

There are many functions for working with files. Let’s combine a few into a naive file reader:

// from memory.php

function formatBytes($bytes, $precision = 2) {
    $units = array("b", "kb", "mb", "gb", "tb");

    $bytes = max($bytes, 0);
    $pow = floor(($bytes ? log($bytes) : 0) / log(1024));
    $pow = min($pow, count($units) - 1);

    $bytes /= (1 << (10 * $pow));

    return round($bytes, $precision) . " " . $units[$pow];
}

print formatBytes(memory_get_peak_usage());
// from reading-files-line-by-line-1.php

function readTheFile($path) {
    $lines = [];
    $handle = fopen($path, "r");

    while(!feof($handle)) {
        $lines[] = trim(fgets($handle));
    }

    fclose($handle);
    return $lines;
}

readTheFile("shakespeare.txt");

require "memory.php";

We’re reading a text file containing the complete works of Shakespeare. The text file is about 5.5MB, and the peak memory usage is 12.8MB. Now, let’s use a generator to read each line:

// from reading-files-line-by-line-2.php

function readTheFile($path) {
    $handle = fopen($path, "r");

    while(!feof($handle)) {
        yield trim(fgets($handle));
    }

    fclose($handle);
}

readTheFile("shakespeare.txt");

require "memory.php";

The text file is the same size, but the peak memory usage is 393KB. This doesn’t mean anything until we do something with the data we’re reading. Perhaps we can split the document into chunks whenever we see two blank lines. Something like this:

// from reading-files-line-by-line-3.php

$iterator = readTheFile("shakespeare.txt");

$buffer = "";

foreach ($iterator as $iteration) {
    preg_match("/\n{3}/", $buffer, $matches);

    if (count($matches)) {
        print ".";
        $buffer = "";
    } else {
        $buffer .= $iteration . PHP_EOL;
    }
}

require "memory.php";

Any guesses how much memory we’re using now? Would it surprise you to know that, even though we split the text document up into 1,216 chunks, we still only use 459KB of memory? Given the nature of generators, the most memory we’ll use is that which we need to store the largest text chunk in an iteration. In this case, the largest chunk is 101,985 characters.

I’ve already written about the performance boosts of using generators and Nikita Popov’s Iterator library, so go check that out if you’d like to see more!

Generators have other uses, but this one is demonstrably good for performant reading of large files. If we need to work on the data, generators are probably the best way.

Piping Between Files

In situations where we don’t need to operate on the data, we can pass file data from one file to another. This is commonly called piping (presumably because we don’t see what’s inside a pipe except at each end … as long as it’s opaque, of course!). We can achieve this by using stream methods. Let’s first write a script to transfer from one file to another, so that we can measure the memory usage:

// from piping-files-1.php

file_put_contents(
    "piping-files-1.txt", file_get_contents("shakespeare.txt")
);

require "memory.php";

Unsurprisingly, this script uses slightly more memory to run than the text file it copies. That’s because it has to read (and keep) the file contents in memory until it has written to the new file. For small files, that may be okay. When we start to use bigger files, not so much…

Let’s try streaming (or piping) from one file to another:

// from piping-files-2.php

$handle1 = fopen("shakespeare.txt", "r");
$handle2 = fopen("piping-files-2.txt", "w");

stream_copy_to_stream($handle1, $handle2);

fclose($handle1);
fclose($handle2);

require "memory.php";

This code is slightly strange. We open handles to both files, the first in read mode and the second in write mode. Then we copy from the first into the second. We finish by closing both files again. It may surprise you to know that the memory used is 393KB.

That seems familiar. Isn’t that what the generator code used to store when reading each line? That’s because the second argument to fgets specifies how many bytes of each line to read (and defaults to -1 or until it reaches a new line).

The third argument to stream_copy_to_stream is exactly the same sort of parameter (with exactly the same default). stream_copy_to_stream is reading from one stream, one line at a time, and writing it to the other stream. It skips the part where the generator yields a value, since we don’t need to work with that value.

Piping this text isn’t useful to us, so let’s think of other examples which might be. Suppose we wanted to output an image from our CDN, as a sort of redirected application route. We could illustrate it with code resembling the following:

// from piping-files-3.php

file_put_contents(
    "piping-files-3.jpeg", file_get_contents(
        "https://example.com/image.jpeg" // the CDN URL was elided; this placeholder stands in for it
    )
);

// ...or write this straight to stdout, if we don't need the memory info

require "memory.php";

Imagine an application route brought us to this code. But instead of serving up a file from the local file system, we want to get it from a CDN. We may substitute file_get_contents for something more elegant (like Guzzle), but under the hood it’s much the same.

The memory usage (for this image) is around 581KB. Now, how about we try to stream this instead?

// from piping-files-4.php

$handle1 = fopen(
    "https://example.com/image.jpeg", "r" // placeholder for the elided CDN URL
);

$handle2 = fopen(
    "piping-files-4.jpeg", "w"
);

// ...or write this straight to stdout, if we don't need the memory info

stream_copy_to_stream($handle1, $handle2);

fclose($handle1);
fclose($handle2);

require "memory.php";

The memory usage is slightly less (at 400KB), but the result is the same. If we didn’t need the memory information, we could just as well print to standard output. In fact, PHP provides a simple way to do this:

$handle1 = fopen(
    "https://example.com/image.jpeg", "r" // placeholder for the elided CDN URL
);

$handle2 = fopen(
    "php://stdout", "w"
);

stream_copy_to_stream($handle1, $handle2);

fclose($handle1);
fclose($handle2);

// require "memory.php";

Other Streams

There are a few other streams we could pipe and/or write to and/or read from:

  • php://stdin (read-only)
  • php://stderr (write-only, like php://stdout)
  • php://input (read-only) which gives us access to the raw request body
  • php://output (write-only) which lets us write to an output buffer
  • php://memory and php://temp (read-write) are places we can store data temporarily. The difference is that php://temp will store the data in the file system once it becomes large enough, while php://memory will keep storing in memory until that runs out.
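As a quick sketch of the last two, a php://temp handle behaves like any other stream:

```php
// Write to an in-memory stream, then rewind and read it back.
// php://temp spills to a temporary file once the data grows large enough.
$handle = fopen("php://temp", "r+");

fwrite($handle, "Hello, streams!");
rewind($handle);

print stream_get_contents($handle); // prints "Hello, streams!"

fclose($handle);
```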

Continue reading How to Read Big Files with PHP (Without Killing Your Server) on SitePoint.

Source: Sitepoint

Using CSS Grid: Supporting Browsers Without Grid



When using any new CSS, the question of browser support has to be addressed. This is even more of a consideration when new CSS is used for layout as with Flexbox and CSS Grid, rather than things we might consider an enhancement.

Using CSS Grid: Supporting Browsers Without Grid

In this article, I explore approaches to dealing with browser support today. What are the practical things we can do to allow us to use new CSS now and still give a great experience to the browsers that don’t support it?

The post Using CSS Grid: Supporting Browsers Without Grid appeared first on Smashing Magazine.

Source: Smashing Magazine

Essential Skills for Landing a Test Automation Job in 2018

Every year brings new requirements in the test automation market. Test automation engineers must master their skills in order to stay ahead and land the job of their dreams. Following our last research: World’s Most Desirable Test Automation Skills, TestProject examined top job searching websites around the world to determine the most demanded test automation skills and technologies for 2018.

Continue reading Essential Skills for Landing a Test Automation Job in 2018 on SitePoint.

Source: Sitepoint

Gates and Policies in Laravel

Today, we’re going to discuss the authorization system of the Laravel web framework. The Laravel framework implements authorization in the form of gates and policies. After an introduction to gates and policies, I’ll demonstrate the concepts by implementing a custom example.

I assume that you’re already aware of the built-in Laravel authentication system as that’s something essential in order to understand the concept of authorization. Obviously, the authorization system works in conjunction with the authentication system in order to identify the legitimate user session.

If you’re not aware of the Laravel authentication system, I would highly recommend going through the official documentation, which provides you with hands-on insight into the subject.

Laravel’s Approach to Authorization

By now, you should already know that the Laravel authorization system comes in two flavors—gates and policies. Although it may sound like a complicated affair, I would say it’s pretty easy once you get the hang of it!

Gates allow you to define an authorization rule using a simple closure-based approach. In other words, when you want to authorize an action that’s not related to any specific model, the gate is the perfect place to implement that logic.

Let’s have a quick look at what gate-based authorization looks like:
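A minimal sketch, assuming a Post model with a user_id column (the exact rule used in the article isn’t reproduced here):

```php
Gate::define('update-post', function ($user, $post) {
    // Allow the action only if the user authored the post
    // (assumes the Post model has a user_id column)
    return $user->id == $post->user_id;
});
```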

The above snippet defines the authorization rule update-post that you could call from anywhere in your application.

On the other hand, you should use policies when you want to group the authorization logic of any model. For example, let’s say you have a Post model in your application, and you want to authorize the CRUD actions of that model. In that case, it’s the policy that you need to implement.
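A sketch of such a policy class for the Post model’s CRUD actions (the method bodies here are illustrative assumptions):

```php
namespace App\Policies;

use App\Post;
use App\User;

class PostPolicy
{
    // One method per CRUD action we want to authorize

    public function view(User $user, Post $post)
    {
        return true;
    }

    public function create(User $user)
    {
        return true;
    }

    public function update(User $user, Post $post)
    {
        // Only the author may update the post
        return $user->id == $post->user_id;
    }

    public function delete(User $user, Post $post)
    {
        return $user->id == $post->user_id;
    }
}
```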

As you can see, it’s a pretty simple policy class that defines the authorization for the CRUD actions of the Post model.

So that was an introduction to gates and policies in Laravel. From the next section onwards, we’ll go through a practical demonstration of each element.

Gates

In this section, we’ll see a real-world example to understand the concept of gates.

More often than not, you end up looking at the Laravel service provider when you need to register a component or a service. Following that convention, let’s go ahead and define our custom gate in the app/Providers/AuthServiceProvider.php as shown in the following snippet.

In the boot method, we’ve defined our custom gate:
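The provider would look something like this (a sketch assuming posts record their author in a user_id column):

```php
<?php

namespace App\Providers;

use Illuminate\Support\Facades\Gate;
use Illuminate\Foundation\Support\Providers\AuthServiceProvider as ServiceProvider;

class AuthServiceProvider extends ServiceProvider
{
    public function boot()
    {
        $this->registerPolicies();

        // Only the author of a post is authorized to update it.
        Gate::define('update-post', function ($user, $post) {
            return $user->id == $post->user_id;
        });
    }
}
```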

A gate definition takes a closure that returns either TRUE or FALSE based on the authorization logic defined in it. Apart from the closure-based approach, there are other ways you could define gates.

For example, the following gate definition calls the controller action instead of the closure function.
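Laravel also accepts a Class@method callable in place of the closure, along these lines:

```php
use Illuminate\Support\Facades\Gate;

// Delegate the authorization logic to a controller method.
Gate::define('update-post', 'App\Http\Controllers\PostController@update');
```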

Now, let’s go ahead and add a custom route so that we can go through a demonstration of how gate-based authorization works. In the routes file routes/web.php, let’s add the following route.

Let’s create an associated controller file app/Http/Controllers/PostController.php as well.

In most cases, you’ll end up using either the allows or denies method of the Gate facade to authorize a certain action. In our example above, we’ve used the allows method to check if the current user is able to perform the update-post action.

Sharp-eyed readers will have noticed that we've only passed the second argument $post to the closure. The first argument, the currently logged-in user, is automatically injected by the Gate facade.

So that’s how you’re supposed to use gates to authorize actions in your Laravel application. The next section is all about how to use policies, should you wish to implement authorization for your models.

Policies

As we discussed earlier, when you want to logically group your authorization actions for any particular model or resource, it’s the policy you’re looking for.

In this section, we’ll create a policy for the Post model that will be used to authorize all the CRUD actions. I assume that you’ve already implemented the Post model in your application; otherwise, something similar will do.

The Laravel artisan command is your best friend when it comes to creating stubbed code. You can use the following artisan command to create a policy for the Post model.
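From the root of your Laravel project, run:

```shell
php artisan make:policy PostPolicy --model=Post
```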

As you can see, we’ve supplied the --model=Post argument so that it creates all the CRUD methods. In the absence of that, it’ll create a blank Policy class. You can locate the newly created Policy class at app/Policies/PostPolicy.php.

Let’s replace it with the following code.

To be able to use our Policy class, we need to register it using the Laravel service provider as shown in the following snippet.
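In app/Providers/AuthServiceProvider.php, the registration looks something like this:

```php
<?php

namespace App\Providers;

use App\Post;
use App\Policies\PostPolicy;
use Illuminate\Foundation\Support\Providers\AuthServiceProvider as ServiceProvider;

class AuthServiceProvider extends ServiceProvider
{
    /**
     * The policy mappings for the application.
     */
    protected $policies = [
        Post::class => PostPolicy::class,
    ];

    public function boot()
    {
        // Registers the policy mappings defined above.
        $this->registerPolicies();
    }
}
```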

We’ve added the mapping of our Policy in the $policies property. It tells Laravel to call the corresponding policy method to authorize the CRUD action.

You also need to register the policies using the registerPolicies method, as we’ve done in the boot method.

Moving further, let’s create a couple of custom routes in the routes/web.php file so that we can test our Policy methods there.
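The routes could look like this (the URIs are just for this example):

```php
Route::get('post/view/{id}', 'PostController@view');
Route::get('post/delete/{id}', 'PostController@delete');
```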

Finally, let’s create an associated controller at app/Http/Controllers/PostController.php.

There are different ways you could authorize your actions using Policies. In our example above, we’ve used the User model to authorize our Post model actions.

The User model provides two useful methods for authorization purposes—can and cant. The can method checks whether the current user is able to execute a certain action; its counterpart, the cant method, checks whether the user is unable to do so.

Let’s grab the snippet of the view method from the controller to see what exactly it does.

Firstly, we load the currently logged-in user, which gives us the object of the User model. Next, we load an example post using the Post model.

Moving ahead, we’ve used the can method of the User model to authorize the view action of the Post model. The first argument of the can method is the action name that you want to authorize, and the second argument is the model object that you want to get authorized against.

That was a demonstration of how to use the User model to authorize the actions using policies. Alternatively, you could use the Controller Helper as well, if you’re in the controller while authorizing a certain action.
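With the controller helper, the same check becomes:

```php
public function view($id)
{
    $post = Post::find($id);

    // Throws an AuthorizationException if the current
    // user isn't allowed to view the post.
    $this->authorize('view', $post);

    return 'Authorized to view the post!';
}
```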

As you can see, you don’t need to load the User model if you use the Controller Helper.

So that's the concept of policies at your disposal. It's really handy when authorizing a model or a resource, as it allows you to group the authorization logic in one place.

Just make sure that you don't use gates and policies together for the same actions of a model, otherwise it'll create conflicts. That's it from my side for today, and I'll call it a day!

Conclusion

Today, it was Laravel authorization that took the center stage in my article. At the beginning of the article, I introduced the main elements of Laravel authorization, gates and policies.

Following that, we went through creating our custom gate and policy to see how it works in the real world. I hope you’ve enjoyed the article and learned something useful in the context of Laravel.

For those of you who are either just getting started with Laravel or looking to expand your knowledge, site, or application with extensions, we have a variety of things you can study on Envato Market.

As always, I would love to hear from you in the comment feed below!

Source: Nettuts Web Development

Automate CI/CD and Spend More Time Writing Code

This article was sponsored by Microsoft Visual Studio App Center. Thank you for supporting the partners who make SitePoint possible.

What’s the best part about developing software? Writing amazing code.

What’s the worst part? Everything else.

Developing software is a wonderful job. You get to solve problems in new ways, delight users, and see something you built making lives better. But for all the hours we spend writing code, there are often just as many spent managing the overhead that comes along with it—and it’s all a big waste of time. Here are some of the biggest productivity sinkholes, and how we at Microsoft are trying to scrape back some of that time for you.

1. Building

What’s the first step to getting your awesome app in the hands of happy users? Making it exist. Some may think moving from source code to binary wouldn’t still be such a pain, but it is. Depending on the project, you might compile several times a day, on different platforms, and all that waiting is time you could have spent coding. Plus, if you’re building iOS apps, you need a Mac build agent—not necessarily your primary development tool, particularly if you’re building apps in a cross-platform framework.

You want to claim back that time, and the best way to do that is (it won’t be the last time I say this) automation. You need to automate away the configuration and hardware management so the apps just build when they’re supposed to.

Build with Visual Studio App Center

Our attempt to answer that need is Visual Studio App Center Build, a service that automates all the steps you don’t want to reproduce manually, so you can build every time you check in code, or any time you, your QA team, or your release managers want to. Just point Build at a GitHub, Bitbucket, or VSTS repo, pick a branch, configure a few parameters, and you’re building Android, UWP, and even iOS and macOS apps in the cloud, without managing any hardware. And if you need to do something special, you can add post-clone, pre-build, and post-build scripts to customize.

2. Testing

I’ve spent many years testing software, and throughout my career, there were three questions I always hated hearing:

“Are you done yet?”

“Can you reproduce it?”

“Is it really that bad?”

In the past, there’s rarely been enough time or resources for thorough, proper testing, but mobile development has exacerbated that particular problem. We now deliver more code, more frequently to more devices. We can’t waste hours trying to recreate that elusive critical failure, and we don’t have time to argue over whether a bug is a showstopper. At the same time, we’re the gatekeepers who are ultimately responsible for a high-visibility failure or a poor-quality product, and as members of a team, we want to get ahead of problems to increase quality, rather than just standing in the way of shipping.

So what’s the answer? “Automation,” sure. But automation that makes sense. Spreadsheets of data and folders of screenshots mean nothing if you can’t put it all together. When you’re up against a deadline and have to convince product owners to make a call, you need to deliver information they can understand, while still giving devs the detail they need to make the fix.

Test with Visual Studio App Center

To help with that, we’ve created App Center Test, a service that performs automated UI tests on hundreds of configurations across thousands of real devices. Since the tests are automated, you run exactly the same test every time, so you can identify performance and UX deviations right away, with every build. Tests produce screenshots or videos alongside performance data, so anyone can spot issues, and devs can click down into the detailed logs and start fixing right away. You can spot-check your code by testing on a few devices with every commit, then run regressions on hundreds of devices to verify that everything works for all your users.

Continue reading Automate CI/CD and Spend More Time Writing Code

Source: Sitepoint

Get a lifetime of online privacy with VPN Unlimited for under $45

VPN Unlimited

If you’ve ever connected to public Wi-Fi, chances are your data and browsing activity were unencrypted and vulnerable to falling into the wrong hands. With the increasing scale and frequency of cyber attacks, it’s more important than ever to protect your online privacy. VPN Unlimited is a VPN (virtual private network) service that routes your connection through remote servers, masking your online activity and letting you access geo-blocked content from anywhere in the world. A standard subscription costs $9 USD/month, but right now SitePoint users can get a lifetime account for just $42.50.

Continue reading Get a lifetime of online privacy with VPN Unlimited for under $45

Source: Sitepoint