How to Build a WordPress Theme from Scratch: First Steps

WordPress themes give WordPress users the ability to completely change the look of a WP website, as well as add functionality to it. In this three-part series, we’ll introduce WordPress themes, showing how they work, how they’re structured, the PHP architecture behind them, and other relevant information. We’ll then embark on a journey to build a WordPress theme.

This first article prepares us for this journey by discussing the theory behind WordPress themes.

A Word or Two on the Basics

WordPress was conceived as a blog engine, or a simple, blogging-oriented content management system. It was initially released in 2003, by Matt Mullenweg and Mike Little. Ever since then, its user base hasn’t stopped growing. WordPress is a PHP-powered web application that uses MySQL as its database, and is usually run behind a server program, such as NGINX or Apache.

WordPress is basically just a bunch of PHP files that work together as an application. A PHP interpreter uses these files to produce web pages for visitors. It produces HTML, to be more precise.

The role of the templating engine of WordPress is to enable us to write (primarily) presentational instructions — instructions on how exactly to structure and style the HTML content that WordPress will output. These instructions are encapsulated in WordPress themes.

Each theme consists of a folder with PHP, CSS, and sometimes JavaScript files. The files that every WordPress theme must have — at the minimum — are style.css and index.php. This is the technical minimum required for the theme to function, but no serious WordPress theme stops at only these two files.
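
For reference, WordPress recognizes a theme by the comment header at the top of its style.css file. A minimal, illustrative header (all of the values below are placeholders) looks like this:

/*
Theme Name: My Starter Theme
Author: Jane Doe
Description: A bare-bones starter theme.
Version: 1.0
*/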

Basic Template and Partial Files

The minimal index.php file catches all the queries that don’t have specialized template files of their own within a theme. front-page.php, home.php, page.php, taxonomy.php, author.php, and archive.php are some of the templates that we can use in our themes to further structure specific pages or queries.

For example, the archive.php file will specify the HTML output structure when a visitor requests one of the pages that display lists of posts. page.php will specify how to display individual pages, and so on.

Partial files encapsulate repeatable parts of the pages of a WordPress website. For example, the header and footer are usually consistent across all pages of a website, so WordPress themes separate these page parts into header.php and footer.php. comments.php is used to display comments wherever applicable.

These files are then required from the template files described above.

This way, we adhere to the DRY (don’t repeat yourself) principle, and avoid duplicating the same code across all those files.
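
As a sketch of how this looks in practice, a typical page.php might pull in the shared partials like this (a minimal example, not a full production template):

<?php
// page.php: a minimal page template
get_header(); // requires header.php

while ( have_posts() ) : the_post();
    the_title( '<h1>', '</h1>' );
    the_content();
endwhile;

get_footer(); // requires footer.php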

Template Hierarchy

In the WordPress templating system, there’s a hierarchy of template files that WordPress will try to use for each request. This hierarchy is based on specificity. WordPress will try to use the most specific file for each request, if it exists. If it doesn’t exist, it will look up the next, less specific file, and so on.

To take a page request as an example — when a visitor opens a specific page on a WordPress website — WordPress will first try to find the template the page author assigned to it in the wp-admin backend. That can be a completely custom template file, even a completely static one.

If there’s no such template, or it wasn’t assigned, WordPress will try to find a template file with a slug of that particular page in its filename. That would look like page-mypageslug.php — whatever our mypageslug may be.

The next file WordPress will try to use will be a file with that particular page’s ID in its filename — like page-48.php.

If none of those page-specific files exist, WordPress will try to use page.php — used for all the pages, unless otherwise specified.

If it doesn’t find page.php, it will try to use singular.php. This template file is used for posts when single.php is not found, and for pages when page.php is not found.

Now, if none of these are found in our page request example, WordPress will fall back to index.php.

This, briefly, explains the WordPress template hierarchy. All of these template files we mentioned will usually incorporate (require) partials like header.php, footer.php and others as needed. They can also specify their specific partials to use — for example, a page-specific header.
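
For example, get_header() accepts a name argument that loads such a variant:

<?php
// Loads header-page.php; falls back to header.php if that file doesn't exist
get_header( 'page' );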

The next thing you need to be familiar with to understand theming is the WordPress post type.

WordPress Post Types

The content in WordPress exists in the form of post types. The most visible built-in types are posts and pages — these are the logical ones. WordPress also has a built-in attachment post type, navigation menus, and revisions, which are also, technically, post types.

We can also register our own post types, whether in our themes or in our plugins, using the following function:

register_post_type( $post_type, $args )

Here, we specify the post type name, how it’s structured, how it’s represented in wp-admin, and so on.
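
As an illustrative sketch (the "book" post type and all labels below are invented for this example), registering a custom post type from a theme or plugin might look like this:

<?php
// Hypothetical example: register a "book" custom post type on init
function myprefix_register_book_post_type() {
    register_post_type( 'book', array(
        'labels' => array(
            'name'          => 'Books',
            'singular_name' => 'Book',
        ),
        'public'      => true,  // show it in wp-admin and on the front end
        'has_archive' => true,  // enable the post type archive
        'supports'    => array( 'title', 'editor', 'thumbnail' ),
    ) );
}
add_action( 'init', 'myprefix_register_book_post_type' );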

When we have this defined, we can also create template files specific to this post type. Custom post types, like pages and posts, have their own template hierarchy.

More about the template hierarchy can be found in the official WordPress documentation.



Source: Sitepoint

Sending Emails Asynchronously Through AWS SES

Leonardo Losoviz

2018-11-13T14:30:53+01:00

Most applications send emails to communicate with their users. Transactional emails are those triggered by the user’s interaction with the application, such as when welcoming a new user after registering on the site, giving the user a link to reset the password, or attaching an invoice after the user makes a purchase. All these cases typically require sending only one email to the user. In some other cases, though, the application needs to send many more emails, such as when a user posts new content on the site and all her followers (which, on a platform like Twitter, may amount to millions of users) receive a notification. In this latter situation, if not architected properly, sending emails may become a bottleneck in the application.

That is what happened in my case. I have a site that may need to send 20 emails after certain user-triggered actions (such as notifications to all of a user’s followers). Initially, it relied on sending the emails through a popular cloud-based SMTP provider (such as SendGrid, Mandrill, Mailjet or Mailgun); however, the response back to the user would take several seconds. Evidently, connecting to the SMTP server to send those 20 emails was slowing the process down significantly.

After inspection, I identified the sources of the problem:

  1. Synchronous connection
    The application connects to the SMTP server and waits for an acknowledgment, synchronously, before continuing the execution of the process.
  2. High latency
    While my server is located in Singapore, the SMTP provider I was using has its servers located in the US, making the roundtrip connection take considerable time.
  3. No reusability of the SMTP connection
    When calling the function to send an email, the function sends it immediately, creating a new SMTP connection at that moment (it doesn’t offer to collect all the emails and send them together at the end of the request, under a single SMTP connection).

Because of #1, the time the user must wait for the response is tied to the time it takes to send the emails. Because of #2, the time to send one email is relatively high. And because of #3, the time to send 20 emails is 20 times the time it takes to send one email. While sending only one email may not make the application terribly slower, sending 20 emails certainly does, affecting the user experience.

Let’s see how we can solve this issue.

Paying Attention To The Nature Of Transactional Emails

Before anything, we must notice that not all emails are equal in importance. We can broadly categorize emails into two groups: priority and non-priority emails. For instance, if the user forgot the password to access the account, she will expect the email with the password reset link to arrive in her inbox immediately; that is a priority email. In contrast, an email notifying that somebody we follow has posted new content does not need to arrive in the user’s inbox immediately; that is a non-priority email.

The solution must optimize how these two categories of emails are sent. Assuming that there will only be a few (maybe 1 or 2) priority emails to be sent during the process, and the bulk of the emails will be non-priority ones, then we design the solution as follows:

  • Priority emails can simply avoid the high-latency issue by using an SMTP provider located in the same region where the application is deployed. Finding one takes a bit of research, and it involves integrating our application with the provider’s API.
  • Non-priority emails can be sent asynchronously, and in batches where many emails are sent together. Implementing this at the application level requires an appropriate technology stack.

Let’s define the technology stack to send emails asynchronously next.

Defining The Technology Stack

Note: I have decided to base my stack on AWS services because my website is already hosted on AWS EC2. Otherwise, I would have an overhead from moving data among several companies’ networks. However, we can implement our solution using other cloud service providers too.

My first approach was to set up a queue. Through a queue, the application would not send the emails anymore; instead, it would publish a message with the email content and metadata in a queue, and another process would pick up the messages from the queue and send the emails.

However, when checking the queue service from AWS, called SQS, I decided that it was not an appropriate solution, because:

  • It is rather complex to set up;
  • A standard queue message can store only up to 256 KB of information, which may not be enough if the email has attachments (an invoice, for instance). And even though it is possible to split a large message into smaller messages, the complexity grows even more.

Then I realized that I could perfectly imitate the behavior of a queue through a combination of other AWS services — S3 and Lambda — which are much easier to set up. S3, a cloud object storage solution for storing and retrieving data, can act as the repository for uploading the messages, and Lambda, a computing service that runs code in response to events, can pick up a message and execute an operation with it.

In other words, we can set-up our email sending process like this:

  1. The application uploads a file with the email content + metadata to an S3 bucket.
  2. Whenever a new file is uploaded into the S3 bucket, S3 triggers an event containing the path to the new file.
  3. A Lambda function picks the event, reads the file, and sends the email.

Finally, we have to decide how to send emails. We can either keep using the SMTP provider that we already have, having the Lambda function interact with their APIs, or use the AWS service for sending emails, called SES. Using SES has both benefits and drawbacks:

Benefits:
  • Very simple to use from within AWS Lambda (it just takes 2 lines of code).
  • It is cheaper: Lambda fees are computed based on the amount of time it takes to execute the function, so connecting to SES from within the AWS network will take a shorter time than connecting to an external server, making the function finish earlier and cost less. (Unless SES is not available in the same region where the application is hosted; in my case, because SES is not offered in the Asia Pacific (Singapore) region, where my EC2 server is located, I might be better off connecting to some Asia-based external SMTP provider.)
Drawbacks:
  • Not many stats for monitoring the sent emails are provided, and adding more powerful ones requires extra effort (e.g. tracking what percentage of emails were opened, or which links were clicked, must be set up through AWS CloudWatch).
  • If we keep using the SMTP provider for sending the priority emails, then we won’t have all our stats together in one place.

For simplicity, in the code below we will be using SES.

We have then defined the logic of the process and stack as follows: The application sends priority emails as usual, but for non-priority ones, it uploads a file with email content and metadata to S3; this file is asynchronously processed by a Lambda function, which connects to SES to send the email.

Let’s start implementing the solution.

Differentiating Between Priority And Non-Priority Emails

In short, this all depends on the application, so we need to decide on an email-by-email basis. I will describe a solution I implemented for WordPress, which requires some hacks to work around the constraints of the function wp_mail. For other platforms, the strategy below will work too, though quite possibly there will be better strategies that don’t require hacks to work.

The way to send an email in WordPress is by calling the function wp_mail, and we don’t want to change that (e.g. by calling either a function wp_mail_synchronous or wp_mail_asynchronous), so our implementation of wp_mail will need to handle both the synchronous and asynchronous cases, and will need to know which group each email belongs to. Unfortunately, wp_mail doesn’t offer any extra parameter from which we could assess this information, as can be seen from its signature:

function wp_mail( $to, $subject, $message, $headers = '', $attachments = array() )

Then, in order to find out the category of the email, we add a hacky solution: by default, we make an email belong to the priority group, and if $to contains a particular email address (e.g. nonpriority@asynchronous.mail), or if $subject starts with a special string (e.g. “[Non-priority!]”), then it belongs to the non-priority group (and we remove the corresponding email address or string from the subject). wp_mail is a pluggable function, so we can override it simply by implementing a new function with the same signature in our functions.php file. Initially, it contains the same code as the original wp_mail function, located in the file wp-includes/pluggable.php, to extract all the parameters:

if ( !function_exists( 'wp_mail' ) ) :

function wp_mail( $to, $subject, $message, $headers = '', $attachments = array() ) {

  $atts = apply_filters( 'wp_mail', compact( 'to', 'subject', 'message', 'headers', 'attachments' ) );

  if ( isset( $atts['to'] ) ) {
    $to = $atts['to'];
  }

  if ( !is_array( $to ) ) {
    $to = explode( ',', $to );
  }

  if ( isset( $atts['subject'] ) ) {
    $subject = $atts['subject'];
  }

  if ( isset( $atts['message'] ) ) {
    $message = $atts['message'];
  }

  if ( isset( $atts['headers'] ) ) {
    $headers = $atts['headers'];
  }

  if ( isset( $atts['attachments'] ) ) {
    $attachments = $atts['attachments'];
  }

  if ( ! is_array( $attachments ) ) {
    $attachments = explode( "\n", str_replace( "\r\n", "\n", $attachments ) );
  }
  
  // Continue below...
}
endif;

And then we check if it is non-priority, in which case we then fork to a separate logic under function send_asynchronous_mail or, if it is not, we keep executing the same code as in the original wp_mail function:

function wp_mail( $to, $subject, $message, $headers = '', $attachments = array() ) {

  // Continued from above...

  $hacky_email = "nonpriority@asynchronous.mail";
  if (in_array($hacky_email, $to)) {

    // Remove the hacky email from $to
    array_splice($to, array_search($hacky_email, $to), 1);

    // Fork to asynchronous logic
    return send_asynchronous_mail($to, $subject, $message, $headers, $attachments);
  }

  // Continue all code from original function in wp-includes/pluggable.php
  // ...
}
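
With this override in place, a caller flags an email as non-priority simply by adding the special address to the recipients. For example (all addresses here are hypothetical):

<?php
// The extra address routes this email through the asynchronous path
wp_mail(
    array( 'follower@example.com', 'nonpriority@asynchronous.mail' ),
    'New post from someone you follow',
    'Hi! A new post was just published on the site.'
);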

In our function send_asynchronous_mail, instead of uploading the email straight to S3, we simply add the email to a global variable $emailqueue, from which we can upload all emails together to S3 in a single connection at the end of the request:

function send_asynchronous_mail($to, $subject, $message, $headers, $attachments) {
  
  global $emailqueue;
  if (!$emailqueue) {
    $emailqueue = array();
  }
  
  // Add email to queue. Code continues below...
}

We can upload one file per email, or we can bundle them so that in 1 file we contain many emails. Since $headers contains email meta (from, content-type and charset, CC, BCC, and reply-to fields), we can group emails together whenever they have the same $headers. This way, these emails can all be uploaded in the same file to S3, and the $headers meta information will be included only once in the file, instead of once per email:

function send_asynchronous_mail($to, $subject, $message, $headers, $attachments) {
  
  // Continued from above...

  // Add email to the queue
  $emailqueue[$headers] = $emailqueue[$headers] ?? array();
  $emailqueue[$headers][] = array(
    'to' => $to,
    'subject' => $subject,
    'message' => $message,
    'attachments' => $attachments,
  );

  // Code continues below
}

Finally, function send_asynchronous_mail returns true. Please notice that this code is hacky: true would normally mean that the email was sent successfully, but in this case, it hasn’t even been sent yet, and it could perfectly well fail. Because of this, the function calling wp_mail must not treat a true response as “the email was sent successfully,” but as an acknowledgment that it has been enqueued. That’s why it is important to restrict this technique to non-priority emails, so that if it fails, the process can keep retrying in the background, and the user will not expect the email to already be in her inbox:

function send_asynchronous_mail($to, $subject, $message, $headers, $attachments) {
  
  // Continued from above...

  // That's it!
  return true;
}

Uploading Emails To S3

In my previous article “Sharing Data Among Multiple Servers Through AWS S3”, I described how to create a bucket in S3, and how to upload files to the bucket through the SDK. All code below continues the implementation of a solution for WordPress, hence we connect to AWS using the SDK for PHP.

We can extend from the abstract class AWS_S3 (introduced in my previous article) to connect to S3 and upload the emails to a bucket “async-emails” at the end of the request (triggered through wp_footer hook). Please notice that we must keep the ACL as “private” since we don’t want the emails to be exposed to the internet:

class AsyncEmails_AWS_S3 extends AWS_S3 {

  function __construct() {

    // Send all emails at the end of the execution
    add_action("wp_footer", array($this, "upload_emails_to_s3"), PHP_INT_MAX);
  }

  protected function get_acl() {

    return "private";
  }

  protected function get_bucket() {

    return "async-emails";
  }

  function upload_emails_to_s3() {

    $s3Client = $this->get_s3_client();

    // Code continued below...
  }
}
new AsyncEmails_AWS_S3();

We start iterating through the pairs of headers => emaildata saved in the global variable $emailqueue, and get a default configuration from the function get_default_email_meta in case the headers are empty. In the code below, I only retrieve the “from” field from the headers (the code to extract all headers can be copied from the original function wp_mail):

class AsyncEmails_AWS_S3 extends AWS_S3 {

  public function get_default_email_meta() {

    // Code continued from above...

    return array(
      'from' => sprintf(
        '%s <%s>',
        get_bloginfo('name'),
        get_bloginfo('admin_email')
      ),
      'contentType' => 'text/html',
      'charset' => strtolower(get_option('blog_charset'))
    );
  }

  public function upload_emails_to_s3() {

    // Code continued from above...

    global $emailqueue;
    foreach ($emailqueue as $headers => $emails) {

      $meta = $this->get_default_email_meta();

      // Retrieve the "from" from the headers
      $regexp = '/From:\s*(([^<]*?)\s*<)?\s*([^>\r\n]+)\s*>?/i';
      if (preg_match($regexp, $headers, $matches)) {

        $meta['from'] = sprintf(
          '%s <%s>',
          $matches[2],
          $matches[3]
        );
      }

      // Code continued below... 
    }
  }
}

Finally, we upload the emails to S3. We decide how many emails to upload per file with the intention of saving money. Lambda functions are charged based on the amount of time they take to execute, calculated in spans of 100ms. The more time a function requires, the more expensive it becomes.

Sending all emails by uploading one file per email, then, is more expensive than uploading one file containing many emails, since the overhead from executing the function is paid once per email instead of only once for many emails, and also because sending many emails together fills the 100ms spans more thoroughly.

So we upload many emails per file. How many emails? Lambda functions have a maximum execution time (3 seconds by default), and if the operation fails, it will keep retrying from the beginning, not from where it failed. So, if the file contains 100 emails, and Lambda manages to send 50 emails before the max time is up, then it fails and it retries executing the operation again, sending the first 50 emails once again. To avoid this, we must choose a number of emails per file that we are confident is enough to process before the max time is up. In our situation, we could choose to send 25 emails per file. The number of emails depends on the application (bigger emails will take longer to be sent, and the time to send an email will depend on the infrastructure), so we should do some testing to come up with the right number.
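
Note that the code below references a constant EMAILS_PER_FILE, which is assumed to be defined elsewhere (for example, in functions.php) with the value we reasoned about above:

<?php
// Assumed definition; 25 is a starting point, to be tuned through testing
define( 'EMAILS_PER_FILE', 25 );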

The content of the file is simply a JSON object, containing the email meta under property “meta”, and the chunk of emails under property “emails”:

class AsyncEmails_AWS_S3 extends AWS_S3 {

  public function upload_emails_to_s3() {

    // Code continued from above...
    foreach ($emailqueue as $headers => $emails) {

      // Code continued from above...

      // Split the emails into chunks of no more than the value of constant EMAILS_PER_FILE:
      $chunks = array_chunk($emails, EMAILS_PER_FILE);
      $filename = time().rand();
      for ($chunk_count = 0; $chunk_count < count($chunks); $chunk_count++) {

        $body = array(
          'meta' => $meta,
          'emails' => $chunks[$chunk_count],
        );

        // Upload to S3
        $s3Client->putObject([
          'ACL' => $this->get_acl(),
          'Bucket' => $this->get_bucket(),
          'Key' => $filename.$chunk_count.'.json',
          'Body' => json_encode($body),
        ]);  
      }   
    }
  }
}

For simplicity, in the code above, I am not uploading the attachments to S3. If our emails need to include attachments, then we must use SES function SendRawEmail instead of SendEmail (which is used in the Lambda script below).
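
As a rough sketch only (not tested code from this article), the call in the Lambda script would change along these lines; building the raw MIME message, with headers, body, and base64-encoded attachments separated by MIME boundaries, is assumed to happen elsewhere:

// Sketch: sending a raw MIME message (needed for attachments) through SES
var params = {
  RawMessage: { Data: rawMimeMessage }, // the assembled MIME string (assumption)
  Source: emailsMeta.from,
  Destinations: email.to
};
ses.sendRawEmail(params, function(err, data) {
  // handle the result just as with sendEmail
});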

Having added the logic to upload the files with emails to S3, we can move next to coding the Lambda function.

Coding The Lambda Script

Lambda functions are also called serverless functions, not because they do not run on a server, but because the developer does not need to worry about the server: the developer simply provides the script, and the cloud takes care of provisioning the server, deploying and running the script. Hence, as mentioned earlier, Lambda functions are charged based on function execution time.

The following Node.js script does the required job. Invoked by the S3 “Put” event, which indicates that a new object has been created on the bucket, the function:

  1. Obtains the new object’s path (under variable srcKey) and bucket (under variable srcBucket).
  2. Downloads the object, through s3.getObject.
  3. Parses the content of the object, through JSON.parse(response.Body.toString()), and extracts the emails and the email meta.
  4. Iterates through all the emails, and sends them through ses.sendEmail.
var async = require('async');
var aws = require('aws-sdk');
var s3 = new aws.S3();
      
exports.handler = function(event, context, callback) {

  var srcBucket = event.Records[0].s3.bucket.name;
  var srcKey = decodeURIComponent(event.Records[0].s3.object.key.replace(/\+/g, " "));

  // Download the file from S3, parse it, and send the emails
  async.waterfall([

    function download(next) {

      // Download the file from S3 into a buffer.
      s3.getObject({
        Bucket: srcBucket,
        Key: srcKey
      }, next);
    },
    function process(response, next) {
          
      var file = JSON.parse(response.Body.toString());
      var emails = file.emails;
      var emailsMeta = file.meta;
        
      // Check required parameters
      // JSON.parse yields undefined (not null) for missing properties, so test truthiness
      if (!emails || !emailsMeta) {
        callback('Bad Request: Missing required data: ' + response.Body.toString());
        return;
      }
      if (emails.length === 0) {
        callback('Bad Request: No emails provided: ' + response.Body.toString());
        return;
      }
      
      var totalEmails = emails.length;
      var sentEmails = 0;
      for (var i = 0; i < totalEmails; i++) {

        var email = emails[i];
        var params = {
          Destination: {
            ToAddresses: email.to
          },
          Message: {          
            Subject: {
              Data: email.subject,
              Charset: emailsMeta.charset
            }
          },
          Source: emailsMeta.from
        };

        if (emailsMeta.contentType == 'text/html') {

          params.Message.Body = {
            Html: {
              Data: email.message,
              Charset: emailsMeta.charset
            }
          };
        } 
        else {

          params.Message.Body = {
            Text: {
              Data: email.message,
              Charset: emailsMeta.charset
            }
          };
        }

        // Send the email
        var ses = new aws.SES({
          "region": "us-east-1"
        });
        ses.sendEmail(params, function(err, data) {

          if (err) {
            console.error('Unable to send email due to an error: ' + err);
            return callback(err); // stop here so this email isn't also counted as sent
          }

          sentEmails++;
          if (sentEmails == totalEmails) {
            next();
          }
        });
      }
    }
  ], 
  function (err) {

    if (err) {
        console.error('Unable to send emails due to an error: ' + err);
        return callback(err);
    }

    // Success
    callback(null);
  });
};

Next, we must upload the Lambda function to AWS and configure it, which involves:

  1. Creating an execution role granting Lambda permissions to access S3.
  2. Creating a .zip package containing all the code, i.e. the Lambda function we are creating + all the required Node.js modules.
  3. Uploading this package to AWS using a CLI tool.

How to do these things is properly explained on the AWS site, in the Tutorial on Using AWS Lambda with Amazon S3.
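
For reference, the upload step with the AWS CLI might look roughly like this (the function name, role ARN, and file name are placeholders, and the runtime should match your Node.js version):

aws lambda create-function \
  --region us-east-1 \
  --function-name LambdaEmailSender \
  --runtime nodejs8.10 \
  --handler index.handler \
  --role arn:aws:iam::123456789012:role/lambda-s3-execution \
  --zip-file fileb://lambda-email-sender.zip \
  --timeout 10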

Hooking Up S3 With The Lambda Function

Finally, having the bucket and the Lambda function created, we need to hook both of them together, so that whenever there is a new object created on the bucket, it will trigger an event to execute the Lambda function. To do this, we go to the S3 dashboard and click on the bucket row, which will show its properties:


Displaying bucket properties inside the S3 dashboard
Clicking on the bucket’s row displays the bucket’s properties.

Then, after clicking on Properties, we scroll down to the “Events” item, click on Add a notification, and input the following fields:

  • Name: name of the notification, eg: “EmailSender”;
  • Events: “Put”, which is the event triggered when a new object is created on the bucket;
  • Send to: “Lambda Function”;
  • Lambda: name of our newly created Lambda, eg: “LambdaEmailSender”.

Setting up S3 with Lambda
Adding a notification in S3 to trigger an event for Lambda.

Finally, we can also set the S3 bucket to automatically delete the files containing the email data after some time. For this, we go to the Management tab of the bucket, and create a new Lifecycle rule, defining after how many days the emails must expire:


Lifecycle rule
Setting up a Lifecycle rule to automatically delete files from the bucket.

That’s it. From this moment on, adding a new object with the content and meta for the emails to the S3 bucket will trigger the Lambda function, which will read the file and connect to SES to send the emails.

I implemented this solution on my site, and it became fast once again: by offloading email sending to an external process, whether the application sends 20 or 5,000 emails makes no difference; the response to the user who triggered the action is immediate.

Conclusion

In this article we have analyzed why sending many transactional emails in a single request may become a bottleneck in the application, and created a solution to deal with the issue: instead of connecting to the SMTP server from within the application (synchronously), we can send the emails from an external function, asynchronously, based on a stack of AWS S3 + Lambda + SES.

By sending emails asynchronously, the application can manage to send thousands of emails, yet the response to the user who triggered the action will not be affected. However, to ensure that the user is not waiting for the email to arrive in the inbox, we also decided to split emails into two groups, priority and non-priority, and send only the non-priority emails asynchronously. We provided an implementation for WordPress, which is rather hacky due to the limitations of function wp_mail for sending emails.

A lesson from this article is that serverless functionalities on a server-based application work pretty well: sites running on a CMS like WordPress can improve their performance by implementing only specific features on the cloud, and avoid a great deal of complexity that comes from migrating highly dynamic sites to a fully serverless architecture.



Source: Smashing Magazine

Using Visual Composer Website Builder To Create WordPress Websites

Nick Babich

2018-11-12T12:55:03+01:00

(This is a sponsored article.) WordPress has changed the way we make websites and millions of people use it to create websites today. But this tool has a few significant limitations — it requires time and coding skills to create a website.

Even when you have acquired coding skills, jumping into code each time you need to solve a problem (add a new UI element or change styling options for an existing one) can be tedious. All too often we hear: “We need to work harder to achieve our goals.” While working hard is definitely important, we also need to work smarter.

Today, I’d like to review a tool that will allow us to work smarter. Imagine WordPress without design and technical limits; the tool that reduces the need to hand-code the parts of your website and frees you up to work on more interesting and valuable parts of the design.

In this article, I’ll review the Visual Composer Website Builder and create a real-world example — a landing page for a digital product — just by using this tool.

What Is Visual Composer Website Builder?

Visual Composer Website Builder is a simple and powerful drag-and-drop website builder that promises to change the way we work with WordPress. It introduced a more intuitive way of building a page — all actions involving changing visual hierarchy and content management are done visually. The tool reduces the need to hand-code the theme parts of a website and frees you up to work on valuable parts of the design such as content.

A GIF showing some features of Visual Composer Website Builder

Content is the most important property of your website. It’s the primary reason why people visit your site. It’s worth putting a lot of effort into creating good content, and using tools that help you deliver that content to visitors in the best way and with the least amount of effort.

Visual Composer And WPBakery

Visual Composer Website Builder is a builder from the creators of WPBakery Page Builder. If you have had a chance to use WPBakery Page Builder before, you might wonder what the difference between the two plugins is. Let’s be clear about these two products — there are a few significant differences between them:

  • The key difference between WPBakery Page Builder and Visual Composer is that WPBakery is only for the content part of a page, while with Visual Composer Website Builder you can create a complete website (including headers and footers).
  • Visual Composer is not shortcode based, which helps generate clean code. Also, disabling the plugin won’t leave you with “shortcode hell” (a situation when shortcodes can’t be rendered without an activated plugin).

You can check the full list of differences between the two plugins here.

Now, Visual Composer Website Builder is not an ‘advanced’ version of WPBakery. It is an entirely new product, created to satisfy the growing needs of web professionals. Visual Composer is not just a plugin; it’s a powerful platform that can be extended as user needs continue to evolve.

A Quick List Of Visual Composer’s Features

While I’ll show you how Visual Composer works in action below, it’s worth pointing out a few key benefits of this tool:

  • It’s a live-preview editor with drag-and-drop features, and hundreds of ready-to-use content elements that bring a lot of design freedom. You can make changes instantly and see end-results before publishing.
  • Two ways of page editing — using the frontend editor and the tree view. The tree view allows navigating through the elements available on a page and makes the design process much easier.
  • Ready-to-use WordPress templates for all types of pages — from landing pages and portfolios to business websites with dedicated product pages, because editing an existing template is a lot easier than starting from scratch with a blank page.
  • Visual Composer works with any theme (i.e. it’s possible to integrate Visual Composer Website Builder into your existing themes).
  • Responsive design out-of-the-box. All the elements and templates are responsive and mobile-ready. You can adjust responsiveness for each independent column.
  • Header, footer, and sidebar editor. Usually the header, footer and sidebar are defined by the theme you’re using. When web professionals need to change them, they usually move to code. But with Visual Composer, you can change the layout quickly and easily using only the visual editor. This feature is available in the Premium version of the product.
  • An impressive collection of add-ons (it’s possible to get add-ons from the Hub or get them from third-party developers)

There are also three features that make Visual Composer stand out from the crowd. Here they are:

1. Visual Composer Hub

Visual Composer Hub is a cloud which stores all the elements available to the users. It’s basically like a design system that keeps itself updated, and where you can get new elements, templates, and (soon) blocks.


A screenshot of Visual Composer Hub: a cloud which stores all the elements available to the users.

The great thing about Visual Composer Hub is that you don’t need to update the plugin to get new elements — you can download the elements whenever you need them. As a result, your WP setup isn’t bloated with a myriad of unused elements.

2. New Technical Stack

Visual Composer Website Builder is built on a new technology stack — it’s powered by ReactJS and doesn’t use any of the WordPress shortcodes. This helps to achieve better performance — the team behind Visual Composer conducted a series of internal tests which showed that pages created with Visual Composer load 1–1.5s faster than the same layouts re-created with WPBakery.

3. API

Visual Composer Website builder has a well-documented open API. If you have coding skills, you can extend Visual Composer with your own custom elements which may be helpful for some custom projects.

How To Create A Landing Page With Visual Composer

In this section, I’ll show how to create a landing page for a digital product called CalmPod (a fictional home speaker device) with the new Visual Composer Website Builder.

Our journey starts in the WP interface, where we need to create a new page — give it a title and click the ‘Edit with Visual Composer’ button.


How to create a landing page with Visual Composer

Creating A Layout For A Landing Page

The process of creating the page starts with building an appropriate layout. Usually, building a layout for a landing page takes a lot of time and effort. Designers have to try a lot of different approaches before finding the one that works best for the content. But Visual Composer simplifies this task for designers — it provides a list of ready-to-use layouts (available under the Add Template option). So, all you need to do to create a new page is find the appropriate layout from the list of available options and see how it works for your content.


You can start with a blank page or select a ready-to-use template.

But for our example, we’ll select the Startup Page template. This template is applied automatically as soon as we click the + symbol, so all we need to do is modify it according to our needs.


The Startup Page template applies automatically as soon as we click the plus symbol, so all we need to do is to modify it according to our needs.

Each layout in Visual Composer consists of rows and columns. The row is a base that defines the logical structure of the page. Each row consists of columns. Visual Composer gives you the ability to control the number of columns in a row.


Each layout in Visual Composer consists of rows and columns.

Tip: Notice that Visual Composer uses different colored borders for UI units. When we select a row, we see a blue-colored border, when we select a column, we see an orange-colored border. This feature can be extremely valuable when you work on creating complex layouts.


Visual Composer uses different colored borders for UI units

Visual Composer can customize all properties of the layout, i.e. add/remove elements or change their styling options (such as margins, padding between elements)

The great thing about Visual Composer is that we can customize all properties of the layout — add/remove elements or change their styling options (such as margins, padding between elements). For example, we don’t need to dive into the code to alter the sizes of columns; we can simply drag and drop the borders of individual elements.


We don’t need to dive into the code to alter the sizes of columns; we can simply drag and drop the borders of individual elements.

It’s important to mention that we can use either the visual editor or the tree view of elements to modify individual properties of UI elements.


You don’t need to dive into the code to alter the sizes of columns; we can simply drag and drop the borders of individual elements.

By clicking on the ‘Pen’ icon, we activate a screen with individual styling properties for the element.


By clicking on the ‘Pen’ icon, you can activate a screen with individual styling properties for the element.

Stretch Content

Visual Composer allows making the layout either boxed or stretched. If you switch the ‘Stretch content’ toggle to ‘On’, your layout will be displayed in full width.


Visual Composer allows making the layout either boxed or stretched.

Changing The Page Title

Visual Composer allows users to change the page title. You can do it in the Layout settings. Let’s give our page the following title: ‘CalmTech: the best digital assistant.’


Visual Composer allows users to change the page title. You can do it in the Layout settings.

Adding The Top Menu

Now it’s time to add a top menu to our landing page. Suppose we have the following menu in WP:


Adding a top menu to the landing page

And we want to place it at the top of our newly created landing page. To do that, we need to go to Visual Composer -> Headers (because the top of the page is a default place for navigation) and create a new header.

As soon as we click on the ‘Add Header’ button, we’ll see a screen that asks us to provide a title for the header — let’s give it the name “Top Header.” It’s a technical name that will help us identify this object.


As soon as you click on the ‘Add Header’ button, you’ll see a screen that asks you to provide a title for the header

Next, Visual Composer will direct us to the Hub, where we can add all the required UI elements to our header. Since we want to have a menu, we type ‘menu’ in the search box. The Hub provides us with two options: Basic Menu and Sandwich Menu. For our case, we’ll use the Basic Menu, because we have a limited number of top-level navigation options and want all of them to be visible all the time (hidden navigation such as the Sandwich Menu can be bad for discoverability).


The Hub provides us with two options: Basic menu and Sandwich menu. For our case, we’ll use the Basic Menu.

Finally, we need to choose the menu source (in our case it’ll be Main menu, the one that we have in WP) and change the appearance of the navigation options.


Choosing the menu source in order to change the appearance of the navigation options

Let’s change the alignment of the menu (we will move it to the right).


Changing the alignment of the menu to the right

And that’s all. Now we can use our header in the page settings. Let’s modify our home page to include a header. Hover over the ‘Please select Your Header’ element, and you’ll see an ‘Add Header’ button.


Modifying the home page to include a Header

When you click on the button, you’ll see a dialog on the left part of the screen that invites you to select a header. Let’s choose the Top Header option from the list.


Choosing the Top Header option

After we select a header, we’ll see a menu at the top of the page.


After we select a header, you’ll see a menu at the top of the page.

Making The Top Menu Sticky

The foundational principle of good navigation says that a navigation menu should be available to users at all times. But unfortunately, on many websites, the top navigation menu hides when you scroll. Such behavior forces users to scroll all the way back to the top in order to navigate to another page, which introduces unnecessary interaction costs. Fortunately, there’s a simple solution to this problem — we can make the top menu sticky. A sticky menu stays visible all the time, no matter where the user is on a page.

To enable stickiness, we need to turn the Sticky toggle on for our header (for the whole menu container) and add a 50-pixel margin to the Margin top option.


To enable stickiness, we turn on the Sticky toggle for our header and add a 50-pixel margin to the Margin top option.

When you scroll the landing page, you’ll notice that the header stays visible all the time.

Pairing Image With Text

Next comes a really exciting part — we need to describe our product to our visitors. To create a great first impression, we need to provide excellent images paired with a clear description. The text description and product picture (or pictures) should work together and engage visitors in learning more about the product.

We need to replace the default image with our own image: click on the image and upload a new one. We’ll use an image with a dark background, so we also need to change the background of the container — we select the row and modify the background color option.


Uploading an image with a dark background

Next, we need to add a text section to the left of the image. In the Western world, users scan the page from left to right, so visitors will read the text description and match it with the image. Visual Composer uses the Text Block object to store text information. Let’s replace the text that came with the theme with our custom text: “CalmTech. A breakthrough speaker that adapts to its location.” Let’s also modify the text color to make the text more relevant to the theme (white for the title and a shade of gray for the subtitle).


Modifying the text color to make the text more relevant to the theme

Creating A Group Of Elements

We have a picture of the product and a text description, but still, one element is missing. As you have probably guessed, it’s a call to action (CTA). Good designers don’t just create individual pages; they create a holistic user journey. Thus, to create an enjoyable user journey, it’s important to guide users along the way. Once visitors have read the necessary information, it’s vital to provide the next logical step for them, and a CTA is precisely the right element for this role.

In our case, we’ll need two CTAs — ‘Buy now’ and ‘Learn more.’ The primary call-to-action button ‘Buy now’ should come first, and it should be more eye-catching (we expect that users will click on it). Thus, we need to make it more contrasting, while the ‘Learn more’ button should be a hollow button.

Visual Composer makes it easy to customize the general parameters of a UI element (such as a gap) as well as its individual styling options. Since we’re interested in changing individual properties, we need to click on ‘Edit’ for the particular button.


Visual Composer makes it easier to customize the general parameters for the UI element (such as a gap) as well as individual styling options.

Playing With Animation To Convey Dynamics And Telling Stories

People visit dozens of different websites on a daily basis. In such a highly competitive market web professionals need to create genuinely memorable products. One way to achieve this goal is to focus on building better user engagement.

It’s possible to engage visitors to interact with a product by conveying some dynamics. If you make a site less static, there’s a better chance that visitors will remember it.

Visual Composer allows you to choose from a few predefined CSS animations for a particular element. When we open the design options for any UI element, we can find the Animate option. When we choose an animation, it’ll be triggered once the element becomes visible in the browser window.


Visual Composer also allows you to choose from a few predefined CSS animations of a particular element.

Final Polishing

Let’s see how our page looks to our site’s visitors. It’s obvious that it has two problems:

  • It looks a bit unfinished (we don’t have a logo for the website);
  • The elements have the wrong proportions (the text overpowers the image, so the layout looks unbalanced).

Preview of the page created

Let’s solve the first problem. Go to the Headers section and select our Top Header. Click on the ‘+’ element and select the Single Image object. Upload the new image (the icon). Notice that we can modify the size of the image right in Visual Composer. Let’s make the size of our icon 50px × 50px (in the Size section).


The size of the image can be modified directly in the Visual Composer.

Now it’s time to solve the second problem. Select the first column and adjust the size of the text (set the size to 40 for the subheader). Here is how our page will look after the changes.


Final preview of the page created with Visual Composer

Conclusion

Visual Composer Website Builder simplifies the process of page building in WordPress. The process of web design becomes not only fast and easy, but also more fun, because designers have a lot more creative freedom to express their ideas. And when web professionals have more creative freedom, they can come up with better design solutions.



Source: Smashing Magazine

How to Deploy and Host a Joomla Website on Alibaba Cloud ECS

This article was originally published on Alibaba Cloud. Thank you for supporting the partners who make SitePoint possible.

Joomla! is a free and open source content management system (CMS), and is one of the most popular among them. According to the official website, Joomla! is built on a model-view-controller web application framework that can be used independently of the CMS, allowing you to build powerful online applications.

One of my personal favorite features of Joomla! is its multi-language support, with a large library of language packs. You can also translate the website admin backend with language extensions, helping you to easily localize your website.

This step-by-step guide will walk you through setting up and deploying a Joomla! website on an Alibaba Cloud Elastic Compute Service (ECS) instance with Ubuntu 16.04.

Requirements and Prerequisites

Before we deploy our Joomla! instance, we need to fulfill the following requirements. We need to set up an Alibaba Cloud Elastic Compute Service (ECS) Linux server (Ubuntu 16.04) with basic configurations. You should also allocate administrator (sudo) privileges to a non-root user.

You can refer to this guide for setting up your Alibaba Cloud ECS instance. If you don’t have an Alibaba Cloud account, you can sign up for free and enjoy $300 of Free Trial credit.

Installing Joomla on an Ubuntu 16.04 ECS Instance

To install Joomla on our server, we need to first install a LAMP (Linux, Apache, MySQL, PHP) stack.

Step 1: Connect to Your Server

There are many ways to connect to your server, but I will be using the Alibaba Cloud console for simplicity. To do this, go to your instances section and click Connect on the instance you created. You will be redirected to the terminal.

Enter the username root and the password you created. If you didn’t create a password, just continue by hitting Enter. You are now logged in to your server as the system administrator.

All the commands in the following sections should be typed in this terminal.

Step 2: Install Apache

To install Apache, update your server’s repository list by typing the command below.

sudo apt-get update

Then install the Apache web server.

sudo apt-get install apache2

Step 3: Install MySQL

Joomla, like most other content management systems, requires MySQL for its backend. So we need to install MySQL and link it to PHP.

To do this, run the following command.

sudo apt-get install mysql-server php7.0-mysql

You’ll be asked to enter a MySQL password. Keep the password secure because you will need it later.

Complete the installation process of MySQL with the command below.

/usr/bin/mysql_secure_installation

You’ll be asked to enter the MySQL password you just created. Continue with the installation process by making the following selections.

Would you like to setup VALIDATE password plugin ? [Y/N] N
Change the root password ? [ Y/N ] N
Remove anonymous users ? [Y/N] Y
Disallow root login remotely ? [Y/N] Y
Remove test database and access to it ? [Y/N] Y
Reload privilege tables now ? [Y/N] Y

Step 4: Install PHP

Joomla! requires PHP to be installed. Execute the following command to install PHP 7.0 and other required PHP modules.

sudo apt-get install php7.0 libapache2-mod-php7.0 php7.0-mcrypt php7.0-xml php7.0-curl php7.0-json php7.0-cgi

Step 5: Confirm LAMP Installation

To confirm that the LAMP stack has been installed on your Ubuntu 16.04 server, follow the procedure below.

Open the web browser and navigate to your server’s IP address. You’ll see the Apache2 Ubuntu Default page.

Note: To check your server’s public IP address, check your ECS instance dashboard. You’ll see both private and public IP addresses. Use the public IP address to access your website. If you don’t see a public IP address, consider setting up an Elastic IP address.

In order to confirm the PHP installation on your server, remove the default page and replace it with the PHP code below. To do this, use the commands below.

rm /var/www/html/index.html

Replace it with a new file:

touch /var/www/html/index.php
nano /var/www/html/index.php

Enter the sample PHP code below:

<?php
phpinfo();
?>

To check your page, navigate to your web browser and enter the public IP address. You should see information about PHP installation if the LAMP stack is correctly installed on your server.



Source: Sitepoint

6 Ways to Bring Your Development Team Together with Technology

Are you looking for better ways of bringing the team together? Would you benefit from an arsenal of tools that facilitate team working, while boosting productivity and creativity?

Project managers in development teams have to be leaders of balance, progress, decision, and business need.

Exhausting!

Maybe you need to consider the workplace of the future: the potential for virtual teams to collaborate in new ways using interactive technologies.

If you’re developing software and applications for the web and touchscreen devices, you need the cutting-edge of interactive technologies to help you communicate, test, display, and tweak your work in ways that push creative possibilities.

Read on for 6 ways to incorporate interactive technology into your development team.

The Needs of the Software Development Team

Software is only ever as good as its user interface. Web technologies are inherently interactive, so it makes absolute sense that the tools that you use to develop them have interactivity built into their core.

1. Interactive displays

The web is a visual medium. Sure, it’s made up of billions and billions of words, but – at its core – successful web apps have user-interactivity at their heart.

But when you’re working as a team to develop applications that might look good on a mobile phone screen, you can’t cram everyone around a single 10-inch screen.

Interactive displays deliver 4K definition, with an interactive touchscreen that replicates the mobile experience.

They add tactility to the development process; offering fingertip control during every stage of the design, testing, and implementation processes.

Interactive displays offer teams the ability to work collaboratively, using a central portal: designing and developing individual elements of an app in unison, while individual workstations connect wirelessly to a primary display.

2. Communication Hubs

Email has become a stalwart of communication within organisations.

But does it suit your mode of operation?

Most of us multi-task – there are few of us who have the privilege of working on just one project at a time. And if you’re receiving emails from all angles, it can be almost impossible to track individual workstreams.

There’s an ever-growing collection of virtual messaging services that can help keep each workstream separate and more manageable.

These are our favourite messaging services:

  • Slack is a messenger service that allows you to compartmentalise conversations into channels; giving you message threads, rather than a sporadic trail of emails. Slack integrates natively with Dropbox, Google Drive, Trello, Google Calendar, Google+ Hangouts, MS OneDrive and many other useful services that make work more efficient and collaboration more natural.
  • Microsoft Teams integrates with Office 365 and specialises in bringing large teams together. Consolidating video messaging, email, document creation suites, and project monitoring applications, MS Teams offers one central platform so that everyone can see what each other is doing. Projects become viewable from a single portal, facilitating transparent tracking of objectives and milestones. In combination with interactive displays, MS Teams really does come to life: bringing genuine and reliable flexibility to the workplace.

3. File Sharing

In web development teams, you’re working with large amounts of data.



Source: Sitepoint

How to Install Cockpit on Ubuntu 18.04

This article was originally published on Alibaba Cloud. Thank you for supporting the partners who make SitePoint possible.

Cockpit is a server manager that makes it easy to administer your GNU/Linux servers via a web browser. It makes Linux discoverable, allowing sysadmins to easily perform tasks such as starting containers, storage administration, network configuration, inspecting logs and so on.

Cockpit provides convenient switching between the terminal and the web tool. A service started via Cockpit can be stopped via the terminal. Likewise, if an error occurs in the terminal, it can be seen in the Cockpit journal interface. Using Cockpit, you can monitor and administer several servers at the same time; just add them, and your servers will look after each other.

Cockpit is released under the LGPL v2.1+, and it is available for Red Hat, CentOS, Debian, Ubuntu, Atomic, and Arch Linux. Cockpit is compatible and works well with Alibaba Cloud Elastic Compute Service (ECS) servers. In this tutorial, I will be installing Cockpit on an ECS instance with Ubuntu 18.04 LTS installed on it. Until Ubuntu 18.04 matures and is included in Alibaba Cloud’s library of operating system images, we can upgrade Ubuntu 16.04 to Ubuntu 18.04 by using the do-release-upgrade utility.
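
If you are upgrading an existing Ubuntu 16.04 instance, the upgrade can be performed as follows (this assumes the update-manager-core package is present, which it is by default on Ubuntu):

sudo apt update && sudo apt -y dist-upgrade
sudo do-release-upgrade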

Prerequisites

  1. You must have Alibaba Cloud Elastic Compute Service (ECS) activated and have verified your payment method. If you are a new user, you can get a free trial account. If you don’t know how to set up your ECS instance, you can refer to this tutorial or the quick-start guide.
  2. You should set up your server’s hostname (see the example just after this list).
  3. Access to the VNC console in your Alibaba Cloud account, or an SSH client installed on your PC.
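
For prerequisite 2, the hostname can typically be set with hostnamectl (the name below is just an example):

sudo hostnamectl set-hostname cockpit-demo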

After completing the prerequisites, log in as the root user with your root username and password via an SSH client (e.g. PuTTY) or the VNC console available in your Alibaba Cloud account dashboard.

Install Cockpit on Ubuntu 18.04

Cockpit is included in Ubuntu 18.04, so you can just use the apt command to install it. First, update the package index.

sudo apt update

Install the Cockpit package.

sudo apt -y install cockpit

Start the Cockpit socket and enable it to launch at boot.

sudo systemctl start cockpit.socket
sudo systemctl enable cockpit.socket
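
To verify that Cockpit is up and listening, check the status of the socket:

sudo systemctl status cockpit.socket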

Working with Cockpit

Once you start the Cockpit service, it will listen on port 9090. Now, open up your browser and navigate to the URL below.

https://ip-address:9090
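
If a firewall is running on the instance, port 9090 must also be reachable. With UFW, for example (an assumption; adjust for your firewall of choice or your Alibaba Cloud security group rules):

sudo ufw allow 9090/tcp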

Cockpit uses a self-signed SSL certificate for secure communication, so you need to add an exception in your browser to access Cockpit.

Log in with your local user account. In my case it is gqadir.

If the user is a non-privileged user with sudo access, tick “Reuse my password for privileged tasks”.
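
If the account does not yet have sudo access, it can be granted from a root shell (gqadir being the example user here):

usermod -aG sudo gqadir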

We must now insert our credentials in the related input fields and click on the Log In button. Once logged in we will be redirected to the main cockpit page:

Let’s take a look at it. The main page shows us some information about the machine we are running on, such as the hardware, hostname, operating system, and system time. In this case I am running Ubuntu on a virtual machine, so the value of the hardware section is QEMU Standard PC.

We also have a dropdown menu which lets us perform power operations on the system, such as restart or shutdown. On the right we can see some graphs which let us monitor crucial system activities, in order: CPU and memory usage, disk activity, and network traffic.

The Logs Section

In the left column menu, just below the system section, we can click on Logs to access the page dedicated to system logs. Here, at the top of the page, we have two handy menus which let us filter the logs by time period and severity, choosing between problems, notices, warnings, and errors.



Source: Sitepoint

CSS Frameworks Or CSS Grid: What Should I Use For My Project?


Rachel Andrew

2018-11-09T12:30:30+02:00

Among the questions I am most frequently asked is some variety of the question, “Should I use CSS Grid or Bootstrap?” In this article, I will take a look at that question. You will discover that the reasons for using frameworks are varied, and not simply centered around use of the grid system contained in that framework. I hope that by unpacking these reasons, I can help you to make your own decision, in terms of what is best for the sites and applications that you are working on, and also for the team you work with.

In this article when I talk about a framework, I’m describing a third party CSS framework such as Bootstrap or Foundation. You might argue these are really component libraries, but many people (including their own docs) would describe them as a framework so that is what we will use here. The important factor is that these are something developed externally to you, without reference to your specific issues. The alternative to using a third party framework is to write your own CSS — that might involve developing your own internal framework, using a bunch of common files as a starting point, or creating every project as a new thing. All these things are done in reference to your own specific needs rather than very generic ones.

Why Choose A CSS Framework?

The question of whether to use Grid or a framework is flawed, as CSS Grid is not a drop-in replacement for the things that a CSS framework does. Any exploration of the subject needs to consider which parts of our framework CSS Grid is going to replace. I wanted to start by finding out why people had chosen to use a CSS framework at all. So I turned to Twitter and posted this tweet.

There were a lot of responses. As I expected, there are far more reasons to use a framework than simply the grid system that it contains.

A Framework Gives Your Team Ready Made Documentation

If you are working on a project with a number of other developers then any internal system you create will need to also include documentation to help your team members use it effectively. Creating useful documentation is time-consuming, skilled work in itself, and something that the big frameworks do very well.


[Image: The Bootstrap documentation homepage]

Framework documentation came up again and again, with many experienced front-end developers chipping in and explaining this is why they would recommend and use a CSS framework. I sometimes hear the opinion that people use frameworks because they don’t really know CSS; many of the people replying, however, are well known to me as expert CSS developers. I’m sure that they are sometimes frustrated by the choices made by the framework; however, the positive aspects of that choice outweigh that frustration.

Online Communities: Easy Access To Help

When you decide to use a particular tool, you also gain a community of users to ask for help. Unless you have a very clear CSS issue, and can produce a reduced use case to demonstrate it, asking for help with CSS can be difficult. It is especially so if you want to ask how to approach building a certain component. Using a framework can give you a starting point for your question; in general, you will be asking how to modify or style a particular component rather than starting from scratch. This is an easier thing to ask, as well as an easier thing to answer.

The Grid System

Despite the fact that we have CSS Grid, many people replied that the main reason they decided to use a framework was for the grid system. Of course, many of these projects may have been started a long time before CSS Grid was available. Even today, however, concerns about backwards compatibility or team understanding of newer layout methods might cause people to decide to use a framework rather than adopting native CSS.

Speed Of Project Delivery

Opting for a framework will, in general, make it far quicker to deliver your project, in particular if that project fits very well with the way the framework does things and doesn’t need a lot of customization.

In the case of developing an MVP for a new idea, a framework may well be an excellent choice. You will have many things to spend time on, and will still be testing assumptions in terms of what the project needs. Being able to develop that first version using a framework can help you get the product in front of users more quickly, and save burning up a lot of time developing things you then decide not to use.

Another place where speed and a bunch of ready built components can be very useful is when developing the backend admin system for a site or application. In the case where you simply need to create a few admin screens, a framework can save a lot of time styling form fields and other components! There are even dashboard themes for Bootstrap and Foundation that can give a helpful starting point.


[Image: A dashboard kit for Foundation. Collections of dashboard components make it quicker to build out the admin for an app.]

I’m Not A Designer!

This point is the reason I’ve opted for a CSS framework in the past. I’m not a designer, and if I have to both design and build something, I’ll spend a long time trying to make design decisions I am entirely unqualified to make. It would be lovely to have the funds to hire a designer for every side project, however, I don’t, and so a framework might mean the difference between shipping the thing and not.

Dealing With CSS Bugs And Browser Compatibility Issues

Mentioned less than I thought it might be was the fact that the framework authors would already have dealt with browser issues, be that due to actual bugs or lack of support for certain features. However, this was still a factor in the decision-making for many people.

To Help With Responsive Design

This came up a few times; people were opting for a framework specifically because it was responsive, or because it made decisions about breakpoints for them. I thought it interesting that this specifically was something called out as a factor in choosing to use a framework.

Why Not Use A Framework?

Alongside the positive reasons why frameworks had been selected came some of the issues that people have had with that choice.

Difficulty Of Overriding Framework Code

Many people commented on the fact that it could become difficult to override the code used in the framework, and that frameworks were a good choice if they didn’t need a lot of overriding. The benefits of ease of use, and everyone on the team understanding how to use the framework can be lost if there are then a huge number of customizations in place.

All Websites End Up Looking The Same

The blame for all websites starting to look the same has been placed squarely at the door of the well-known CSS frameworks. I have seen sites where I was sure a certain framework had been used, only to discover they were custom CSS; that is how prevalent the design choices made in these frameworks have become.

The difficulty in overriding framework styles already mentioned is a large part of why sites developed using a particular framework will tend to look similar. This isn’t just a creative issue, it can be very odd as a user of a few websites which have all opted for the same framework to feel that they are all the same thing. In terms of conveying your brand, and making good user experience part of that, perhaps you lose something when opting for the generic choices of a framework.

Inheriting The CSS Problems Of The Entire World

Whether front or back-end, any tool or framework that seeks to hit the mainstream has to solve as many problems as possible. Unless the tool is tightly coupled to solving one particular use-case it is going to contain a lot of very generic code, and code which solves problems that you do not have, and will never have.

You may be in the fortunate position of only needing your full experience to be viewed in the most current browsers, allowing for a more limited experience in Internet Explorer, or older versions of Chrome. Using a framework with lots of built-in support going back to IE9 would result in lots of additional code — especially given the improvements in CSS layout recently. It might also prevent you from being creative, as everything in the framework is assuming this requirement for support. Things which are possible using CSS may well be limited by the framework.

As an example, the grid systems in popular frameworks do not have the ability to span rows, as there isn’t any concept of rows in layout systems prior to Grid Layout. CSS Grid Layout easily allows for this, as the sketch below shows. If you are tied to the Bootstrap Grid and your designer comes up with a design that includes elements which span rows, you are left unable to implement it, despite the fact that Grid might be supported by your target browsers.
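
As an illustrative sketch (the class names here are placeholders, not from any framework), spanning rows in native Grid is a single declaration:

.layout {
  display: grid;
  grid-template-columns: repeat(3, 1fr);
  grid-auto-rows: minmax(100px, auto);
}

/* An item spanning two rows, which a row-based framework grid cannot express */
.layout .feature {
  grid-row: span 2;
}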

Performance Issues

Related to the above are performance issues inherent in using fairly generic code, rather than something optimized for the exact use cases that you have. When trying to improve performance you will find yourself hitting up against the decisions of the framework.

Increased Technical Debt

While a framework might be a great way to get your startup quickly off the ground, and at the time of making that decision you are sure that you will replace it, is there a plan to make that happen?

Learning A Framework Rather Than Learning CSS

When talking to conference and workshop attendees, I have discovered that many people have only ever used a framework to write CSS. There is nothing wrong with coming into web development via one of these tools, given the complexity of the web platform today I imagine that will be the route in for many people. However, it can become a career-limiting choice, especially if the framework you based your skills around falls out of favor.

Having front-end developers without CSS knowledge should worry a company. It makes it incredibly hard to move away from that framework if your team doesn’t actually understand how to do CSS without it. While this isn’t really a reason not to use a framework, it is something to bear in mind when using one. When the day comes to move away you would hope that the team will be ready to take on something new, not needing to remember (or learn for the first time) how to write CSS!

The Choice Doesn’t Consider End Users

Nicole Sullivan asked pretty much the same question a few days prior to my question as I was thinking about writing this article, although she was considering front-end frameworks as a whole rather than just CSS frameworks. Jeremy Keith noted that precisely zero of the answers concerned end users. This was also the case with the responses to my question.

In our race to get our site built quickly, our desire to make things as good as possible for ourselves as the designers and developers of the site, do we forget who we are doing this for? Do the decisions made by the framework developer match up with the needs of the users of the site you are building?

Can We Replace Frameworks With “Vanilla” CSS?

If you are considering replacing your framework or starting a new project without one, what are some of the things that you could consider in order to make that process easier?

Understand Which Parts Of The Framework You Need

If you are replacing the use of a framework with your own CSS, a good place to start would be to audit your use of the current framework. Work out what you are using and why. Consider how you will replace those things in the new design.

You could follow a similar process when thinking about whether to select a framework or write your own. What parts of this could you reasonably expect to need? How well does it fit with your requirements? Will there be a lot of code that you import, potentially ask visitors to download, but never make use of?

Create A Documented Pattern Library Or Style Guide

I am a huge fan of working with pattern libraries and you can read my post here on Smashing Magazine about our use of Fractal. A pattern library or a style guide enables the creation of documentation along with all of your components. I start all of my projects by working on the CSS in the pattern library.

You are still going to need to write the documentation. As someone who writes documentation, however, I know that often the hardest thing is knowing where to start and how to structure the docs. A pattern library helps with this by keeping the docs along with the CSS for the component itself. This approach can also help prevent the docs from becoming out of date, as they are tightly linked to the component they refer to.

Develop Your Own CSS Code Guidelines

Consistency across the team is incredibly useful, and without a framework, there may be nothing dictating that. With newer layout methods, in particular, there are often several ways in which a pattern could be built, if everyone picks a different one then inconsistencies are likely to creep in.

Better Places To Ask For Help

Other than sending people in the direction of Stack Overflow, it seems that there are very few places to ask for help on CSS. In particular there seem to be few places which are approachable for beginners. If we are to encourage people away from third-party tools then we need to fill that need for friendly, helpful support which comes from the communities around those tools.

Within a company, it is possible that more experienced developers can become the CSS support for newer team members. If moving away from a framework to your own solution, it would be wise to consider what training might be needed to help bridge the gap, especially if people are used to using the help provided around the third party tool when they needed help in the past.

Style Guides Or Starting Points For Non-Designers

I tie myself in knots with questions such as, “Which fonts should I use?”, “How big should the headings be in relationship to the body text?”, “Is it OK to use a drop shadow?” I can easily write the CSS, if I know what I’m trying to do! What I really need are some rules for making a not-terrible design: a typography starting point or a set of basic guidelines would, in a lot of cases, replace being able to use the defaults of a framework.

Educating People About The State Of Modern Browser Interoperability

I have discovered that people who have been immersed in framework-based development for a number of years, often have a view of browser interoperability which is several years out of date. We have never been in a better situation in terms of CSS working cross-browser. It may be that some browsers don’t support one new shiny bit of CSS, but in general, CSS (when supported) won’t be full of strange bugs. For example, in almost all cases if you use CSS Grid in one browser your CSS will work in exactly the same way in another.

If trying to make a case for not using a framework to team members who believe that the framework saves them from browser bugs, this point may be a useful one to raise. Are the browser compatibility problems real, or based on the concerns of the past?

Will We See A New Breed Of Frameworks?

Something that interests me is whether our new layout methods will help usher in a new breed of tools and frameworks. Will we see tools which take advantage of new layout methods and allow for more creativity, but still give teams and individuals some of the undeniable advantages that came out in the responses to my tweet?

Perhaps by relying on new layout methods, rather than an inbuilt grid system, a new-style framework could be much lighter, becoming a collection of useful components. It might be able to then get away from some of the performance issues inherent in very generic code.

An area a framework could help with would be in helping to create solid fallbacks for browsers which don’t support newer layout methods, or by having really solid accessibility baked into the components. This could help provide guidance into a way of working that considers interoperability and accessibility, even for those people who don’t have these things in the forefront of their minds.
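
As a rough sketch of that idea (the selectors are placeholders), such a component could ship a simple flexbox fallback and layer Grid on top through a feature query:

/* Fallback: a basic flex layout for browsers without Grid support */
.card-list {
  display: flex;
  flex-wrap: wrap;
}

/* Browsers that understand Grid override the fallback */
@supports (display: grid) {
  .card-list {
    display: grid;
    grid-template-columns: repeat(auto-fill, minmax(200px, 1fr));
  }
}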

I don’t think that simply switching the Bootstrap Grid for CSS Grid Layout will achieve this. Instead, authors coming up with new frameworks should perhaps look at some of the reasons outlined here, and try to solve them in new ways, using the new functionality we have in CSS to do so.

Should You Use A Framework?

You and your team will need to answer that question yourself. And, despite what people might try to have you believe, there is no universal right or wrong answer. There is only what is right or wrong for your project. I hope that this article and the many responses to my original question might give you some things to discuss as you ponder that question.

Remember that the answer will change over time. It might be a useful thought experiment to not only consider what you need right now, in terms of initially doing the development for the site, but consider the lifespan of the site. Do you expect this still to be around in five years? Will your choice be a positive or negative one then?

Document your decisions, don’t be afraid to revisit them, and do ensure that you and your team maintain your skills outside of any framework that you decide to use. That way, you will be in a good place to move on in future and to make the best decisions for the next phases of your project.

I’d love the conversation started in that tweet to continue. Let us know your stories in the comments — they may well help other folks trying to work out what is best for their projects right now.



Source: Smashing Magazine

Sharing Data Among Multiple Servers Through AWS S3


Leonardo Losoviz

2018-11-08T12:30:30+01:00

When providing some functionality for processing a file uploaded by the user, the file must be available to the process throughout the execution. A simple upload and save operation presents no issues. However, if in addition the file must be manipulated before being saved, and the application is running on several servers behind a load balancer, then we need to make sure that the file is available to whichever server is running the process at each time.

For instance, a multi-step “Upload your user avatar” functionality may require the user to upload an avatar on step 1, crop it on step 2, and finally save it on step 3. After the file is uploaded to a server on step 1, the file must be available to whichever server handles the request for steps 2 and 3, which may or may not be the same one for step 1.

A naive approach would be to copy the uploaded file on step 1 to all other servers, so the file would be available on all of them. However, this approach is not just extremely complex but also infeasible: for instance, if the site runs on hundreds of servers across several regions, it simply cannot be accomplished.

A possible solution is to enable “sticky sessions” on the load balancer, which will always assign the same server for a given session. Then, steps 1, 2 and 3 will be handled by the same server, and the file uploaded to this server on step 1 will still be there for steps 2 and 3. However, sticky sessions are not fully reliable: if the server crashes between steps 1 and 2, then the load balancer will have to assign a different server, disrupting the functionality and the user experience. Likewise, always assigning the same server for a session may, under special circumstances, lead to slower response times from an overburdened server.

A more proper solution is to keep a copy of the file on a repository accessible to all servers. Then, after the file is uploaded to the server on step 1, this server will upload it to the repository (or, alternatively, the file could be uploaded to the repository directly from the client, bypassing the server); the server handling step 2 will download the file from the repository, manipulate it, and upload it there again; and finally the server handling step 3 will download it from the repository and save it.

In this article, I will describe this latter solution, based on a WordPress application storing files on Amazon Web Services (AWS) Simple Storage Service (S3) (a cloud object storage solution to store and retrieve data), operating through the AWS SDK.

Note 1: For a simple functionality such as cropping avatars, another solution would be to completely bypass the server, and implement it directly in the cloud through Lambda functions. But since this article is about connecting an application running on the server with AWS S3, we don’t consider this solution.

Note 2: In order to use AWS S3 (or any other of the AWS services) we will need to have a user account. Amazon offers a free tier here for 1 year, which is good enough for experimenting with their services.

Note 3: There are 3rd party plugins for uploading files from WordPress to S3. One such plugin is WP Media Offload (the lite version is available here), which provides a great feature: it seamlessly transfers files uploaded to the Media Library to an S3 bucket, which allows us to decouple the contents of the site (such as everything under /wp-content/uploads) from the application code. By decoupling contents and code, we are able to deploy our WordPress application using Git (otherwise we cannot, since user-uploaded content is not hosted in the Git repository), and host the application on multiple servers (otherwise, each server would need to keep a copy of all user-uploaded content.)

Creating The Bucket

When creating the bucket, we need to give consideration to the bucket name: each bucket name must be globally unique on the AWS network. Even though we would like to call our bucket something simple like “avatars”, that name may already be taken, so we may need to choose something more distinctive like “avatars-name-of-my-company”.

We will also need to select the region where the bucket is based (the region is the physical location where the data center is located, with locations all over the world.)

The region should be the same one as where our application is deployed, so that accessing S3 during the process execution is fast. Otherwise, the user may have to wait extra seconds while an image is uploaded to or downloaded from a distant location.

Note: It makes sense to use S3 as the cloud object storage solution only if we also use Amazon’s service for virtual servers on the cloud, EC2, for running the application. If instead, we rely on some other company for hosting the application, such as Microsoft Azure or DigitalOcean, then we should also use their cloud object storage services. Otherwise, our site will suffer an overhead from data traveling among different companies’ networks.

In the screenshots below we will see how to create the bucket where to upload the user avatars for cropping. We first head to the S3 dashboard and click on “Create bucket”:


[Image: The S3 dashboard, showing all our existing buckets]

Then we type in the bucket name (in this case, “avatars-smashing”) and choose the region (“EU (Frankfurt)”):


[Image: Creating a bucket in S3]

Only the bucket name and region are mandatory. For the following steps we can keep the default options, so we click on “Next” until finally clicking on “Create bucket”, and with that, we will have the bucket created.

Setting Up The User Permissions

When connecting to AWS through the SDK, we will be required to enter our user credentials (a pair of access key ID and secret access key), to validate that we have access to the requested services and objects. User permissions can be very general (an “admin” role can do everything) or very granular, just granting permission to the specific operations needed and nothing else.

As a general rule, the more specific our granted permissions, the better, as to avoid security issues. When creating the new user, we will need to create a policy, which is a simple JSON document listing the permissions to be granted to the user. In our case, our user permissions will grant access to S3, for bucket “avatars-smashing”, for the operations of “Put” (for uploading an object), “Get” (for downloading an object), and “List” (for listing all the objects in the bucket), resulting in the following policy:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:Put*",
                "s3:Get*",
                "s3:List*"
            ],
            "Resource": [
                "arn:aws:s3:::avatars-smashing",
                "arn:aws:s3:::avatars-smashing/*"
            ]
        }
    ]
}

In the screenshots below, we can see how to add user permissions. We must go to the Identity and Access Management (IAM) dashboard:


[Image: The IAM dashboard, listing all the users we have created]

In the dashboard, we click on “Users” and immediately after on “Add User”. In the Add User page, we choose a user name (“crop-avatars”), and tick on “Programmatic access” as the Access type, which will provide the access key ID and secret access key for connecting through the SDK:


[Image: Adding a new user]

We then click on button “Next: Permissions”, click on “Attach existing policies directly”, and click on “Create policy”. This will open a new tab in the browser, with the Create policy page. We click on the JSON tab, and enter the JSON code for the policy defined above:


[Image: Creating a policy granting ‘Put’, ‘Get’ and ‘List’ operations on the ‘avatars-smashing’ bucket]

We then click on Review policy, give it a name (“CropAvatars”), and finally click on Create policy. Having the policy created, we switch back to the previous tab, select the CropAvatars policy (we may need to refresh the list of policies to see it), click on Next: Review, and finally on Create user. After this is done, we can finally download the access key ID and secret access key (please notice that these credentials are available for this unique moment; if we don’t copy or download them now, we’ll have to create a new pair):


[Image: After the user is created, we are given a one-time opportunity to download the credentials]

Connecting To AWS Through The SDK

The SDK is available for a myriad of languages. For a WordPress application, we require the SDK for PHP, which can be downloaded from here; instructions on how to install it are here.
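
For reference, if you manage dependencies with Composer, the installation typically boils down to a single command run from the application root:

composer require aws/aws-sdk-php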

Once we have the bucket created, the user credentials ready, and the SDK installed, we can start uploading files to S3.

Uploading And Downloading Files

For convenience, we define the user credentials and the region as constants in the wp-config.php file:

define ('AWS_ACCESS_KEY_ID', '...'); // Your access key id
define ('AWS_SECRET_ACCESS_KEY', '...'); // Your secret access key
define ('AWS_REGION', 'eu-central-1'); // Region where the bucket is located. This is the region id for "EU (Frankfurt)"

In our case, we are implementing the crop avatar functionality, for which avatars will be stored in the “avatars-smashing” bucket. However, our application may have several other buckets for other functionalities, requiring us to execute the same operations of uploading, downloading and listing files. Hence, we implement the common methods in an abstract class AWS_S3, and obtain the inputs, such as the bucket name defined through function get_bucket, in the implementing child classes.

// Load the SDK and import the AWS objects
require 'vendor/autoload.php';
use Aws\S3\S3Client;
use Aws\Exception\AwsException;

// Definition of an abstract class
abstract class AWS_S3 {
  
  protected function get_bucket() {

    // The bucket name will be implemented by the child class
    return '';
  }
}

The S3Client class exposes the API for interacting with S3. We instantiate it only when needed (through lazy initialization), and save a reference to it under $this->s3Client so as to keep using the same instance:

abstract class AWS_S3 {

  // Continued from above...

  protected $s3Client;

  protected function get_s3_client() {

    // Lazy initialization
    if (!$this->s3Client) {

      // Create an S3Client. Provide the credentials and region as defined through constants in wp-config.php
      $this->s3Client = new S3Client([
        'version' => '2006-03-01',
        'region' => AWS_REGION,
        'credentials' => [
          'key' => AWS_ACCESS_KEY_ID,
          'secret' => AWS_SECRET_ACCESS_KEY,
        ],
      ]);
    }

    return $this->s3Client;
  }
}

When we are dealing with $file in our application, this variable contains the absolute path to the file on disk (e.g. /var/app/current/wp-content/uploads/users/654/leo.jpg), but when uploading the file to S3 we should not store the object under the same path. In particular, we must remove the initial bit concerning the system information (/var/app/current) for security reasons, and optionally we can remove the /wp-content bit (since all files are stored under this folder, it is redundant information), keeping only the relative path to the file (/uploads/users/654/leo.jpg). Conveniently, this can be achieved by removing everything before WP_CONTENT_DIR from the absolute path. Functions get_file and get_file_relative_path below switch between the absolute and the relative file paths:

abstract class AWS_S3 {

  // Continued from above...

  function get_file_relative_path($file) {

    return substr($file, strlen(WP_CONTENT_DIR));
  }

  function get_file($file_relative_path) {

    return WP_CONTENT_DIR.$file_relative_path;
  }
}

When uploading an object to S3, we can establish who is granted access to the object and the type of access, done through the access control list (ACL) permissions. The most common options are to keep the file private (ACL => “private”) and to make it accessible for reading on the internet (ACL => “public-read”). Because we will need to request the file directly from S3 to show it to the user, we need ACL => “public-read”:

abstract class AWS_S3 {

  // Continued from above...

  protected function get_acl() {

    return 'public-read';
  }
}

Finally, we implement the methods to upload an object to, and download an object from, the S3 bucket:

abstract class AWS_S3 {

  // Continued from above...

  function upload($file) {

    $s3Client = $this->get_s3_client();

    // Upload a file object to S3
    $s3Client->putObject([
      'ACL' => $this->get_acl(),
      'Bucket' => $this->get_bucket(),
      'Key' => $this->get_file_relative_path($file),
      'SourceFile' => $file,
    ]);
  }

  function download($file) {

    $s3Client = $this->get_s3_client();

    // Download a file object from S3
    $s3Client->getObject([
      'Bucket' => $this->get_bucket(),
      'Key' => $this->get_file_relative_path($file),
      'SaveAs' => $file,
    ]);
  }
}

Then, in the implementing child class we define the name of the bucket:

class AvatarCropper_AWS_S3 extends AWS_S3 {

  protected function get_bucket() {

    return 'avatars-smashing';
  }
}

Finally, we simply instantiate the class to upload the avatars to, or download them from, S3. In addition, when transitioning from steps 1 to 2 and 2 to 3, we need to communicate the value of $file. We can do this by submitting a field “file_relative_path” with the value of the relative path of $file through a POST operation (we don’t pass the absolute path for security reasons: there is no need to expose the “/var/app/current” information for outsiders to see):

// Step 1: after the file was uploaded to the server, upload it to S3. Here, $file is known
$avatarcropper = new AvatarCropper_AWS_S3();
$avatarcropper->upload($file);

// Get the file path, and send it to the next step in the POST
$file_relative_path = $avatarcropper->get_file_relative_path($file);
// ...

// --------------------------------------------------

// Step 2: get the $file from the request and download it, manipulate it, and upload it again
$avatarcropper = new AvatarCropper_AWS_S3();
$file_relative_path = $_POST['file_relative_path'];
$file = $avatarcropper->get_file($file_relative_path);
$avatarcropper->download($file);

// Do manipulation of the file
// ...

// Upload the file again to S3
$avatarcropper->upload($file);

// --------------------------------------------------

// Step 3: get the $file from the request and download it, and then save it
$avatarcropper = new AvatarCropper_AWS_S3();
$file_relative_path = $_REQUEST['file_relative_path'];
$file = $avatarcropper->get_file($file_relative_path);
$avatarcropper->download($file);

// Save it, whatever that means
// ...
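
The “save” step above is deliberately left open, since what saving means depends on the application. Purely as a hypothetical sketch (moving the file into the uploads folder and registering it in the Media Library are assumptions, not part of the original flow), it could look like this:

// Hypothetical sketch: move the final avatar into the uploads folder
$upload_dir = wp_upload_dir();
$destination = $upload_dir['path'].'/'.basename($file);
rename($file, $destination);

// Register the file as an attachment so it shows up in the Media Library
$attachment_id = wp_insert_attachment(array(
  'post_mime_type' => 'image/jpeg', // assumption: avatars are JPEGs
  'post_title' => basename($destination),
  'post_status' => 'inherit',
), $destination);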

Displaying The File Directly From S3

If we want to display the intermediate state of the file after manipulation on step 2 (e.g. the user avatar after cropped), then we must reference the file directly from S3; the URL couldn’t point to the file on the server since, once again, we don’t know which server will handle that request.

Below, we add function get_file_url($file) which obtains the URL for that file in S3. If using this function, please make sure that the ACL of the uploaded files is “public-read”, or otherwise it won’t be accessible to the user.

abstract class AWS_S3 {

  // Continue from above...

  protected function get_region() {

    // The region, as defined through a constant in wp-config.php
    return AWS_REGION;
  }

  protected function get_bucket_url() {

    $region = $this->get_region();

    // North Virginia region is simply "s3", the others require the region explicitly
    $prefix = $region == 'us-east-1' ? 's3' : 's3-'.$region;

    // Use the same scheme as the current request
    $scheme = is_ssl() ? 'https' : 'http';

    // Using the bucket name in path scheme
    return $scheme.'://'.$prefix.'.amazonaws.com/'.$this->get_bucket();
  }

  function get_file_url($file) {

    return $this->get_bucket_url().$this->get_file_relative_path($file);
  }
}

Then, we simply get the URL of the file on S3 and print the image:

printf(
  "<img src='%s'>",
  $avatarcropper->get_file_url($file)
);

Listing Files

If in our application we want to allow the user to view all previously uploaded avatars, we can do so. For that, we introduce function get_file_urls which lists the URL for all the files stored under a certain path (in S3 terms, it’s called a prefix):

abstract class AWS_S3 {

  // Continue from above...

  function get_file_urls($prefix) {

    $s3Client = $this->get_s3_client();

    $result = $s3Client->listObjects(array(
      'Bucket' => $this->get_bucket(),
      'Prefix' => $prefix
    ));

    $file_urls = array();
    if(isset($result['Contents']) && count($result['Contents']) > 0 ) {

      foreach ($result['Contents'] as $obj) {

        // Check that Key is a full file path and not just a "directory"
        if ($obj['Key'] != $prefix) { 

          $file_urls[] = $this->get_bucket_url().$obj['Key'];
        }
      }
    }

    return $file_urls;
  }
}

Then, if we are storing each avatar under path “/users/${user_id}/”, by passing this prefix we will obtain the list of all files:

$user_id = get_current_user_id();
$prefix = "/users/${user_id}/";
foreach ($avatarcropper->get_file_urls($prefix) as $file_url) {
  printf(
    "<img src='%s'>", 
    $file_url
  );
}

Conclusion

In this article, we explored how to employ a cloud object storage solution to act as a common repository to store files for an application deployed on multiple servers. For the solution, we focused on AWS S3, and went through the steps needed to integrate it into the application: creating the bucket, setting up the user permissions, and downloading and installing the SDK. Finally, we explained how to avoid security pitfalls in the application, and saw code examples demonstrating how to perform the most basic operations on S3: uploading, downloading and listing files, each of which barely required a few lines of code. The simplicity of the solution shows that integrating cloud services into an application is not difficult, and can be accomplished even by developers with little experience of the cloud.



Source: Smashing Magazine

Best Practices of Web Application Hosting in Alibaba Cloud

This article was originally published on Alibaba Cloud. Thank you for supporting the partners who make SitePoint possible.

Deploying a highly available and scalable web application in a traditional data center is a complex and expensive undertaking. One must invest a lot of effort and resources into capacity management, but more often than not it ends up in over- or under-provisioning of resources, resulting in inefficient investment in underutilized hardware. To tackle this challenge, Alibaba Cloud offers a reliable, scalable, and high-performing cloud infrastructure for the most demanding web application deployment scenarios. This document intends to provide practical solutions and best practices for scaling your web application on Alibaba Cloud.

Traditional Solution for Common Web Application Hosting

In a traditional web hosting space, designing a scalable architecture is always a challenge. The below diagram depicts a traditional web hosting model. The purpose of this diagram is to help you compare it with a similar architecture hosted on the cloud.

Traditional web hosting usually follows a three-tier design that divides the architecture into presentation, application, and persistence layers. The design achieves scalability through the inclusion of additional servers at each of these layers. The architecture also has built-in high availability features. The section below examines the means of deploying this traditional web hosting in Alibaba Cloud.

Simple Web Application Hosting Architecture on Alibaba Cloud

The diagram below shows what the traditional web hosting architecture looks like when deployed using various Alibaba Cloud products and services:

The key components of this architecture include:

  • Elastic Compute Service (ECS) — Built on Alibaba Cloud’s own large-scale distributed computing system, Elastic Compute Service or ECS is a scalable and highly-efficient cloud computing service. Alibaba Cloud ECS helps you to quickly build more stable and secure web applications to adapt to your business’ real-time needs.
  • Object Storage Service (OSS) — Alibaba Cloud offers various options to store, access, and backup your data on the cloud. For static storage, it provides Object Storage Service (OSS) to facilitate automatic data replication and failure recovery.
  • ApsaraDB for RDS — Relational Database Service or RDS is a stable, reliable, elastic and high-performance online database service based on Alibaba Cloud’s own distributed system. It supports MySQL, SQL Server, PostgreSQL, and PPAS. Furthermore, it provides a comprehensive set of features including disaster recovery, data backup, monitoring, and migration.
  • DNS — Alibaba Cloud DNS service provides a highly available and scalable DNS service for your domain management needs. It automatically reroutes requests for your domain to the nearest DNS server.
  • Server Load Balancer (SLB) — Server Load Balancer is a web traffic distribution service that maximizes and extends the external service capabilities of your web applications. By seamlessly distributing traffic across multiple cloud servers and eliminating single points of failure, SLB enhances the reliability, usability, and availability of your applications.

Leveraging the Cloud for Web Application Hosting

When deploying a web application on Alibaba Cloud, you should consider making modifications in your deployment to fully utilize the advantages of the cloud. Below are some key considerations when hosting an application on Alibaba Cloud.

Multiple Data Centers in a Region

Within a certain region, Alibaba Cloud usually operates at least two data centers, called Availability Zones (AZs). ECS instances in different AZs are both logically and physically separated. Alibaba Cloud provides an easy-to-use model for deploying your applications across AZs for higher availability and reliability.

High Security for Web Applications and Servers

Web application security is one of the primary concerns for organizations today, with more than 90% of applications being vulnerable to security attacks. These attacks can exploit websites and the underlying servers, putting businesses at risk of financial loss. To protect your web applications from such attacks, Alibaba Cloud provides a suite of network and application security services, such as Anti-DDoS (Basic and Pro), Web Application Firewall (WAF), and Server Guard.

In addition to these services, users can proactively limit external traffic by defining firewalls and permissions. The diagram below depicts the Alibaba Cloud web application hosting architecture that comes with a group firewall to secure the entire infrastructure.



Source: Sitepoint

How To Build A Virtual Reality Model With A Real-Time Cross-Device Preview


Alvin Wan

2018-11-06T23:50:16+01:00

Virtual reality (VR) is an experience based in a computer-generated environment; a number of different VR products make headlines and its applications range far and wide: for the winter Olympics, the US team utilized virtual reality for athletic training; surgeons are experimenting with virtual reality for medical training; and most commonly, virtual reality is being applied to games.

We will focus on the last category of applications and will specifically focus on point-and-click adventure games. Such games are a casual class of games; the goal is to point and click on objects in the scene, to finish a puzzle. In this tutorial, we will build a simple version of such a game but in virtual reality. This serves as an introduction to programming in three dimensions and is a self-contained getting-started guide to deploying a virtual reality model on the web. You will be building with webVR, a framework that gives a dual advantage — users can play your game in VR and users without a VR headset can still play your game on a phone or desktop.


In the second half of this tutorial, you will then build a “mirror” for your desktop. This means that all movements the player makes on a mobile device will be mirrored in a desktop preview. This allows you to see what the player sees, so you can provide guidance, record the game, or simply keep guests entertained.

Prerequisites

To get started, you will need the following. For the second half of this tutorial, you will need a Mac running OS X. While the code can apply to any platform, the dependency installation instructions below are for Mac.

  • Internet access, specifically to glitch.com;
  • A virtual reality headset (optional, recommended). I use Google Cardboard, which is offered at $15 apiece.

Step 1: Setting Up A Virtual Reality (VR) Model

In this step, we will set up a website with a single static HTML page. This allows you to code from your desktop and automatically deploy to the web. The deployed website can then be loaded on your mobile phone and placed inside a VR headset. Alternatively, the deployed website can be loaded by a standalone VR headset. Get started by navigating to glitch.com. Then,

  1. Click on “New Project” in the top-right.
  2. Click on “hello-express” in the drop-down.

[Image: Starting a new project on glitch.com]

Next, click on views/index.html in the left sidebar. We will refer to this as your “editor”.


[Image: views/index.html open in the Glitch editor]

To preview the webpage, click on “Preview” in the top left. We will refer to this as your preview. Note that any changes in your editor will be automatically reflected in this preview, barring bugs or unsupported browsers.


[Image: The Glitch preview pane]

Back in your editor, replace the current HTML with the following boilerplate for a VR model.

<!DOCTYPE html>
<html>
  <head>
      <script src="https://aframe.io/releases/0.7.0/aframe.min.js"></script>
  </head>
  <body>
    <a-scene>
      
      <!-- blue sky -->
      <a-sky color="#a3d0ed"></a-sky>
      
      <!-- camera with wasd and panning controls -->
      <a-entity camera look-controls wasd-controls position="0 0.5 2" rotation="0 0 0"></a-entity>
        
      <!-- brown ground -->
      <a-box shadow id="ground" shadow="receive:true" color="#847452" width="10" height="0.1" depth="10"></a-box>
        
      <!-- start code here -->
      <!-- end code here -->
    </a-scene>
  </body>
</html>

Navigate back to your preview, and you will see the following.


[Image: The preview showing a blue sky and brown ground]

To preview this on your VR headset, use the URL in the omnibar. In the picture above, the URL is https://point-and-click-vr-game.glitch.me/. Your working environment is now set up; feel free to share this URL with family and friends. In the next step, you will create a virtual reality model.

Step 2: Build A Tree Model

You will now create a tree, using primitives from aframe.io. These are standard objects that Aframe has pre-programmed for ease of use. Specifically, Aframe refers to objects as entities. There are three concepts, related to all entities, to organize our discussion around:

  1. Geometry and material,
  2. Transformation Axes,
  3. Relative Transformations.

First, geometry and material are two building blocks of all three-dimensional objects in code. The geometry defines the “shape” — a cube, a sphere, a pyramid, and so on. The material defines static properties of the shape, such as color, reflectiveness, roughness.

Aframe simplifies this concept for us by defining primitives, such as <a-box>, <a-sphere>, <a-cylinder> and many others to make a specification of a geometry and its material simpler. Start by defining a green sphere. On line 19 in your code, right after <!-- start code here -->, add the following.


      <!-- start code here -->
      <a-sphere color="green" radius="0.5"></a-sphere>  <!-- new line -->
      <!-- end code here -->

Second, there are three axes to transform our object along. The x axis runs horizontally, where x values increase as we move right. The y axis runs vertically, where y values increase as we move up. The z axis runs out of your screen, where z values increase as we move towards you. We can translate, rotate, or scale entities along these three axes.

For example, to translate an object “right,” we increase its x value. To spin an object like a top, we rotate it along the y-axis. Modify line 19 to move the sphere “up” — this means you need to increase the sphere’s y value. Note that all transformations are specified as <x> <y> <z>, meaning to increase its y value, you need to increase the second value. By default, all objects are located at position 0, 0, 0. Add the position specification below.


      <!-- start code here -->
      <a-sphere color="green" radius="0.5" position="0 1 0"></a-sphere> <!-- edited line -->
      <!-- end code here -->

Third, all transformations are relative to its parent. To add a trunk to your tree, add a cylinder inside of the sphere above. This ensures that the position of your trunk is relative to the sphere’s position. In essence, this keeps your tree together as one unit. Add the <a-cylinder> entity between the <a-sphere ...> and </a-sphere> tags.


      <a-sphere color="green" radius="0.5" position="0 1 0">
        <a-cylinder color="#84651e" position="0 -0.9 0" radius="0.05"></a-cylinder> <!-- new line -->
      </a-sphere>

To make this tree less barebones, add more foliage in the form of two more green spheres.


      <a-sphere color="green" radius="0.5" position="0 0.75 0">
        <a-cylinder color="#84651e" position="0 -0.9 0" radius="0.05"></a-cylinder>
        <a-sphere color="green" radius="0.35" position="0 0.5 0"></a-sphere> <!-- new line -->
        <a-sphere color="green" radius="0.2" position="0 0.8 0"></a-sphere> <!-- new line -->
      </a-sphere>

Navigate back to your preview, and you will see the following tree:


[Image: The preview showing a green tree]

Reload the website preview on your VR headset, and check out your new tree. In the next section, we will make this tree interactive.

Step 3: Add Click Interaction To Model

To make an entity interactive, you will need to:

  • Add an animation,
  • Have this animation trigger on click.

Since the end user is using a virtual reality headset, clicking is equivalent to staring: in other words, stare at an object to “click” on it. To effect these changes, you will start with the cursor. Redefine the camera, by replacing line 13 with the following.

<a-entity camera look-controls wasd-controls position="0 0.5 2" rotation="0 0 0">
  <a-entity cursor="fuse: true; fuseTimeout: 250"
            position="0 0 -1"
            geometry="primitive: ring; radiusInner: 0.02; radiusOuter: 0.03"
            material="color: black; shader: flat"
            scale="0.5 0.5 0.5"
            raycaster="far: 20; interval: 1000; objects: .clickable">
    <!-- add animation here -->
  </a-entity>
</a-entity>

The above adds a cursor that can trigger the clicking action. Note the objects: .clickable property. This means that all objects with the class “clickable” will trigger the animation and receive a “click” command where appropriate. You will also add an animation to the click cursor, so that users know when the cursor triggers a click. Here, the cursor will shrink slowly when pointing at a clickable object, snapping after a second to denote an object has been clicked. Replace the comment <!-- add animation here --> with the following code:

<a-animation begin="fusing" easing="ease-in" attribute="scale"
  fill="backwards" from="1 1 1" to="0.2 0.2 0.2" dur="250"></a-animation>

Move the tree to the right by 2 units and add class “clickable” to the tree, by modifying line 29 to match the following.

<a-sphere color="green" radius="0.5" position="2 0.75 0" class="clickable">

Next, you will:

  • Specify an animation,
  • Trigger the animation with a click.

Due to Aframe’s easy-to-use animation entity, both steps can be done in quick succession.

Add an <a-animation> tag on line 33, right after the <a-cylinder> tag but before the end of the </a-sphere>.

<a-animation begin="click" attribute="position" from="2 0.75 0" to="2.2 0.75 0" fill="both" direction="alternate" repeat="1"></a-animation>

The above properties specify a number of configurations for the animation. The animation:

  • Is triggered by the click event
  • Modifies the tree’s position
  • Starts from the original position 2 0.75 0
  • Ends in 2.2 0.75 0 (moving 0.2 units to the right)
  • Animates when traveling to and from the destination
  • Alternates animation between traveling to and from the destination
  • Repeats this animation once. This means the object animates twice in total — once to the destination and once back to the original position.

Finally, navigate to your preview, and move your view so the cursor points at your tree. Once the black circle rests on the tree, the tree will move to the right and back.


This concludes the basics needed to build a point-and-click adventure game, in virtual reality. To view and play a more complete version of this game, see the following short scene. The mission is to open the gate and hide the tree behind the gate, by clicking on various objects in the scene.



Next, we set up a simple nodeJS server to serve our static demo.

Step 4: Setup NodeJS Server

In this step, we will set up a basic, functional nodeJS server that serves your existing VR model. In the left sidebar of your editor, select package.json.

Start by deleting lines 2-4.

"//1": "describes your app and its dependencies",
"//2": "https://docs.npmjs.com/files/package.json",
"//3": "updating this file will download and update your packages", 

Change the name to mirrorvr.

{
  "name": "mirrorvr", // change me
  "version": "0.0.1",
  ...

Under dependencies, add socket.io.

"dependencies": {
  "express": "^4.16.3",
  "socketio": "^1.0.0",
},

Update the repository URL to match your current glitch’s. The example glitch project is named point-and-click-vr-game. Replace that with your glitch project’s name.

"repository": {
  "url": "https://glitch.com/edit/#!/point-and-click-vr-game"
},

Finally, change the "glitch" tag to "vr".

"keywords": [
  "node",
  "vr",  // change me
  "express"
]

Double check that your package.json now matches the following.

{
  "name": "mirrorvr",
  "version": "0.0.1",
  "description": "Mirror virtual reality models",
  "main": "server.js",
  "scripts": {
    "start": "node server.js"
  },
  "dependencies": {
    "express": "^4.16.3",
    "socketio": "^1.0.0"
  },
  "engines": {
    "node": "8.x"
  },
  "repository": {
    "url": "https://glitch.com/edit/#!/point-and-click-vr-game"
  },
  "license": "MIT",
  "keywords": [
    "node",
    "vr",
    "express"
  ]
}

Double-check that your code from the previous parts matches the following, in views/index.html.

<!DOCTYPE html>
<html>
  <head>
      <script src="https://aframe.io/releases/0.7.0/aframe.min.js"></script>
  </head>
  <body>
    <a-scene>
      
      <!-- blue sky -->
      <a-sky color="#a3d0ed"></a-sky>
      
      <!-- camera with wasd and panning controls -->
      <a-entity camera look-controls wasd-controls position="0 0.5 2" rotation="0 0 0">
        <a-entity cursor="fuse: true; fuseTimeout: 250"
                  position="0 0 -1"
                  geometry="primitive: ring; radiusInner: 0.02; radiusOuter: 0.03"
                  material="color: black; shader: flat"
                  scale="0.5 0.5 0.5"
                  raycaster="far: 20; interval: 1000; objects: .clickable">
            <a-animation begin="fusing" easing="ease-in" attribute="scale"
               fill="backwards" from="1 1 1" to="0.2 0.2 0.2" dur="250"></a-animation>
        </a-entity>
      </a-entity>
        
      <!-- brown ground -->
      <a-box id="ground" shadow="receive: true" color="#847452" width="10" height="0.1" depth="10"></a-box>
        
      <!-- start code here -->
      <a-sphere color="green" radius="0.5" position="2 0.75 0" class="clickable">
        <a-cylinder color="#84651e" position="0 -0.9 0" radius="0.05"></a-cylinder>
        <a-sphere color="green" radius="0.35" position="0 0.5 0"></a-sphere>
        <a-sphere color="green" radius="0.2" position="0 0.8 0"></a-sphere>
        <a-animation begin="click" attribute="position" from="2 0.75 0" to="2.2 0.75 0" fill="both" direction="alternate" repeat="1"></a-animation>
      </a-sphere>
      <!-- end code here -->
    </a-scene>
  </body>
</html>

Modify the existing server.js.

Start by importing several Node.js utilities.

  • Express
    This is the web framework we will use to run the server.
  • http
    This allows us to launch a server that listens for activity on a given port.
  • socket.io
    The sockets implementation that allows the client side and the server side to communicate in near real time.

While importing these utilities, we additionally initialize the Express application. Note that the first two lines are already written for you.

var express = require('express');
var app = express();

/* start new code */
var http = require('http').Server(app);
var io = require('socket.io')(http);
/* end new code */

// we've started you off with Express, 

With the utilities loaded, the provided code next instructs the server to return index.html as the homepage. Note that there is no new code below; this is simply an explanation of the existing source.

// http://expressjs.com/en/starter/basic-routing.html
app.get('/', function(request, response) {
  response.sendFile(__dirname + '/views/index.html');
});
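
For the /client.js script you will add in Step 5 to resolve, the server must also serve the public folder as static assets. The Glitch starter usually includes this line already; if your server.js lacks it, add it near the top:

// serve files in public/ (e.g. /client.js) from the site root
app.use(express.static('public'));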

Finally, the existing source code instructs the application to bind to and listen on a port, supplied through process.env.PORT (on Glitch, this is 3000 by default).

// listen for requests 🙂
var listener = app.listen(process.env.PORT, function() {
  console.log('Your app is listening on port ' + listener.address().port);
});
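
On Glitch, process.env.PORT is always provided. If you later run the server locally, where the variable may be undefined, a common optional tweak is a fallback port:

// fall back to port 3000 when the environment doesn't provide one
var listener = app.listen(process.env.PORT || 3000, function() {
  console.log('Your app is listening on port ' + listener.address().port);
});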

Once you are finished editing, Glitch automatically reloads the server. Click on “Show” in the top-left to preview your application.

Your web application is now up and running. Next, we will send messages from the client to the server.

Step 5: Send Information From Client To Server

In this step, we will use the client to initialize a connection with the server. The client will additionally inform the server whether it is a phone or a desktop. To start, import the soon-to-exist JavaScript file in your views/index.html.

After line 4, include a new script.

<script src="https//www.smashingmagazine.com/client.js" type="text/javascript"></script>

On line 14, add camera-listener to the list of properties for the camera entity.

<a-entity camera-listener camera look-controls...>
    ...
</a-entity>

Then, navigate to public/client.js in the left sidebar. Delete all JavaScript code in this file, and define a utility function that checks whether the client is a mobile device.

/**
 * Check if client is on mobile
 */
function mobilecheck() {
  var check = false;
  (function(a){if(/(android|bb\d+|meego).+mobile|avantgo|bada\/|blackberry|blazer|compal|elaine|fennec|hiptop|iemobile|ip(hone|od)|iris|kindle|lge |maemo|midp|mmp|mobile.+firefox|netfront|opera m(ob|in)i|palm( os)?|phone|p(ixi|re)\/|plucker|pocket|psp|series(4|6)0|symbian|treo|up\.(browser|link)|vodafone|wap|windows ce|xda|xiino/i.test(a)||/1207|6310|6590|3gso|4thp|50[1-6]i|770s|802s|a wa|abac|ac(er|oo|s\-)|ai(ko|rn)|al(av|ca|co)|amoi|an(ex|ny|yw)|aptu|ar(ch|go)|as(te|us)|attw|au(di|\-m|r |s )|avan|be(ck|ll|nq)|bi(lb|rd)|bl(ac|az)|br(e|v)w|bumb|bw\-(n|u)|c55\/|capi|ccwa|cdm\-|cell|chtm|cldc|cmd\-|co(mp|nd)|craw|da(it|ll|ng)|dbte|dc\-s|devi|dica|dmob|do(c|p)o|ds(12|\-d)|el(49|ai)|em(l2|ul)|er(ic|k0)|esl8|ez([4-7]0|os|wa|ze)|fetc|fly(\-|_)|g1 u|g560|gene|gf\-5|g\-mo|go(\.w|od)|gr(ad|un)|haie|hcit|hd\-(m|p|t)|hei\-|hi(pt|ta)|hp( i|ip)|hs\-c|ht(c(\-| |_|a|g|p|s|t)|tp)|hu(aw|tc)|i\-(20|go|ma)|i230|iac( |\-|\/)|ibro|idea|ig01|ikom|im1k|inno|ipaq|iris|ja(t|v)a|jbro|jemu|jigs|kddi|keji|kgt( |\/)|klon|kpt |kwc\-|kyo(c|k)|le(no|xi)|lg( g|\/(k|l|u)|50|54|\-[a-w])|libw|lynx|m1\-w|m3ga|m50\/|ma(te|ui|xo)|mc(01|21|ca)|m\-cr|me(rc|ri)|mi(o8|oa|ts)|mmef|mo(01|02|bi|de|do|t(\-| |o|v)|zz)|mt(50|p1|v )|mwbp|mywa|n10[0-2]|n20[2-3]|n30(0|2)|n50(0|2|5)|n7(0(0|1)|10)|ne((c|m)\-|on|tf|wf|wg|wt)|nok(6|i)|nzph|o2im|op(ti|wv)|oran|owg1|p800|pan(a|d|t)|pdxg|pg(13|\-([1-8]|c))|phil|pire|pl(ay|uc)|pn\-2|po(ck|rt|se)|prox|psio|pt\-g|qa\-a|qc(07|12|21|32|60|\-[2-7]|i\-)|qtek|r380|r600|raks|rim9|ro(ve|zo)|s55\/|sa(ge|ma|mm|ms|ny|va)|sc(01|h\-|oo|p\-)|sdk\/|se(c(\-|0|1)|47|mc|nd|ri)|sgh\-|shar|sie(\-|m)|sk\-0|sl(45|id)|sm(al|ar|b3|it|t5)|so(ft|ny)|sp(01|h\-|v\-|v )|sy(01|mb)|t2(18|50)|t6(00|10|18)|ta(gt|lk)|tcl\-|tdg\-|tel(i|m)|tim\-|t\-mo|to(pl|sh)|ts(70|m\-|m3|m5)|tx\-9|up(\.b|g1|si)|utst|v400|v750|veri|vi(rg|te)|vk(40|5[0-3]|\-v)|vm40|voda|vulc|vx(52|53|60|61|70|80|81|83|85|98)|w3c(\-| )|webc|whit|wi(g |nc|nw)|wmlb|wonu|x700|yas\-|your|zeto|zte\-/i.test(a.substr(0,4))) check = true;})(navigator.userAgent||navigator.vendor||window.opera);
  return check;
};
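
The regex above is the widely circulated detectmobilebrowsers.com snippet. If you would rather trade thoroughness for readability, a shorter heuristic works for most modern browsers; the sketch below is an optional alternative (mobilecheckSimple is a name made up here, not part of the tutorial):

// Optional, simpler heuristic: most mobile browsers include "Mobi" or
// "Android" in their user agent string. Less exhaustive than the regex above.
function mobilecheckSimple() {
  return /Mobi|Android/i.test(navigator.userAgent);
}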

Next, we will define a series of initial messages to exchange with the server side. Define a new socket.io object to represent the client’s connection to the server. Once the socket connects, log a message to the console.

var socket = io();

socket.on('connect', function() {
  console.log(' * Connection established');
});

Check whether the device is mobile, and send the corresponding information to the server using the emit function.

if (mobilecheck()) {
  socket.emit('newHost');
} else {
  socket.emit('newMirror');
}

This concludes the client’s message sending. Now, amend the server code to receive this message and react appropriately. Open the server file, server.js.

Handle new connections, and immediately listen for the type of client. At the end of the file, add the following.

/**
 * Handle socket interactions
 */

io.on('connection', function(socket) {

  socket.on('newMirror', function() {
    console.log(" * Participant registered as 'mirror'")
  });

  socket.on('newHost', function() {
    console.log(" * Participant registered as 'host'");
  });
});
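
Optionally, you could also log disconnects. socket.io fires a built-in disconnect event whenever a connection closes; the following minimal sketch (not part of the original tutorial) would sit inside the same io.on('connection', ...) handler:

socket.on('disconnect', function() {
  // fires when a registered participant closes the page or loses connectivity
  console.log(' * Participant disconnected');
});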

Again, preview the application by clicking on “Show” in the top left. Load that same URL on your mobile device. In your terminal, you will see the following.

Your app is listening on port 3000
 * Participant registered as 'host'
 * Participant registered as 'mirror'

This is a first example of simple message passing, where the client sends information to the server. Now, quit the running Node.js process. For the final part of this step, we will have the client send camera information to the server. Open public/client.js.

At the very end of the file, include the following.

var camera;
if (mobilecheck()) {
  AFRAME.registerComponent('camera-listener', {
    tick: function () {
      camera = this.el.sceneEl.camera.el;
      var position = camera.getAttribute('position');
      var rotation = camera.getAttribute('rotation');
      socket.emit('onMove', {
        "position": position,
        "rotation": rotation
      });
    }
  });
}
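
A caveat worth knowing: tick runs on every render frame, so this component emits dozens of messages per second. As an optional refinement, not part of this tutorial’s code, A-Frame’s AFRAME.utils.throttleTick utility can cap the send rate; the 100 ms interval below is an arbitrary choice:

AFRAME.registerComponent('camera-listener', {
  init: function () {
    // re-wrap tick so it runs at most once every 100 ms
    this.tick = AFRAME.utils.throttleTick(this.tick, 100, this);
  },
  tick: function () {
    camera = this.el.sceneEl.camera.el;
    socket.emit('onMove', {
      "position": camera.getAttribute('position'),
      "rotation": camera.getAttribute('rotation')
    });
  }
});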

Save and close. Open your server file server.js to listen for this onMove event.

Add the following, in the newHost block of your socket code.

socket.on('newHost', function() {
    console.log(" * Participant registered as 'host'");
    /* start new code */
    socket.on('onMove', function(data) {
      console.log(data);
    });
    /* end new code */
  });

Once again, load the preview on your desktop and on your mobile device. Once a mobile client is connected, the server will immediately begin logging camera position and rotation information, sent from the client to the server. Next, you will implement the reverse, where you send information from the server back to the client.

Step 6: Send Information From Server To Client

In this step, you will send a host’s camera information to all mirrors. Open your main server file, server.js.

Change the onMove event handler to the following:

socket.on('onMove', function(data) {
  console.log(data);  // delete me
  socket.broadcast.emit('move', data);
});
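
For context, socket.io offers a few send variants; the sketch below contrasts the common ones, using our move event as the example:

socket.emit('move', data);            // send to this client only
socket.broadcast.emit('move', data);  // send to every client except this one
io.emit('move', data);                // send to every connected client, including the sender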

The broadcast modifier ensures that the server sends this information to all clients connected to the socket, except for the original sender. Once this information is sent to a client, you then need to set the mirror’s camera accordingly. Open the client script, public/client.js.

Here, check if the client is a desktop. If so, receive the move data and log accordingly.

if (!mobilecheck()) {
  socket.on('move', function(data) {
    console.log(data);
  });
}

Load the preview on your desktop and on your mobile device. In your desktop browser, open the developer console. Then, load the app on your mobile phone. As soon as the mobile phone loads the app, the developer console on your desktop should light up with camera position and rotation.

Open the client script once more, at public/client.js. Here, we finally adjust the mirror’s camera based on the information received.

Amend the move event handler from above. Since the camera variable is only assigned on the mobile client, the desktop client first needs to look up the camera entity.

socket.on('move', function(data) {
  /* start new code */
  camera = camera || document.querySelector('[camera]');  // desktop: look up the scene's camera entity
  camera.setAttribute('rotation', data["rotation"]);
  camera.setAttribute('position', data["position"]);
  /* end new code */
});

Load the app on your desktop and your phone. Every movement of your phone is reflected in the corresponding mirror on your desktop! This concludes the mirror portion of your application. As a desktop user, you can now preview what your mobile user sees. The concepts introduced in this section will be crucial for further development of this game, as we transform it from a single-player into a multiplayer game.

Conclusion

In this tutorial, we programmed three-dimensional objects and added simple interactions to these objects. Additionally, you built a simple message passing system between clients and servers, to effect a desktop preview of what your mobile users see.

These concepts extend beyond WebVR, as the notions of geometry and material carry over to SceneKit on iOS (which is related to ARKit), Three.js (the backbone of A-Frame), and other three-dimensional libraries. These simple building blocks, put together, give us ample flexibility in creating a fully fledged point-and-click adventure game. More importantly, they allow us to create any game with a click-based interface.

Here are several resources and examples to further explore:

  • MirrorVR
    A fully fledged implementation of the live preview built above. With just a single JavaScript link, add a live preview of any virtual reality model on mobile to a desktop.
  • Bit by Bit
    A gallery of kids’ drawings and each drawing’s corresponding virtual reality model.
  • A-Frame
    Examples, developer documentation, and more resources for virtual reality development.
  • Google Cardboard Experiences
    Experiences for the classroom with custom tools for educators.

Next time, we will build a complete game, using web sockets to facilitate real-time communication between players in a virtual reality game. Feel free to share your own models in the comments below.



Source: Smashing Magazine