Come Rain Or Come Shine: Inspiring Wallpapers For September 2018


Cosima Mielke

2018-08-31T12:45:54+02:00
2018-08-31T10:58:27+00:00

September is a time of transition. While some are trying to conserve the summer feeling just a bit longer, others are eager for fall to come with its colorful leaves and rainy days. But no matter how you feel about September or what the new month might be bringing along, this wallpaper collection sure has something to inspire you.

As they have every month for more than nine years now, artists and designers from across the globe once again challenged their creative skills and designed wallpapers to help you break out of your routine and give your desktop a fresh makeover. Each one of them comes in versions with and without a calendar for September 2018 and can be downloaded for free.

As a little extra goodie, we also dug through our archives in search of some timeless September wallpaper treasures, which you’ll find assembled at the end of this post. Please note that these oldies don’t come with a calendar. Happy September!

Please note that:

  • All images can be clicked on and lead to the preview of the wallpaper,
  • You can feature your work in our magazine by taking part in our Desktop Wallpaper Calendar series. We are regularly looking for creative designers and artists to be featured on Smashing Magazine. Are you one of them?


Cacti Everywhere

“Seasons come and go, but our brave cactuses still stand. Summer is almost over, and autumn is coming, but the beloved plants don’t care.” — Designed by Lívia Lénárt from Hungary.

Cacti Everywhere

Batmom

Designed by Ricardo Gimenes from Sweden.

Batmom

Summer Is Not Over Yet

“This is our way of asking the summer not to go away. We were inspired by travel and exotic islands. In fact, it seems that September was the seventh month in the Roman calendar, dedicated to Vulcan, a god of fire. The legend has it that he was the son of Jupiter and Juno, and being an ugly baby with a limp, his mother tried to push him off a cliff into a volcano. Not really a nice story, but that’s where the tale took us. Anyway, enjoy September — because summer’s not over yet!” — Designed by PopArt Studio from Novi Sad, Serbia.

Summer Is Not Over Yet

Summer Collapsed Into Fall

“The lands are painted gold lit with autumn blaze. And all at once the leaves of the trees started falling, but none of them are worried. Since, everyone falls in love with fall.” — Designed by Mindster from India.

Summer Collapsed Into Fall

Fresh Breeze

“I’m already looking forward to the fresh breezes of autumn, summer’s too hot for me!” — Designed by Bryan Van Mechelen from Belgium.

Fresh Breeze

No More Inflatable Flamingos!

“Summer is officially over and we will no longer need our inflatable flamingos. Now, we’ll need umbrellas. And some flamingos will need an umbrella too!” — Designed by Marina Bošnjak from Croatia.

No More Inflatable Flamingos!

New Beginnings

“In September the kids and students go back to school.” — Designed by Melissa Bogemans from Belgium.

New Beginnings

New Destination

“September is the beginning of the course. We see it as a never ending road because we are going to enjoy the journey.” — Designed by Veronica Valenzuela from Spain.

New Destination

Good Things Come To Those Who Wait

“They say ‘patience is a virtue’, and so great opportunities and opulence in life come to those who are patient. Here we depicted a snail in the visual, one which longs to seize the shine that comes its way. It goes by the same watchword, shows no impulsiveness and waits for the right chances.” — Designed by Sweans from London.

Good Things Come To Those Who Wait

Back To School

Designed by Ilse van den Boogaart from The Netherlands.

Back To School

From The Archives

Some things are too good to be forgotten and our wallpaper archives are full of timeless treasures. So here’s a small selection of favorites from past September editions. Please note that these don’t come with a calendar.

Autumn Rains

“This autumn, we expect to see a lot of rainy days and blues, so we wanted to change the paradigm and wish a warm welcome to the new season. After all, if you come to think of it: rain is not so bad if you have an umbrella and a raincoat. Come autumn, we welcome you!” — Designed by PopArt Studio from Serbia.

Autumn Rains

Maryland Pride

“As summer comes to a close, so does the end of blue crab season in Maryland. Blue crabs have been a regional delicacy since the 1700s and have become Maryland’s most valuable fishing industry, adding millions of dollars to the Maryland economy each year. With more than 455 million blue crabs swimming in the Chesapeake Bay, these tasty critters can be prepared in a variety of ways and have become a summer staple in many homes and restaurants across the state. The blue crab has contributed so much to the state’s regional culture and economy, in 1989 it was named the State Crustacean, cementing its importance in Maryland history.” — Designed by The Hannon Group from Washington DC.

Maryland Pride

Summer Is Leaving

“It is inevitable. Summer is leaving silently. Let us think of ways to make the most of what is left of the beloved season.” — Designed by Bootstrap Dashboards from India.

Summer Is Leaving

Early Autumn

“September is usually considered as early autumn so I decided to draw some trees and leaves. However, nobody likes that summer is coming to an end, that’s why I kept summerish colours and style.” — Designed by Kat Gluszek from Germany.

Early Autumn

Long Live Summer

“While September’s Autumnal Equinox technically signifies the end of the summer season, this wallpaper is for all those summer lovers, like me, who don’t want the sunshine, warm weather and lazy days to end.” — Designed by Vicki Grunewald from Washington.

Long Live Summer

Listen Closer… The Mushrooms Are Growing…

“It’s this time of the year when children go to school and grown-ups go to collect mushrooms.” — Designed by Igor Izhik from Canada.

Listen Closer… The Mushrooms Are Growing…

Autumn Leaves

“Summer is coming to an end in the northern hemisphere, and that means Autumn is on the way!” — Designed by James Mitchell from the United Kingdom.

Autumn Leaves

Festivities And Ganesh Puja

“The month of September starts with the arrival of festivals, mainly Ganesh Puja.” — Designed by Sayali Sandeep Harde from India.

Festivities And Ganesh Puja

Hungry

Designed by Elise Vanoorbeek from Belgium.

Hungry

Sugar Cube

Designed by Luc Versleijen from the Netherlands.

Sugarcube

Miss, My Dragon Burnt My Homework!

“We all know the saying ‘Miss, my dog ate my homework!’ Well, not everyone has a dog, so here’s a wallpaper to inspire your next excuse at school ;)” — Designed by Ricardo Gimenes from Sweden.

My Dragon Burnt My Homework!

Meet The Bulbs!

“This summer we have seen lighting come to the forefront of design once again, with the light bulb front and center, no longer being hidden by lampshades or covers. Many different bulbs have been featured by interior designers including vintage bulbs, and oddly shaped energy-saving bulbs. We captured the personality of a variety of different bulbs in this wallpaper featuring the Bulb family.” — Designed by Carla Genovesio from the USA.

Meet the Bulbs!

World Bat Night

“In the night from September 20th to 21st, the world has one of the most unusual environmental events — Night of the bats. Its main purpose: to draw public attention to the problems of bats and their protection, as well as to debunk the myths surrounding the animals, as many people experience unjustified superstitious fear, considering them vampires.” — Designed by cheloveche.ru from Russia.

World Bat Night

Autumn Invaders

“Invaders of autumn are already here. Make sure you are well prepared!” — Designed by German Ljutaev from Ukraine.

Smashing Desktop Wallpaper - September 2012

Hello Spring

“September is the start of spring in Australia so this bright wallpaper could brighten your day and help you feel energized!” — Designed by Tazi Design from Australia.

Hello Spring

Join In Next Month!

Please note that we respect and carefully consider the ideas and motivation behind each and every artist’s work. This is why we give all artists the full freedom to explore their creativity and express emotions and experience throughout their works. This is also why the themes of the wallpapers weren’t in any way influenced by us, but rather designed from scratch by the artists themselves.

Thank you to all designers for their participation. Join in next month!


Source: Smashing Magazine

Adsense and AdWords – why use them?

Overview: There are a lot of different ways that savvy people monetise the web, of which AdSense is only one. For perfect clarity, where we are talking ‘Adsense’ we’re really talking about AdSense for content, contextual ads on the Google AdSense network that can be utilised on blogs, websites or any other web-based properties  […]

The post Adsense and AdWords – why use them? appeared first on SitePoint.


Source: Sitepoint

Quickly Deploy WordPress & phpMyAdmin on Alibaba Cloud with ROS

This article was originally published on Alibaba Cloud. Thank you for supporting the partners who make SitePoint possible.

Think you got a better tip for making the best use of Alibaba Cloud services? Tell us about it and go in for your chance to win a Macbook Pro (plus other cool stuff). Find out more here.

This document shows how to deploy a WordPress site and phpMyAdmin application using the ROS template with a single click.

Introduction

Many users do not have in-house technical capabilities to build and manage a website. They have teams who can manage the content but not the infrastructure that comes along with it. Currently available solutions are suitable for a limited period when the requirements are basic, but as soon as they require customization, high availability, and scalability, additional solutions supporting scaling are needed.

The ROS stack template (WordPressCluster-phpMyAdmin.ros) discussed in this document helps address the high availability and scalability requirements of such users. With a single click, it creates the entire VPC stack, Server Load Balancer, Auto Scaling, ECS, RDS, and other instances, deploys WordPress and phpMyAdmin, and configures Auto Scaling to guarantee that any new instances can be added and configured without manual intervention.

Prerequisites

  • Availability of an OSS bucket.
  • RAM user with read and write access to the OSS bucket.
  • Understanding of ROS and ability to create ROS stack templates and stacks from the console.

Architecture overview

Architecture

This diagram provides an overview of the deployment architecture that is generated based on the template WordPressCluster-phpMyAdmin.ros.

Three types of users access the infrastructure:

End users, who access the website hosted on WordPress through a URL that is resolved to a Public Server Load Balancer instance.

WordPress is hosted on Apache web servers. The servers have their document root set to /wwwroot which is on an OSS bucket shared across the web servers using OSSFS (a FUSE-based file system officially provided by Alibaba Cloud).

RAM users who have the access permission of the OSS bucket can mount the OSS bucket on the ECS instance.

The RDS for MySQL database holds the WordPress content and is accessed from the web server over its intranet connection string.

System administrator, who can access the VPC environment through an SSH logon to the JumpBox (bastion host).

The JumpBox has an elastic IP and is accessed over the Internet.

The access through JumpBox is to manage the instances inside the VPC.

phpMyAdmin is installed on the JumpBox and is accessible over the Internet.

In this way, the administrators can administer the RDS database.
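In practice, this bastion pattern means administrators reach the private instances by hopping through the JumpBox. A minimal sketch of an SSH client configuration (all hostnames and addresses below are placeholders for illustration, not values from the template):

```
# ~/.ssh/config -- placeholder addresses, adjust to your own stack
Host jumpbox
    HostName 203.0.113.10    # the JumpBox's elastic IP
    User root

Host vpc-instance
    HostName 192.168.0.11    # private address inside the VPC
    User root
    ProxyJump jumpbox        # hop through the bastion host
```

With this in place, `ssh vpc-instance` tunnels through the JumpBox automatically (ProxyJump requires OpenSSH 7.3 or later).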

Content managers, who can access the WordPress Management console through the Internet.

Access to all these services can be controlled by security groups and can be configured as per the environment.

The Template Overview

Click WordPressCluster-phpMyAdmin.ros to download the ROS stack template for use.

Note: In the template, the ZoneId is set as eu-central-1a, and the ImageId is m-gw8efmfk0y184zs0m0aj. You can modify the ZoneId and the ImageId according to the zones and images supported in the ROS console. Log on to the ROS console, click ECS Instance Information, select a region, and click ECS Zone or ECS Image to view all the zones or images supported in this region.

Based on the WordPressCluster-phpMyAdmin.ros stack template, the system creates the VPC, Server Load Balancer, VSwitch, NAT Gateway, Security Groups, and ECS instances for JumpBox, Elastic IP for JumpBox, Auto Scaling for ECS instances, and RDS instance.

It takes the following input parameters to make the resource stack generic enough to be deployable for any user in any region.

parameters1

parameter2
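Inside the template itself, each of these inputs is declared in a Parameters section, following the CloudFormation-like syntax that ROS uses. A rough sketch, reusing the Ref names that appear in the UserData snippet below (the types and descriptions here are assumptions, not copied from the actual template):

```json
{
  "Parameters": {
    "DBName": { "Type": "String", "Description": "Name of the WordPress database" },
    "MasterUserName": { "Type": "String", "Description": "RDS master user name" },
    "MasterDBPassword": { "Type": "String", "NoEcho": "true", "Description": "RDS master user password" },
    "OSSBucketEndPoint": { "Type": "String", "Description": "Endpoint of the shared OSS bucket" }
  }
}
```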

According to the template, the system installs httpd, mysql-client, PHP, OSSFS, phpMyAdmin, and WordPress on the JumpBox and also configures them in the UserData section of the ALIYUN::ECS::Instance resource.

The following is a snippet of the UserData section of the JumpBox.

   "ossbucketendpoint=",
   {
     "Ref": "OSSBucketEndPoint"
   },
   "\n",
   "DatabaseUser=",
   {
     "Ref": "MasterUserName"
   },
   "\n",
   "DatabasePwd=",
   {
     "Ref": "MasterDBPassword"
   },
   "\n",
   "DatabaseName=",
   {
     "Ref": "DBName"
   },
   "\n",
   "DatabaseHost=",
   {
     "Fn::GetAtt": ["Database", "InnerConnectionString"]
   },
   "\n",
   "yum install -y curl httpd mysql-server php php-common php-mysql\n",
   "yum install -y php-gd php-imap php-ldap php-odbc php-pear php-xml php-xmlrpc\n",
   "yum install -y phpmyadmin\n",
   "sed -i \"s%localhost%$DatabaseHost%\" /etc/phpMyAdmin/config.inc.php\n",
   "sed -i \"s%Deny,Allow%Allow,Deny%\" /etc/httpd/conf.d/phpMyAdmin.conf\n",
   "sed -i \"s%Deny from All%Allow from All%\" /etc/httpd/conf.d/phpMyAdmin.conf\n",
   "sed -i \"/<RequireAny>/a Require all Granted\" /etc/httpd/conf.d/phpMyAdmin.conf\n",
   "chkconfig httpd on\n",
   "service httpd stop\n",
   "wget https://github.com/aliyun/ossfs/releases/download/v1.80.3/ossfs_1.80.3_centos6.5_x86_64.rpm\n",
   "yum install -y ossfs_1.80.3_centos6.5_x86_64.rpm\n",
   "echo $ossbucket:$ossbucketaccesskey:$ossbucketsecret >> /etc/passwd-ossfs\n",
   "chmod 600 /etc/passwd-ossfs\n",
   "mkdir $ossbucketmountpoint\n",
   "chmod -R 755 $ossbucketmountpoint\n",
   "echo #This script will automount the ossbucket\n",
   "echo umount $ossbucketmountpoint >> /usr/local/bin/ossfs-automount.sh\n",
   "echo #Mounting OSS Bucket\n",
   "echo ossfs $ossbucket $ossbucketmountpoint -ourl=http://$ossbucketendpoint -o allow_other -o mp_umask=0022 -ouid=48 -ogid=48 >> /usr/local/bin/ossfs-automount.sh\n",
   "chmod 755 /usr/local/bin/ossfs-automount.sh\n",
   "echo /usr/local/bin/ossfs-automount.sh >> /etc/rc.d/rc.local\n",
   "chmod +x /etc/rc.d/rc.local\n",
   "/usr/local/bin/./ossfs-automount.sh\n",
   "wget http://WordPress.org/latest.tar.gz\n",
   "tar -xzvf latest.tar.gz\n",
   "sed -i \"s%database_name_here%$DatabaseName%\" WordPress/wp-config-sample.php\n",
   "sed -i \"s%username_here%$DatabaseUser%\" WordPress/wp-config-sample.php\n",
   "sed -i \"s%password_here%${DatabasePwd:-$DatabasePwdDef}%\" WordPress/wp-config-sample.php\n",
   "sed -i \"s%localhost%$DatabaseHost%\" WordPress/wp-config-sample.php\n",
   "mv WordPress/wp-config-sample.php WordPress/wp-config.php\n",
   "cp -a WordPress/* $ossbucketmountpoint\n",
   "chmod -R 755 /wwwroot/*\n",
   "rm -rf WordPress*\n",
   "service httpd start\n",
   "done\n"

The UserData section deploys WordPress on the OSS bucket which can be mounted to the web servers created using Auto Scaling. This guarantees that the web servers have the updated content from the document root.

The web servers are started through Auto Scaling. The installation and configuration of httpd, PHP, and ossutil, the mounting of the DocumentRoot, and the starting of services are done in the UserData section of the Auto Scaling configuration.

The following is a snippet of the UserData section of the web server Auto Scaling configuration.

The post Quickly Deploy WordPress & phpMyAdmin on Alibaba Cloud with ROS appeared first on SitePoint.


Source: Sitepoint

Tech Recruiting, Finding Niches & Starting Your Own Business

In this interview, Joe Woodham discusses recruiting for a tech start-up, finding your niche, and starting your own business.

Joe is known as the “intro guy” at Torii Recruitment, the specialist IT recruitment firm he founded in 2012. Finding a good fit in IT recruitment, he says, is a more nuanced challenge than just finding someone with the right skills.

The Interview

In one sentence how would you describe yourself?

I’m highly driven and passionate about business. However, I focus on maintaining balance in all aspects of my life. I love to enjoy myself, challenge myself and grow as a person.

In one sentence, how would you describe your career?

My career has been a constant progression in terms of self-development and learning new skills. I’ve been lucky to fall into what I do and I’ve enjoyed the challenges along the way.

In one sentence, how would you describe your business Torii Recruitment?

Torii is agile and able to move quickly, enabling us to understand the market and our clients’ pain points and deliver a high-touch service in a skill-short market.

Finding your niche

During your talk in the WeTeachMe Master’s Series, you mentioned finding a niche and then building a business around filling it. How did you determine IT recruitment was the correct niche for your business?

I started my career in IT Recruitment, but IT itself wasn’t the niche. The niche was actually working with specific skill sets within IT that the market was experiencing a shortage of.

Working with companies, understanding their pain points and looking at where the market was heading, I was able to leverage the skills I’d built to deliver a service the market and my clients needed.

You also spoke of your big learning curve when starting a business. Can you share some of them here?

When I started a business, I was good at IT Recruitment. However, when I started working for myself I quickly realized there was a lot more that was happening in the background. I had to start working on building a brand/image in the market which people knew me/Torii for. There were so many new things I had to master and make decisions on that I had taken for granted — small things like the software I used, the accounting system we used, and how I sent an invoice. I basically had to learn very quickly, and fill all the gaps to be able to run a business.

Joe Woodham (second left) at WeTeachMe’s Masters Series

The post Tech Recruiting, Finding Niches & Starting Your Own Business appeared first on SitePoint.


Source: Sitepoint

Advanced CSS Theming with Custom Properties and JavaScript

Throughout this tutorial on CSS theming, we’ll be using CSS custom properties (also known as CSS variables) to implement dynamic themes for a simple HTML page. We’ll create dark and light example themes, then write JavaScript to switch between the two when the user clicks a button.

Just like in typical programming languages, variables are used to hold or store values. In CSS, they’re typically used to store colors, font names, font sizes, length units, etc. They can then be referenced and reused in multiple places in the stylesheet. Most developers refer to “CSS variables”, but the official name is custom properties.

CSS custom properties make it possible to modify variables that can be referenced throughout the stylesheet. Previously, this was only possible with CSS preprocessors such as Sass.

Understanding :root and var()

Before creating our dynamic theming example, let’s understand the essential basics of custom properties.

A custom property is a property whose name starts with two hyphens (--), like --foo. They define variables that can be referenced using var(). Let’s consider this example:

:root {
  --bg-color: #000;
  --text-color: #fff;
}

Defining custom properties within the :root selector means they are available in the global document space to all elements. :root is a CSS pseudo-class which matches the root element of the document — the <html> element. It’s similar to the html selector, but with higher specificity.

You can access the value of a :root custom property anywhere in the document:

div {
  color: var(--text-color);
  background-color: var(--bg-color);
}

You can also include a fallback value with your CSS variable. For example:

div {
  color: var(--text-color, #000);
  background-color: var(--bg-color, #fff);
}

If a custom property isn’t defined, its fallback value is used instead.

Defining custom properties inside a CSS selector other than the :root or html selector makes the variable available to matching elements and their children.
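For instance, in this hypothetical snippet (not from the article), a variable declared on a .card selector is visible only to .card elements and their descendants:

```css
.card {
  --card-accent: #e91e63;           /* scoped: only visible inside .card */
}

.card h2 {
  color: var(--card-accent);        /* resolves: h2 is a descendant of .card */
}

footer {
  color: var(--card-accent, #333);  /* not inside .card, so the fallback #333 is used */
}
```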

CSS Custom Properties vs Preprocessor Variables

CSS pre-processors such as Sass are often used to aid front-end web development. Among the other useful features of preprocessors are variables. But what’s the difference between Sass variables and CSS custom properties?

  • CSS custom properties are natively parsed in modern browsers. Preprocessor variables require compilation into a standard CSS file and all variables are converted to values.
  • Custom properties can be accessed and modified by JavaScript. Preprocessor variables are compiled once and only their final value is available on the client.

Writing a Simple HTML Page

Let’s start by creating a folder for our project:

$ mkdir css-variables-theming

Next, add an index.html inside the project’s folder:

$ cd css-variables-theming
$ touch index.html

And add the following content:

<nav class="navbar">Title</nav>
<div class="container">
  <div>
    <input type="button" value="Light/Dark" id="toggle-theme" />
  </div>
    <h2 class="title">What is Lorem Ipsum?</h2>
    <p class="content">Lorem Ipsum is simply dummy text of the printing and typesetting industry...</p>
</div>
<footer>
  Copyright 2018
</footer>

We are adding a navigation bar using a <nav> tag, a footer, and a container <div> that contains a button (that will be used to switch between light and dark themes) and some dummy Lorem Ipsum text.
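Before wiring this up to real styles, it helps to see the shape of the theme-switching logic the button will trigger. The sketch below is an illustrative assumption, not the article’s final code: the theme values mirror the earlier :root example, and applyTheme writes each custom property onto a style-like object (in the browser, that object would be document.documentElement.style):

```javascript
// Illustrative theme definitions; the dark values mirror the :root example above.
const themes = {
  light: { '--bg-color': '#fff', '--text-color': '#000' },
  dark:  { '--bg-color': '#000', '--text-color': '#fff' },
};

// Write each custom property of the chosen theme onto a style-like object.
// In the browser, pass document.documentElement.style as `style`.
function applyTheme(style, name) {
  for (const [prop, value] of Object.entries(themes[name])) {
    style.setProperty(prop, value);
  }
  return name;
}

// Flip between the two theme names on each button click.
function nextTheme(current) {
  return current === 'dark' ? 'light' : 'dark';
}
```

In the page itself, one would attach this to the button along the lines of `document.getElementById('toggle-theme').addEventListener('click', ...)`, keeping the current theme name in a variable.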

The post Advanced CSS Theming with Custom Properties and JavaScript appeared first on SitePoint.


Source: Sitepoint

A Brief Guide About Competitive Analysis


Mayur Kshirsagar

2018-08-30T14:00:57+02:00
2018-08-30T12:00:56+00:00

In this article, I will introduce the subject of competitive analysis, which is basically a method to determine how well your competitors are performing. My aim is to introduce the subject to those of you who are new to the concept. It should be useful if you are new to product design, UX, interaction or digital design, or if you have experience in these fields but have not performed a competitive analysis before.

No prior knowledge of the topic is needed because I’ll be explaining what the term means and how to perform a competitive analysis as we go. I am assuming some basic knowledge of the design process and UX research, but I’ll provide plenty of practical examples and reference links to help with any terms and concepts you might be unfamiliar with.

Note: If you are a beginner in UX and interaction design, it would be good to know the basics of the design process and what UX research is (and the methods used for it) before diving into the article’s main topic. Please read the next section carefully because I’ve added reference links to help you get started.

Recommended reading: Standing Out From The Crowd: Improving Your Mobile App With Competitive Analysis

Competitive Analysis, Service Design Cycle, Five-Stages Design Process

If you are a UX designer, then you might be aware of the service design cycle. This cycle contains four stages: discover, explore, test and listen. Each one of these stages has multiple research methods, and competitive analysis is part of the exploration. Susan Farrell has very helpfully distinguished different UX research methods and activities that can be performed for your project. (You can check this detailed segregation in her “UX Research Cheat Sheet”.)

The image below shows the four steps and the most commonly used methods in these steps.



(Large preview)

If you are new to this concept, you might first ask, “What is service design?” Shahrzad Samadzadeh explains it very well in her article, “So, Like, What Is Service Design?.”

Note: You can also learn more about service design in Sarah Gibbons’s article, “Service Design 101.”

Often, UX designers follow the five-stages design process in their projects:

  1. empathize,
  2. define,
  3. ideate,
  4. prototype,
  5. test.

The five-stages design process.
The five-stages design process. (Large preview)

Please don’t confuse the five-stages design process with the service design cycle. Basically, they serve the same purpose in the design thinking process, but are explained in different styles. Here is a brief explanation of what these five stages contain:

  • Empathize
    This stage involves gaining a clear understanding of the problem you are trying to solve from the user’s point of view.
  • Define
    This stage involves defining the correct statement for the problem you are trying to solve, using the knowledge you gained in the first stage.
  • Ideate
    In this stage, you can generate different solution ideas for the problem.
  • Prototype
    Basically, a prototype is an attempt to give your solution some form so that it can be explained to others. For digital products, a prototype could be a wireframe set created using pen and paper or using a tool such as Balsamiq or Sketch, or it could be a visual design prototype created using a tool such as Sketch, Figma, Adobe XD or InVision.
  • Test
    Testing involves validating and evaluating all of your solutions with the users.

You can perform UX research at any stage. Many articles and books are available for you to learn more about this design process. “Five Stages in the Design Thinking Process” by Rikke Dam and Teo Siang is one of my favorite articles on the topic.


The most frequent methods used by UX professionals during the exploration stage of the design life cycle
The most frequent methods used by UX professionals during the exploration stage of the design life cycle. (Nielsen Norman Group, “User Experience Careers” survey report) (Large preview)

According to Nielsen Norman Group’s “User Experience Careers” survey report, 61% of UX professionals prefer to do the competitive analysis for their projects. But what exactly is competitive analysis? In simple language, competitive analysis is nothing but a method to determine how your competitors are performing, what they are offering and how well they are doing it.

Sometimes, competitive analysis is referred to as competitive usability evaluation.

Why Should You Do A Competitive Analysis?

There are many reasons to do a competitive analysis, but I think the most important reason is that it helps us to understand the rights and wrongs of our own product or service.

Using competitive analysis, you can make decisions based on knowledge of what is currently working well for your users, rather than based on guesses or intuition. In doing competitive analysis, you can also identify risks in your product or service and use those insights to add value to it.

Recently, I was working on a project in which I did a competitive analysis of a feature (collaborative meeting note-taking) that a client wanted to introduce in their web app. Note-taking is not exactly a new or highly innovative thing, so the biggest challenge I was facing was to make this functionality simpler and easier to handle, because the product I was working on was in the very early stages of development. The feature, in a nutshell, was to create a simple text document where some interactive action items could be added.

Because a ton of apps are out there that allow you to create simple text documents, I decided to do a competitive analysis for this functionality. (I’ll explain this process in more detail later in the section “Five Easy Steps to Do a Competitive Analysis”.)

How To Find The Right Competitors?

Basically, there are two types of competitors: direct and indirect. As a UX designer, your role is to study the designs of these competitors.

Jaime Levy gives very good definitions of direct and indirect competitors in her book UX Strategy. You can learn more about competitive analysis (and types of competitors) in chapter 4 of the book, “Conducting Competitive Research”.


Types of competitors
Types of competitors. (Large preview)

Direct competitors are the ones who offer the same, or a very similar, set of features to your current or future customers, which means they are solving a similar problem to the one you are trying to solve, for a customer base that you are targeting as well.

Indirect competitors are the ones who offer a similar set of features but to a different customer segment; or, they target your exact customer base without offering the exact same set of features, which means indirect competitors are solving the same problem but for a different customer base, or are solving the same problem but offer a different solution.

You can search for these types of competitors online (by doing a simple web search), or you can directly ask your current and potential customers what they are using already. You can also look for your direct and indirect competitors on websites such as Crunchbase and Product Hunt, and you can search for them in the Google Play and the iOS App Store.

Five Easy Steps To Do A Competitive Analysis

You can perform a competitive analysis for your existing or new product using the following five-step process.


5 steps to do a competitive analysis
5 steps to do a competitive analysis. (Large preview)

1. Define And Understand The Goals

Defining and understanding the goal is an integral part of any UX research process. You must define an accurate goal (or set of goals) for your research; otherwise, there is a chance you’ll get the wrong outcome.

Draft all of your goals right before starting your process. When defining your goals, consider the following questions: Why are you doing this competitive analysis? What kind of outcome do you expect? Will this analysis affect UX decisions?

Remember: When setting up goals for any kind of UX research, be as specific as possible.

I mentioned earlier that I recently performed a competitive analysis for a collaborative meeting note-taking feature, to be introduced in the app that I was developing for a client. The goals for my research were very general because innumerable apps all provide this type of functionality, and the product I was working on was in the very early stages of development.

Even though your research goals might be simple, make them as specific as possible, and write them all down. Writing down your goals will help you stay on the right track.

The goals for my analysis were more like questions for which I was trying to find the answers. Here is the list of goals I set for this research:

  • Which apps do users prefer for note-taking? And why do they prefer them?
    Goal: To find out the user’s behavior with these apps, their preferences and their comfort zone.
  • What is the working mechanism of these apps?
Goal: To find out how competitors’ apps work, so that we can identify their pros and cons.
  • What are the “star” features of these apps?
    Goal: To identify functionalities that we were trying to introduce as well, to see whether they already exist and, if they exist, how exactly they were implemented.
  • How comfortable does a user feel when using these apps?
    Goal: To identify user loyalty and engagement in the apps of our competitors.
  • How does collaborative editing work in these competitive apps?
    Goal: To identify how collaborative-editing functionality works and to study its technical aspects.
  • What is the visual structure and user interface of these apps?
    Goal: To check the visual look and feel of the apps (user interface and interaction).

2. Find The Right Competitors

After setting the goals, go on a search and make a list of both direct and indirect competitors. It’s not necessary to analyze all of the competitors you find. The number is completely up to you. Some people suggest analyzing at least two to four competitors, while others suggest five to ten or more.

Finding the right competitors for my research wasn’t a hard task because I already knew many apps that provided similar features, but I still did a quick search on Google, and the results were a bit surprising — surprising because most of the apps I knew turned out to be more like indirect competitors to the app I was working on; and later, after a bit more searching, I also found the apps that were our direct competitors.

Putting each competitor in the right list is a very important part of competitive analysis because the features and functionality in your competitors’ apps are based on exactly what users of those apps want. Let’s assume you put one indirect competitor, XYZ, under the “direct competitors” list and start doing your analysis. While doing the research, you might find some impressive feature in XYZ’s app and decide to add a similar feature in your own app; then, later it turns out that the feature you added is not useful for the users you are targeting. You might end up wasting a lot of energy, time and money building something that is not at all useful. So, be careful when sorting your competitors.

For my research, the competitors were as follows:

  • Direct competitors: Quip, Cisco Spark Meeting Notes, Workboard, Lucid Meeting, Less Meeting, MeetingSense, Minute-it, etc.
    • All of the apps above provide the same type of functionality that we were trying to introduce, for almost the same type of user base.
  • Indirect competitors: Evernote, Google Keep, Google Docs, Microsoft Word, Microsoft OneNote and other traditional note-taking apps, as well as pen-and-paper note-taking methods.
    • The user base for all of the above largely overlaps with the user base we were targeting; most of the users we were targeting used these apps simply because they were unaware of more convenient ways to take meeting notes.

3. Make A Competitive Analysis Matrix

A competitive analysis matrix is not complex, just a simple spreadsheet. You can use Microsoft Excel, Google Sheets, Apple Numbers or any other tool you are comfortable with.

First, divide all competitors you’ve found into two groups (direct and indirect) and put them in a spreadsheet. Jaime Levy suggests making the following columns:

  1. competitor’s name,
  2. URL,
  3. login credentials,
  4. purpose,
  5. year founded.

Example of competitive analysis matrix spreadsheet from UX Strategy, Jaime Levy’s book.
Example of competitive analysis matrix spreadsheet from UX Strategy, Jaime Levy’s book. (Large preview)

I would recommend digging a bit deeper and adding a few more columns, such as “unique features”, “pros and cons”, etc. This will help you summarize your analysis. It’s not necessary to set up your columns exactly as mentioned above; you can adapt the columns to your own research goals and needs.
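If you’d rather bootstrap the matrix programmatically and then import it into your spreadsheet tool, a few lines of code can generate the CSV skeleton. This is a minimal sketch; the column set and competitor entries below are illustrative, not a prescribed format.

```javascript
// Build a competitive analysis matrix as CSV text (illustrative data).
const columns = ['Competitor name', 'URL', 'Type', 'Features/comments'];

const competitors = [
  { name: 'Quip', url: 'https://quip.com', type: 'direct', notes: 'Collaborative notes' },
  { name: 'Evernote', url: 'https://evernote.com', type: 'indirect', notes: 'Traditional note-taking' },
];

// Join the header row and one row per competitor into CSV text.
function toCsv(rows) {
  const header = columns.join(',');
  const lines = rows.map(c => [c.name, c.url, c.type, c.notes].join(','));
  return [header, ...lines].join('\n');
}

const csv = toCsv(competitors);
console.log(csv);
```

You can paste the resulting CSV straight into Google Sheets or Excel and keep editing it there.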

For my analysis, I created only four columns. My competitive analysis matrix looked as follows:

  • Competitor name: In this column, I put the names of all of the competitors.
  • URL: These are website links or app download links for these competitors.
  • Features/comments: In this column, I put all of my comments, some “star” features I needed to focus on, and the pros and cons of the competitor. I color-coded the cells so that later I (or anyone viewing the matrix) could easily identify the difference between them. For example, I used light yellow for features, light purple for comments, green for pros and red for cons.
  • Screenshots/video links: In this column, I put all of the screenshots and videos related to the features and comments mentioned in the third column. This way, it became very easy and quick to understand what a particular comment or feature was all about.


(Large preview)

4. Write A Summary And An Analysis

Once you are done with the analysis matrix spreadsheet, move on and create a summary of your findings. Be as specific as possible, and try to answer all of the questions you raised when setting your goals or during the overall process.

This will help you and your team members and stakeholders make the right design and UX decisions. This summary will also help you find new design and UX opportunities in the product you’re building.

In writing the summary and the presentation for the competitive analysis that I did for this collaborative note-taking app, the competitive analysis matrix helped me a lot. I drafted a document with all of the high-level takeaways from this analysis and answered all of the questions that were set as goals. For the presentation, I shared the document with the client, which helped both the client and me to finalize the features, the flows and the end requirements for the product.

5. Presentation

The last step of your competitive analysis is the presentation. It’s not a typical slideshow presentation — rather, just share all of the data and information you collected throughout the process with your teammates, stakeholders and/or clients.

Getting feedback from everywhere you can and being open to this feedback is a very important part of the designer’s workflow. So, share all of your findings with your teammates, stakeholders and clients, and ask for their opinion. You might find some missing points in your analysis or discover something new and exciting from someone’s feedback.

Conclusion

We live in a data-driven world, and we should build products, services and apps based on data, rather than our intuition (or guesswork).

As UX designers, we should go out there and collect as much data as possible before building a real product. This data will help us to create a solid product that users will want to use, rather than a product we want or imagine. These kinds of products are more likely to succeed in the market. Competitive analysis is one of the ways to get this data and to create a user-friendly product.

Finally, no matter what kind of product you are building or research you are conducting, always try to put yourself in the users’ shoes every now and then. This way, you will be able to identify the users’ struggles and ultimately deliver a better solution.

I hope this article has helped you plan and make your first competitive analysis for your next project!

Further Reading

If you want to become a better UX, interaction, visual (UI) or product designer, there are a lot of sources from which you can learn — articles, books, online courses. I often check the following few: Smashing Magazine, InVision blog, Interaction Design Foundation, NN Group and UX Mastery. These websites have a very good collection of articles on the topics of UI and UX design and UX research.

Here are some additional resources:



Source: Smashing Magazine

WordPress Management Made Easy with the Plesk WordPress Toolkit

This article was originally published on Alibaba Cloud. Thank you for supporting the partners who make SitePoint possible.

WordPress users of all skill levels are looking for smart and intuitive ways to eliminate time-consuming daily routines when managing and securing their WordPress websites. In this session, we will show you the Plesk WordPress Toolkit – a tool that is simplifying the lives of web professionals on Alibaba Cloud. WordPress Toolkit takes care of deploying, securing, cloning, and updating WordPress installations, letting you spend more time on development and content creation.

The post WordPress Management Made Easy with the Plesk WordPress Toolkit appeared first on SitePoint.


Source: Sitepoint

Progressively Enhanced CSS Layouts: Floats to Flexbox & Grid

It can be difficult to achieve complex yet flexible and responsive grid layouts. Various techniques have evolved over the years but most, such as faux columns, were hacks rather than robust design options.

Most of these hacks were built on top of the CSS float property. When the flexbox layout module was introduced to the list of display property options, a new world of options became possible. Now you can not only define the direction the container is going to stack the items but also wrap, align (items and lines), order, shrink, etc. them in a container.

With all that power in their hands, developers started to create their own combinations of rules for all sorts of layouts. Flexibility reigned. However, flexbox was designed to deal with one-dimensional layouts: either a row or a column. CSS Grid Layout, in contrast, permitted two-dimensional row and column layouts.

Progressive Enhancement vs Graceful Degradation

It’s difficult to create a website that supports every user’s browser. Two options are commonly used — “graceful degradation” and “progressive enhancement”.

Graceful degradation ensures a website continues to function even when something breaks. For example, float: right may fail if an element is too big for the screen but it wraps to the next empty space so the block remains usable.

Progressive enhancement takes the opposite approach. The page starts with minimum functionality and features are added when they’re supported. The example above could use a CSS media query to verify the screen is a minimum width before allowing an element to float.
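That enhancement could be sketched in CSS like this (the .sidebar class name and the breakpoint are hypothetical):

```css
/* Baseline for all browsers and screen sizes: a full-width block. */
.sidebar {
  width: 100%;
}

/* Enhancement: only float the element once the viewport is wide enough. */
@media (min-width: 40em) {
  .sidebar {
    float: right;
    width: 30%;
  }
}
```

Browsers that don’t understand the media query simply keep the full-width baseline.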

When it comes to grid layouts, each browser determines the appearance of its components. In this article, you’re going to see, through some real samples, how to evolve web content from an old strategy to a new one: specifically, how to progressively enhance a float-based layout to flexbox, and then to CSS Grid.

Your Old Float Layout for a Page

Take a look at the following HTML page:

<main>
  <article>
    article content
  </article>

  <aside>
    aside content
  </aside>
</main>

It’s a small, common example of a grid disposition you can have in a webpage: two elements (an article and an aside) sharing the same container (main).

A float example

The following CSS can be used in all the examples we’ll create to set the body style:

body {
  font-family: Segoe UI;
  font-style: normal;
  font-weight: 400;
  font-size: 1rem;
}

Plus, the CSS snippet for each of our divs, enabling the floating effect:

main {
  width: 100%;
}

main, article, aside {
  border: 1px solid #fcddd1;
  padding: 5px;
  float: left;
}

article {
  background-color: #fff4dd;
  width: 74%;
}

aside {
  width: 24%;
}

You can see the example in action here:

See the Pen Float Layout Example by SitePoint (@SitePoint) on CodePen.

You can float as many elements as you want, one after another, in such a way that all of them fit within the available width. However, this strategy has some downsides:

  • it’s hard to manage the heights of the container elements
  • vertical centering, if needed, can be painfully hard to manage
  • depending on the content you have inside the elements (along with each inner container’s CSS properties), the browser may wrap elements to the next free line and break the layout.

One solution is the display: table layout:

main {
  display: table;
}

main, article, aside {
  display: table-cell;
}

However, using display: table becomes less convenient as your layouts get more complex, and it can get messy to work with in responsive layouts. display: table works best with small sections of a page, rather than major layout sections.

Flexbox Approach

The flexible box module, better known as flexbox, is a more recent layout model for distributing space and powerfully aligning the items of a container (the box) along one dimension. Its one-dimensional nature, though, doesn’t prevent you from designing multidimensional layouts (rows and columns); flexbox just may not stack rows as reliably as a dedicated two-dimensional model.

While the float approach is very popular and broadly adopted by popular grid frameworks, flexbox presents a series of benefits over float:

  • vertical alignment and height equality for the container’s items on each wrapped row
  • the container (box) can increase/decrease based on the available space, and you determine whether it is a column or a row
  • source independence — meaning that the order of the items doesn’t matter, they just need to be inside the box.

To initiate a flexbox formatting strategy, all you need to do is set the CSS display property with a flex value:

main {
  width: 100%;
  display: flex;
}

main, article, aside {
  border: 1px solid #fcddd1;
  padding: 5px;
  float: left; /* fallback for browsers without flexbox; ignored on flex items */
}

article {
  background-color: #fff4dd;
  width: 74%;
}

aside {
  width: 24%;
}

The following image shows the result:

A flexbox example

Here’s a live example:

See the Pen Flexbox Layout Example by SitePoint (@SitePoint) on CodePen.

Progressing to CSS Grid Layout

The CSS Grid layout closely follows flexbox, the big difference being that it works in two dimensions. That is, if you need to design a layout that deals with both rows and columns, grid layout will most likely suit you better. It has the same aligning and space-distribution features as flexbox, but now acting directly on the two dimensions of your container (box). In comparison to the float property, it has even more advantages: easy element placement, alignment, row/column/cell control, etc.

Working with CSS Grid is as simple as changing the display property of your container element to grid. Inside the container, you can also create columns and rows with divs, for example. Let’s consider an example of an HTML page with four inner container divs.
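The markup for that page isn’t shown in the text; a minimal sketch, assuming container and cell class names consistent with the CSS used in this section, could be:

```html
<div class="container">
  <div class="div1">div1 content</div>
  <div class="div2">div2 content</div>
  <div class="div3">div3 content</div>
  <div class="div4">div4 content</div>
</div>
```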

Regarding the CSS definitions, let’s start with the grid container div:

div.container {
  display: grid;
  grid-template-columns: 24% 75%;
  grid-template-rows: 200px 300px;
  grid-column-gap: 15px;
  grid-row-gap: 15px;
}

The property grid-template-columns defines the same configuration you had before: two grid columns occupying 24% and 75% of the whole container width, respectively. grid-template-rows does the same, applying 200px and 300px as the heights of the first and second rows, respectively.

Use the properties grid-column-gap and grid-row-gap to allocate space around the grid elements.
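As a side note, the grid-*-gap properties were later renamed in the CSS specification, and modern browsers also accept the shorter gap, row-gap and column-gap names, so the same spacing can be written as:

```css
div.container {
  display: grid;
  grid-template-columns: 24% 75%;
  grid-template-rows: 200px 300px;
  gap: 15px; /* shorthand for row-gap and column-gap */
}
```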

In regards to applying grid alignment properties to specific cells, let’s take a look:

.div1 {
  background-color: #fff4dd;
  align-self: end;
  justify-self: end;
}

.div4 {
  align-self: center;
}

For the div of class div1, you’re aligning and justifying it at the end of its grid cell. The properties align-self and justify-self apply to flex items as well as grid items; here they act on the grid cells. div4 was given only centered vertical alignment, so you can see the difference between the two.

Here’s how the elements are positioned now:

Grid divs

You can check out the final grid layout example here:

See the Pen Grid Layout Example by SitePoint (@SitePoint) on CodePen.

Most browsers also offer native support for grid layout inspections, which is great for seeing how it handles the grid mechanism internally. Let’s try our grid example on Firefox with its Grid Inspector, available via Firefox DevTools. In order to open it, right-click the container element and, at the CSS pane’s Rules view, find and click the grid icon right after the display: grid:

The grid property seen through Firefox's developer console

This will toggle the Grid highlighter. You can also control more display settings at the CSS pane’s Layout view like the line numbers or the area names exhibition:

Grid sections highlighted in Firefox

The post Progressively Enhanced CSS Layouts: Floats to Flexbox & Grid appeared first on SitePoint.


Source: Sitepoint

Building A Room Detector For IoT Devices On Mac OS

Building A Room Detector For IoT Devices On Mac OS

Building A Room Detector For IoT Devices On Mac OS

Alvin Wan

2018-08-29T14:20:53+02:00
2018-08-29T13:45:12+00:00

Knowing which room you’re in enables various IoT applications — from turning on the light to changing TV channels. So, how can we detect the moment you and your phone are in the kitchen, or bedroom, or living room? With today’s commodity hardware, there are a myriad of possibilities:

One solution is to equip each room with a Bluetooth device. Once your phone is within range of a Bluetooth device, your phone will know which room it is in, based on that device. However, maintaining an array of Bluetooth devices is significant overhead, from replacing batteries to replacing dysfunctional devices. Additionally, proximity to the Bluetooth device is not always the answer: if you’re in the living room, by the wall shared with the kitchen, your kitchen appliances should not start churning out food.

Another, albeit impractical, solution is to use GPS. However, keep in mind that GPS works poorly indoors, where the multitude of walls, other signals, and other obstacles wreak havoc on its precision.

Our approach instead is to leverage all in-range WiFi networks — even the ones your phone is not connected to. Here is how: consider the strength of WiFi A in the kitchen; say it is 5. Since there is a wall between the kitchen and the bedroom, we can reasonably expect the strength of WiFi A in the bedroom to differ; say it is 2. We can exploit this difference to predict which room we’re in. What’s more: WiFi network B from our neighbor can only be detected from the living room but is effectively invisible from the kitchen. That makes prediction even easier. In sum, the list of all in-range WiFi gives us plentiful information.

This method has the distinct advantages of:

  1. not requiring more hardware;
  2. relying on more stable signals like WiFi;
  3. working well where other techniques such as GPS are weak.

The more walls the better, as the more disparate the WiFi network strengths, the easier the rooms are to classify. You will build a simple desktop app that collects data, learns from the data, and predicts which room you’re in at any given time.
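To make the idea concrete before any code is written, here is a minimal sketch of the kind of prediction we’re after: each room is summarized by its typical signal strength per WiFi network, and a new observation is assigned to the room with the smallest sum of squared differences. The room profiles and numbers below are made up for illustration; the actual model is trained from recorded samples later in the tutorial.

```javascript
// Typical signal strength per known network, keyed by room (illustrative values).
const roomProfiles = {
  kitchen:    { wifiA: 5, wifiB: 0 },
  livingRoom: { wifiA: 3, wifiB: 2 },
  bedroom:    { wifiA: 2, wifiB: 0 },
};

// Predict the room whose profile yields the smallest squared error
// against the observed strengths (missing networks count as strength 0).
function predictRoom(observed) {
  let best = null;
  let bestError = Infinity;
  for (const [room, profile] of Object.entries(roomProfiles)) {
    let error = 0;
    for (const network of Object.keys(profile)) {
      const diff = (observed[network] || 0) - profile[network];
      error += diff * diff;
    }
    if (error < bestError) {
      bestError = error;
      best = room;
    }
  }
  return best;
}

console.log(predictRoom({ wifiA: 5, wifiB: 0 })); // strong WiFi A, no WiFi B: kitchen
```

Because WiFi B is invisible from the kitchen, any observation that includes it immediately pushes the prediction toward the living room, which is exactly the effect described above.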

Further Reading on SmashingMag:

Prerequisites

For this tutorial, you will need a Mac running OS X. Whereas the code can apply to any platform, we will only provide dependency installation instructions for Mac.

Step 0: Setup Work Environment

Your desktop app will be written in NodeJS. However, to leverage more efficient computational libraries like numpy, the training and prediction code will be written in Python. To start, we will setup your environments and install dependencies. Create a new directory to house your project.

mkdir ~/riot

Navigate into the directory.

cd ~/riot

Use pip to install Python’s default virtual environment manager.

sudo pip install virtualenv

Create a Python3.6 virtual environment named riot.

virtualenv riot --python=python3.6

Activate the virtual environment.

source riot/bin/activate

Your prompt is now preceded by (riot). This indicates we have successfully entered the virtual environment. Install the following packages using pip:

  • numpy: an efficient linear algebra library
  • scipy: a scientific computing library that implements popular machine learning models

pip install numpy==1.14.3 scipy==1.1.0

With the working directory setup, we will start with a desktop app that records all WiFi networks in-range. These recordings will constitute training data for your machine learning model. Once we have data on hand, you will write a least squares classifier, trained on the WiFi signals collected earlier. Finally, we will use the least squares model to predict the room you’re in, based on the WiFi networks in range.

Step 1: Initial Desktop Application

In this step, we will create a new desktop application using Electron JS. To begin, we will install the Node package manager npm and a download utility, wget.

brew install npm wget

To begin, we will create a new Node project.

npm init

This prompts you for the package name and then the version number. Hit ENTER to accept the default name of riot and default version of 1.0.0.

package name: (riot)
version: (1.0.0)

This prompts you for a project description. Add any non-empty description you would like. Below, the description is room detector

description: room detector

This prompts you for the entry point, or the main file to run the project from. Enter app.js.

entry point: (index.js) app.js

This prompts you for the test command and git repository. Hit ENTER to skip these fields for now.

test command:
git repository:

This prompts you for keywords and author. Fill in any values you would like. Below, we use iot, wifi for keywords and use John Doe for the author.

keywords: iot,wifi
author: John Doe

This prompts you for the license. Hit ENTER to accept the default value of ISC.

license: (ISC)

At this point, npm will prompt you with a summary of information so far. Your output should be similar to the following.

{
  "name": "riot",
  "version": "1.0.0",
  "description": "room detector",
  "main": "app.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "keywords": [
    "iot",
    "wifi"
  ],
  "author": "John Doe",
  "license": "ISC"
}

Hit ENTER to accept. npm then produces a package.json. List all files to double-check.

ls

This will output the only file in this directory, along with the virtual environment folder.

package.json
riot

Install NodeJS dependencies for our project.

npm install electron --global  # makes electron binary accessible globally
npm install node-wifi --save

Start with main.js from Electron Quick Start by downloading the file with the command below. The -O argument saves main.js as app.js.

wget https://raw.githubusercontent.com/electron/electron-quick-start/master/main.js -O app.js

Open app.js in nano or your favorite text editor.

nano app.js

On line 12, change index.html to static/index.html, as we will create a directory static to contain all HTML templates.

function createWindow () {
  // Create the browser window.
  win = new BrowserWindow({width: 1200, height: 800})

  // and load the index.html of the app.
  win.loadFile('static/index.html')

  // Open the DevTools.

Save your changes and exit the editor. Your file should match the source code of the app.js file. Now create a new directory to house our HTML templates.

mkdir static

Download a stylesheet created for this project.

wget https://raw.githubusercontent.com/alvinwan/riot/master/static/style.css?token=AB-ObfDtD46ANlqrObDanckTQJ2Q1Pyuks5bf79PwA%3D%3D -O static/style.css

Open static/index.html in nano or your favorite text editor. Start with the standard HTML structure.

<!DOCTYPE html>
  <html>
    <head>
      <meta charset="UTF-8">
      <title>Riot | Room Detector</title>
    </head>
    <body>
      <main>
      </main>
    </body>
  </html>

Right after the title, link the Montserrat font linked by Google Fonts and stylesheet.

<title>Riot | Room Detector</title>
  <!-- start new code -->
  <link href="https://fonts.googleapis.com/css?family=Montserrat:400,700" rel="stylesheet">
  <link href="style.css" rel="stylesheet">
  <!-- end new code -->
</head>

Between the main tags, add a slot for the predicted room name.

<main>
  <!-- start new code -->
  <p class="text">I believe you’re in the</p>
  <h1 class="title" id="predicted-room-name">(I dunno)</h1>
  <!-- end new code -->
</main>

Your script should now match the following exactly. Exit the editor.

<!DOCTYPE html>
  <html>
    <head>
      <meta charset="UTF-8">
      <title>Riot | Room Detector</title>
      <link href="https://fonts.googleapis.com/css?family=Montserrat:400,700" rel="stylesheet">
      <link href="style.css" rel="stylesheet">
    </head>
    <body>
      <main>
        <p class="text">I believe you’re in the</p>
        <h1 class="title" id="predicted-room-name">(I dunno)</h1>
      </main>
    </body>
  </html>

Now, amend the package file to contain a start command.

nano package.json

Right after line 7, add a start command that’s aliased to electron .. Make sure to add a comma to the end of the previous line.

"scripts": {
  "test": "echo \"Error: no test specified\" && exit 1",
  "start": "electron ."
},

Save and exit. You are now ready to launch your desktop app in Electron JS. Use npm to launch your application.

npm start

Your desktop application should match the following.


home page with button
Home page with “Add New Room” button available (Large preview)

This completes your starting desktop app. To exit, navigate back to your terminal and hit CTRL+C. In the next step, we will record wifi networks and make the recording utility accessible through the desktop application UI.

Step 2: Record WiFi Networks

In this step, you will write a NodeJS script that records the strength and frequency of all in-range wifi networks. Create a directory for your scripts.

mkdir scripts

Open scripts/observe.js in nano or your favorite text editor.

nano scripts/observe.js

Import a NodeJS wifi utility and the filesystem object.

var wifi = require('node-wifi');
var fs = require('fs');

Define a record function that accepts a completion handler.

/**
 * Uses a recursive function for repeated scans, since scans are asynchronous.
 */
function record(n, completion, hook) {
}

Inside the new function, initialize the wifi utility. Set iface to null to initialize to a random wifi interface, as this value is currently irrelevant.

function record(n, completion, hook) {
    wifi.init({
        iface : null
    });
}

Define an array to contain your samples. Samples are training data we will use for our model. The samples in this particular tutorial are lists of in-range wifi networks and their associated strengths, frequencies, names etc.

function record(n, completion, hook) {
    ...
    samples = []
}

Define a recursive function startScan, which will asynchronously initiate wifi scans. Upon completion, the asynchronous wifi scan will then recursively invoke startScan.

function record(n, completion, hook) {
  ...
  function startScan(i) {
    wifi.scan(function(err, networks) {
    });
  }
  startScan(n);
}

In the wifi.scan callback, check for errors or empty lists of networks and restart the scan if so.

wifi.scan(function(err, networks) {
  if (err || networks.length == 0) {
    startScan(i);
    return
  }
});

Add the recursive function’s base case, which invokes the completion handler.

wifi.scan(function(err, networks) {
  ...
  if (i <= 0) {
    return completion({samples: samples});
  }
});

Output a progress update, append to the list of samples, and make the recursive call.

wifi.scan(function(err, networks) {
  ...
  hook(n-i+1, networks);
  samples.push(networks);
  startScan(i-1);
});

At the end of your file, invoke the record function with a callback that saves samples to a file on disk.

function record(n, completion, hook) {
  ...
}

function cli() {
  record(1, function(data) {
    fs.writeFile('samples.json', JSON.stringify(data), 'utf8', function() {});
  }, function(i, networks) {
    console.log(" * [INFO] Collected sample " + i + " with " + networks.length + " networks");
  })
}

cli();

Double check that your file matches the following:

var wifi = require('node-wifi');
var fs = require('fs');

/**
 * Uses a recursive function for repeated scans, since scans are asynchronous.
 */
function record(n, completion, hook) {
  wifi.init({
      iface : null // network interface, choose a random wifi interface if set to null
  });

  samples = []
  function startScan(i) {
    wifi.scan(function(err, networks) {
        if (err || networks.length == 0) {
          startScan(i);
          return
        }
        if (i <= 0) {
          return completion({samples: samples});
        }
        hook(n-i+1, networks);
        samples.push(networks);
        startScan(i-1);
    });
  }

  startScan(n);
}

function cli() {
    record(1, function(data) {
        fs.writeFile('samples.json', JSON.stringify(data), 'utf8', function() {});
    }, function(i, networks) {
        console.log(" * [INFO] Collected sample " + i + " with " + networks.length + " networks");
    })
}

cli();

Save and exit. Run the script.

node scripts/observe.js

Your output will match the following, with variable numbers of networks.

 * [INFO] Collected sample 1 with 39 networks

Examine the samples that were just collected. Pipe to json_pp to pretty print the JSON and pipe to head to view the first 16 lines.

cat samples.json | json_pp | head -16

The below is example output for a 2.4 GHz network.

{
  "samples": [
    [
      {
        "mac": "64:0f:28:79:9a:29",
        "bssid": "64:0f:28:79:9a:29",
        "ssid": "SMASHINGMAGAZINEROCKS",
        "channel": 4,
        "frequency": 2427,
        "signal_level": "-91",
        "security": "WPA WPA2",
        "security_flags": [
          "(PSK/AES,TKIP/TKIP)",
          "(PSK/AES,TKIP/TKIP)"
        ]
      },
This concludes your NodeJS wifi-scanning script. This allows us to view all in-range WiFi networks. In the next step, you will make this script accessible from the desktop app.

Step 3: Connect Scan Script To Desktop App

In this step, you will first add a button to the desktop app to trigger the script with. Then, you will update the desktop app UI with the script’s progress.

Open static/index.html.

nano static/index.html

Insert the “Add” button, as shown below.

<h1 class="title" id="predicted-room-name">(I dunno)</h1>
        <!-- start new code -->
        <div class="buttons">
            <a href="add.html" class="button">Add new room</a>
        </div>
        <!-- end new code -->
    </main>

Save and exit. Open static/add.html.

nano static/add.html

Paste the following content.

<!DOCTYPE html>
  <html>
    <head>
      <meta charset="UTF-8">
      <title>Riot | Add New Room</title>
      <link href="https://fonts.googleapis.com/css?family=Montserrat:400,700" rel="stylesheet">
      <link href="style.css" rel="stylesheet">
    </head>
    <body>
      <main>
        <h1 class="title" id="add-title">0</h1>
        <p class="subtitle">of <span>20</span> samples needed. Feel free to move around the room.</p>
        <input type="text" id="add-room-name" class="text-field" placeholder="(room name)">
        <div class="buttons">
          <a href="#" id="start-recording" class="button">Start recording</a>
          <a href="index.html" class="button light">Cancel</a>
        </div>
        <p class="text" id="add-status" style="display:none"></p>
      </main>
      <script>
        require('../scripts/observe.js')
      </script>
    </body>
  </html>

Save and exit. Reopen scripts/observe.js.

nano scripts/observe.js

Beneath the cli function, define a new ui function.

function cli() {
    ...
}

// start new code
function ui() {
}
// end new code

cli();

Update the desktop app status to indicate the function has started running.

function ui() {
  var room_name = document.querySelector('#add-room-name').value;
  var status = document.querySelector('#add-status');
  var number = document.querySelector('#add-title');
  status.style.display = "block"
  status.innerHTML = "Listening for wifi..."
}

Partition the data into training and validation data sets.

function ui() {
  ...
  function completion(data) {
    train_data = {samples: data['samples'].slice(0, 15)}
    test_data = {samples: data['samples'].slice(15)}
    var train_json = JSON.stringify(train_data);
    var test_json = JSON.stringify(test_data);
  }
}

Still within the completion callback, write both datasets to disk.

function ui() {
  ...
  function completion(data) {
    ...
    fs.writeFile('data/' + room_name + '_train.json', train_json, 'utf8', function() {});
    fs.writeFile('data/' + room_name + '_test.json', test_json, 'utf8', function() {});
    console.log(" * [INFO] Done")
    status.innerHTML = "Done."
  }
}

Invoke record with the appropriate callbacks to record 20 samples and save the samples to disk.

function ui() {
  ...
  function completion(data) {
    ...
  }
  record(20, completion, function(i, networks) {
    number.innerHTML = i
    console.log(" * [INFO] Collected sample " + i + " with " + networks.length + " networks")
  })
}

Finally, invoke the cli and ui functions where appropriate. Start by deleting the cli(); call at the bottom of the file.

function ui() {
    ...
}

cli();  // remove me

Check if the document object is globally accessible. If not, the script is being run from the command line. In this case, invoke the cli function. If it is, the script is loaded from within the desktop app. In this case, bind the click listener to the ui function.

if (typeof document == 'undefined') {
    cli();
} else {
    document.querySelector('#start-recording').addEventListener('click', ui)
}

Save and exit. Create a directory to hold our data.

mkdir data

Launch the desktop app.

npm start

You will see the following homepage. Click on “Add new room”.



(Large preview)

You will see the following form. Type in a name for the room. Remember this name, as we will use this later on. Our example will be bedroom.


“Add New Room” page on load (Large preview)

Click “Start recording,” and you will see the following status “Listening for wifi…”.


“Add New Room” starting recording (Large Preview)

Once all 20 samples are recorded, your app will match the following. The status will read “Done.”



“Add New Room” page after recording is complete (Large preview)

Click on the misnamed “Cancel” to return to the homepage, which matches the following.


Homepage after recording is complete (Large preview)

We can now scan wifi networks from the desktop UI, which saves all recorded samples to files on disk. Next, we will train an out-of-the-box machine learning algorithm, least squares, on the data you have collected.

Step 4: Write Python Training Script

In this step, we will write a training script in Python. Create a directory for your training utilities.

mkdir model

Open model/train.py

nano model/train.py

At the top of your file, import the numpy computational library and scipy for its least squares model, along with the json and sys standard libraries.

import numpy as np
from scipy.linalg import lstsq
import json
import sys

The next three utilities will handle loading and setting up data from the files on disk. Start by adding a utility function that flattens nested lists. You will use this to flatten a list of lists of samples.

import sys

def flatten(list_of_lists):
    """Flatten a list of lists to make a list.
    >>> flatten([[1], [2], [3, 4]])
    [1, 2, 3, 4]
    """
    return sum(list_of_lists, [])

Add a second utility that loads samples from the specified files. This method abstracts away the fact that samples are spread out across multiple files, returning just a single generator for all samples. For each of the samples, the label is the index of the file. e.g., If you call get_all_samples(['a.json', 'b.json']), all samples in a.json will have label 0 and all samples in b.json will have label 1.

def get_all_samples(paths):
  """Load all samples from JSON files."""
  for label, path in enumerate(paths):
    with open(path) as f:
      for sample in json.load(f)['samples']:
        signal_levels = [
          network['signal_level'].replace('RSSI', '') or 0
          for network in sample]
        yield [network['mac'] for network in sample], signal_levels, label
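As a standalone sketch, assuming made-up fixture files standing in for data/&lt;room&gt;_train.json, the labeling behavior looks like this:

```python
import json
import os
import tempfile

def get_all_samples(paths):
    """Load all samples from JSON files; the label is the file's index."""
    for label, path in enumerate(paths):
        with open(path) as f:
            for sample in json.load(f)['samples']:
                signal_levels = [
                    network['signal_level'].replace('RSSI', '') or 0
                    for network in sample]
                yield [network['mac'] for network in sample], signal_levels, label

# Hypothetical fixtures, one sample per file, for illustration only
tmp = tempfile.mkdtemp()
fixtures = {
    'a.json': {'samples': [[{'mac': 'aa:aa', 'signal_level': '-40'}]]},
    'b.json': {'samples': [[{'mac': 'bb:bb', 'signal_level': '-60'}]]},
}
paths = []
for name, content in fixtures.items():
    path = os.path.join(tmp, name)
    with open(path, 'w') as f:
        json.dump(content, f)
    paths.append(path)

rows = list(get_all_samples(paths))
print(rows)  # samples from a.json carry label 0, those from b.json label 1
```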

Next, add a utility that encodes the samples using a bag-of-words-esque model. Here is an example: Assume we collect two samples.

  1. wifi network A at strength 10 and wifi network B at strength 15
  2. wifi network B at strength 20 and wifi network C at strength 25.

This function will produce a list of three numbers for each of the samples: the first value is the strength of wifi network A, the second for network B, and the third for C. In effect, the format is [A, B, C].

  1. [10, 15, 0]
  2. [0, 20, 25]

def bag_of_words(all_networks, all_strengths, ordering):
  """Apply bag-of-words encoding to categorical variables.

  >>> samples = bag_of_words(
  ...     [['a', 'b'], ['b', 'c'], ['a', 'c']],
  ...     [[1, 2], [2, 3], [1, 3]],
  ...     ['a', 'b', 'c'])
  >>> next(samples)
  [1, 2, 0]
  >>> next(samples)
  [0, 2, 3]
  """
  for networks, strengths in zip(all_networks, all_strengths):
    yield [int(strengths[networks.index(network)])
      if network in networks else 0
      for network in ordering]
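The two-sample example above can be reproduced with a standalone copy of the function:

```python
def bag_of_words(all_networks, all_strengths, ordering):
    """Encode each sample as signal strengths in a fixed network ordering."""
    for networks, strengths in zip(all_networks, all_strengths):
        yield [strengths[networks.index(network)]
               if network in networks else 0
               for network in ordering]

samples = list(bag_of_words(
    [['A', 'B'], ['B', 'C']],   # networks seen in each sample
    [[10, 15], [20, 25]],       # matching signal strengths
    ['A', 'B', 'C']))           # fixed ordering [A, B, C]
print(samples)  # [[10, 15, 0], [0, 20, 25]]
```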

Using all three utilities above, we synthesize a collection of samples and their labels. Gather all samples and labels using get_all_samples. Define a consistent ordering and apply the bag-of-words encoding to all samples. Finally, construct the data and label matrices X and Y, respectively.

def create_dataset(classpaths, ordering=None):
  """Create dataset from a list of paths to JSON files."""
  networks, strengths, labels = zip(*get_all_samples(classpaths))
  if ordering is None:
    ordering = list(sorted(set(flatten(networks))))
  X = np.array(list(bag_of_words(networks, strengths, ordering))).astype(np.float64)
  Y = np.array(list(labels)).astype(int)
  return X, Y, ordering

These functions complete the data pipeline. Next, we abstract away model prediction and evaluation. Start by defining the prediction method. The first function normalizes our model outputs so that all values are non-negative and sum to 1; this ensures that the output is a valid probability distribution. The second runs prediction using the model.

def softmax(x):
  """Convert one-hotted outputs into probability distribution"""
  x = np.exp(x)
  return x / np.sum(x)


def predict(X, w):
  """Predict using model parameters"""
  return np.argmax(softmax(X.dot(w)), axis=1)

Next, evaluate the model’s accuracy. The first line runs prediction using the model. The second counts the number of times the predicted and true values agree, then normalizes by the total number of samples.

def evaluate(X, Y, w):
  """Evaluate model w on samples X and labels Y."""
  Y_pred = predict(X, w)
  accuracy = (Y == Y_pred).sum() / X.shape[0]
  return accuracy
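To see these three utilities work together, here is a sketch on toy numbers (the weight matrix below is made up for illustration; note that argmax is unaffected by softmax's normalization, since exp is monotonic):

```python
import numpy as np

def softmax(x):
    x = np.exp(x)
    return x / np.sum(x)

def predict(X, w):
    return np.argmax(softmax(X.dot(w)), axis=1)

def evaluate(X, Y, w):
    Y_pred = predict(X, w)
    return (Y == Y_pred).sum() / X.shape[0]

X = np.array([[5.0, 0.0], [0.0, 5.0]])  # two samples, two "networks"
w = np.eye(2)                           # toy weights: network i votes for class i
Y = np.array([0, 1])
print(predict(X, w))      # [0 1]
print(evaluate(X, Y, w))  # 1.0
```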

This concludes our prediction and evaluation utilities. After these utilities, define a main function that will collect the dataset, train, and evaluate. Start by reading the list of arguments from the command line (sys.argv); these are the rooms to include in training. Then create a large dataset from all of the specified rooms.

def main():
  classes = sys.argv[1:]

  train_paths = sorted(['data/{}_train.json'.format(name) for name in classes])
  test_paths = sorted(['data/{}_test.json'.format(name) for name in classes])
  X_train, Y_train, ordering = create_dataset(train_paths)
  X_test, Y_test, _ = create_dataset(test_paths, ordering=ordering)

Apply one-hot encoding to the labels. A one-hot encoding is similar to the bag-of-words model above; we use this encoding to handle categorical variables. Say we have 3 possible labels. Instead of labelling them 1, 2, or 3, we label the data with [1, 0, 0], [0, 1, 0], or [0, 0, 1]. For this tutorial, we will spare you the explanation of why one-hot encoding is important. Train the model, and evaluate it on both the train and validation sets.

def main():
  ...
  X_test, Y_test, _ = create_dataset(test_paths, ordering=ordering)
  
  Y_train_oh = np.eye(len(classes))[Y_train]
  w, _, _, _ = lstsq(X_train, Y_train_oh)
  train_accuracy = evaluate(X_train, Y_train, w)
  test_accuracy = evaluate(X_test, Y_test, w)
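The np.eye indexing used above can be checked in isolation; indexing the identity matrix with a vector of labels returns one one-hot row per label:

```python
import numpy as np

labels = np.array([0, 2, 1])
one_hot = np.eye(3)[labels]   # 3 classes -> 3x3 identity, one row per label
print(one_hot.tolist())       # [[1.0, 0.0, 0.0], [0.0, 0.0, 1.0], [0.0, 1.0, 0.0]]
```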

Print both accuracies, and save the model to disk.

def main():
  ...
  print('Train accuracy ({}%), Validation accuracy ({}%)'.format(train_accuracy*100, test_accuracy*100))
  np.save('w.npy', w)
  np.save('ordering.npy', np.array(ordering))
  sys.stdout.flush()

At the end of the file, run the main function.

if __name__ == '__main__':
  main()

Save and exit. Double check that your file matches the following:

import numpy as np
from scipy.linalg import lstsq
import json
import sys


def flatten(list_of_lists):
    """Flatten a list of lists to make a list.
    >>> flatten([[1], [2], [3, 4]])
    [1, 2, 3, 4]
    """
    return sum(list_of_lists, [])


def get_all_samples(paths):
    """Load all samples from JSON files."""
    for label, path in enumerate(paths):
        with open(path) as f:
            for sample in json.load(f)['samples']:
                signal_levels = [
                    network['signal_level'].replace('RSSI', '') or 0
                    for network in sample]
                yield [network['mac'] for network in sample], signal_levels, label


def bag_of_words(all_networks, all_strengths, ordering):
    """Apply bag-of-words encoding to categorical variables.
    >>> samples = bag_of_words(
    ...     [['a', 'b'], ['b', 'c'], ['a', 'c']],
    ...     [[1, 2], [2, 3], [1, 3]],
    ...     ['a', 'b', 'c'])
    >>> next(samples)
    [1, 2, 0]
    >>> next(samples)
    [0, 2, 3]
    """
    for networks, strengths in zip(all_networks, all_strengths):
        yield [int(strengths[networks.index(network)])
            if network in networks else 0
            for network in ordering]


def create_dataset(classpaths, ordering=None):
    """Create dataset from a list of paths to JSON files."""
    networks, strengths, labels = zip(*get_all_samples(classpaths))
    if ordering is None:
        ordering = list(sorted(set(flatten(networks))))
    X = np.array(list(bag_of_words(networks, strengths, ordering))).astype(np.float64)
    Y = np.array(list(labels)).astype(int)
    return X, Y, ordering


def softmax(x):
    """Convert one-hotted outputs into probability distribution"""
    x = np.exp(x)
    return x / np.sum(x)


def predict(X, w):
    """Predict using model parameters"""
    return np.argmax(softmax(X.dot(w)), axis=1)


def evaluate(X, Y, w):
    """Evaluate model w on samples X and labels Y."""
    Y_pred = predict(X, w)
    accuracy = (Y == Y_pred).sum() / X.shape[0]
    return accuracy


def main():
    classes = sys.argv[1:]

    train_paths = sorted(['data/{}_train.json'.format(name) for name in classes])
    test_paths = sorted(['data/{}_test.json'.format(name) for name in classes])
    X_train, Y_train, ordering = create_dataset(train_paths)
    X_test, Y_test, _ = create_dataset(test_paths, ordering=ordering)

    Y_train_oh = np.eye(len(classes))[Y_train]
    w, _, _, _ = lstsq(X_train, Y_train_oh)
    train_accuracy = evaluate(X_train, Y_train, w)
    validation_accuracy = evaluate(X_test, Y_test, w)

    print('Train accuracy ({}%), Validation accuracy ({}%)'.format(train_accuracy*100, validation_accuracy*100))
    np.save('w.npy', w)
    np.save('ordering.npy', np.array(ordering))
    sys.stdout.flush()


if __name__ == '__main__':
    main()

Save and exit. Recall the room name used above when recording the 20 samples. Use that name instead of bedroom below. Our example is bedroom. We use -W ignore to ignore warnings from a LAPACK bug.

python -W ignore model/train.py bedroom

Since we’ve only collected training samples for one room, you should see 100% training and validation accuracies.

Train accuracy (100.0%), Validation accuracy (100.0%)

Next, we will link this training script to the desktop app.

Step 5: Link Training Script To Desktop App

In this step, we will automatically retrain the model whenever the user collects a new batch of samples. Open scripts/observe.js.

nano scripts/observe.js

Right after the fs import, import the child process spawner and utilities.

var fs = require('fs');
// start new code
const spawn = require("child_process").spawn;
var utils = require('./utils.js');
// end new code

In the ui function, add the following call to retrain at the end of the completion handler.

function ui() {
  ...
  function completion() {
    ...
    retrain((data) => {
      var status = document.querySelector('#add-status');
      accuracies = data.toString().split('\n')[0];
      status.innerHTML = "Retraining succeeded: " + accuracies
    });
  }
    ...
}

After the ui function, add the following retrain function. This spawns a child process that will run the python script. Upon completion, the process calls a completion handler. Upon failure, it will log the error message.

function ui() {
  ...
}

function retrain(completion) {
  var filenames = utils.get_filenames()
  const pythonProcess = spawn('python', ["./model/train.py"].concat(filenames));
  pythonProcess.stdout.on('data', completion);
  pythonProcess.stderr.on('data', (data) => {
    console.log(" * [ERROR] " + data.toString())
  })
}

Save and exit. Open scripts/utils.js.

nano scripts/utils.js

Add the following utility for fetching all datasets in data/.

var fs = require('fs');

module.exports = {
  get_filenames: get_filenames
}

function get_filenames() {
  filenames = new Set([]);
  fs.readdirSync("data/").forEach(function(filename) {
      filenames.add(filename.replace('_train', '').replace('_test', '').replace('.json', '' ))
  });
  filenames = Array.from(filenames.values())
  filenames.sort();
  // Guard against indexOf returning -1, which would splice off the last room
  var index = filenames.indexOf('.DS_Store');
  if (index > -1) {
    filenames.splice(index, 1)
  }
  return filenames
}

Save and exit. For the conclusion of this step, physically move to a new location. Ideally, there should be a wall between your original location and your new one. The more barriers, the better your desktop app will work.

Once again, run your desktop app.

npm start

Just as before, click on “Add new room”.


Home page with “Add New Room” button available (Large preview)

Type in a room name that is different from your first room’s. We will use living room.


“Add New Room” page on load (Large preview)

Click “Start recording,” and you will see the following status “Listening for wifi…”.



“Add New Room” starting recording for second room (Large preview)

Once all 20 samples are recorded, your app will match the following. The status will read “Done. Retraining model…”


“Add New Room” page after recording for second room complete (Large preview)

In the next step, we will use this retrained model to predict the room you’re in, on the fly.

Step 6: Write Python Evaluation Script

In this step, we will load the pretrained model parameters, scan for wifi networks, and predict the room based on the scan.

Open model/eval.py.

nano model/eval.py

Import libraries used and defined in our last script.

import numpy as np
import sys
import json
import os

from train import predict
from train import softmax
from train import create_dataset
from train import evaluate

Define a utility to extract the names of all datasets. This function assumes that all datasets are stored in data/ as <dataset>_train.json and <dataset>_test.json.

from train import evaluate

def get_datasets():
  """Extract dataset names."""
  return sorted(list({path.split('_')[0] for path in os.listdir('./data')
    if '.DS' not in path}))
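The set comprehension can be sketched against a hypothetical directory listing, standing in for os.listdir('./data'):

```python
# Hypothetical listing of data/ for illustration only
listing = ['bedroom_train.json', 'bedroom_test.json',
           'living_train.json', 'living_test.json', '.DS_Store']

# Strip the _train/_test suffix and dedupe, skipping macOS metadata files
datasets = sorted({path.split('_')[0] for path in listing if '.DS' not in path})
print(datasets)  # ['bedroom', 'living']
```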

Define the main function, and start by loading parameters saved from the training script.

def get_datasets():
  ...

def main():
  w = np.load('w.npy')
  ordering = np.load('ordering.npy')

Create the dataset and predict.

def main():
  ...
  classpaths = [sys.argv[1]]
  X, _, _ = create_dataset(classpaths, ordering)
  y = predict(X, w).item()

Compute a confidence score based on the difference between the top two probabilities.

def main():
  ...
  sorted_y = sorted(softmax(X.dot(w)).flatten())
  confidence = 1
  if len(sorted_y) > 1:
    confidence = round(sorted_y[-1] - sorted_y[-2], 2)
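As a standalone sketch with hypothetical raw scores for three rooms:

```python
import numpy as np

def softmax(x):
    x = np.exp(x)
    return x / np.sum(x)

scores = np.array([[2.0, 1.0, 0.1]])         # made-up model outputs
sorted_y = sorted(softmax(scores).flatten())  # probabilities, ascending
confidence = 1
if len(sorted_y) > 1:
    confidence = round(sorted_y[-1] - sorted_y[-2], 2)
print(confidence)  # the gap between the top room and the runner-up
```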

Finally, extract the category and print the result. To conclude the script, invoke the main function.

def main():
  ...
  category = get_datasets()[y]
  print(json.dumps({"category": category, "confidence": confidence}))

if __name__ == '__main__':
  main()

Save and exit. Double check your code matches the following (source code):

import numpy as np
import sys
import json
import os

from train import predict
from train import softmax
from train import create_dataset
from train import evaluate


def get_datasets():
    """Extract dataset names."""
    return sorted(list({path.split('_')[0] for path in os.listdir('./data')
        if '.DS' not in path}))


def main():
    w = np.load('w.npy')
    ordering = np.load('ordering.npy')

    classpaths = [sys.argv[1]]
    X, _, _ = create_dataset(classpaths, ordering)
    y = predict(X, w).item()

    sorted_y = sorted(softmax(X.dot(w)).flatten())
    confidence = 1
    if len(sorted_y) > 1:
        confidence = round(sorted_y[-1] - sorted_y[-2], 2)

    category = get_datasets()[y]
    print(json.dumps({"category": category, "confidence": confidence}))


if __name__ == '__main__':
    main()

Next, we will connect this evaluation script to the desktop app. The desktop app will continuously run wifi scans and update the UI with the predicted room.

Step 7: Connect Evaluation To Desktop App

In this step, we will update the UI with a “confidence” display. Then, the associated NodeJS script will continuously run scans and predictions, updating the UI accordingly.

Open static/index.html.

nano static/index.html

Add a line for confidence right after the title and before the buttons.

<h1 class="title" id="predicted-room-name">(I dunno)</h1>
<!-- start new code -->
<p class="subtitle">with <span id="predicted-confidence">0%</span> confidence</p>
<!-- end new code -->
<div class="buttons">

Right after main but before the end of the body, add a new script predict.js.

</main>
  <!-- start new code -->
  <script>
  require('../scripts/predict.js')
  </script>
  <!-- end new code -->
</body>

Save and exit. Open scripts/predict.js.

nano scripts/predict.js

Import the NodeJS filesystem module, our utilities, and the child process spawner.

var fs = require('fs');
var utils = require('./utils');
const spawn = require("child_process").spawn;

Define a predict function which invokes a separate node process to detect wifi networks and a separate Python process to predict the room.

function predict(completion) {
  const nodeProcess = spawn('node', ["scripts/observe.js"]);
  const pythonProcess = spawn('python', ["-W", "ignore", "./model/eval.py", "samples.json"]);
}

After both processes have spawned, add callbacks to the Python process for both successes and errors. The success callback logs information, invokes the completion callback, and updates the UI with the prediction and confidence. The error callback logs the error.

function predict(completion) {
  ...
  pythonProcess.stdout.on('data', (data) => {
    information = JSON.parse(data.toString());
    console.log(" * [INFO] Room '" + information.category + "' with confidence '" + information.confidence + "'")
    completion()

    if (typeof document != "undefined") {
      document.querySelector('#predicted-room-name').innerHTML = information.category
      document.querySelector('#predicted-confidence').innerHTML = information.confidence
    }
  });
  pythonProcess.stderr.on('data', (data) => {
    console.log(data.toString());
  })
}

Define a main function to invoke the predict function recursively, forever.

function main() {
  f = function() { predict(f) }
  predict(f)
}

main();

One last time, open the desktop app to see the live prediction.

npm start

Approximately every second, a scan will be completed and the interface will be updated with the latest confidence and predicted room. Congratulations; you have completed a simple room detector based on all in-range WiFi networks.

Recording 20 samples inside the room and another 20 out in the hallway. Upon walking back inside, the script correctly predicts “hallway” then “bedroom.” (Large preview)

Conclusion

In this tutorial, we created a solution using only your desktop to detect your location within a building. We built a simple desktop app using Electron JS and applied a simple machine learning method on all in-range WiFi networks. This paves the way for Internet-of-things applications without the need for arrays of devices that are costly to maintain (cost not in terms of money but in terms of time and development).

Note: You can see the source code in its entirety on Github.

With time, you may find that least squares does not, in fact, perform spectacularly. Try finding two locations within a single room, or stand in doorways. Least squares will be largely unable to distinguish between these edge cases. Can we do better? It turns out that we can, and in future lessons, we will leverage other techniques and the fundamentals of machine learning to improve performance. This tutorial serves as a quick test bed for experiments to come.

Smashing Editorial
(ra, il)


Source: Smashing Magazine

How to Deploy Apps Effortlessly with Packer and Terraform

This article was originally published on Alibaba Cloud. Thank you for supporting the partners who make SitePoint possible.

Think you got a better tip for making the best use of Alibaba Cloud services? Tell us about it and go in for your chance to win a Macbook Pro (plus other cool stuff). Find out more here.

Alibaba Cloud published a very neat white paper about DevOps that is very interesting to read. It shows how “DevOps is a model that goes beyond simple implementation of agile principles to manage the infrastructure. John Willis and Damon Edwards defined DevOps using the term CAMS: Culture, Automation, Measurement, and Sharing. DevOps seeks to promote collaboration between the development and operations teams”.

This means, roughly, that there is a new role or mindset in a team that aims to connect software development and infrastructure management. This role requires knowledge of both worlds and takes advantage of the cloud paradigm, which nowadays grows in importance. But DevOps practices are not limited to large enterprises. As developers, we can easily incorporate DevOps into our daily tasks. With this tutorial, you will see how easy it is to orchestrate a whole deployment with just a couple of config files. We will be running our application on an Alibaba Cloud Elastic Compute Service (ECS) instance.

What Is Packer?

Packer is an open source DevOps tool made by Hashicorp to create images from a single JSON config file, which helps in keeping track of its changes in the long run. This software is cross-platform compatible and can create multiple images in parallel.

If you have Homebrew, just type brew install packer to install it.

It basically creates ready-to-use images containing the operating system and any extra software your applications need, like creating your own distribution. Imagine you want Debian but with a custom PHP application of yours built in by default. Well, with Packer this is very easy to do, and in this how-to, we will create one.

What Is Terraform?

When deploying, we have two big tasks to complete. One is to pack the actual application in a suitable environment, creating an image. The other is to create the underlying infrastructure where the application is going to live, that is, the actual server to host it.

For this, Terraform, made by Hashicorp, the same company behind Packer, came into existence as a very interesting and powerful tool. Based on the same principles as Packer, Terraform lets you build infrastructure in Alibaba Cloud using a single config file, this time in the TF format, likewise helping with versioning and a clear understanding of how all the bits work beneath your application.

To install Terraform and the Alibaba Cloud Official provider, please follow the instructions in this other article.

What We Want to Achieve

As mentioned at the top of the article, we are going to create and deploy a simple PHP application in a pure DevOps way, that is, taking care of every part of the deployment, from the software running it to the underlying infrastructure supporting it.

Steps

For the sake of the KISS principle, we will create a docker-compose-based application that retrieves METAR weather data from airports, using their ICAO airport codes and pulling the data from the US National Weather Service. Then we will create the image with Packer using Ubuntu, and deploy the infrastructure using that image with Terraform.

PHP Application

For the sake of simplicity, I created the application ready to use. You can find the source code here; it includes an index.php, two JavaScript files to decode the METAR data, and a bit of CSS and a PNG image to make it less boring. It even indicates the wind direction! But don’t worry, you won’t need to clone the repository at this point.

The application is based on docker-compose and is something we will install as a dependency with Packer later.

Building the Image with Packer

Let’s get our hands dirty! For this, we need to create a folder somewhere on our computer, let’s say ~/metar-app. Then let’s cd in and create a file named metar-build.json with the following contents:

{
  "variables": {
    "access_key": "{{env `ALICLOUD_ACCESS_KEY`}}",
    "region": "{{env `ALICLOUD_REGION`}}",
    "secret_key": "{{env `ALICLOUD_SECRET_KEY`}}"
  },
  "builders": [
    {
      "type": "alicloud-ecs",
      "access_key": "{{user `access_key`}}",
      "secret_key": "{{user `secret_key`}}",
      "region":"{{user `region`}}",
      "image_name": "metar_app",
      "source_image": "ubuntu_16_0402_64_20G_alibase_20180409.vhd",
      "ssh_username": "root",
      "instance_type": "ecs.t5-lc1m1.small",
      "internet_charge_type": "PayByTraffic",
      "io_optimized": "true"
    }
  ],
  "provisioners": [
    {
      "type": "shell",
      "script": "base-setup"
    }
  ]
}

And right next to it, a file named base-setup with the following:

#!/usr/bin/env bash

apt-get update && apt-get install -y apt-transport-https ca-certificates curl git-core software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
apt-get update && apt-get install -y docker-ce docker-compose
curl -L https://github.com/docker/compose/releases/download/1.21.2/docker-compose-`uname -s`-`uname -m` -o /usr/bin/docker-compose

mkdir /var/docker
git clone https://github.com/roura356a/metar.git /var/docker/metar

Once you have these two files ready, you can run packer build metar-build.json and wait for it to finish. Please note that, for this to work, you need to have 3 environment variables set on your machine with the relevant values: ALICLOUD_REGION, ALICLOUD_ACCESS_KEY and ALICLOUD_SECRET_KEY. This step will take a while, as it creates an ECS instance, installs all the software on it, stops the instance, creates a snapshot of it, and finally creates the image of the whole system.

After the image is completed, packer will output ==> Builds finished. Good, we have an image ready to use! We are ready to continue to the next step.

Deploying the Infrastructure with Terraform

It’s time to create the ECS Instance. For this, in the same folder, we will create a file named main.tf with the following content:

The post How to Deploy Apps Effortlessly with Packer and Terraform appeared first on SitePoint.


Source: Sitepoint