Getting Started with Redux: Connecting Redux with React

This is the third part of the series on Getting Started with Redux and in this tutorial, we’re going to learn how to connect a Redux store with React.  Redux is an independent library that works with all the popular front-end libraries & frameworks. And it works flawlessly with React because of its functional approach.

You don’t need to have followed the previous parts of this series for this tutorial to make sense. If you’re here to learn about using React with Redux, you can take the Quick Recap below and then check out the code from the previous part and start from there. 

Quick Recap

In the first post, we learned about the Redux workflow and answered the question “Why Redux?” We created a very basic demo application and showed you how the various components of Redux—actions, reducers, and the store—are connected.

In the previous post, we started building a contact list application that lets you add contacts and then displays them as a list. A Redux store was created for our contact list and we added a few reducers and actions. We attempted to dispatch actions and retrieve the new state using store methods like store.dispatch() and store.getState().

By the end of this article, you’ll learn:

  1. the difference between container components and presentational components
  2. about the react-redux library
  3. how to bind React and Redux using connect()
  4. how to dispatch actions using mapDispatchToProps
  5. how to retrieve state using mapStateToProps

The code for the tutorial is available on GitHub at the react-redux-demo repo. Grab the code from the v2 branch and use that as a starting point for this tutorial. If you’re curious to know how the application looks by the end of this tutorial, try the v3 branch. Let’s get started.

Designing a Component Hierarchy: Smart vs. Dumb Components

This is a concept that you’ve probably heard of before. But let’s have a quick look at the difference between smart and dumb components. Recall that we had created two separate directories for components, one named containers/, and the other components/. The benefit of this approach is that the behavior logic is separated from the view.

The presentational components are said to be dumb because they are concerned only with how things look. They are decoupled from the business logic of the application and receive data and callbacks from a parent component exclusively via props. They don’t care whether your application is connected to a Redux store or whether the data is coming from the local state of the parent component.

The container components, on the other hand, deal with the behavioral part and should contain very limited DOM markup and style. They pass the data that needs to be rendered to the dumb components as props. 

I’ve covered the topic in-depth in another tutorial, Stateful vs. Stateless Components in React.

Moving on, let’s see how we’re going to organize our components.

Designing a Component Hierarchy

Presentational Components

Here are the presentational components that we’ll be using in this tutorial. 

components/AddContactForm.jsx

This is an HTML form for adding a new contact. The component receives onInputChange and onFormSubmit callbacks as props. onInputChange is triggered when an input value changes, and onFormSubmit when the form is submitted.
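As a rough sketch of what such a component might look like (the markup and field names here are assumptions, not the repo’s exact code):

import React from 'react';

const AddContactForm = ({ onInputChange, onFormSubmit }) => (
  <form onSubmit={onFormSubmit}>
    <input type="text" name="name" placeholder="Name" onChange={onInputChange} />
    <input type="text" name="image" placeholder="Image URL" onChange={onInputChange} />
    <button type="submit">Add Contact</button>
  </form>
);

export default AddContactForm;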

components/ContactList.jsx

This component receives an array of contact objects as props and hence the name ContactList. We use the Array.map() method to extract individual contact details and then pass on that data to <ContactCard />.
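A possible implementation, assuming the array arrives as a contacts prop:

import React from 'react';
import ContactCard from './ContactCard';

const ContactList = ({ contacts }) => (
  <ul>
    {contacts.map((contact, index) => (
      <ContactCard key={index} contact={contact} />
    ))}
  </ul>
);

export default ContactList;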

components/ContactCard.jsx

This component receives a contact object and displays the contact’s name and image. For practical applications, it might make sense to host JavaScript images in the cloud.
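For example (the name and image fields are assumptions about the contact object’s shape):

import React from 'react';

const ContactCard = ({ contact }) => (
  <li>
    <img src={contact.image} alt={contact.name} />
    <span>{contact.name}</span>
  </li>
);

export default ContactCard;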

Container Components

We’re also going to construct barebones container components.

containers/Contacts.jsx

The returnContactList() function retrieves the array of contact objects and passes it to the ContactList component. Since returnContactList() retrieves the data from the store, we’ll leave that logic blank for the moment.
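A barebones version could look like this, with the store-related logic stubbed out for now:

import React, { Component } from 'react';
import ContactList from '../components/ContactList';

class Contacts extends Component {
  returnContactList() {
    // Will eventually return the contacts array from the Redux store.
    return [];
  }

  render() {
    return <ContactList contacts={this.returnContactList()} />;
  }
}

export default Contacts;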

containers/AddContacts.jsx

We’ve created three barebones handler methods that correspond to the three actions. They will all dispatch actions to update the state. In the render method, we’ve left out the logic for showing/hiding the form because we need to fetch the state, as shown in the sketch below.
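Here’s a sketch of that barebones container; the handler names follow the rest of the tutorial, but the bodies are placeholders until we connect the store:

import React, { Component } from 'react';
import AddContactForm from '../components/AddContactForm';

class AddContact extends Component {
  handleInputChange(event) {
    // Will dispatch an action that keeps the new contact's fields in the store.
  }

  handleSubmit(event) {
    event.preventDefault();
    // Will dispatch an action that adds the new contact to the list.
  }

  showAddContactBox() {
    // Will dispatch an action that toggles the form's visibility.
  }

  render() {
    // The show/hide logic is left out until we can read isHidden from the store.
    return (
      <AddContactForm
        onInputChange={this.handleInputChange}
        onFormSubmit={this.handleSubmit}
      />
    );
  }
}

export default AddContact;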

Now let’s see how to bind React and Redux together.

The react-redux Library

React bindings are not available in Redux by default. You will need to install an extra library called react-redux first. 

The library exports just two APIs that you need to remember: a <Provider /> component and a higher-order function known as connect().

The Provider Component

Libraries like Redux need to make the store data accessible to the whole React component tree, starting from the root component. The Provider pattern allows the library to pass the data from the top of the tree to the bottom. The code below demonstrates how Provider magically makes the store’s state available to all the components in the component tree.

Demo Code
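The original demo snippet isn’t reproduced here, but the pattern looks roughly like this (the store and root component imports are placeholders):

import React from 'react';
import { Provider } from 'react-redux';
import { store } from './store';   // placeholder: the Redux store created earlier
import ContactsApp from './App';   // placeholder: the app's root component

const Root = () => (
  <Provider store={store}>
    {/* Every component rendered inside Provider can reach the store. */}
    <ContactsApp />
  </Provider>
);

export default Root;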

The entire app needs to have access to the store, so we wrap the Provider around the App component and add the data we need to the tree’s context. The descendants of that component then have access to the data.

The connect() Method 

Now that we’ve provided the store to our application, we need to connect React to the store. The only way that you can communicate with the store is by dispatching actions and by retrieving the state. We’ve previously used store.dispatch() to dispatch actions and store.getState() to retrieve the latest snapshot of the state. connect() lets you do exactly this, but with the help of two methods known as mapDispatchToProps and mapStateToProps. I have demonstrated this concept in the example below:

Demo Code
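Again, the embedded snippet is missing here, but its shape is roughly the following (the state shape and the addContact action creator come from the earlier parts of the series; treat the import paths as assumptions):

import React, { Component } from 'react';
import { connect } from 'react-redux';
import { addContact } from '../actions'; // assumed path to the action creators

class AddContact extends Component {
  render() {
    // Form markup omitted; this.props.newContact and this.props.addContact
    // are available here thanks to connect().
    return null;
  }
}

const mapStateToProps = state => ({
  newContact: state.contacts.newContact
});

const mapDispatchToProps = dispatch => ({
  addContact: () => dispatch(addContact())
});

// The important last line: export the connected component, not AddContact itself.
export default connect(mapStateToProps, mapDispatchToProps)(AddContact);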

mapStateToProps and mapDispatchToProps both return an object, and each key of that object becomes a prop of the connected component. For instance, state.contacts.newContact is mapped to props.newContact, and the action creator addContact() is mapped to props.addContact.

But for this to work, you need the last line in the code snippet above. 

Instead of exporting the AddContact component directly, we’re exporting a connected component. connect() provides addContact and newContact as props to the <AddContact/> component.

How to Connect React and Redux?

Next, we’re going to cover the steps that you need to follow to connect React and Redux.

Install the react-redux Library

Install the react-redux library if you haven’t already. You can use NPM or Yarn to install it. 
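npm install react-redux --save

Or, if you prefer Yarn:

yarn add react-redux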

Provide the Store to your App Component

Create the store first. Then, make the store object accessible to your component tree by passing it as a prop to <Provider />.

index.js
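A sketch of what index.js might contain; the import paths and where the store is created are assumptions about the demo project’s layout:

import React from 'react';
import ReactDOM from 'react-dom';
import { Provider } from 'react-redux';
import { createStore } from 'redux';
import rootReducer from './reducers'; // assumed path to the app's reducers
import App from './App';              // assumed path to the root component

const store = createStore(rootReducer);

ReactDOM.render(
  <Provider store={store}>
    <App />
  </Provider>,
  document.getElementById('root')
);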

Connect React Containers to Redux

The connect function is used to bind a React container to Redux. That means you can use connect to:

  1. subscribe to the store and map its state to your props
  2. dispatch actions and map the dispatch callbacks into your props

Once you’ve connected your application to Redux, you can use this.props to access the current state and also to dispatch actions. I am going to demonstrate the process on the AddContact component. AddContact needs to dispatch three actions and read two properties from the store’s state. Let’s have a look at the code.

First, import connect into AddContact.jsx.
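import { connect } from 'react-redux';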

Second, create two methods mapStateToProps and mapDispatchToProps.

mapStateToProps receives the state of the store as an argument. It returns an object that describes how the state of the store is mapped into your props. mapDispatchToProps returns a similar object that describes how the dispatch actions are mapped to your props. 
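For AddContact, that could look like the following. The action creator names (handleInputChange, addContact, toggleContactForm) and the exact state shape are assumptions based on the earlier parts of this series:

import { handleInputChange, addContact, toggleContactForm } from '../actions'; // assumed names and path

const mapStateToProps = state => ({
  newContact: state.contacts.newContact, // assumed state shape
  isHidden: state.contacts.isHidden
});

const mapDispatchToProps = dispatch => ({
  handleInputChange: target => dispatch(handleInputChange(target)),
  addContact: () => dispatch(addContact()),
  toggleContactForm: () => dispatch(toggleContactForm())
});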

Finally, we use connect to bind the AddContact component to the two functions as follows:
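export default connect(mapStateToProps, mapDispatchToProps)(AddContact);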

Update the Container Components to Use the Props

The component’s props are now equipped to read state from the store and dispatch actions. The logic for handleInputChange, handleSubmit and showAddContactBox should be updated as follows:
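A sketch of those handlers, using the props mapped above (remember to bind them, for example in the constructor):

handleInputChange(event) {
  // Forward the changed field to the store via the mapped dispatch prop.
  this.props.handleInputChange(event.target);
}

handleSubmit(event) {
  event.preventDefault();
  // Add the contact currently held in newContact to the list, then hide the form.
  this.props.addContact();
  this.props.toggleContactForm();
}

showAddContactBox() {
  // Show the form by toggling the visibility flag in the store.
  this.props.toggleContactForm();
}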

We’ve defined the handler methods. But there is still one part missing—the conditional statement inside the render function.
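A minimal version of that render method, assuming the isHidden flag mapped from the store as above:

render() {
  if (this.props.isHidden === false) {
    return (
      <AddContactForm
        onInputChange={this.handleInputChange}
        onFormSubmit={this.handleSubmit}
      />
    );
  }
  return <button onClick={this.showAddContactBox}>Add Contact</button>;
}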

If isHidden is false, the form is rendered. Otherwise, a button gets rendered.

Displaying The Contacts

We’ve completed the most challenging part. Now, all that’s left is to display these contacts as a list. The Contacts container is the best place for that logic. 
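A sketch of the connected container, assuming the contacts array lives under state.contacts.contactList:

import React, { Component } from 'react';
import { connect } from 'react-redux';
import ContactList from '../components/ContactList';

class Contacts extends Component {
  returnContactList() {
    return this.props.contactList;
  }

  render() {
    return <ContactList contacts={this.returnContactList()} />;
  }
}

const mapStateToProps = state => ({
  contactList: state.contacts.contactList // assumed state shape
});

// No actions to dispatch here, so the second argument to connect is null.
export default connect(mapStateToProps, null)(Contacts);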

We’ve gone through the same procedure that we followed above to connect the Contacts component with the Redux store. The mapStateToProps function maps the store’s state to the contactList prop. We then use connect to bind that prop to the Contacts component. The second argument to connect is null because we don’t have any actions to dispatch. That completes the integration of our app with the state of the Redux store.

What Next?

In the next post, we’ll take a deeper look at middlewares and start dispatching actions that involve fetching data from the server. Share your thoughts in the comments!


Source: Nettuts Web Development

What’s new in ES2017: Async functions, improved objects and more

Let’s take a look at the most important JavaScript updates that came with ES2017, and also briefly cover how this updating process actually takes place.

The Update Process

JavaScript (ECMAScript) is an ever-evolving standard implemented by many vendors across multiple platforms. ES6 (ECMAScript 2015) was a large release which took six years to finalize. A new annual release process was formulated to streamline the process and rapidly add new features.

The modestly named Technical Committee 39 (TC39) consists of parties including browser vendors who meet to push JavaScript proposals along a strict progression path:

Stage 0: strawman –
An initial submission of ideas for new or improved ECMAScript features.

Stage 1: proposal –
A formal proposal document championed by at least one member of TC39, which includes API examples, language semantics, algorithms, potential obstacles, polyfills and demonstrations.

Stage 2: draft –
An initial version of the feature specification. Two experimental implementations of the feature are required, although one can be in a transpiler such as Babel.

Stage 3: candidate –
The proposal specification is reviewed and feedback is gathered from vendors.

Stage 4: finished –
The proposal is ready for inclusion in ECMAScript. A feature should only be considered a standard once it reaches this stage. However, it can take longer to ship in browsers and runtimes such as Node.js.

If ES2015 was too large, ES2016 was purposely tiny to prove the standardization process. Two new features were added:

  1. The array .includes() method which returns true or false when a value is contained in an array, and
  2. The a ** b exponentiation operator, which is identical to Math.pow(a, b). Both are shown in the snippet below.
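Both features in action:

[1, 2, 3].includes(2);  // true
[1, 2, 3].includes(9);  // false

2 ** 10;                // 1024, the same as Math.pow(2, 10)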

What’s New in ES2017

The feature set for ES2017 (or ES8 in old money) is considered to be the first proper amendment to the ECMAScript specification. It delivers the following goods …

Async functions

Unlike most languages, JavaScript is asynchronous by default. Commands which can take any amount of time do not halt execution. That includes operations such as requesting a URL, reading a file, or updating a database. A callback function must be passed, which executes when the result of that operation is known.

This can lead to callback hell when a series of nested asynchronous functions must be executed in order. For example:

function doSomething() {
  doSomething1((response1) => {
    doSomething2(response1, (response2) => {
      doSomething3(response2, (response3) => {
        // etc...
      });
    });
  });
}

ES2015 (ES6) introduced Promises, which provided a cleaner way to express the same functionality. Once your functions were Promisified, they could be executed using:

function doSomething() {
  doSomething1()
  .then(doSomething2)
  .then(doSomething3)
}

ES2017 Async functions expand on Promises to make asynchronous calls even clearer:

async function doSomething() {
  const
    response1 = await doSomething1(),
    response2 = await doSomething2(response1),
    response3 = await doSomething3(response2);
}

await effectively makes each call appear as though it’s synchronous while not holding up JavaScript’s single processing thread.

Async functions are supported in all modern browsers (not IE or Opera Mini) and Node.js 7.6+. They’ll change the way you write JavaScript, and a whole article could be dedicated to callbacks, Promises and Async functions. Fortunately, we have one! Refer to Flow Control in Modern JavaScript.

The post What’s new in ES2017: Async functions, improved objects and more appeared first on SitePoint.


Source: Sitepoint

Alibaba Cloud, AWS & DigitalOcean: Cloud Services Compared

This article was created in partnership with Alibaba Cloud. Thank you for supporting the partners who make SitePoint possible.

Cloud computing allows you to use computer services such as servers, databases, analytics and more over the internet using virtual machines.

The idea has been around since the inception of the Internet, but it really began to take off when Amazon launched its Elastic Compute Cloud in 2006, which later became known as Amazon Web Services, or AWS. Many other competitors have entered the market since then as cloud computing has blossomed into a huge growth industry.

Cloud Computing providers allow you to host websites and applications, store data and assets, run software as a service applications, provide on-demand services such as streaming videos, photo backup and storage and even use artificial intelligence services.

One of the key benefits of cloud computing is that you only pay for what you use, in a similar way to how you pay for gas or electricity. You can almost think of it as ‘renting’ a server located ‘in the cloud’. This makes it a very inexpensive way of gaining access to multiple high-end servers running the latest software, without the cost and hassle of running the actual servers yourself.

Another advantage of cloud computing is elastic scaling, which is the ability to increase or decrease the amount of computing power that your service uses, as and when required, using load balancers. This allows you to launch your service without the need for a big up-front investment and helps to take some of the risk out of launching a new service, while still allowing for expansion in the future. For example, Alibaba Cloud features a load balancer that can use shared resources, but also switch to guaranteed-performance instances when required.

Cloud computing providers also make it easy to set up and deploy both ready-made and custom builds of virtual machines using a variety of operating systems. You can spin up a new instance and have it running in a matter of minutes. Virtual servers don’t require any management, and because they take advantage of a huge infrastructure with servers located globally, people can access your site from around the world without having to worry about latency issues. Their distributed nature means that there isn’t a single point of failure, making them extremely reliable with an uptime of virtually 100%. Backup and recovery are also baked into the service, so you don’t need to worry about data loss. There’s no need for IT administrators here!

In this post we’re going to take a look at 3 of the big providers – AWS, Alibaba Cloud and Digital Ocean – and compare them using the following criteria:

  • Services offered
  • Pricing
  • Help & Support

All three services have a very similar offering, but do it in slightly different ways. Hopefully this post will help you decide which one is the right fit for you.

Meet The Competitors

Amazon Web Services, or AWS, is the big daddy of cloud computing. Launched in 2006, it now has an impressively vast array of services on offer and is the clear leader in the field. When you first log into AWS, you get the familiar feel of Amazon’s website – it’s almost as if you’re just buying something else from their store. Initially, the sheer number of options can be quite overwhelming.

Alibaba are often referred to as the ‘Chinese Amazon’. They launched Alibaba Cloud in 2009 as a direct competitor to AWS, with a focus on Asian markets. It has grown significantly since then and is now the largest cloud provider in China. In 2017 they were named the official cloud services provider of the Olympics and this year they received the MySQL Corporate Contributor Award in recognition of their contributions to open source. Alibaba Cloud are really ramping up their offering in 2018, and recently made a commitment to invest $15 billion over the next 3 years into their already impressive cloud computing service. When you log into Alibaba Cloud, you notice that it has a modern and professional feel to it. They also offer a mind-boggling number of options that can seem a bit daunting at first.

Digital Ocean started life in 2012 with the aim of making it easy for developers to build applications in the cloud. My first impression of their site was that it has a clean look and user-friendly design. They also use the concept of ‘Droplets’, which are virtual machines at various price points. This fits the theme of the company by reimagining the cloud as a digital ocean where each virtual machine is a droplet. This approach certainly makes it simpler to get started without having to wade through a vast number of options.

Services Offered

All 3 Cloud Computing providers in our roundup provide a similar offering in terms of the basic provision. This includes:

  • The ability to create virtual machines with multiple cores and up to 256GB of RAM
  • The ability to create custom images that can be recreated on demand
  • Load Balancing
  • Content Delivery Networks
  • Firewall including protection from Trojans and DDOS attacks
  • Snapshots & backups with built-in redundancy
  • An API so that other services and developers can interact with resources hosted in the cloud

All the services are easy to set up and have nice UIs, but AWS and Alibaba can look a bit complicated at times, with buttons, sliders and knobs all over the place! This is simply due to the fact that they have so many more options to choose from. Digital Ocean keeps things simple, which makes for a more pleasant user experience when setting things up.

All 3 services rely on you being comfortable giving commands via the console. They all have a wealth of documentation and tutorials that will walk you through various procedures.

The table below highlights some differences in what each service provides:

Regions
  • AWS: Multiple in the US, Europe and Asia, plus one in South America
  • Alibaba Cloud: 2 in the US, 1 in Europe, multiple in Asia, including lots in China and some exclusive to Alibaba Cloud
  • Digital Ocean: Multiple in the US and Europe, 1 in Asia

Operating Systems
  • AWS: Windows Server + multiple Linux distributions
  • Alibaba Cloud: Windows Server + multiple Linux distributions
  • Digital Ocean: Multiple Linux distributions

Storage
  • AWS: Integrated storage using Amazon S3, Elastic Block Store or Elastic File System
  • Alibaba Cloud: Integrated Object Storage Service, Elastic Block Storage and Network Attached Storage
  • Digital Ocean: Integrated solution called Spaces

Databases
  • AWS: Amazon DynamoDB, Amazon Redshift
  • Alibaba Cloud: An umbrella offering of managed Redis, MongoDB, MySQL, SQL Server and PostgreSQL: ApsaraDB, HybridDB, PolarDB (coming soon)
  • Digital Ocean: No integrated cloud options, but you can install any database such as MySQL, PostgreSQL, MongoDB etc.

Extras
  • AWS: Elastic Beanstalk allows you to build web apps quickly. Lots of integration with all Amazon’s other web services. Huge amount of documentation.
  • Alibaba Cloud: The award-winning peta-scale data platform MaxCompute, a managed database offering for every possible type of data, and the Plesk admin console provided free of charge. Free DDoS protection for all public endpoints.
  • Digital Ocean: Large and friendly developer community. One-click installation of popular services such as WordPress, Ruby on Rails and Ghost. Digital Ocean Monitoring allows you to track various metrics easily.

As you can see from the table, AWS is the only provider that has data centers in every continent. Alibaba has a big focus in Asia with data centers located in Dubai, Malaysia and Indonesia. They are also increasing their offering in the West with data centers in North America as well as Europe. Digital Ocean is primarily focused in Western countries.

AWS and Alibaba Cloud offer Windows Server, whereas Digital Ocean only uses Linux Servers. It should be noted, however, that it does cost more to run a virtual Windows Server. All 3 services offer their own integrated storage solution and cloud-based databases are provided by AWS and Alibaba Cloud. It’s also possible to install any database software directly onto a virtual machine on all the services. They all provide an impressive list of extras that help to manage your virtual server and ensure it runs smoothly.

Alibaba Cloud also boasts an impressively high level of security, and has successfully defended its services from a huge number of attacks over the years using an elastic network of web application firewalls.

Pricing

Comparing the pricing for these services is actually very difficult! The granular level of customisation that AWS and Alibaba Cloud offer means that it’s very difficult to compare exactly like with like. To complicate matters further, you can get cheaper deals with AWS if you don’t need the service available all the time.

The post Alibaba Cloud, AWS & DigitalOcean: Cloud Services Compared appeared first on SitePoint.


Source: Sitepoint

A Side-by-side Comparison of Express, Koa and Hapi.js

If you’re a Node.js developer, chances are you have, at some point, used Express.js to create your applications or APIs. Express.js is a very popular Node.js framework, and even has some other frameworks built on top of it such as Sails.js, kraken.js, KeystoneJS and many others. However, amidst this popularity, a bunch of other frameworks have been gaining attention in the JavaScript world, such as Koa and hapi.

In this article, we’ll examine Express.js, Koa and hapi.js — their similarities, differences and use cases.

Background

Let’s firstly introduce each of these frameworks separately.

Express.js

Express.js is described as the standard server framework for Node.js. It was created by TJ Holowaychuk, acquired by StrongLoop in 2014, and is currently maintained by the Node.js Foundation incubator. With about 170+ million downloads in the last year, it’s currently beyond doubt that it’s the most popular Node.js framework.

Koa

Development of Koa began in late 2013, led by the same team behind Express. It’s referred to as the future of Express, and is also described as a much more modern, modular and minimalistic version of the Express framework.

Hapi.js

Hapi.js was developed by the team at Walmart Labs (led by Eran Hammer) after they tried Express and discovered that it didn’t work for their requirements. It was originally developed on top of Express, but as time went by, it grew into a full-fledged framework.

Fun Fact: hapi is short for HTTP API server.

Philosophy

Now that we have some background on the frameworks and how they were created, let’s compare each of them based on important concepts, such as their philosophy, routing, and so on.

Note: all code examples are in ES6 and make use of version 4 of Express.js, 2.4 of Koa, and 17 for hapi.js.

Express.js

Express was built to be a simple, unopinionated web framework. From its GitHub README:

The Express philosophy is to provide small, robust tooling for HTTP servers, making it a great solution for single page applications, web sites, hybrids, or public HTTP APIs.

Express.js is minimal and doesn’t possess many features out of the box. It doesn’t force things like file structure, ORM or templating engine.

Koa

While Express.js is minimal, Koa can boast a much more minimalistic code footprint — around 2k LOC. Its aim is to allow developers to be even more expressive. Like Express.js, it can easily be extended by using existing or custom plugins and middleware. It’s more futuristic in its approach, in that it relies heavily on the relatively new JavaScript features like generators and async/await.
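As a small taste of that style, here’s a minimal Koa 2 app with an async middleware (the logging logic is just for illustration):

const Koa = require('koa');
const app = new Koa();

// Response-time logger written as async middleware.
app.use(async (ctx, next) => {
  const start = Date.now();
  await next(); // wait for downstream middleware to finish
  console.log(`${ctx.method} ${ctx.url} - ${Date.now() - start}ms`);
});

app.use(async ctx => {
  ctx.body = 'Hello from Koa';
});

app.listen(3000);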

Hapi.js

Hapi.js focuses more on configuration and provides a lot more features out of the box than Koa and Express.js. Eran Hammer, one of the creators of hapi, described the reasoning behind building the framework in his blog post:

hapi was created around the idea that configuration is better than code, that business logic must be isolated from the transport layer, and that native node constructs like buffers and stream should be supported as first class objects.

Starting a Server

Starting a server is one of the basic things we’d need to do in our projects. Let’s examine how it can be done in the different frameworks. We’ll start a server and listen on port 3000 in each example.

Express.js

const express = require('express');
const app = express();

app.listen(3000, () => console.log('App is listening on port 3000!'));

Starting a server in Express.js is as simple as requiring the express package, initializing the express app to the app variable and calling the app.listen() method, which is just a wrapper around the native Node.js http.createServer() method.

Koa

Starting a server in Koa is quite similar to Express.js:

const Koa = require('koa');
const app = new Koa();

app.listen(3000, () => console.log('App is listening on port 3000!'));

The app.listen() method in Koa is also a wrapper around the http.createServer() method.

Hapi.js

Starting a server in hapi.js is quite a departure from what many of us may be used to from Express:

const Hapi = require('hapi');

const server = Hapi.server({
    host: 'localhost',
    port: 3000
});

async function start() {
  try {
    await server.start();
  }
  catch (err) {
    console.log(err);
    process.exit(1);
  }
  console.log('Server running at:', server.info.uri);
}

start();

In the code block above, first we require the hapi package, then instantiate a server with Hapi.server(), which has a single config object argument containing the host and port parameters. Then we start the server with the asynchronous server.start() function.

Unlike in Express.js and Koa, the server.start() function in hapi is not a wrapper around the native http.createServer() method. It instead implements its own custom logic.

The above code example is from the hapi.js website, and shows the importance the creators of hapi.js place on configuration and error handling.

The post A Side-by-side Comparison of Express, Koa and Hapi.js appeared first on SitePoint.


Source: Sitepoint

Improve Your Workflow: Top Invoicing and Time Management Apps

This article was created in partnership with BAWMedia. Thank you for supporting the partners who make SitePoint possible.

“Time is money” may be a cliché. But in the worlds of freelancing, project management, and business ownership, it takes on a true meaning. You can save time by doing something faster and better. You can create time by not doing something else, and you can waste time in any number of different ways.

You can be on a team, work at a company or be a freelancer tasked with serving your clients. Either way, you want to use your time efficiently and effectively.

You can, of course, work harder. But you’ll be far better off and be more productive by backing off a bit. You need to let someone or something else do some of your tasks. These tasks include scheduling and tracking to ensure things are getting done. Also, look into semi-automating or fully automating tasks like invoicing and receiving payments.

That’s what these top invoicing and time tracking apps can and will do for you.

1. FreshBooks

FreshBooks

FreshBooks is an ideal tool for any small, service-oriented business looking for better ways to manage time tracking, expense management, invoicing responsibilities and to create a more organized billing process.

FreshBooks provides a framework for businesses to create and submit proposals by helping define and document a specific project outline, scope, timeline, and cost. Having an established style and format can be a genuine time saver and help your brand and company remain consistent in look and feel when communicating with your clients.

FreshBooks has been designed with end users in mind who need a diverse and well thought out accounting solution that won’t take long to master. It takes an average of 30 seconds to generate an invoice, which can be customized to include your logo, brand colors, and present your business as the professional and innovative company that you know it is.

Should you have a question, or on the remote chance you encounter a problem, an award-winning support team is always on call. Try FreshBooks for 30 days – at no cost to you.

2. Jibble.io

Jibble.io

Jibble is a cloud-based time and attendance tracking app that provides project managers and team leaders with daily, weekly, and monthly time sheets and reports. Reports can also be produced on demand, and weekly and monthly timesheets can be used to support payroll review exercises.

You can download these timesheets and reports in spreadsheet format for accounting purposes. Jibble will also generate personal timesheets that individual team or staff members can review and add information to.

Jibble’s statistical reports show the average hours worked by teams and team members on a daily or weekly basis. Clock in and out times can be reported as well. Alerts are provided in those cases where attendance averages or hours fall outside the norm.

Jibble can produce timesheets and reports for one team or multiple teams.

3. Invoice Plane

Invoice Plane

The Invoice Plane authors target freelancers, self-employed individuals, and small businesses, and they have achieved their goal of providing these users with a smart, easy-to-use and cost-effective invoicing and client management app.

You can download this open source software tool and use it for free. The authors suggest you view their demo before doing so, after which you can join the 193-country user base that has so far accounted for over 100,000 Invoice Plane downloads.

Invoice Plane’s template and theme formats can be customized to fit your business model and workflow and make your billing cycles and client relationships much easier to manage.

4. AND CO from Fiverr

AND CO from Fiverr

There are several reasons why AND CO from Fiverr has been chosen by freelancers and creative studios. This invoicing and time tracking app is exceedingly easy to use, its UI is modern, clean, and attractive, and it easily adjusts to their individual business models.

AND CO will help you to more efficiently and effectively manage your time tracking, invoicing and payment tasks, and client proposal preparation as well.

5. TimeCamp

TimeCamp

You can use this time and attendance tracking app from your browser, as a mobile app, or both. TimeCamp gives you the freedom to view the status of your team or workforce from anywhere, at any time.

This app will seamlessly integrate with your other project management and accounting tools. Freelancers and other individual users should sign up for the SOLO package, which is free.

6. Minterapp

Minterapp

This app is the answer to small businesses and startups in search of a time tracking tool that can also separate billable hours from total hours and generate an invoice with a few clicks.

With Minterapp, you can view at any time which invoices are in draft, pending, or have been paid. Minterapp will also assist you with your project estimating activities.

7. Hiveage

Hiveage

Hiveage is a reporting, invoice-generating, and billing app designed to meet small businesses’ needs in these important areas. Hiveage has helped 60,000 small businesses by making it easy for them to prepare quotations, submit custom, professional-looking invoices to clients, and provide their clients with a range of attractive remittance options.

Hiveage can be used for both single and multiple project activities.

8. Invoice Ninja

Invoice Ninja

Invoice Ninja is a suite of time tracking, invoicing and other apps that’s especially well-suited for freelancers. The time tracking app will automate your time tracking tasks and collect the results, which the invoicing app can then convert into ready-to-deliver invoices.

The invoicing app also provides access to 40 gateways through which payments can be sent and received. Project management apps are included as well, and Invoice Ninja is totally free to use.

9. Scoro

Scoro

If you’re a business professional or head up a creative service, you should consider giving Scoro a try. This comprehensive business management package will be a step up for those who are tied down to double data entry, spreadsheets, or working with multiple tools to perform essential tasks. Track time and workforce, schedule tasks and manage projects with ease. With Scoro, the project-related information you need will be there for you in just a few clicks.

Conclusion

Are you a business owner, a project manager or team leader, or a freelancer? Have you been looking for something to simplify your time and attendance tracking? Or, perhaps, something to speed up the expense tracking or invoicing tasks? There should be something here for you.

You just might experience once again how much fun it can be to be able to kick back and relax a bit. You have to allow a faithful, reliable app to do the heavy lifting.

The post Improve Your Workflow: Top Invoicing and Time Management Apps appeared first on SitePoint.


Source: Sitepoint

Decentralized Storage and Publication with IPFS and Swarm

In this article, we outline two of the most prominent solutions for decentralized content publication and storage. These two solutions are IPFS (InterPlanetary File System) and Ethereum’s Swarm.

With the advent of blockchain applications in recent years, the Internet has seen a boom of decentralization. The developer world has suddenly gotten the sense of the green pastures that lie beyond the existing paradigm, based on the server–client model, susceptible to censoring at the whims of different jurisdictions, cloud provider monopolies, etc.

Turkey’s ban of Wikipedia and The “Great Firewall of China” are just some examples. Dependence on internet backbones, hosting companies, cloud providers like Amazon, search providers like Google — things like these have betrayed the initial internet promise of democratization of knowledge and access to information.

As this article on TechCrunch said two years ago, the original idea of the internet was “to build a common neutral network which everyone can participate in equally for the betterment of humanity”. This idea is now reemerging as Web 3.0, a term that now means the decentralized web — an architecture that is censorship proof, and without a single point of failure.

Dapps

As Gavin Wood, one of Ethereum’s founders, put it in his seminal 2014 work on Web 3.0, there is an “increasing need for a zero-trust interaction system”. He named it the “post-Snowden web”, and described four components of it: “static content publication, dynamic messages, trustless transactions and an integrated user-interface”.

Decentralized Storage and Publication

Before the advent of cryptocurrency — and the Ethereum platform in particular — we had other projects that aimed to develop distributed applications.

  • Freenet: a peer to peer (p2p) platform created to be censorship resistant — with its distributed data store — was first published in 2000.
  • Gnutella network: enabled peer-to-peer file sharing with its many client incarnations.
  • BitTorrent: was developed and published as early as 2001, and Wikipedia reports that, in 2004, it was “responsible for 25% of all Internet traffic”. The project is still here, and is technically impressive, with new projects copying its aspects — hash-based content addressing, DHT distributed databases, Kademlia lookups …
  • Tribler: as a BitTorrent client, it added some other features for users, such as onion routed p2p communication.

Both of our aforementioned projects build on the shoulders of these giants.

IPFS

IPFS logo

The InterPlanetary File System was developed by Juan Benet, and was first published in 2014. It aims to be a protocol, and a distributed file system, replacing HTTP. It’s a mixture of technologies, and it’s pretty low level — meaning that it leaves a lot to projects or layers built on top of it.

An introduction to the project by Juan Benet from 2015 can be found in this YouTube video.

IPFS aims to offer the infrastructure for reinventing the Internet, which is a huge goal. It uses content addressing — naming and lookup of content by its cryptographic hash, like Git, and like BitTorrent, which we mentioned. This technique enables us to ensure authenticity of content regardless of where it sits, and the implications of this are huge. We can, for example, have the same website hosted in ten, or hundreds of computers around the world — and load it knowing for sure that it’s the original, authentic content just by its hash-based address.
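To get a feel for content addressing, this is roughly how adding and retrieving a file looks with the IPFS command-line client (the hash below is a placeholder; every piece of content gets its own unique hash):

$ ipfs add hello.txt
added QmHash... hello.txt

$ ipfs cat QmHash...
Hello, decentralized web!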

This means that important websites — or websites that may get censored by governments or other actors — don’t depend on any single point, like servers, databases, or even domain registrars. This, further, means that they can’t be easily extinguished.

The Web becomes resistant.

One more consequence of this is that we don’t, as end users, have to depend on internet backbones and perfect connectivity to a remote data center on another continent hosting our website. Countries can get completely cut off, but we can still load the same, authentic content from some machine nearby, still certain of its authenticity. It can be content cached on a PC in our very neighborhood.

With IPFS, it would become difficult, if not impossible, for Turkey to censor Wikipedia, because Wikipedia wouldn’t be relying on certain IP addresses. Authentic Wikipedia could be hosted on hundreds or thousands of local websites within Turkey itself, and this network of websites could be completely dynamic.

IPFS has no single point of failure, and nodes don’t need to trust each other.

Content addressing is algorithmic, and that makes content uncensorable. It also improves efficiency: we don’t need to request a website, video, or music file from a remote server if it’s cached somewhere close to us.

This can eliminate request latency. And anyone who’s ever optimized website speed knows that network latency is a factor.

By using the aforementioned Kademlia algorithm, the network becomes robust, and we don’t rely on domain registrars/nameservers to find content. Lookup is built into the network itself. It can’t be taken down. Some of the major attacks by hackers in recent years were attacks on nameservers. An example is this particular attack in 2016, which took down Spotify, Reddit, NYT and Wired, and many others.

IPFS is being developed by Protocol Labs as an open-source project. On top of it, the company is building an incentivization layer — Filecoin — which has had an initial coin offering in Summer 2017, and has collected around $260 million (if we count pre-ICO VC investment) — perhaps the biggest amount collected by an ICO so far. Filecoin itself is not at production-stage yet, but IPFS is being used by production apps like OpenBazaar. There’s also IPFS integration in the Brave browser, and more is coming …

The production video-sharing platform d.tube uses IPFS for storage, while relying on the Steem blockchain for monetization, voting, etc.

It’s a web app that’s waiting for wider adoption, but it’s currently in production stage, and works without ads.

Although IPFS is considered an alpha-stage project, just like Swarm, IPFS is serving real-world projects.

Other notable projects using IPFS are Bloom and Decentraland — an AR game being built on top of the Ethereum blockchain and IPFS. Peerpad is an open-source app built to be used as an example for developers developing on IPFS.

The post Decentralized Storage and Publication with IPFS and Swarm appeared first on SitePoint.


Source: Sitepoint

Cheerful Desktop Wallpapers To Kick Off June (2018 Edition)


Cosima Mielke

We all need a little inspiration boost every once in a while. And, well, whatever your strategy to get your creative juices flowing might be, sometimes inspiration lies closer than you think. As close as your desktop even.

For more than nine years, we’ve been asking the design community to create monthly desktop wallpaper calendars: wallpapers that are a bit more distinctive than what you’ll usually find out there, bound to provide a little in-between inspiration spark. Of course, it wasn’t any different this time around.

This post features wallpapers for June 2018. All of them come in versions with and without a calendar, so it’s up to you to decide if you want to have the month at a glance or keep things simple. As a bonus goodie, we also collected some timeless June favorites from past years for this edition (please note that they thus don’t come with a calendar). A big thank-you to all the artists who have submitted their wallpapers and are still diligently continuing to do so. It’s time to freshen up your desktop!

Please note that:

  • All images can be clicked on and lead to the preview of the wallpaper,
  • You can feature your work in our magazine by taking part in our Desktop Wallpaper Calendar series. We are regularly looking for creative designers and artists to be featured on Smashing Magazine. Are you one of them?

Travel Time

“June is our favorite time of the year because the keenly anticipated sunny weather inspires us to travel. Stuck at the airport, waiting for our flight but still excited about wayfaring, we often start dreaming about the new places we are going to visit. Where will you travel to this summer? Wherever you go, we wish you a pleasant journey!” — Designed by PopArt Studio from Serbia.

Travel Time

Tropical Vibes

“With summer just around the corner, I’m already feeling the tropical vibes.” — Designed by Melissa Bogemans from Belgium.

Tropical Vibes

No Matter What, I’ll Be There

“June is the month when we celebrate our dads and to me the best ones are the ones that even if they don’t like what you are doing they’ll be by your side.” — Designed by Maria Keller from Mexico.

No Matter What, I’ll Be There

We Sailed Across The Summer Sea

“Summer is the season of family trips, outings and holidays. What better way is there to celebrate Father’s Day, than to enjoy the summer sailing across with father beloved.” — Designed by The Whisky Corporation from Singapore.

We Sailed Across The Summer Sea

Sounds Of Rain

“When it rains the world softens and listens to the gentle pattering of the rain. It gets covered in a green vegetation and puddles reflect the colors of nature now looking fresh and cleansed. The lullaby of the rain is accustomed to creaky frogs and soft chirping birds taking you to ecstasy.” — Designed by Aviv Digital from India.

Sounds Of Rain

SmashingConf Toronto

“What’s the best way to learn? By observing designers and developers working live. For our new conference in Toronto, all speakers aren’t allowed to use any slides at all. Welcome SmashingConf #noslides, a brand new conference in Toronto, full of interactive live sessions, showing how web designers design and how web developers build — including setup, workflow, design thinking, naming conventions and everything in-between.” — Designed by Ricardo Gimenes from Brazil.

Smashing Conference Toronto 2018

Are You Ready?

“The excitement is building, the slogans are ready, the roaring and the cheering make their way… Russia is all set for the football showdown. Are you ready?” — Designed by Sweans from London.

Are You Ready?

Summer Surf

“Summer vibes…” — Designed by Antun Hirsman from Croatia.

Summer Surf

Stop Child Labor

“‘Child labor and poverty are inevitably bound together, and if you continue to use the labor of children as the treatment for the social disease of poverty, you will have both poverty and child labor to the end of time.’ (Grace Abbott)” — Designed by Dipanjan Karmakar from India.

Stop Child Labor

June Best-Of

Some things are too good to be forgotten. That’s why we dug out some June favorites from our archives. Please note that these don’t come with a calendar because of this. Enjoy!

Oh, The Places You Will Go!

“In celebration of high school and college graduates ready to make their way in the world!” — Designed by Bri Loesch from the United States.

Oh the places you will go

Join The Wave

“The month of warmth and nice weather is finally here. We found inspiration in the World Oceans Day which occurs on June 8th and celebrates the wave of change worldwide. Join the wave and dive in!” — Designed by PopArt Studio from Serbia.

Join The Wave

Solstice Sunset

“June 21 marks the longest day of the year for the Northern Hemisphere – and sunsets like these will be getting earlier and earlier after that!” — Designed by James Mitchell from the United Kingdom.

Solstice Sunset

Midsummer Night’s Dream

“The summer solstice in the northern hemisphere is nigh. Every June 21 we celebrate the longest day of the year and, very often, end up dancing like pagans. Being landlocked, we here in Serbia can only dream about tidal waves and having fun at the beach. What will your Midsummer Night’s Dream be?” — Designed by PopArt Studio from Serbia.

Midsummer Night’s Dream

Flamingood Vibes Only

“I love flamingos! They give me a happy feeling that I want to share with the world.” — Designed by Melissa Bogemans from Belgium.

Flamingood Vibes Only

Papa Merman

“Dream away for a little while to a land where June never ends. Imagine the ocean, feel the joy of a happy and carefree life with a scent of shrimps and a sound of waves all year round. Welcome to the world of Papa Merman!” — Designed by GraphicMama from Bulgaria.

Papa Merman

Strawberry Fields

Designed by Nathalie Ouederni from France.

Strawberry Fields

Fishing Is My Passion!

“The month of June is a wonderful time to go fishing, the most soothing and peaceful activity.” — Designed by Igor Izhik from Canada.

Fishing Is My Passion!

Summer Time

“Summer is coming so I made this simple pattern with all my favorite summer things.” — Designed by Maria Keller from Mexico.

Summer Time

The Amazing Water Park

“Summer is coming. And it’s going to be like an amazing water park, full of stunning encounters.” — Designed by Netzbewegung / Mario Metzger from Germany.

The Amazing Water Park

Gravity

Designed by Elise Vanoorbeek (Doud Design) from Belgium.

Gravity

Periodic Table Of HTML5 Elements

“We wanted an easy reference guide to help navigate through HTML5 and that could be updateable” — Designed by Castus from the UK.

Periodic Table Of HTML5 Elements

Ice Creams Away!

“Summer is taking off with some magical ice cream hot air balloons.” — Designed by Sasha Endoh from Canada

Ice Creams Away!

Lavender Is In The Air!

“June always reminds me of lavender — it just smells wonderful and fresh. For this wallpaper I wanted to create a simple, yet functional design that featured… you guessed it… lavender!” — Designed by Jon Phillips from Canada.

Lavender Is In The Air!

Ice Cream June

“For me, June always marks the beginning of summer! The best way to celebrate summer is of course ice cream, what else?” — Designed by Tatiana Anagnostaki from Greece.

Ice cream June


Source: Smashing Magazine

ES6 (ES2015) and Beyond: Understanding JavaScript Versioning

As programming languages go, JavaScript’s development has been positively frantic in the last few years. With each year now seeing a new release of the ECMAScript specification, it’s easy to get confused about JavaScript versioning, which version supports what, and how you can future-proof your code.

To better understand the how and why behind this seemingly constant stream of new features, let’s take a brief look at the history of JavaScript and JavaScript versioning, and find out why the standardization process is so important.

The Early History of JavaScript Versioning

The prototype of JavaScript was written in just ten days in May 1995 by Brendan Eich. He was initially recruited to implement a Scheme runtime for Netscape Navigator, but the management team pushed for a C-style language that would complement the then recently released Java.

JavaScript made its debut in version 2 of Netscape Navigator in December 1995. The following year, Microsoft reverse-engineered JavaScript to create their own version, calling it JScript. JScript shipped with version 3 of the Internet Explorer browser, and was almost identical to JavaScript — even including all the same bugs and quirks — but it did have some extra Internet Explorer-only features.

The Birth of ECMAScript

The necessity of ensuring that JScript (and any other variants) remained compatible with JavaScript motivated Netscape and Sun Microsystems to standardize the language. They did this with the help of the European Computer Manufacturers Association, who would host the standard. The standardized language was called ECMAScript to avoid infringing on Sun’s Java trademark — a move that caused a fair deal of confusion. Eventually ECMAScript was used to refer to the specification, and JavaScript was (and still is) used to refer to the language itself.

The working group in charge of JavaScript versioning and maintaining ECMAScript is known as Technical Committee 39, or TC39. It’s made up of representatives from all the major browser vendors such as Apple, Google, Microsoft and Mozilla, as well as invited experts and delegates from other companies with an interest in the development of the Web. They have regular meetings to decide on how the language will develop.

When JavaScript was standardized by TC39 in 1997, the specification was known as ECMAScript version 1. Subsequent versions of ECMAScript were initially released on an annual basis, but ultimately became sporadic due to the lack of consensus and the unmanageably large feature set surrounding ECMAScript 4. This version was thus terminated and downsized into 3.1, but wasn’t finalized under that moniker, instead eventually evolving into ECMAScript 5. This was released in December 2009, 10 years after ECMAScript 3, and introduced a JSON serialization API, Function.prototype.bind, and strict mode, amongst other capabilities. A maintenance release to clarify some of the ambiguity of the latest iteration, 5.1, was released two years later.


Do you want to dive deeper into the history of JavaScript? Then check out chapter one of JavaScript: Novice to Ninja, 2nd Edition.


ECMAScript 2015 and the Resurgence of Yearly Releases

With the resolution of TC39’s disagreement resulting from ECMAScript 4, Brendan Eich stressed the need for nearer-term, smaller releases. The first of these new specifications was ES2015 (originally named ECMAScript 6, or ES6). This edition was a large but necessary foundation to support the future, annual JavaScript versioning. It includes many features that are well-loved by many developers today, such as classes, arrow functions, template literals, destructuring, promises, and block-scoped variable declarations with let and const.

ES2015 was the first offering to follow the TC39 process, a proposal-based model for discussing and adopting elements.

Continue reading ES6 (ES2015) and Beyond: Understanding JavaScript Versioning on SitePoint.


Source: Sitepoint

An Introduction to Sails.js

Sails.js is a Node.js MVC (model–view–controller) framework that follows the “convention over configuration” principle. It’s inspired by the popular Ruby on Rails web framework, and allows you to quickly build REST APIs, single-page apps and real-time (WebSockets-based) apps. It makes extensive use of code generators that allow you to build your application with less writing of code—particularly of common code that can be otherwise scaffolded.

The framework is built on top of Express.js, one of the most popular Node.js libraries, and Socket.io, a JavaScript library/engine for adding real-time, bidirectional, event-based communication to applications. At the time of writing, the official stable version of Sails.js is 0.12.14, which is available from npm. Sails.js version 1.0 has not officially been released, but according to Sails.js creators, version 1.0 is already used in some production applications, and they even recommend using it when starting new projects.

Main Features

Sails.js has many great features:

  • it’s built on Express.js
  • it has real-time support with WebSockets
  • it takes a “convention over configuration” approach
  • it has powerful code generation, thanks to Blueprints
  • it’s database agnostic thanks to its powerful Waterline ORM/ODM
  • it supports multiple data stores in the same project
  • it has good documentation.

There are currently a few important cons, such as:

  • no support for JOIN query in Waterline
  • no support for SQL transactions until Sails v1.0 (in beta at the time of writing)
  • until version 1.0, it still uses Express.js v3, which is EOL (end of life)
  • development is very slow.

Sails.js vs Express.js

Software development is all about building abstractions. Sails.js is a high-level abstraction layer on top of Express.js (which itself is an abstraction over Node’s HTTP modules) that provides routing, middleware, file serving and so on. It also adds a powerful ORM/ODM, the MVC architectural pattern, and a powerful generator CLI (among other features).

You can build web applications using Node’s low-level HTTP service and other utility modules (such as the filesystem module) but it’s not recommended except for the sake of learning the Node.js platform. You can also take a step up and use Express.js, which is a popular, lightweight framework for building web apps.

You’ll have routing and other useful constructs for web apps, but you’ll need to take care of pretty much everything from configuration, file structure and code organization to working with databases.

Express doesn’t offer any built-in tool to help you with database access, so you’ll need to bring together the required technologies to build a complete web application. This is what’s called a stack. Web developers, using JavaScript, mostly use the popular MEAN stack, which stands for MongoDB, ExpressJS, AngularJS and Node.js.

MongoDB is the preferred database system among Node/Express developers, but you can use any database you want. The most important point here is that Express doesn’t provide any built-in APIs when it comes to databases.

The Waterline ORM/ODM

One key feature of Sails.js is Waterline, a powerful ORM (object relational mapper) for SQL-based databases and ODM (object document mapper) for NoSQL document-based databases. Waterline abstracts away all the complexities when working with databases and, most importantly, with Waterline you don’t have to make the decision of choosing a database system when you’re just starting development. It also doesn’t intimidate you when your client hasn’t yet decided on the database technology to use.

You can start building your application without a single line of configuration. In fact, you don’t have to install a database system at all initially. Thanks to the built-in sails-disk NeDB-based file database, you can transparently use the file system to store and retrieve data for testing your application functionality.

Once you’re ready and have decided on the database system you want to use for your project, you can simply switch databases by installing the relevant adapter. Waterline has official adapters for popular relational database systems such as MySQL and PostgreSQL, and for NoSQL databases such as MongoDB and Redis, and the community has also built numerous adapters for other popular database systems such as Oracle, MSSQL, DB2, SQLite, CouchDB and Neo4j. If you can’t find an adapter for the database system you want to use, you can develop your own custom adapter.

Waterline abstracts away the differences between different database systems and allows you to have a normalized interface for your application to communicate with any supported database system. You don’t have to work with SQL or any low-level API (for NoSQL databases) but that doesn’t mean you can’t (at least for SQL-based databases and MongoDB).
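To illustrate that normalized interface, here’s what a simple Waterline model and a couple of queries might look like in a Sails 0.12 project (the Contact model and its attributes are made up for this example):

// api/models/Contact.js
module.exports = {
  attributes: {
    name: { type: 'string', required: true },
    email: { type: 'string', unique: true }
  }
};

// In a controller action: the same calls work whichever adapter is configured.
Contact.create({ name: 'Ada', email: 'ada@example.com' }).exec(function (err, created) {
  if (err) { return console.error(err); }

  Contact.find({ name: 'Ada' }).exec(function (err, contacts) {
    if (err) { return console.error(err); }
    console.log(contacts);
  });
});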

There are situations when you need to write custom SQL, for example, for performance reasons, for working with complex database requirements, or for accessing database-specific features. In this case, you can use the .query() method available only on the Waterline models that are configured to use SQL systems (you can find more information about query() from the docs).
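For example, against a SQL adapter a raw query might be issued like this (the table, columns and model are hypothetical):

Contact.query('SELECT name, email FROM contact WHERE name = ?', ['Ada'], function (err, rawResult) {
  if (err) { return console.error(err); }
  console.log(rawResult); // adapter-specific raw result, not Waterline records
});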

Since different database systems share some features and have others that are database-specific, the Waterline ORM/ODM serves you best when you constrain yourself to the common features. Also, if you use raw SQL or native MongoDB APIs, you’ll lose many of the features of Waterline, including the ability to switch between different databases.

Getting Started with Sails.js

Now that we’ve covered the basic concepts and features of Sails.js, let’s see how you can quickly get started using Sails.js to create new projects and lift them.

Prerequisites

Before you can use Sails.js, you need to have a development environment with Node.js (and npm) installed. You can install both of them by heading to the official Node.js website and downloading the right installer for your operating system.

Node.js Download page

Also make sure to install whatever database management system you want to use with Sails.js (either a relational or a NoSQL database). If you’re not interested in using a full-fledged database system at this point, you can still work with Sails.js thanks to sails-disk, which gives you a file-based database out of the box.

Installing the Sails.js CLI

After satisfying these requirements, you can head over to your terminal (Linux and macOS) or command prompt (Windows) and install the Sails.js command-line utility globally from npm:

sudo npm install sails -g

If you want to install the latest 1.0 version to try the new features, you need to use the beta version:

npm install sails@beta -g

You may or may not need sudo to install packages globally depending on your npm configuration.

Scaffolding a Sails.js Project

After installing the Sails.js CLI, you can go ahead and scaffold a new project with one command:

sails new sailsdemo

This will create a new folder for your project named sailsdemo in your current directory. You can also scaffold your project files inside an existing folder with this:

sails new .

You can scaffold a new Sails.js project without a front end with this:

sails new sailsdemo --no-frontend

Find more information about the features of the CLI from the docs.

The Anatomy of a Sails.js Project

Here’s a screenshot of a project generated using the Sails.js CLI:

A project generated using the Sails.js CLI

A Sails.js project is a Node.js module with a package.json and a node_modules folder. You may also notice the presence of Gruntfile.js. Sails.js uses Grunt as a build tool for building front-end assets.

If you’re building an app for the browser, you’re in luck. Sails ships with Grunt — which means your entire front-end asset workflow is completely customizable, and comes with support for all of the great Grunt modules which are already out there. That includes support for Less, Sass, Stylus, CoffeeScript, JST, Jade, Handlebars, Dust, and many more. When you’re ready to go into production, your assets are minified and gzipped automatically. You can even compile your static assets and push them out to a CDN like CloudFront to make your app load even faster. (You can read more about these points on the Sails.js website.)

You can also use Gulp or Webpack as your build system instead of Grunt, with custom generators. See the sails-generate-new-gulp and sails-webpack projects on GitHub.

For more community generators, see this documentation page on the Sails.js site.

The project contains many configuration files and folders. Most of them are self-explanatory, but let’s go over the ones you’ll be working with most of the time:

  • api/controllers: this is the folder where controllers live. Controllers correspond to the C part in MVC. It’s where the business logic for your application exists.
  • api/models: the folder where models exist. Models correspond to the M part of MVC architecture. This is where you need to put classes or objects that map to your SQL/NoSQL data.
  • api/policies: this is the folder where you put the policies for your application.
  • api/responses: this folder contains server response logic such as functions to handle the 404 and 500 responses, etc.
  • api/services: this is where your app-wide services live. A service is a global class encapsulating common logic that can be used across many controllers.
  • ./views: this folder contains the templates used for rendering views. By default it holds EJS templates, but you can configure any Express-supported engine, such as Jade, Handlebars, Mustache or Underscore.
  • ./config: this folder contains many configuration files that let you tune every detail of your application, such as CORS, CSRF protection, i18n, HTTP, models, views, logging and policies. One important file that you’ll use frequently is config/routes.js, where you create your application routes and map them to actual actions in the controllers or directly to views (see the sketch after this list).
  • ./assets: this is the folder where you place any static files (CSS, JavaScript, images, etc.) for your application.
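To make that last point more concrete, here’s a hedged sketch of a few custom entries in config/routes.js. The ContactController, its actions and the view name are hypothetical.

// config/routes.js -- map URLs to controller actions, or directly to views
module.exports.routes = {
  // Render views/homepage.ejs for the root URL
  'GET /': { view: 'homepage' },

  // Map HTTP verb + path to actions in api/controllers/ContactController.js
  'GET /contacts': 'ContactController.list',
  'POST /contacts': 'ContactController.create',
  'DELETE /contacts/:id': 'ContactController.destroy'
};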

Continue reading: An Introduction to Sails.js


Source: Sitepoint

Building Apps and Services with the Hapi.js Framework

Hapi.js is described as “a rich framework for building applications and services”. Hapi’s smart defaults make it a breeze to create JSON APIs, and its modular design and plugin system allow you to easily extend or modify its behavior.

The recent release of version 17.0 has fully embraced async and await, so you’ll be writing code that appears synchronous but is non-blocking and avoids callback hell. Win-win.

The Project

In this article, we’ll be building the following API for a typical blog from scratch:

# RESTful actions for fetching, creating, updating and deleting articles
GET    /articles                articles#index
GET    /articles/:id            articles#show
POST   /articles                articles#create
PUT    /articles/:id            articles#update
DELETE /articles/:id            articles#destroy

# Nested routes for creating and deleting comments
POST   /articles/:id/comments   comments#create
DELETE /articles/:id/comments   comments#destroy

# Authentication with JSON Web Tokens (JWT)
POST   /authentications         authentications#create

The article will cover:

  • Hapi’s core API: routing, request and response
  • models and persistence in a relational database
  • routes and actions for Articles and Comments
  • testing a REST API with HTTPie
  • authentication with JWT and securing routes
  • validation
  • an HTML View and Layout for the root route /.

The Starting Point

Make sure you’ve got a recent version of Node.js installed; node -v should return 8.9.0 or higher.

Download the starting code with git:

git clone https://github.com/markbrown4/hapi-api.git
cd hapi-api
npm install

Open up package.json and you’ll see that the “start” script runs server.js with nodemon. This will take care of restarting the server for us when we change a file.

Run npm start and open http://localhost:3000/:

[{ "so": "hapi!" }]

Let’s look at the source:

// server.js
const Hapi = require('hapi')

// Configure the server instance
const server = Hapi.server({
  host: 'localhost',
  port: 3000
})

// Add routes
server.route({
  method: 'GET',
  path: '/',
  handler: () => {
    return [{ so: 'hapi!' }]
  }
})

// Go!
server.start().then(() => {
  console.log('Server running at:', server.info.uri)
}).catch(err => {
  console.log(err)
  process.exit(1)
})

The Route Handler

The route handler is the most interesting part of this code. Replace it with the code below, comment out the return lines one by one, and test the response in your browser.

server.route({
  method: 'GET',
  path: '/',
  handler: () => {
    // return [{ so: 'hapi!' }]
    return 123
    return `<h1><marquee>HTML <em>rules!</em></marquee></h1>`
    return null
    return new Error('Boom')
    return Promise.resolve({ whoa: true })
    return require('fs').createReadStream('index.html')
  }
})

To send a response, you simply return a value and Hapi will send the appropriate body and headers.

  • An Object will respond with stringified JSON and Content-Type: application/json
  • String values will be Content-Type: text/html
  • You can also return a Promise or Stream.

The handler function is often made async for cleaner control flow with Promises:

server.route({
  method: 'GET',
  path: '/',
  handler: async () => {
    let html = await Promise.resolve(`<h1>Google</h1>`)
    html = html.replace('Google', 'Hapi')

    return html
  }
})

It’s not always cleaner with async though. Sometimes returning a Promise is simpler:

handler: () => {
  return Promise.resolve(`<h1>Google</h1>`)
    .then(html => html.replace('Google', 'Hapi'))
}

We’ll see better examples of how async helps us out when we start interacting with the database.

The Model Layer

Like the popular Express.js framework, Hapi is a minimal framework that doesn’t provide any recommendations for the Model layer or persistence. You can choose any database and ORM that you’d like, or none — it’s up to you. We’ll be using SQLite and the Sequelize ORM in this tutorial to provide a clean API for interacting with the database.

SQLite comes pre-installed on macOS and most Linux distributions. You can check whether it’s installed with sqlite3 --version. If not, you can find installation instructions at the SQLite website.

Sequelize works with many popular relational databases such as Postgres or MySQL. We’ll be using SQLite here, so you’ll need to install both sequelize and the sqlite3 adapter:

npm install --save sequelize sqlite3

Let’s connect to our database and write our first table definition for articles:

// models.js
const path = require('path')
const Sequelize = require('sequelize')

// configure connection to db host, user, pass - not required for SQLite
const sequelize = new Sequelize(null, null, null, {
  dialect: 'sqlite',
  storage: path.join('tmp', 'db.sqlite') // SQLite persists its data directly to file
})

// Define the Article model with a title attribute of type string and a body attribute of type text.
// By default, every table also gets id, createdAt and updatedAt columns.
const Article = sequelize.define('article', {
  title: Sequelize.STRING,
  body: Sequelize.TEXT
})

// Create table
Article.sync()

module.exports = {
  Article
}

Let’s test out our new model by importing it and replacing our route handler with the following:

// server.js
const { Article } = require('./models')

server.route({
  method: 'GET',
  path: '/',
  handler: () => {
    // try commenting these lines out one at a time
    return Article.findAll()
    return Article.create({ title: 'Welcome to my blog', body: 'The happiest place on earth' })
    return Article.findById(1)
    return Article.update({ title: 'Learning Hapi', body: `JSON API's a breeze.` }, { where: { id: 1 } })
    return Article.findAll()
    return Article.destroy({ where: { id: 1 } })
    return Article.findAll()
  }
})

If you’re familiar with SQL or other ORMs, the Sequelize API should be self-explanatory. It’s built with Promises, so it works great with Hapi’s async handlers too.

Note: using Article.sync() to create the tables, or Article.sync({ force: true }) to drop and re-create them, is fine for the purposes of this demo. If you want to use this in production, you should check out sequelize-cli and write migrations for any schema changes.
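If you do go that way, the workflow is roughly the following (a sketch only; the migration name is made up, so check the sequelize-cli documentation for the exact steps):

# Install the CLI and generate a skeleton migration (names are illustrative)
npm install --save-dev sequelize-cli
npx sequelize-cli init
npx sequelize-cli migration:generate --name create-articles

# Apply pending migrations, or roll the last one back
npx sequelize-cli db:migrate
npx sequelize-cli db:migrate:undo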

Our RESTful Actions

Let’s build the following routes:

GET     /articles        fetch all articles
GET     /articles/:id    fetch article by id
POST    /articles        create article with `{ title, body }` params
PUT     /articles/:id    update article with `{ title, body }` params
DELETE  /articles/:id    delete article by id

Add a new file, routes.js, to separate the server config from the application logic:

// routes.js
const { Article } = require('./models')

exports.configureRoutes = (server) => {
  // server.route accepts an object or an array
  return server.route([{
    method: 'GET',
    path: '/articles',
    handler: () => {
      return Article.findAll()
    }
  }, {
    method: 'GET',
    // The curly braces are how we define params (variable path segments in the URL)
    path: '/articles/{id}',
    handler: (request) => {
      return Article.findById(request.params.id)
    }
  }, {
    method: 'POST',
    path: '/articles',
    handler: (request) => {
      const article = Article.build(request.payload.article)

      return article.save()
    }
  }, {
    // method can be an array
    method: ['PUT', 'PATCH'],
    path: '/articles/{id}',
    handler: async (request) => {
      const article = await Article.findById(request.params.id)

      // instance.update() sets the new values and saves them in one call
      return article.update(request.payload.article)
    }
  }, {
    method: 'DELETE',
    path: '/articles/{id}',
    handler: async (request) => {
      const article = await Article.findById(request.params.id)

      return article.destroy()
    }
  }])
}

Import and configure our routes before we start the server:

// server.js
const Hapi = require('hapi')
const { configureRoutes } = require('./routes')

const server = Hapi.server({
  host: 'localhost',
  port: 3000
})

// This function will allow us to easily extend it later
const main = async () => {
  await configureRoutes(server)
  await server.start()

  return server
}

main().then(server => {
  console.log('Server running at:', server.info.uri)
}).catch(err => {
  console.log(err)
  process.exit(1)
})
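With the server running, you can already exercise these routes from another terminal. Here’s a rough sketch using HTTPie (assuming it’s installed; the payloads are illustrative, and curl works just as well):

# Create an article, then list, fetch, update and delete it
http POST :3000/articles article:='{"title": "Hello Hapi", "body": "First post"}'
http GET :3000/articles
http GET :3000/articles/1
http PUT :3000/articles/1 article:='{"title": "Hello again", "body": "Updated body"}'
http DELETE :3000/articles/1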

Continue reading: Building Apps and Services with the Hapi.js Framework


Source: Sitepoint