WP_Query Arguments: Author, Search, Password, Permissions, Caching and Return Fields

So far in this series you’ve learned about a selection of arguments you can use with the WP_Query class to select posts by post type, category, tag, metadata, date, status, and much more.

In this final tutorial on WP_Query arguments, I’ll run through some less frequently used parameters which can give your queries even more flexibility.

The parameters we’ll cover here are for:

  • author
  • search
  • password
  • permissions
  • caching
  • return fields

Before we start, let’s have a quick recap on how you code your arguments with WP_Query.

A Recap on How Arguments Work in WP_Query

When you code WP_Query in your themes or plugins, you need to include four main elements:

  • the arguments for the query, using parameters which will be covered in this tutorial
  • the query itself
  • the loop
  • finishing off: closing if and while tags and resetting post data

In practice this will look something like the following:
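A minimal skeleton of those four elements looks like this (the argument values are placeholders):

```php
$args = array(
	// query arguments go here
);

// the query itself
$query = new WP_Query( $args );

// the loop
if ( $query->have_posts() ) {
	while ( $query->have_posts() ) {
		$query->the_post();
		// output the post, e.g. the_title();
	}
}

// finishing off: reset post data
wp_reset_postdata();
```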

The arguments are what tell WordPress what data to fetch from the database, and it’s those that I’ll cover here. So all we’re focusing on here is the first part of the code:

As you can see, the arguments are contained in an array. You’ll learn how to code them as you work through this tutorial.

Coding Your Arguments

There is a specific way to code the arguments in the array, which is as follows:
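The pattern is single-quoted parameters and values joined by `=>` and separated by commas, along these lines:

```php
$args = array(
	'parameter1' => 'value',
	'parameter2' => 'value',
	'parameter3' => 'value',
);
```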

You must enclose the parameters and their values in single quotation marks, use => between them, and separate them with a comma. If you get this wrong, WordPress may not add all of your arguments to the query or you may get a white screen.

Author Parameters

There are four parameters you can use for querying by author:

  • author (int): use author ID
  • author_name (string): use ‘user_nicename’ (NOT name)
  • author__in (array): use author ID
  • author__not_in (array): exclude authors by ID

The first one, author, lets you query posts by one or more authors, by supplying the author’s ID:
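For example (the author ID here is illustrative):

```php
$args = array(
	// posts by the author whose ID is 2
	'author' => '2',
);
```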

The code above queries all posts by the author whose ID is 2.

You could also use a string to query posts by more than one author:
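A comma-separated string of IDs does this:

```php
$args = array(
	// posts by the authors with IDs 1 and 2
	'author' => '1,2',
);
```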

If you wanted to query by name, you would use the author_name parameter:
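For example (the nicename value is illustrative):

```php
$args = array(
	'author_name' => 'rachelmccollin',
);
```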

This parameter takes the value from the user_nicename field in the database as its argument, which is displayed as the nickname in the Users admin screen:

User admin screen showing nickname

Note that as this is editable by users, you’ll be safer to use the author parameter if you think your users might change it.

You can also query for posts by an array of authors:
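This uses the author__in parameter:

```php
$args = array(
	'author__in' => array( '1', '2' ),
);
```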

The above will query for posts by two authors: those with IDs 1 and 2, giving you the same results as the string I used with the author parameter above.

Finally, you can exclude posts by one or more authors using the author__not_in parameter. The argument below queries for all posts except those by author 1:
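Such a query looks like this:

```php
$args = array(
	// all posts except those by author 1
	'author__not_in' => array( '1' ),
);
```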

Or you can exclude multiple authors:
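For example:

```php
$args = array(
	'author__not_in' => array( '1', '2' ),
);
```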

Alternatively you can use the author parameter and use a minus sign in front of the author ID to exclude an author:
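For example, to exclude the author with ID 1:

```php
$args = array(
	'author' => '-1',
);
```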

Search Parameter

There is just one parameter for searching, which is s. Use it to query for posts which match a search term. So for example to query for posts containing the keywords ‘my favorite food’, you’d use this:
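The arguments for that search would be:

```php
$args = array(
	's' => 'my favorite food',
);
```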

You might find this useful to search for related posts with similar keywords, for example.

Password Parameters

You can use the two password parameters to query posts with and without password protection:

  • has_password (bool)
  • post_password (string)

The first parameter, has_password, lets you query for posts with or without password protection. So to query for posts which are password-protected:
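You would use:

```php
$args = array(
	'has_password' => true,
);
```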

And for posts which don’t have passwords:
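Simply flip the value:

```php
$args = array(
	'has_password' => false,
);
```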

You can also query by the password itself, using the post_password parameter:
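For example (the password value is illustrative):

```php
$args = array(
	// posts whose password is 'mypassword'
	'post_password' => 'mypassword',
);
```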

Permissions Parameter

There is just one parameter available for permissions, perm, which you use to query posts which the current user has permission to read. It takes the 'readable' value and is designed to be combined with other arguments.

So to query password protected posts and display them only if the user has the appropriate permissions, you would use this:
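Combining the two parameters gives you:

```php
$args = array(
	'has_password' => true,
	'perm'         => 'readable',
);
```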

Or to display draft posts if the current user has permission to view them, you’d use this:
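The arguments combine perm with a post status:

```php
$args = array(
	'post_status' => 'draft',
	'perm'        => 'readable',
);
```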

Caching Parameters

There are three caching parameters, which prevent the data retrieved by the query from being added to the cache:

  • cache_results (boolean): post information cache
  • update_post_meta_cache (boolean): post meta information cache
  • update_post_term_cache (boolean): post term information cache

The default value of all three is true: you don’t need to use them if you want the data to be added to the cache.

So to display all posts of the product post type but not add post information to the cache, you’d use this:
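The arguments for that would be:

```php
$args = array(
	'post_type'     => 'product',
	'posts_per_page' => -1,
	'cache_results' => false,
);
```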

Normally you shouldn’t use these parameters, as it’s good practice to add post data to the cache. However you might sometimes want to retrieve posts so that you can just use some of the post data, in which case you don’t need the rest of the post data in the cache. An example might be when you just want to output a list of post titles with links, in which case you don’t need the post term data or metadata to be added to the cache:
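For a simple list of linked titles, you could switch off the meta and term caches like this:

```php
$args = array(
	'update_post_meta_cache' => false,
	'update_post_term_cache' => false,
);
```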

Return Fields Parameter

You can use the fields parameter to specify which fields to return from your query. This can save returning data from fields in the database which you don’t need when outputting the data in your loop.

The default is to return all fields, but you have two options with the fields parameter to restrict this. First, the 'ids' argument:
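It looks like this:

```php
$args = array(
	'fields' => 'ids',
);
```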

This would just return an array of post IDs and no other fields. If you wanted to output anything in your loop (such as the post title), you’d then have to use functions such as get_the_title( $post->ID ); to output the title, which would be a long-winded way of going about things.

The other argument you can use fetches an associative array of post IDs with their parent post IDs:
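The argument is 'id=>parent':

```php
$args = array(
	'fields' => 'id=>parent',
);
```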

You would use this to query for posts according to your other arguments, together with the ID of each post’s parent.

Summary

This part of the series on WP_Query introduces the final set of parameters for the WP_Query class. You can use these to query posts by author, password protected status or the password itself and search terms, and to set whether the results of the query are added to the cache and which fields are returned by the query.

In the next part of this series, you’ll see some worked examples of using WP_Query in your themes or plugins. 


Source: Nettuts Web Development

The Beginners Guide to WooCommerce: Customer Reports

Customers are the backbone of any business, especially when you run an online store. As contact with your customers is virtual in this case, it’s critical to be able to monitor, track, and reward them. To do that, it is important for online store owners to keep a close tab on the activities of the customers who visit their store and purchase items. WooCommerce provides online store owners with a pretty amazing reports option for their customers, which is the topic of today’s article, where I plan to explain the section of Customers reports. So, let’s get started.

Customers Reports

When you enter the section of reports in WooCommerce, you get to see four different tabs for different kinds of reports. The first tab is for the Orders reports, which I have already discussed in my previous articles. The next tab is for the Customers reports, and here online store owners can find every bit of detail about the customers who have made a purchase from the store over a specific time period.

Customers Reports

An online store owner can find this part of the plugin via WooCommerce > Reports > Customers.

Customers reports WooCommerce

In this section there are two additional sub-sections for reports which help in providing more filtered and enhanced results. These sub-sections are:

  1. Customers vs. Guests
  2. Customer List

Online store owners will find various sets of details in each of these sub-sections, which I will explain shortly. Other than this, the remaining layout is very similar to the one I explained in the section on Orders reports. For example, you can see the customers’ activities with respect to the year, last month, this month, the last 7 days, and a custom date range.

Having said that, let’s now jump to the details of the sub-sections which are found here.

Customers vs. Guests

There are basically two types of people who visit any online store. There are the customers, who have actually signed up and purchased something. Then there are the guests, who haven’t signed up but have taken a tour of your store and purchased something.

So, given this diversity in the nature of your clients, it helps to be able to view each of these two types separately. This is the actual purpose of Customers vs. Guests.

This way you can closely monitor how many customers are visiting your store and what efforts you can make to increase the number of signups, so that your guests are converted into customers. Why? Because once a potential client has signed up at your store, chances are he or she will revisit the store and buy again and again. That means the more people who sign up at your store, the more profit and sales you can expect.

Customers vs Guests

The above figure shows a Customers vs. Guests report for a custom date, i.e. 2015-01-20. Here you can see that during this time period two signups were made, which is obviously not very impressive, though this is a demo. So, store owners should offer incentives and compelling call-to-action buttons to get off to a good start in winning more signups. Likewise, you can study the reports for the other time filters as well.

Customer List

Moving on to the next sub-section, which is for the Customer List. As its name explains, this option allows online store owners to view the list of all their customers, with all their details in one place.

This list is very important, as it shows all your top customers in terms of how much they are spending. This way you can prioritize them when it comes to showing your loyalty towards them which can be in the form of awarding coupons and discounts.

Customer List

The customer details which an online store owner can view are all arranged in the first row, below which all the customers are stacked accordingly. These details are:

  • Name (Last, First): Displays the first and last name of the customer.
  • Username: The actual username of the customer.
  • Email: Active email ID of the customers with which they have signed up.
  • Location: The place the customer is from.
  • Orders: No. of orders placed so far.
  • Spent: Total amount which a customer has spent so far.
  • Last order: Date on which the last order was placed.
  • Actions: Here there are two buttons: one for Edit and the other for View orders. Edit allows you to change the customer’s personal details, while View orders directs you to the orders page in WooCommerce for that particular customer.

Conclusion

This completes the section of Customers reports in WooCommerce. I will discuss the next set of reports in my upcoming article. Till then, if you have any queries relevant to today’s article, you can comment in the field below.


Source: Nettuts Web Development

The Beginners Guide to WooCommerce: Order Reports – Part 4

So far in this series I have discussed the first three of the four sub-sections which an online store owner finds in the Orders reports. The fourth and last option deals with the reports which are categorized on the basis of Coupons by date. By now I am sure you are familiar with the importance of coupons in any e-commerce business. Broadly speaking, on one side coupons can be termed a good way to reward your customers, but from a store owner’s perspective they can be a source of loss, as the products are sold at a discounted rate. Considering their importance, WooCommerce has reserved an entire section of reports for coupons. This section can prove helpful for summing up the benefits and overall results that certain coupon-based promotions brought to your business. So, let’s begin with the details that can be viewed here.

Coupons by date

As the name suggests, Coupons by date will display all the details regarding the discounts and the total number of coupons which you gave to your customers during a specific period of time. This sub-section can be called analogous to the first one, i.e. Sales by date, which also displayed the reports date-wise, though here the date-wise sales are the ones which were discounted through coupons.

Orders reports

You can find this part of the plugin via WooCommerce > Reports > Orders > Coupons by date.

Orders report in WooCommerce

Again, the layout is similar to the previous sections of reports. The same time filters can be seen here in the first row, and they can be used to display reports for a particular time span only. These time durations are Year, Last Month, This Month, Last 7 Days, and a Custom date range. Likewise, if you want to, you can click the Export CSV button displayed at the end of this first row to download the reports in .csv format.

Coupons by date

There are two rows in the column on the left, which are displayed one after the other. The entries of the first row are details like the total discounts and the total number of coupons used. The second row allows an online store owner to view the results on the basis of three different categories, which are:

  • Filter by coupon: Display the reports with respect to some exclusively selected coupons.
  • Most popular coupons: Display only those coupons which are popular among customers.
  • Most discounts: Display the coupons which led to the maximum discounts.

Filter by coupon

Filter by coupon

The first option is for Filter by coupon, which allows you to view the performance reports of an individual coupon exclusively. Here you can find a select box which shows All coupons written on it. This is the default setting, and to see the reports for all the coupons you simply click the Show button.

However, for a particular coupon, click on this select box and a drop-down menu will be displayed from which you can choose your desired coupon. Let’s discuss each of these cases separately.

All coupons

All coupons option

With this setting, the online store owner will get to see reports somewhat like this:

Last 7 days report for coupons

The figure above shows that in the Last 7 Days there was a discount of £300.00 with 43 coupons being used in total.

Some specific coupon

Report for some specific coupon

Let’s suppose I chose a coupon named as lmpz132 from the menu. 

Coupon report

Now the report will display the details about this particular coupon only.

Most Popular

Most popular coupon

When online store owners click to toggle this option, it expands into a new menu which lists the names of all those coupons which were popular among customers during a specific time period.

In the above case I am considering the time span of the Last 7 Days, during which two coupons, i.e. lmpz132 and free, were used the most. With each coupon name a number is displayed which shows how many times that coupon was used. The above figure shows that coupon lmpz132 was used twice, while free was used only once.

You can click any of these coupon names to see their reports individually.

Most Discount

Most discount coupons

The last row is for the Most Discount, and it will display the list of all those coupons which gave the maximum discount to the customers, in decreasing order.

The coupon names are listed in the same manner, i.e. the discount amount followed by the coupon name. In the figure shown above, coupon lmpz132 gave the maximum discount of £250.00 in the last 7 days.

The above-mentioned details correspond to the reports for the Last 7 Days. However, if you want to generate the coupon reports for the rest of the time filters, you can do so on your own by following the procedure which I have explained above.

Conclusion

Yay! We are done with the complete section of Orders reports in WooCommerce. Now you can enjoy each one of these reports, which in turn will benefit your business. Next up, I will start with the second tab, which deals with the Customers reports. Till then, if you have any queries regarding the Orders reports, you may ask in the comment field below.


Source: Nettuts Web Development

Building a Product CSV Import Tool in OpenCart – Part 1

Suppose you have a list of products for your store whose prices need to be updated on the site, but you hate manual jobs. You want a solution where you can directly import that list so that all of your products and prices are updated in just a few clicks. Yes! This is what we are going to do. By default, OpenCart doesn’t provide a facility to import products from any outside source, so we have to develop a module which can be used for the import.

In our previous article, we exported some products and their information (product id, model, product name, and price), so carrying on from our previous work, let’s start building an import tool!

What Are We Going To Do?

Today we are going to add an import system to OpenCart. As we know, OpenCart is a free e-commerce solution which also lets developers customize it accordingly. Let’s talk about shopping stores. On a daily basis, things change very often, e.g. a change in quantity, a change in price, a change in description, etc.

For any business to grow, it is quite essential to update the store, and an owner should be aware of their competitors, so things change! Now the question is: if someone is running a shop online and wants to change the prices of products, what will they do? For this purpose we provide a way through which users can alter things as per their business needs. So if you’re running an e-store and you want to make some alterations, this import system will help you out in the best way. In this first part we will be making a form/interface where an admin user can upload the CSV file. For reference, please visit Building a Product CSV Export Tool – OpenCart.

Step # 1: Adding a Link

  1. Navigate to (store_path)/admin/controller/catalog/product.php

  2. Find the code line: $this->data['products'] = array();

  3. Insert the given code after it:
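In an OpenCart 1.5.x admin controller, registering the link would look roughly like this (the importCSV route name is an assumption matching the method created in Step 3):

```php
// build the admin URL for the import action and pass it to the view
$this->data['import_csv'] = $this->url->link(
	'catalog/product/importCSV',
	'token=' . $this->session->data['token'],
	'SSL'
);
```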
     

(In the above code we parse the link so we can assign it to a button.)

Step # 2: Adding a Button in a View

  1. Go to (store_path)/admin/view/template/catalog/product_list.tpl
  2. You’ll find some HTML code.
  3. Just find the class called “button”.
  4. In that class you will see further buttons like “insert”, “copy”, etc.
  5. Just paste the given code on top of all the buttons:
  6. <a onclick="location = '<?php echo $import_csv; ?>'" class="button">Import CSV</a>
OpenCart Dashboard

Step #3: The Controller Function

As we created a button above, we’ll now create a public function in that same controller file, i.e. (store_path)/admin/controller/catalog/product.php. Make sure that the function name matches the name you mentioned above in the link. So we wrote a public function named importCSV().

Inside the function, there are a few lines of code that need to be written.

3.1 Setting Titles & Headings

3.2 Loading Model

The following line loads the model for our later use:

3.3 Action & Cancel URLs

As we are creating a form, we’re now going to parse the “Upload” and “Cancel” links for the user.

3.4 Breadcrumbs

3.5 Setting Up Template

We are going to tell the controller to render import_csv.tpl for the view.
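Putting sections 3.1 to 3.5 together, importCSV() might look roughly like the following sketch in OpenCart 1.5.x conventions (the heading text, breadcrumb entries, and hard-coded strings are illustrative; a real module would load them from language files):

```php
public function importCSV() {
	// 3.1 Setting titles and headings
	$this->document->setTitle('Import CSV');
	$this->data['heading_title'] = 'Import CSV';

	// 3.2 Loading the model for later use
	$this->load->model('catalog/product');

	// 3.3 Action and Cancel URLs for the form
	$this->data['action'] = $this->url->link('catalog/product/importCSV', 'token=' . $this->session->data['token'], 'SSL');
	$this->data['cancel'] = $this->url->link('catalog/product', 'token=' . $this->session->data['token'], 'SSL');

	// 3.4 Breadcrumbs
	$this->data['breadcrumbs'] = array();
	$this->data['breadcrumbs'][] = array(
		'text' => 'Home',
		'href' => $this->url->link('common/home', 'token=' . $this->session->data['token'], 'SSL'),
	);

	// 3.5 Setting up the template
	$this->template = 'catalog/import_csv.tpl';
	$this->children = array('common/header', 'common/footer');
	$this->response->setOutput($this->render());
}
```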

Step # 4: CSV Upload Form

Now we need to create another template which will be displayed after clicking the button:

  1. Simply follow the above path: (store_path)/admin/view/template/catalog
  2. Create a file named import_csv.tpl
  3. Open the template in your favorite IDE and paste the following simple HTML code.
OpenCart Catalog
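A minimal version of import_csv.tpl could be as simple as a file-upload form posting to the $action URL set in the controller (treat this as a sketch rather than the exact markup; the "import" field name is an assumption):

```php
<?php echo $header; ?>
<div class="box">
  <div class="heading">
    <h1><?php echo $heading_title; ?></h1>
  </div>
  <div class="content">
    <!-- enctype is required for file uploads -->
    <form action="<?php echo $action; ?>" method="post" enctype="multipart/form-data">
      <input type="file" name="import" />
      <input type="submit" value="Upload" />
      <a href="<?php echo $cancel; ?>" class="button">Cancel</a>
    </form>
  </div>
</div>
<?php echo $footer; ?>
```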

You can make your own template; the above code is a simple version of it.

Conclusion

In this part of the tutorial we followed some steps to create an “Import Tool”: we modified a template and created a form to provide better usability for the user. The purpose of splitting this article into two parts is to make clear the distinction between the “layout” and the “business logic” of this module. In the next article, the CSV will coordinate directly with the database and import the data accordingly. Thank you for taking an interest; please provide your suggestions and comments. Till the next article, Happy Coding!


Source: Nettuts Web Development

WP_Query Arguments: Status, Order and Pagination

In this part of the series on Mastering WP_Query, you’ll learn about some of the arguments you can use with the WP_Query class, namely those for:

  • status
  • order
  • pagination

You can use these arguments to fetch scheduled posts from the database, to query attachments, to amend the way posts are ordered and what they’re ordered by, to specify how many posts are displayed, and much more.

But before you can do this, you need to understand how arguments work in WP_Query.

A Recap on How Arguments Work in WP_Query

Before we start, let’s have a quick recap on how arguments work in WP_Query. When you code WP_Query in your themes or plugins, you need to include four main elements:

  • the arguments for the query, using parameters which will be covered in this tutorial
  • the query itself
  • the loop
  • finishing off: closing if and while tags and resetting post data

In practice this will look something like the following:

The arguments are what tell WordPress what data to fetch from the database, and it’s those that I’ll cover here. So all we’re focusing on here is the first part of the code:

As you can see, the arguments are contained in an array. You’ll learn how to code them as you work through this tutorial.

Coding Your Arguments

There is a specific way to code the arguments in the array, which is as follows:

You must enclose the parameters and their values in single quotation marks, use => between them, and separate them with a comma. If you get this wrong, WordPress may not add all of your arguments to the query or you may get a white screen.

Status Parameters

As you’ll know if you’ve ever converted a post’s status from Draft to Published, or maybe put it in the trash, WordPress assigns a status to each post. You can use the post_status parameter to query for posts with one or more statuses.

The arguments available are:

  • publish: A published post or page.
  • pending: Post is pending review.
  • draft: A post in draft status.
  • auto-draft: A newly created post, with no content.
  • future: A post to publish in the future.
  • private: Not visible to users who are not logged in.
  • inherit: A revision.
  • trash: Post is in trashbin.
  • any: Retrieves any status except those from post statuses with 'exclude_from_search' set to true (i.e. trash and auto-draft).

If you don’t specify a status in your query arguments, WordPress will default to publish; if the current user is logged in, it will also include posts with a status of private. If you run a query in the admin pages, WordPress will also include the protected statuses, which are future, draft, and pending by default.

So let’s say that you’re running an events site and you’re using a custom post type of event, using the publication date as the date that the event takes place. By default WordPress won’t display any events that haven’t happened yet: although you’ve scheduled them, their scheduled date is in the future so their post status is future.

To get around this you simply use these arguments:
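The query targets the future status on your custom post type:

```php
$args = array(
	'post_type'   => 'event',
	'post_status' => 'future',
);
```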

This will only display those events which haven’t happened yet, as those will have the future status. But if you also want to display events which have happened, you can use an array of post statuses to include more than one:
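An array covers both past and upcoming events:

```php
$args = array(
	'post_type'   => 'event',
	'post_status' => array( 'future', 'publish' ),
);
```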

The post_status parameter is essential when you’re querying for attachments. This is because they have a status of inherit, not publish. So to query for all attachments, you’d use this:
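The arguments for all attachments are:

```php
$args = array(
	'post_type'   => 'attachment',
	'post_status' => 'inherit',
);
```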

Alternatively you could replace inherit with any which would have the same effect.

Order Parameters

There are two parameters you use to order posts retrieved by WP_Query: order and orderby. As you’d expect, order defines the order in which posts will be output in the loop, and orderby defines which field in the database they’ll be sorted by.

Let’s start by looking at the arguments for order.

The order Parameter

There are just two arguments you can use for this:

  • ASC: ascending order from lowest to highest values (1, 2, 3; a, b, c).
  • DESC: descending order from highest to lowest values (3, 2, 1; c, b, a).

These are fairly self-explanatory. If you don’t include an argument for order, WordPress will default to DESC.

The orderby Parameter

You can sort your posts by a range of fields:

  • none: No order (available with Version 2.8).
  • ID: Order by post id. Note the capitalization.
  • author: Order by author.
  • title: Order by title.
  • name: Order by post slug.
  • type: Order by post type.
  • date: Order by date.
  • modified: Order by last modified date.
  • parent: Order by post/page parent id.
  • rand: Random order.
  • comment_count: Order by number of comments.
  • menu_order: Order by Page Order. Used most often for Pages (using the value you add to the metabox in the Edit Page screen) and for Attachments (using the integer fields in the Insert / Upload Media Gallery dialog), but could be used for any post type with menu_order enabled.
  • meta_value: Sort by the value for a meta key (or custom field). This only works if you also include a meta_key parameter in your arguments. Meta values are sorted alphabetically and not numerically (so 34 will come before 4, for example). 
  • meta_value_num: Order by numeric meta value. As with meta_value, you must also include a meta_key argument in your query.
  • post__in: Preserve post ID order given in the post__in array.

The default is date, i.e. the date a post was published.

So for example if you want to sort your posts by title in ascending order, you would use these arguments:
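That query looks like this:

```php
$args = array(
	'orderby' => 'title',
	'order'   => 'ASC',
);
```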

Ordering by Multiple Fields

You don’t have to stick to just one field to sort your posts by. To sort by multiple fields, you use an array with the orderby parameter and (optionally) with the order parameter if you want to sort each field in a different order.

So let’s say you have a ratings custom field which you want to use for sorting in an e-commerce site. You could sort by rating and then title, both in ascending order, this way:
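Assuming the custom field is named ratings, the arguments would be along these lines (passing an array to orderby requires WordPress 4.0 or later):

```php
$args = array(
	'meta_key' => 'ratings',
	'orderby'  => array( 'meta_value_num', 'title' ),
	'order'    => 'ASC',
);
```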

Note that I’ve included the meta_key argument to let WordPress know which custom field I’m using. You do this because of the way WordPress stores post metadata: not in the wp_posts table, but in the wp_postmeta table.

But what if you wanted to order by rating in descending order and then title in ascending order? You simply use another array:
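Each field gets its own order in the array:

```php
$args = array(
	'meta_key' => 'ratings',
	'orderby'  => array(
		'meta_value_num' => 'DESC',
		'title'          => 'ASC',
	),
);
```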

You could also sort by multiple fields when not using post metadata, for example to sort by post type and then date:
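For example:

```php
$args = array(
	'orderby' => array(
		'type' => 'ASC',
		'date' => 'DESC',
	),
);
```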

This would sort by post type in ascending order and then within each post type, by date in descending order.

Pagination Parameters

The next set of parameters we come to are for pagination. These help you define how many posts will be queried and how pagination will work when they are output.

The available parameters are:

  • nopaging (boolean): Show all posts or use pagination. The default is 'false', i.e. use pagination.
  • posts_per_page (int): Number of posts to show per page.
  • posts_per_archive_page (int): Number of posts to show per page—on archive pages only.
  • offset (int): Number of posts to displace or pass over.
  • paged (int): The page in the archive which posts are shown from.
  • page (int): Number of pages for a static front page. Show the posts that would normally show up just on page X of a Static Front Page.
  • ignore_sticky_posts (boolean): Ignore post stickiness—defaults to false.

Let’s take a look at some examples. 

Number of Posts and Offsetting Posts

For example, to display the five most recent posts:
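You would use:

```php
$args = array(
	'posts_per_page' => '5',
);
```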

Or to display five recent posts excluding the most recent one:
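The offset parameter skips the first post:

```php
$args = array(
	'posts_per_page' => '5',
	'offset'         => '1',
);
```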

Note that although you’re fetching posts from the most recent six posts in the database, you still use 'posts_per_page' => '5' as that’s the number of posts which will be output.

Taking this a bit further, you can write two queries: one to display the most recent post and a second to display ten more posts excluding that post:
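A sketch of the two queries, using offset in the second to skip the post already shown:

```php
// first query: just the most recent post
$args = array(
	'posts_per_page' => '1',
);
$query = new WP_Query( $args );

if ( $query->have_posts() ) {
	while ( $query->have_posts() ) {
		$query->the_post();
		// output the latest post
	}
}
wp_reset_postdata();

// second query: ten more posts, excluding the one above
$args2 = array(
	'posts_per_page' => '10',
	'offset'         => '1',
);
$query2 = new WP_Query( $args2 );

if ( $query2->have_posts() ) {
	while ( $query2->have_posts() ) {
		$query2->the_post();
		// output the older posts
	}
}
wp_reset_postdata();
```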

You can also use posts_per_page to display all posts:
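The value -1 removes the limit:

```php
$args = array(
	'posts_per_page' => '-1',
);
```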

Sticky Posts

Normally your sticky posts will show up first in any query: if you want to override this, use ignore_sticky_posts:
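For example:

```php
$args = array(
	'posts_per_page'      => '5',
	'ignore_sticky_posts' => true,
);
```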

The above arguments will return the most recent five posts regardless of whether they are sticky or not.

Note that if you want to display just sticky posts, you’ll need to use the get_option() function and the post__in argument as follows:
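The sticky post IDs are stored in the sticky_posts option:

```php
// IDs of all sticky posts
$sticky = get_option( 'sticky_posts' );

$args = array(
	'posts_per_page'      => '5',
	'post__in'            => $sticky,
	'ignore_sticky_posts' => true,
);
```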

The above would display the last five sticky posts: if there are fewer than five (e.g. three) sticky posts, it won’t display non-sticky posts, but just the most recent three sticky posts.

Pagination in Archives

As well as defining how many posts are fetched from the database, you can also use pagination parameters to define how the resulting posts will be paginated on archive and search pages.

So for example on an archive page you could use this code to display 20 posts per page in the archive:
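The arguments would be:

```php
$args = array(
	'posts_per_archive_page' => '20',
);
```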

Note: the posts_per_archive_page argument will override posts_per_page.

You can also choose to output just the pages which would appear on a given page in paginated pages. So for example if you wanted to show the 20 posts that would appear on the third page in the example above, you’d use this:
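The paged parameter picks out the third page:

```php
$args = array(
	'posts_per_archive_page' => '20',
	'paged'                  => '3',
);
```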

An alternative way to query the same posts would be to use the offset argument:
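Skipping the first two pages' worth of posts looks like this:

```php
$args = array(
	'posts_per_page' => '20',
	'offset'         => '40',
);
```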

This skips the first 40 posts (which would be on the first two archive pages) and fetches the next 20 posts (which would be on the third archive page). One of the things I love about WordPress is how it so often gives you more than one way to achieve something!

You can also turn pagination off altogether, to ensure that all posts will show on the same page:
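You simply set nopaging:

```php
$args = array(
	'nopaging' => true,
);
```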

Summary

The WP_Query class gives you plenty of flexibility when it comes to determining how many posts you want to query, what order you want to display them in, and what status of post you want to show.

Some of these arguments are essential for querying certain kinds of post (for example 'post_status' => 'inherit' for attachments), while others simply give you more control over the way your queries run.

By using these parameters you can create custom queries that do a lot more than simply outputting the most recent published posts.


Source: Nettuts Web Development

How to Monitor Docker-Based Applications Using New Relic

Introduction

Docker is one of the fastest-growing new technologies at the moment. A solution for deploying software and building scalable web service architectures, it allows you to split your application’s architecture into containers with specific roles and responsibilities. Using Docker, you also get to specify the application’s dependencies on the operating-system level, bringing us the closest we’ve ever been to Java’s original promise: “Write once, run anywhere”.

On the downside, encapsulating your code inside a set of containers can lead to a lack of visibility: the containers become black boxes and leave the developer with little or no visibility into their inner workings.

To fix this, New Relic took up the task and made its server-side monitoring tools (Servers and APM) support Docker. In June 2015, the Docker support became available for all New Relic customers. 

In the post announcing the Docker support, Andrew Marshall from New Relic writes:

“You can now drill down from the application (which is really what you care about) to the individual Docker container, and then to the physical server. No more blind spots!”

By monitoring your Docker-based application using New Relic’s tool set, you can now analyze the application as a whole and then, when you find issues in your application, look up the containers where the problems occur and fix the issues inside them. 

Also, by monitoring the application at the Docker level, you will get valuable information about your setup: are you using your containers wisely, and does the division of resources between containers work as it should? 

In This Tutorial

In this tutorial, I will show you how to get started with monitoring a simple Docker-based application using New Relic’s tools.

You will learn:

  • how to set up New Relic monitoring on a web server running a set of Docker containers to gather information about the overall Docker environment
  • how to set up New Relic monitoring for a PHP application running inside one or more Docker containers to monitor the state of the application as well as the individual Docker container

To achieve this, we will create a simple prototype for a hosted WordPress solution: three WordPress sites each running in a Docker container and a MySQL container shared between them. 

With the setup in place, we will then enable New Relic monitoring and go through the monitoring tools’ views and explore the data you’ll find in them.

Requirements

While I hope you will learn something about using Docker by reading this tutorial, it’s not meant to be a tutorial about Docker as much as it is about using New Relic’s monitoring tools together with your Docker-based applications. 

That’s why, to get the most out of the tutorial, you should be at least somewhat familiar with Docker. Also, a basic understanding of the Linux command line is expected. 

Set Up the Docker Containers

In short, Docker is a tool for packaging your application’s components and their dependencies into well-documented, self-contained units that can be deployed quickly and reliably into different environments—be it a development machine or a cluster of production servers.

These building blocks are called containers—a concept close to a virtual machine, but not quite. Whereas a virtual machine runs its own operating system, containers all run on the same host operating system but each have their own dependencies and libraries. Containers can also be linked to each other to allow them to share, for example, their resources. 

To learn more about Docker, the best place to start is the tool’s online documentation. It is well written and will guide you by the hand. 

Then, once you feel comfortable about trying Docker in action, let’s get started with our setup!

What Will We Build?

Before we get started with implementing our Docker and New Relic setup, here’s an image showing an overview of the basic architecture that we’re going to build in this tutorial:

A simple WordPress stack built on Docker containers and reporting to New Relic

As I mentioned in the introduction, this is a simplified hosted WordPress service: In three containers (an arbitrary number), we’ll run a WordPress installation in each. The WordPress containers are all linked to a container running MySQL. All of this will run on one server—which in the example will be hosted on Amazon Web Services, but can be any server capable of running Docker and New Relic’s monitoring tools.

The server and the containers running inside it will be monitored by New Relic’s APM (Application Performance Monitoring) and Servers tools.

I designed this setup mostly for demonstration purposes, but it is actually modeled after something I’m building in my own project: just by adding an admin tool, a proper DNS configuration, and an Nginx reverse proxy—all of these running in their own Docker containers—the setup can be built into a fully functional hosted WordPress service. This, however, is outside the scope of this tutorial, so let’s get back to running some WordPress containers.

Step 1: Start a New EC2 Machine

I like to use Amazon Web Services when I need a web server for experiments like this: the servers are cheap to start, and you only pay for the time you use them (just remember to stop them when you’re done!).

However, there’s nothing Amazon-specific about using Docker with New Relic, so if you prefer some other virtual server provider or have a server of your own on which you can install Docker and the New Relic monitoring tools, these are all good options. 

In that case, you can skip this step and move straight to installing Docker on the server machine (Step 2).

In my earlier tutorial, Get Started With Monitoring Your Web Application Using New Relic Alerts, I included detailed, step-by-step instructions for starting a machine in the Amazon cloud. Follow those instructions, in the tutorial’s Step 1, but don’t install PHP or start Apache just yet, because this time Apache and PHP will go inside the Docker containers.

If you’re already familiar with AWS and just want a quick reminder of which options to choose when starting the machine, this short list will guide you through the process:

  1. Sign in to the AWS console, creating an account if you don’t have one yet.
  2. In the main menu, choose EC2. You’ll notice that Amazon has recently launched a new service for running Docker containers in the cloud (EC2 Container Service), but this time, we’ll want to manage Docker ourselves.
  3. Select Launch Instance to launch a new machine. Go with the Quick Start option and pick the default, 64-bit Amazon Linux server. For testing, a t2.micro machine size is just fine.
  4. On the security group page, open the HTTP port and make SSH available only from your IP by selecting the My IP option.
  5. Click Review and Launch to start the server. When you’re asked to create a key pair, create one (name it for example docker_test) and download it.

After a minute or so, you’ll notice that your machine is up. 

Right click on its name and choose the Connect option. Copy the machine’s IP address from the popup and use it to connect to the machine using SSH (place the IP address where it says INSERT_IP_HERE in the commands below):
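The commands below are a sketch, assuming the key pair from Step 5 was saved as docker_test.pem in the current directory:

```shell
# Restrict the key file's permissions, then connect as the default user
chmod 400 docker_test.pem
ssh -i docker_test.pem ec2-user@INSERT_IP_HERE
```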

Step 2: Install Docker on the Server

Now that you have started a server and you are connected to it over SSH, it’s time to install the Docker engine.

The instructions in this step are for the Amazon Linux server we created in Step 1. If you are using a different operating system, see the Docker documentation for installation instructions specific to your environment.

Start by installing the docker package:
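On Amazon Linux, the package is available from the preconfigured repositories:

```shell
sudo yum install -y docker
```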

The -y option makes yum automatically answer yes to any questions about the installation. It makes things quicker, but if you want to play safe, feel free to leave it out.

Once the Docker package has been installed, start it as a service:
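As with other services on Amazon Linux:

```shell
sudo service docker start
```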

Now, you’ll find Docker running for the rest of this server’s runtime (or until you stop it yourself).

To test the installation, you can use the docker ps command which lists the running Docker containers:
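The command takes no arguments in its basic form:

```shell
docker ps
```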

At this time, the list is still empty, as we haven’t yet started any containers:

The Docker process list shows no running containers

Finally, add the default user to the docker group so you can control Docker without having to prefix all your Docker commands with sudo. It’s just four extra letters, but makes a big difference when you have to write it often! 
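On the Amazon Linux machine, the default user is ec2-user:

```shell
sudo usermod -aG docker ec2-user
```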

To apply the change to your connection, close the ssh connection and connect again. 

Now, you’re all set to start some Docker containers.

Step 3: Start the Docker Containers

If you search for WordPress and Docker (or MySQL and Docker), you’ll notice that there are many different Docker images to choose from. In your own application, it’s a good idea to go through the most popular ones and see which works best for you—or if you should actually write one from scratch. 

In this tutorial, I decided to go with official images: the MySQL image provided by MySQL, and Docker’s official WordPress image.

First, start a container using the MySQL image, mysql/mysql-server:5.5.
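A sketch of the command; the root password value here is a placeholder of my own choosing:

```shell
docker run --name db \
    -e MYSQL_ROOT_PASSWORD="myrootpass" \
    -d mysql/mysql-server:5.5
```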

The command downloads the package and its dependencies and then starts your MySQL container, naming it db. The name is important as we’ll use it to link our WordPress containers to the database. 

If you like, use docker ps to check that the new container is running. For some more information about the startup, you may check the container’s log file with docker logs db. You should see something like this as the last few lines of the output:

Next, start a WordPress container using Docker’s WordPress image, wordpress:latest (using the latest tag, you’ll always get the most up-to-date Apache-based version). 
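A sketch of the run command, assuming the container is named wordpress-1 to match the naming used for the containers added later:

```shell
docker run --name wordpress-1 \
    --link db:mysql \
    -e WORDPRESS_DB_NAME="wordpress_1" \
    -p 80:80 \
    -d wordpress:latest
```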

In the command above, you’ll notice that we link the container to the MySQL image above (--link db:mysql) and pass the name of the database to use as an environment variable (-e WORDPRESS_DB_NAME="wordpress_1"). This container listens to the standard HTTP port 80.

The root user’s database password is shared automatically when we link the db container.

Once the image has been loaded and the container is running, visit the machine’s URL to check that the container is working as it should. When working on AWS, use the Public IP from Step 1.

Visiting the site, you'll find WordPress running

Finish the WordPress installation if you like—it only takes a minute or two. 

You have now started the first of the three WordPress containers. Before we add the rest, let’s set up New Relic monitoring for the current setup.

Install New Relic Server Monitoring With Docker Support

New Relic’s Docker support is divided into two parts: 

  • In the Servers part, you’ll find overall information about Docker: things like how many containers of different types are running, or how much of the resources they are using on each of your servers. 
  • In the APM part, you can access the Docker containers as parts of your web applications, monitoring them together as well as individually on the application level.

Let’s begin with the Servers part. 

Step 1: Install the New Relic Server Monitor

As New Relic runs as an online service, to use the New Relic monitoring tools, you’ll first need an active New Relic account. You can start with a 14-day free trial or use the free version—all of the functionality presented in this tutorial is available in the free version. 

If you don’t have a New Relic account yet, start by signing up.

Once signed up, select Servers from the top menu to access the server monitoring tool.

The New Relic menu bar

If you are already using New Relic Servers, you’ll see a list of your servers monitored with New Relic. Click on the Add more button for setup instructions.

If you just signed up, you’ll be sent straight to the Get started with New Relic Servers page. 

On this page, you’ll find instructions specific to the different environments. Pick the one that matches your server. For the Amazon Linux setup used in this tutorial, we’ll go with the Red Hat or CentOS option.

Get started with New Relic Servers

Scroll down, and finish the installation according to the instructions matching your server environment.

In the case of Amazon Linux, start by adding the New Relic yum repository. Notice that this needs to be done as root, so we’ll use sudo.
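At the time the tutorial was written, the repository RPM for 64-bit Red Hat-flavored systems was added roughly like this; check the Get started with New Relic Servers page for the current URL:

```shell
sudo rpm -Uvh https://download.newrelic.com/pub/newrelic/el5/x86_64/newrelic-repo-5-3.noarch.rpm
```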

Next, with the yum repository added, use it to install the Server Monitor package:
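The server monitor package is named newrelic-sysmond:

```shell
sudo yum install -y newrelic-sysmond
```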

Again, the -y parameter makes yum answer yes to every prompt. If you want to be more careful, feel free to go without it and accept the prompts manually.

To complete the installation, configure the monitoring agent and set your license key. You’ll find a version of the command with your license key in place on the Get started with New Relic Servers page. Remember to prefix the command with sudo.
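The command looks like the following, with YOUR_LICENSE_KEY replaced by the key shown on the setup page:

```shell
sudo nrsysmond-config --set license_key=YOUR_LICENSE_KEY
```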

Before starting the monitoring daemon, we’ll divert a bit from the setup instructions on the New Relic Servers page and make a few additional configurations for Docker monitoring.

Step 2: Apply Docker Specific Configuration

First, to allow New Relic to collect data about your Docker setup, add the newrelic user to the Docker group on your server:
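The group change mirrors the one we made for ec2-user earlier:

```shell
sudo usermod -aG docker newrelic
```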

Then, restart Docker. 

Notice that to do the restart gracefully and to make sure nothing breaks in your containers, it’s a good idea to stop all running containers first.
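Assuming the containers started earlier (db and the first WordPress container), the sequence might look like this:

```shell
docker stop wordpress-1 db
sudo service docker restart
```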

Once the restart is complete, we’re all set and you can start the server monitoring daemon.
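On this setup, the daemon is started with its init script:

```shell
sudo /etc/init.d/newrelic-sysmond start
```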

The setup for the New Relic server daemon is now complete. Start the Docker containers again, and you’ll soon see information about your server in the tool’s dashboard. 

Let’s take a look at what you can find there.

What You’ll Find Inside New Relic Servers

Inside New Relic, click on the Servers menu item to reload the list of servers. If all went well, you should now see your server listed:

The server listing now shows our new server

Click on the machine’s name to see more information about it. If no data is shown yet, give the tool some time and try again.

The first thing you’ll see is the overview page.

Overview

On the overview page, everything looks pretty much the same as it would on a New Relic Servers overview page when monitoring a regular server without Docker. 

The New Relic Servers overview page

There is the CPU usage, the server’s load average, the amount of physical memory used, some information about Disk I/O and network usage. As we just started the server and there are no users visiting the site, most of these figures are still low.

If you look at the list of processes on the bottom right, you’ll notice that Docker is indeed running on this server:

The list of processes shows Docker running on the server

The Docker Images View

The overall view is important when thinking about the server’s health as a whole, but now, what we are really interested in is what we can learn about the Docker setup. For this, the more interesting part of Servers is the Docker menu. 

Click on the Docker menu item on the left. 

The Docker Images view on New Relic Servers

The Docker screen shows the percentage of the server resources your Docker containers are using, grouped by their images. 

In this case, for example, you’ll see the two images, wordpress:latest and mysql/mysql-server:5.5, that we used to start our two containers in the previous steps. There isn’t much activity on the server, but we can see that WordPress is using more of the CPU, and both use about the same amount of memory. 

You can use the drop-down on the top left corner to sort the list by either CPU or Memory.

Later, as your application matures, the findings from this page will tell you more about your setup and what you should do with it. 

For example, if you notice that MySQL is using a lot of the system’s memory and processing power, it might be a good time to consider moving it to its own dedicated server. Or maybe you’ll notice that there are more WordPress containers running on the server than really fit, and decide to split them across two servers.

A Closer Look at One Docker Image

If you want to learn more about a specific image, click on its name on the list on the left. This will open a view that shows information about just the containers using this image. 

A closer look at the WordPress image

The screen shows you the CPU and memory usage from containers using this image and the number of containers over time. 

To see how adding more containers affects the view, add a couple more WordPress containers. We’ll use the same docker run command from earlier, with a few small changes:

  • The new containers will be named wordpress-2 and wordpress-3 respectively.
  • The new containers will use databases wordpress_2 and wordpress_3.
  • The new containers will listen to different ports, 8000 and 8001 instead of 80.

Here are the two run commands with the changes from the list above in place:
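These are sketches following the same pattern as the first container, with the names, databases and ports changed as listed above:

```shell
docker run --name wordpress-2 \
    --link db:mysql \
    -e WORDPRESS_DB_NAME="wordpress_2" \
    -p 8000:80 \
    -d wordpress:latest

docker run --name wordpress-3 \
    --link db:mysql \
    -e WORDPRESS_DB_NAME="wordpress_3" \
    -p 8001:80 \
    -d wordpress:latest
```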

On Amazon, to make the new WordPress sites accessible, you’ll still need to edit your EC2 instance’s Security Group settings to allow incoming traffic from ports 8000 and 8001. Use the Custom TCP Rule option and select Anywhere as traffic source.

Now, visit the two new WordPress sites, clicking on some of their menu items. 

Then come back to New Relic Servers to see how the data on the Docker page changed.

The first, and most visible change is the number of containers running. Instead of one, there are now three:

The Containers graph shows the change in the number of running containers

CPU and memory usage is still low—although the use of memory shows growth:

CPU and memory usage after adding two more WordPress containers

Adding Support for New Relic APM in Your Docker Containers

So far, we have looked at the Docker containers from the outside, through their resource usage, and how they look when explored from the server running them. 

But even though this Servers view of Docker does provide useful overall information about your server and what’s running inside it, the true power of New Relic lies in its application-first approach: by monitoring the application level, you can gain more detailed information about what is happening on your server and what to do about it.

Normally, when running the PHP code directly on your web server, you’d simply install the APM monitoring agent right there on the server on which you’re running the application (as we did in the New Relic Alerts tutorial I mentioned earlier).

Now, things are a bit different: We have split the architecture into separate pieces—”micro services”—each running in their own containers, and so the host server itself isn’t running Apache or PHP. That’s why, if you were to install the APM agent directly on the host server, you wouldn’t see any activity whatsoever.

The monitoring needs to go inside the containers.

Step 1: Create Your Own Version of the WordPress Image

One of the great things about Docker is the way it lets you build upon existing images, using them as a base and making them your own without having to change the original image or write a new one from scratch.

We will modify the WordPress image we used earlier in the tutorial by enabling New Relic monitoring inside it. 

You can download the source file for the Docker image from the linked GitHub repository, or you can create it yourself by following the instructions in this tutorial. When working with this example, it’s probably easiest to do it right there on your server.

Start by creating a directory for the new image definition, for example in ec2-user‘s home directory. 
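For example (the directory name is my own choice):

```shell
mkdir ~/wordpress-newrelic
cd ~/wordpress-newrelic
```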

Then, in that directory, add a new text file, naming it Dockerfile. This is the file that will contain the definition of what goes into your Docker image.

Inside the file, add the following code. We’ll go through the contents line by line right after the snippet.
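A sketch of the Dockerfile, reconstructed to match the line-by-line walkthrough that follows; the New Relic repository details and file paths are assumptions based on New Relic's Debian install instructions of the time:

```dockerfile
FROM wordpress:4.3.1-apache

# Startup script that finishes the New Relic setup (created in the next step)
ADD run.sh /run.sh
RUN chmod a+x /run.sh

# Make the original entrypoint end by executing run.sh instead of CMD
RUN sed -i 's/^exec "\$@"/exec \/run.sh/' /usr/local/bin/apache-entrypoint.sh

# A test page for checking the New Relic setup (not for production use)
ADD test.php /var/www/html/test.php

# Install the New Relic PHP agent, following the Debian instructions
RUN apt-get update && apt-get install -y wget
RUN wget -O - https://download.newrelic.com/548C16BF.gpg | apt-key add -
RUN echo "deb http://apt.newrelic.com/debian/ newrelic non-free" \
      > /etc/apt/sources.list.d/newrelic.list

RUN apt-get update
RUN apt-get install -y newrelic-php5

# Defaults for the install script, run later by run.sh
ENV NR_INSTALL_SILENT true
ENV NR_INSTALL_KEY **ChangeMe**
ENV NR_APP_NAME "PHP Application"
```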

Now, let’s go through the file and see what it does.

Line 1: The file starts by specifying the Docker image that our image should be built on. I picked a specific version of the wordpress image so that if we build this image again after a while, it will not have changed in between. 4.3.1-apache is the latest version at the time of writing.

Lines 3–5: Include a script for initializing the New Relic monitor. The script will be run at container startup so that we can use it to define the application’s name and license key separately for every container. 

We’ll look at the contents of this file in the next step.

Lines 7–8: If you look closely at the code defining the base WordPress image, you’ll notice that the script, apache-entrypoint.sh, defined to run as its ENTRYPOINT ends with a line that executes the command passed in the CMD option (by default, this is apache2-foreground). 

The line looks like this:
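Reconstructed from the base image's entrypoint script, it is simply:

```shell
exec "$@"
```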

As I didn’t want to replace the original “entrypoint” script but needed to have it finish by calling the new run.sh script we just mentioned above, I decided to edit the ENTRYPOINT file at compile time: the sed command (on line 8) replaces the exec line from above with an exec command that executes run.sh instead.

Lines 10–11: This is probably not something you’ll want in a production setup. However, for testing it’s very handy to include a simple PHP file that we can use to check if New Relic was set up correctly. 

Make sure to also create a file named test.php with the following content:
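A minimal sketch that serves the purpose: phpinfo() exposes the loaded configuration, including the newrelic section. Keep a page like this out of production.

```php
<?php
// Print the full PHP configuration; when the New Relic extension is
// loaded, this includes the newrelic section and newrelic.appname.
phpinfo();
```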

Lines 13–20: This is the part where we install the New Relic monitoring package into the image.

If you look at the installation instructions on New Relic’s Get Started with New Relic page (the first page you’ll see when you choose the APM option after signing in), you’ll notice that the commands on these lines are closely following the installation instructions for the Debian environment. This is because the default WordPress image is based on an image that uses Debian.

First, the script installs wget and uses it to download the New Relic key and add it to APT. 

Next, the script adds the New Relic repository to APT.

And then, finally, on lines 19 and 20, it installs the newrelic-php5 package from that repository.

Lines 22–25: The New Relic installation instructions on the Get started with New Relic page continue with finalizing the installation and putting the license key in place. 

As the license key and application name are container-specific information, we can’t include this final step in the image. It belongs at container startup, in our run.sh script.

To prepare for this, the script defines three environment variables:

  • NR_INSTALL_SILENT: This variable, if present (and set to any value) will cause the New Relic installation script to skip all user prompts and use defaults instead.
  • NR_INSTALL_KEY: This one specifies the license key. As we don’t want to hard-code a license key in the image definition, it’s set to **ChangeMe** for now. This variable is also read automatically by the install script.
  • NR_APP_NAME: This is a variable I added myself, which will be used in the run.sh script to edit the New Relic daemon’s configuration to use the container’s actual application name instead of the default ("PHP Application").

Step 2: Create the Custom Startup Script

With the Dockerfile ready, we still need to implement the startup script, run.sh, mentioned above. The script will finish the New Relic setup and only then start the server, now with New Relic monitoring in place. 

Create the file and add the following code into it:
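A sketch of the script, reconstructed from the line-by-line walkthrough that follows; the path to newrelic.ini is an assumption based on where the official PHP images keep their configuration:

```shell
#!/bin/bash
set -e

# Run the New Relic install script (configured via environment variables)
newrelic-install install

# Replace the default application name with the value of NR_APP_NAME
sed -i "s/\"PHP Application\"/\"$NR_APP_NAME\"/" /usr/local/etc/php/conf.d/newrelic.ini

apache2-foreground
```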

Now, let’s go through the code line by line.

Line 1: Specify that the script should be run with the bash interpreter.

Line 2: Specify that the script should exit if an error occurs.

Lines 4–5: Run the New Relic installation script. Notice that we don’t need to pass in any parameters as the configuration is done using environment variables. NR_INSTALL_SILENT was already specified in the Dockerfile and NR_INSTALL_KEY should be passed in with the docker run command.

Lines 7–8: As the installation script doesn’t have an environment variable for specifying the name of the application, we’ll have to edit the configuration manually—or actually, programmatically—in this script. 

In the configuration file, the application name is specified as:
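In the agent's newrelic.ini, the default value looks like this:

```ini
newrelic.appname = "PHP Application"
```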

The sed command on line 8 looks for this string and replaces it with a version that includes the name defined in NR_APP_NAME.
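To see that substitution in isolation, here is a hypothetical stand-alone version of it operating on a sample line (no New Relic installation required):

```shell
# Demonstrate the run.sh substitution on a sample configuration line
NR_APP_NAME="wordpress-cloud"
line='newrelic.appname = "PHP Application"'
echo "$line" | sed "s/\"PHP Application\"/\"$NR_APP_NAME\"/"
# prints: newrelic.appname = "wordpress-cloud"
```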

Line 10: Finally, to complete the startup, call apache2-foreground to run Apache in the foreground and keep the container running. As you remember from earlier, this was originally done at the end of the entrypoint script.

Now, all the pieces for augmenting the WordPress image with New Relic monitoring are in place. We can now build the image and give it a try.

Step 3: Compile and Run the New Image

In the directory where you placed the files created during the past two steps, enter the command: 
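Assuming the Dockerfile, run.sh and test.php are all in the current directory:

```shell
docker build -t tutsplus/wordpress-newrelic:4.3.1-apache .
```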

This creates a new image, named tutsplus/wordpress-newrelic with the tag 4.3.1-apache, and adds it to your local Docker image repository.

Once the build completes, it’s time to see if everything is working properly.

First, stop and remove your existing WordPress containers:
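Assuming the container names used earlier:

```shell
docker rm -f wordpress-1 wordpress-2 wordpress-3
```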

Then, run new ones, starting with just one and adding more once you’ve checked that everything is running smoothly.
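A sketch of the first of these commands; the names and database match the earlier setup:

```shell
docker run --name wordpress-1 \
    --link db:mysql \
    -e WORDPRESS_DB_NAME="wordpress_1" \
    -e NR_INSTALL_KEY="YOUR KEY HERE" \
    -e NR_APP_NAME="wordpress-cloud" \
    -p 80:80 \
    -d tutsplus/wordpress-newrelic:4.3.1-apache
```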

The command is mostly the same as what we saw earlier when starting the default WordPress image, with the following changes: 

-e NR_INSTALL_KEY="YOUR KEY HERE": This is where you specify the New Relic license key that should be used in the install script. You can find yours by going to APM’s Get started with New Relic page and choosing the PHP option.

On this page, right before the installation instructions, there’s a red button that says Reveal License Key. Click on it and then replace YOUR KEY HERE with the key shown.

Get your license key

-e NR_APP_NAME="wordpress-cloud": This part of the command sets the environment variable used for specifying the application’s name. This name is used to group the data inside APM. 

The “correct” value for a container’s name depends on your setup and how you want to monitor it. If all of your containers are running pieces of the same application, it will make sense to use the same name for them and collect them all in the same view in the monitoring tool. On the other hand, if the containers are clearly separate, distinct names are the way to go. In this example, I decided to use the same name, wordpress-cloud, for all three containers.

Check the New Relic documentation for a more detailed discussion on naming your applications.

tutsplus/wordpress-newrelic:4.3.1-apache: At the end of the command, you’ll notice that we now use the newly created image instead of wordpress:latest.

Once you’ve run the command and your server has started, visit the WordPress test page to verify that the server is running and New Relic is included. You may also want to check the value of newrelic.appname.

The PHP test page showing New Relic configuration

Then, you can again go to the main page to find WordPress running OK. You now have a Docker container running WordPress and reporting its state to New Relic APM. 

Add the remaining two containers, wordpress-2 and wordpress-3, and then let’s take a look at the data we’ll find in APM.

What You’ll Find Inside New Relic APM

Now that you have set up the monitoring for each of your Docker containers, let’s take a look at the information we can find about them in APM.

As we added all three WordPress containers with the same name, when you now open the APM page, you’ll find just that one application, wordpress-cloud. When you place your mouse pointer on top of the application’s name, you’ll see a popup saying “Php application running on 3 hosts (3 instances).”

APM showing our Docker based application

Click on the name to access the analytics data for this application and its containers.

Overview

The first view you’ll see in APM is Overview. This page shows a summary of the performance of all of the containers in this application: how long requests made to the application take on average, the application’s Apdex score, its throughput, and a list of the application’s slowest transactions.

Overview of the wordpress-cloud application

At the bottom of the screen, you’ll find a summary of the structure of the application: what servers it’s running on, and the containers in use inside them. 

In a more complex, real-life application, you might see multiple servers as well as multiple containers. Now, you’ll find our three WordPress containers. As we didn’t install New Relic in the database image, it isn't shown on this list.

Filtering Data by Container

Similar to the overview page, by default, the rest of the screens in APM will show you a summary of the entire application’s performance in the various metrics presented in them. 

However, if you want to drill down to a specific container, you can at any time use the drop-down menu at the top of the screen to choose the one you are interested in:

Dropdown menu to choose a specific container

If you are not sure what the different server codes on this list mean, you can use the docker ps command to find the mapping between these Docker IDs and your containers.

Finding and Fixing PHP Errors in Docker Containers

APM is not only useful for tracking the performance of your web-based application. It is also a great help in noticing when something goes wrong in your application and locating the issues. 

To test this in a Docker-based environment, I created a simple WordPress plugin, intentionally placing a couple of errors in the code, and installed it on one of the containers. Then, after browsing the site for a while, this is what I saw on the Errors page in APM:

The Errors page shows the application's errors

Looking at the picture, you’ll quickly see that the error rate in your application is on the rise.

Below the graph, you’ll find a list showing the recent errors: a syntax error that has occurred three times and a division by zero, taking place at various places in the code. Both need to be addressed, but for now, let’s look at the first one.

Clicking on the error message will bring you to a page that shows more information about the error:

More information about the error

Looking at this screen, you’ll see that the error is occurring on one of the Docker containers, 402c389c0661, which just happens to be the ID of wordpress-3, the container on which I installed the broken plugin.

Now that we have found the container on which the error happens, we can use the stack trace to fix the problem and deploy a working version of the plugin to the affected container. Issue solved.

Conclusion

You have now implemented a simple Docker-based web server setup and enabled New Relic monitoring on it. You have also seen how you can use the monitoring tools to gain better visibility into the Docker-based application.

However, we have only scratched the surface of what you can do with the New Relic monitoring tools, so if you are not yet a New Relic user, take a look at the Docker Monitoring page at New Relic, and get started.


Source: Nettuts Web Development

The Beginners Guide to WooCommerce: Order Reports – Part 3

The first tab of reports in WooCommerce covers the “Orders” which an online store owner receives during a particular period of time. This section is further divided into four sub-sections which provide a store owner with more refined, filtered results. Of these four, I discussed the first two sub-sections (i.e. Sales by date and Sales by product) in my last two articles. So today, I will explain the details and functioning of the third part of the Order reports, which is Sales by category.

Sales by category

As the name suggests, Sales by category displays all the order reports with respect to the categories that are part of your online store. You can see the reports for an entire category as a whole. In terms of detail, Sales by date and Sales by product allow online store owners to study their store's performance much more deeply than Sales by category does.

However, if you want a quick overview of your store’s performance, this option serves the purpose best. At a glance, you can observe the total earnings and the profit or loss generated from sales.

Sales by category

To access this part of the plugin, go to WooCommerce > Reports > Orders > Sales by category.

Sales by category layout

You will find a layout similar to the one I discussed in the case of Sales by date and Sales by product. This means the first row contains the various filters which display reports for different time spans. These filters are based upon Year, Last Month, This Month, Last 7 Days and a Custom date range. Towards the end of this first row is an Export CSV button, which allows online store owners to download any of the reports in .csv file format.

Below the first row, to the left, there is again a column. This is where you can select a category: enter a name, select the category, and its reports will be displayed.

Categories

In this search bar you can add single or multiple categories or subcategories. A button called Show displays the reports. To make things simpler and save time, next to the Show button are options for All & None, which allow you to add or remove all the categories from the search bar in a single click.

All categories in sales by category

Let me explain this scenario by considering that I clicked All and the search bar displayed all the categories I had in my online store. In my case, the store offered seven different categories and subcategories, i.e. Clothing, Hoodies, T-shirts, Music, Albums, Singles & Posters. You can remove any of these entries simply by clicking the cross.

sales by category report

To view the order reports, click Show. By default, the time filter is set to Last 7 Days, so the first set of reports will be displayed for this time period.

sales by category report last 7 days

The layout and format of the reports are also the same, with a column displaying all the details stacked up individually. However, the contents are different. There are no graphs, points or lines; instead, online store owners will view reports of their categories in the form of bar charts.

Likewise, the entries in the column list the amount generated by each category or sub-category individually. Last but not least, if you hover your mouse over any of these rows, it will turn that section purple on the bar chart.

The above figure shows that the rightmost bar appears purple, which indicates that the mouse is positioned on the last row, i.e. sales in the Posters category. In the previous figure this bar was originally red.

Based on these facts I am adding the reports for other time filters as well.

This Month

sales by category report this month

I am taking January 2015 as the reference month, so the above figure shows the report for This Month. All the previous explanation applies in this case as well.

Last Month

sales by category report last month

Sales by category order reports for the Last Month will correspond to the results for December 2014.

Year

sales by category report yearly

Last but not least is the yearly report of sales by category. All the results are beautifully displayed in vibrant colors, making it quite easy for online store owners to monitor their store's performance at a single glance.

Conclusion

This is it for today. I hope to have made things pretty clear by now. In the next article I will discuss the last part of the order reports. If you have any queries regarding today's post, you may ask in the comments.


Source: Nettuts Web Development

WP_Query Arguments: Date

In this series on WP_Query, you’ve been learning how to use the WP_Query class to create custom queries in your theme files or plugins.

This part of the series will take you through the arguments you can use to create both simple and complex date queries, to output posts published on, before, after or between given dates.

I’ll show you what parameters are available to you and how to use them to write your queries. But first, a reminder of how arguments work in WP_Query.

A Recap on How Arguments Work in WP_Query

Before we start, let’s have a quick recap on how arguments work in WP_Query. When you code WP_Query in your themes or plugins, you need to include four main elements:

  • the arguments for the query, using parameters which will be covered in this tutorial
  • the query itself
  • the loop
  • finishing off: closing if and while tags and resetting post data

In practice this will look something like the following:
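A minimal sketch of those four elements, with the argument list left as a placeholder to be filled in with the parameters covered below:

```php
<?php
// 1. The arguments for the query.
$args = array(
    // parameters go here
);

// 2. The query itself.
$query = new WP_Query( $args );

// 3. The loop.
if ( $query->have_posts() ) {
    while ( $query->have_posts() ) {
        $query->the_post();
        // Output the post, e.g. the_title(), the_content().
    }
}

// 4. Finishing off: reset post data.
wp_reset_postdata();
?>
```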

The arguments are what tell WordPress what data to fetch from the database, and it's those that I'll cover here. So all we're focusing on here is the first part of the code:
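In other words, just the array of arguments:

```php
$args = array(
    // Arguments for your query go here.
);
```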

As you can see, the arguments are contained in an array. You’ll learn how to code them as you work through this tutorial.

Coding Your Arguments

There is a specific way to code the arguments in the array, which is as follows:
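The pattern is an associative array; the parameter names here are placeholders:

```php
$args = array(
    'parameter1' => 'value1',
    'parameter2' => 'value2',
    'parameter3' => 'value3',
);
```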

You must enclose the parameters and their values in single quotation marks, use => between them, and separate them with a comma. If you get this wrong, WordPress may not add all of your arguments to the query or you may get a white screen.

Date Parameters

You can also use parameters to query for posts with a publish date on a given date. You can be as specific as you like with dates, using years and months for example to retrieve a number of posts.

You can write a simple set of arguments or you can use date_query to create nested arrays and run more complex queries. Let’s start with the simpler arguments.

Simple Date Arguments

The parameters you can use to query by date are:

  • year (int): Four-digit year (e.g. 2015).
  • monthnum (int): Month number (from 1 to 12).
  • w (int): Week of the year (from 0 to 53). The mode is dependent on the "start_of_week" option which you can edit in your Settings page in the admin.
  • day (int): Day of the month (from 1 to 31).
  • hour (int): Hour (from 0 to 23).
  • minute (int): Minute (from 0 to 59).
  • second (int): Second (0 to 59).
  • m (int): YearMonth (e.g. 201502).

So imagine you’re running an events site that uses the publish date for each event to denote the event start date. If you want to display all events, past and future, happening in 2015, here are the arguments you’d need:
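A sketch of those arguments, assuming a hypothetical 'event' post type:

```php
$args = array(
    'post_type'   => 'event',                      // hypothetical post type
    'year'        => 2015,
    'post_status' => array( 'future', 'publish' ), // include scheduled posts too
);
```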

Note that I’ve used future and publish for the post status, as posts scheduled for a future date aren’t queried by default.

Or if you wanted to automatically display events happening this year, and not update your query every year, you could first get the current year and then pass that in your query arguments:
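For example, again assuming a hypothetical 'event' post type:

```php
// Get the current year first, then use it in the arguments.
$current_year = date( 'Y' );

$args = array(
    'post_type'   => 'event',                      // hypothetical post type
    'year'        => $current_year,
    'post_status' => array( 'future', 'publish' ),
);
```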

Complex Date Arguments

To use multiple date parameters to create more complex queries, you use the date_query parameter. This gives you access to more parameters:

  • year (int): Four-digit year (e.g. 2015).
  • month (int): Month number (from 1 to 12).
  • week (int): Week of the year (from 0 to 53).
  • day (int): Day of the month (from 1 to 31).
  • hour (int): Hour (from 0 to 23).
  • minute (int): Minute (from 0 to 59).
  • second (int): Second (0 to 59).
  • after (string/array): Date to retrieve posts after. 
  • before (string/array): Date to retrieve posts before. 
  • inclusive (boolean): For after/before, whether exact value should be matched or not.
  • compare (string): An operator you use to compare data in the database to your arguments. Possible values are '=', '!=', '>', '>=', '<', '<=', 'LIKE', 'NOT LIKE', 'IN', 'NOT IN', 'BETWEEN', 'NOT BETWEEN', 'EXISTS', and 'NOT EXISTS'.
  • column (string): Database column to query against: the default is 'post_date'.
  • relation (string): OR or AND, how the sub-arrays should be compared. The default is AND.

The date_query parameter is formatted like this:
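In outline, it looks something like this:

```php
$args = array(
    'date_query' => array(
        array(
            // Date parameters, e.g. 'year' => 2015, 'month' => 2.
        ),
        // Optionally, further arrays of date parameters.
    ),
);
```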

You can also create multiple arrays and define how they will be compared using the relation parameter. The example below will return posts that match the arguments in both arrays:
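A skeleton of that structure:

```php
$args = array(
    'date_query' => array(
        'relation' => 'AND',
        array(
            // First set of date parameters.
        ),
        array(
            // Second set of date parameters.
        ),
    ),
);
```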

While the code below will fetch posts that match the arguments in either array (or both):
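The same skeleton with the relation switched:

```php
$args = array(
    'date_query' => array(
        'relation' => 'OR',
        array(
            // First set of date parameters.
        ),
        array(
            // Second set of date parameters.
        ),
    ),
);
```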

Let’s illustrate this with an example. Let’s say you’re working on a college website and want to display posts from this academic year. The academic year runs from 1 September 2014 to 31 August 2015, so you’d need to find posts in the relevant months and years:
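One way to sketch that query, matching posts from the last four months of 2014 or the first eight months of 2015:

```php
$args = array(
    'date_query' => array(
        'relation' => 'OR',
        array(
            'year'  => 2014,
            'month' => '9, 10, 11, 12',
        ),
        array(
            'year'  => 2015,
            'month' => '1, 2, 3, 4, 5, 6, 7, 8',
        ),
    ),
);
```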

Note that the month parameter takes a string for its arguments, not an array.

The before and after Parameters

An alternative to the example above is to define the dates before and/or after which you want to display posts, using the before and after parameters. These take three arguments:

  • year (string): Accepts any four-digit year: empty by default.
  • month (string): The month of the year (1 to 12). The default is 12.
  • day (string): The day of the month (1 to 31). The default is the last day of the month.

You can also use a string for the date, as long as it’s compatible with the php strtotime format.

So returning to my example of displaying posts for this academic year, I have two more options. Firstly, I could use a nested array with the year and month parameters:
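A sketch of that nested-array approach:

```php
$args = array(
    'date_query' => array(
        'relation' => 'AND',
        array(
            'after'     => array(
                'year'  => 2014,
                'month' => 9,
            ),
            'inclusive' => true, // include September 2014 itself
        ),
        array(
            'before'    => array(
                'year'  => 2015,
                'month' => 8,
            ),
            'inclusive' => true, // include August 2015 itself
        ),
    ),
);
```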

There are a couple of things to note here:

  • I’ve used 'relation' => 'AND' because the posts need to have been published after my start date and before my end date.
  • For each of the nested arrays, I’ve used 'inclusive' => true to ensure that WordPress fetches posts published during September 2014 and August 2015.

I could also write this query using a string for the dates:
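A sketch using date strings, with the dates kept exclusive (one day either side of the academic year):

```php
$args = array(
    'date_query' => array(
        'relation' => 'AND',
        array(
            'after' => 'August 31st, 2014',
        ),
        array(
            'before' => 'September 1st, 2015',
        ),
    ),
);
```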

Note that because of the way date strings work, it’s more reliable to use exclusive dates. This is because if you use a date string, this will be converted to 00:00 on that date. So to make it work, either use the time in your string as well, or do as I’ve done and use the day before the date you want to show posts from (and after the date you want to show posts until).

Something else you can do with date parameters is display posts published today. Going back to my events site, let’s say I want to display a big banner on my home page on the day when an event is happening. I can write a query for this and then output details of the event if one is found. Here are the arguments:
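A sketch of those arguments, assuming the same hypothetical 'event' post type:

```php
$args = array(
    'post_type'   => 'event',                      // hypothetical post type
    'year'        => date( 'Y' ),                  // current year
    'monthnum'    => date( 'n' ),                  // current month, no leading zero
    'day'         => date( 'j' ),                  // current day, no leading zero
    'post_status' => array( 'future', 'publish' ),
);
```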

Using the date() function returns the current date—I’ve used this three times to ensure I get the correct day, month and year. Note that I’ve also included the post_status argument to ensure that an event happening later today is included.

Summary

Sometimes you don’t just want to query all the published posts. By using the WP_Query class you can create much more specific queries to output posts by date, including the posts you published on a given date, before a date, after a date or between a pair of dates.

The date_query arguments combine with other parameters such as post_status, which is covered in more detail elsewhere in this series.


Source: Nettuts Web Development

Getting Started With Raygun: Insights and Crash Reporting for App Developers

Raygun home page

Introducing Raygun

Raygun is a powerful new crash reporting service for your applications. Raygun automatically detects, discovers and diagnoses errors and crashes that are happening in your software applications, notifying you of issues that are affecting your end users.

If you’ve ever clicked “Don’t Send” on an operating system crash reporting dialog then you know that few users actively report bugs—most simply walk away in frustration. In fact, a survey by Compuware reported that only 16% of users try a crashing app more than twice. It’s vital to know if your software is crashing for your users. Raygun makes this easy.

With just a few short lines of code, you can integrate Raygun into your development environment in minutes. Raygun supports all major programming languages and platforms, so simply select the language you want to get started with. You’ll instantly begin receiving reports of errors and crashes and will be able to study diagnostic information and stack traces on the Raygun dashboard. For this tutorial, I’ll show you examples of tracking JavaScript apps such as Ghost and PHP-based WordPress.

By pinpointing problems for you and telling you exactly where to look, Raygun helps you build healthier, more reliable software to delight your users and keep them coming back. 

More importantly, Raygun is built for teams and supports integrations for workplace software such as team chat, e.g. Slack and Hipchat, project management tools, e.g. JIRA and Sprintly, and issue trackers, e.g. GitHub and Bitbucket. Raygun gives your team peace of mind that your software is performing as you want it to—flawlessly.

In this tutorial, I’ll walk you through setting up your application with Raygun step by step so you can begin using their 30-day free trial.

If you have any requests for future tutorials or questions and comments on today’s, please post them below. You can also reach me on Twitter @reifman or email me directly.

Getting Started

Signing Up for the Raygun 30-Day Free Trial

Trying out Raygun is easy (and free). When you visit the home page (shown above), just click the green Free Trial button which will take you to the signup form:

Raygun Account Sign Up

As soon as you sign up, you’ll begin receiving a helpful daily guided email as you learn to use the product:

Raygun Sign Up Email

One of the most powerful features of Raygun is that it works with all the major programming languages and platforms. And it’s amazingly easy to integrate. Just copy and paste the code into your application and Raygun will start monitoring for errors. In the case of WordPress, they provide a pre-built plugin.

How Much Does It Cost?

Pricing plans for Raygun start at $49 monthly but can be discounted nearly 10% when paid annually.

Raygun Pricing

You might also be interested in Raygun Enterprise which includes either massive cloud support or the ability to securely self-host a version of the service.

Integrating Raygun With Your Application

After signing up, you’ll be presented with a short Raygun integration wizard. It starts with selecting your language of choice. Here’s the initial dashboard that you’ll see:

The Initial Raygun Setup Wizard

Here’s an example of integrating for use with any JavaScript code or platform.

Using Raygun With JavaScript

Once you select JavaScript, you’ll be shown your Application API Key (the key is the same for all platforms you choose).

Raygun is easy to use regardless of which JavaScript package management system you prefer:

JavaScript setup page

For example, with Bower, run:
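Assuming the package is published as raygun4js:

```shell
bower install raygun4js
```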

From NuGet, open the console and run:
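Assuming the same package name on NuGet, in the Package Manager Console:

```shell
Install-Package raygun4js
```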

But, you can also just load the library from Raygun’s CDN within your application:
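At the time of writing, the CDN build can be referenced like this:

```html
<script type="text/javascript" src="//cdn.raygun.io/raygun4js/raygun.min.js"></script>
```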

You can also download the minified production version or full development source and integrate your own way.

To begin catching exceptions in your application, call Raygun.init with your API key and then call attach:
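A minimal sketch, with a placeholder API key:

```html
<script type="text/javascript">
  // Initialize Raygun with your key, then attach the window error handlers.
  Raygun.init('YOUR_API_KEY').attach();
</script>
```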

If you’d like to set up a quick JavaScript-based application to try Raygun, you might try my Tuts+ tutorial walk-through of the open-source Ghost blogging platform.

For this tutorial, I’m going to use Raygun with a WordPress blog—both PHP and JavaScript errors can be caught in this manner.

Debugging With WordPress

To install Raygun for WordPress, you need the Raygun WordPress plugin:

The Raygun WordPress Setup

Once you’ve installed the plugin, you load the configuration menu from the WordPress dashboard and provide your API key:

Raygun4WP Configuration

Within a minute, you’ll start seeing errors collect in the Raygun dashboard. If not, click the Send Test Error button to trigger one.

The Raygun Dashboard

Initially, you’ll see an empty dashboard:

The Initial Raygun Dashboard

But, once you’ve chosen your language and integrated your application, you’ll see a dashboard like this—oh, theme developers—in which Raygun helped me discover a plethora of WordPress theme code that hadn’t been kept up to date with the latest versions of PHP.

The Raygun Dashboard with data coming in

Tracking Errors Across Code Deployments

When you integrate Raygun with your deployment tools, it can track errors according to specific versions of released software. This can help you identify and repair bad deployments quickly and easily:

Raygun Deployment Management

You can read about how to integrate your deployment scripts with Raygun tagging in the documentation. Raygun provides guides for working with: Octopus Deploy, Bash, Powershell, Capistrano, Rake, Grunt, Atlassian Bamboo and FAKE – F# Make.

Managing Raygun Error Statuses

Raygun currently lets you assign error groups to one of five statuses. These are:

  • Active
  • Resolved
  • Resolved In Version x.xx
  • Ignored
  • Permanently Ignored

When an error is first received it is assigned to Active and is visible in the first tab. You can then take action to change it to another status. 

For example, as soon as I activated Raygun with WordPress and discovered a plethora of theme-related PHP compatibility issues, my email queue began to fill—but this was easily resolved by asking Raygun to only notify me of new reports.

You can also filter and manage issues by status through the interface quite easily. For example, it would be easy to delete all the errors resolved in WordPress version 4.3.

Raygun Error Detailed Views

When you click on errors, Raygun shows you their detail view with stack trace and a summary of which users and browsers or devices are being affected:

Raygun Error Detail View with Stack Trace

In detail view, Raygun also allows you and your team to comment and discuss specific issues:

Raygun Developer Comments

Raygun User Tracking

If you implement user tracking with your Raygun integration, you can see exactly which of your authenticated users have run into specific errors and how often:

Raygun user tracking

Raygun offers easy documentation for linking error reports to the current signed-in user. Here’s an example for JavaScript:
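A minimal sketch; the identifier is a placeholder and could equally be an email address or an internal user ID:

```javascript
// Link subsequent error reports to the signed-in user.
Raygun.setUser('unique_user_identifier');
```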

By default, Raygun4JS assigns a unique anonymous ID for the current user. This is stored as a cookie. If the current user changes, to reset it and assign a new ID you can call:
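Per the Raygun4JS documentation, that reset call is:

```javascript
// Discard the stored anonymous ID and generate a new one.
Raygun.resetAnonymousUser();
```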

To disable anonymous user tracking, call Raygun.init('apikey', { disableAnonymousUserTracking: true });.

You can provide additional information about the currently logged in user to Raygun by calling: Raygun.setUser('unique_user_identifier');.

This method takes additional parameters that are used when reporting over the affected users. The full method signature is:
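A sketch of the full call, with placeholder values for each parameter:

```javascript
// setUser(user, isAnonymous, email, fullName, firstName, uuid)
Raygun.setUser('unique_user_identifier', false, 'user@example.com',
               'Firstname Lastname', 'Firstname', 'device-uuid');
```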

Managing Your Team

Raygun is built around tracking issues across development teams. Through the settings area, it’s easy to add applications that you’re tracking and invite team members to participate:

Raygun Team Management

As mentioned above, Raygun easily integrates with other team-based tools such as chat (Slack, Hipchat, etc.), project management (JIRA, Sprintly, etc.) and issue trackers (GitHub, Bitbucket, etc.).

Helpful Customer Support

Raygun support is excellent. In addition to the web-based documentation and email welcome guides, there’s helpful support personnel (like Nick) ready to guide you deeper into the service—Nick’s tips and availability just popped up as I was reviewing the service:

Raygun Customer support popups

The Raygun API

If you’d like to tailor or customize event triggers, you can post errors via the Raygun API however you’d like from your application. This can be helpful for developers wishing to integrate monitoring or specialized reporting across their services or to make the development process easier.

In Summary

I hope you’ve found Raygun easy to use and helpful to your development requirements. To recap, here are some of the major benefits of the service:

  • Raygun provides a complete overview of problems across your entire development stack. Intelligent grouping of errors lets you see the highest priority issues rather than flooding you with notifications for every error.
  • Raygun supports all major programming languages and platforms. Every developer can use it. Developer time is expensive, so stop wasting time trying to hunt down bugs. Fix issues faster and build more features instead!
  • Raygun is built for teams. You can invite unlimited team members to your account—no restrictions. Raygun helps you create a team workflow for fixing bugs and provides custom notifications and a daily digest of error events for all of your team.
  • For large corporate entities, Raygun Enterprise can provide cloud support or the ability to securely self-host a version of the service for your needs.

When you give Raygun a try, please let us know your questions and comments below. You can also reach me on Twitter @reifman or email me directly. Or, if Raygun saves you a ton of time right away, you can browse my Tuts+ instructor page to read the other tutorials I’ve written.

Related Links


Source: Nettuts Web Development

Set Up Scheduled Tasks in Magento

Cron is an important utility which allows you to execute scripts at regular intervals. It has become an important aspect of web-based applications as well. There are lots of ways in which cron is useful to websites, from sending regular newsletter mails to synchronizing the database with third-party systems. You can also use cron to clean up back-end storage to improve the overall performance of an application.

Magento supports cron in the core itself, as it does with several other utilities! It allows you to set up scheduled tasks in the module, so that they can run at regular intervals. Magento runs all the cron tasks using the “cron.sh” and “cron.php” files located in the root of the site. So you’ll need to make sure that you’ve set up the system-level cron to run the “cron.sh” file at regular intervals, which eventually triggers the Magento cron system. And finally, Magento gathers all the cron jobs located in the modules, and runs them if needed in that particular cron run.

Although Magento has already provided lots of cron jobs in the core modules itself, you can create a custom cron task in your module as well. And creating a custom module is exactly what we’ll be talking about in the upcoming sections.

A Glance at the File Setup

We’ll create a simple custom module named “Customcron”. Here’s the list of files required for the desired setup:

  • app/etc/modules/Envato_All.xml: It’s a file which is used to enable our custom module.
  • app/code/local/Envato/Customcron/etc/config.xml: It’s a module configuration file in which we’ll declare the custom cron job.
  • app/code/local/Envato/Customcron/Model/Customcron.php: It’s a model file in which we’ll define the cron job logic.

Custom Module: Set Up the Files and Folders

First, we need to create a module enabler file. Create a file “app/etc/modules/Envato_All.xml” and paste the following contents in that file. We’ve used “Envato” as our module namespace and “Customcron” as our module name. It’ll enable our “Customcron” module by default.
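A sketch of that enabler file, following the standard Magento 1 layout:

```xml
<?xml version="1.0"?>
<config>
    <modules>
        <Envato_Customcron>
            <!-- Enable the module and place it in the local code pool. -->
            <active>true</active>
            <codePool>local</codePool>
        </Envato_Customcron>
    </modules>
</config>
```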

Next, we need to create a module configuration file. Create “app/code/local/Envato/Customcron/etc/config.xml” and paste the following contents in that file.
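A sketch of that configuration; the version number is illustrative:

```xml
<?xml version="1.0"?>
<config>
    <modules>
        <Envato_Customcron>
            <version>1.0.0</version>
        </Envato_Customcron>
    </modules>
    <global>
        <models>
            <customcron>
                <class>Envato_Customcron_Model</class>
            </customcron>
        </models>
    </global>
    <crontab>
        <jobs>
            <custom_cron_task>
                <schedule>
                    <!-- Run every five minutes. -->
                    <cron_expr>*/5 * * * *</cron_expr>
                </schedule>
                <run>
                    <!-- Model method to call on each run. -->
                    <model>customcron/customcron::customcrontask</model>
                </run>
            </custom_cron_task>
        </jobs>
    </crontab>
</config>
```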

The “config.xml” file looks fairly simple—it declares the version number and model classes as per the Magento conventions. However, the important tag for us is <crontab>, which is used to declare all the jobs. It’s one of the “event observers” which is used by Magento to gather all the cron jobs in the modules.

Further, under the <jobs> tag, we’ve declared our custom crontab job using the <custom_cron_task> tag. It’s a sort of unique identifier for the cron job. Although in the above file we’ve only created a single task, you can set up multiple cron jobs under the <jobs> tag. Next, under <custom_cron_task> we’ve defined <schedule> and <run> tags.

The <schedule> tag defines cron intervals inside the <cron_expr> tag at which the job will run regularly. In our case, the custom cron task will run every five minutes. But wait, what will it do every five minutes? That’s exactly what the <run> tag stands for! It declares the “Model method” which will be called by Magento during the custom cron job run.

Next, we’ll create the model file. Create “app/code/local/Envato/Customcron/Model/Customcron.php” with the following contents.
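A sketch of the model, using Zend_Mail (which ships with Magento 1) to send the notification; the email addresses are placeholders:

```php
<?php
class Envato_Customcron_Model_Customcron
{
    public function customcrontask()
    {
        // Send a simple notification email on every cron run.
        $mail = new Zend_Mail();
        $mail->setBodyText('Custom cron task ran at ' . Mage::getModel('core/date')->date())
             ->setFrom('store@example.com', 'Store')
             ->addTo('admin@example.com', 'Admin')
             ->setSubject('Magento custom cron task');

        try {
            $mail->send();
        } catch (Exception $e) {
            Mage::logException($e);
        }
    }
}
```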

So, as declared earlier, we've defined the “customcrontask” model method. In this method, we're simply sending an email using the Magento email class utility. More importantly, this method will be called at every cron job run, i.e. every five minutes.

And finally, you should make sure that you’ve created a cronjob entry in your system. For Linux, you simply need to add the following line to your crontab file.
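For example, to run cron.sh every five minutes (keep “/path/to/magento/site” as your own path):

```shell
*/5 * * * * /bin/sh /path/to/magento/site/cron.sh
```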

You just need to replace “/path/to/magento/site” with the actual path of the Magento installation. And for Windows, you can do the same by using scheduled tasks. However, in Windows, you need to use the “/path/to/magento/site/cron.php” file, as “cron.sh” is not supported.

So it’s really straightforward to plug your custom cron jobs into the Magento cron system! That’s it for today, and I hope you’ve learned something useful in Magento. Share your thoughts using the feed below!
