Improving Performance Perception with Pingdom and GTmetrix

This article is part of a series on building a sample application — a multi-image gallery blog — for performance benchmarking and optimizations. (View the repo here.)


In this article, we’ll analyze our gallery application using the tools we explained in the previous guide, and we’ll look at possible ways to further improve its performance.

As per the previous post, please set up Ngrok and pipe to the locally hosted app through it, or host the app on a demo server of your own. This static URL will enable us to test our app with external tools like GTmetrix and Pingdom Tools.

GTmetrix first test

We scanned our website with GTmetrix to see where we could improve it. The results, albeit not catastrophically bad, still leave room for improvement.

The first tab — PageSpeed — contains a list of recommendations by Google. The first item under the PageSpeed tab — a warning about serving content from a consistent URL — pertains to our application outputting the images randomly, so we’ll skip it. The next thing we can do something about is browser caching.

Browser Caching

Browser caching

We see that there is a main.css file that needs its Expires headers set, and the images in the gallery need the same thing. Now, the first idea for these static files would be to set this in our Nginx configuration:

location ~* \.(?:ico|css|js|gif|jpe?g|png)$ {
    expires 14d;
}

We can simply put this inside our server block and leave it to Nginx, right?

Well, not really. This will take care of our static files, like CSS, but the /raw images we’re being warned about aren’t really that static, so this snippet in our Nginx configuration alone won’t fix the issue. Our images are created on the fly by an actual controller, so it would be ideal if we could set the response headers right there, in the controller. (For some reason, these weren’t being set properly by Glide.)

Maybe we could have written our Nginx directive to include the raw resources, but we felt the controller approach was more future-proof: we aren’t sure what other content may end up with a raw suffix eventually — maybe some videos, or even audio files.

So, we opened /src/ImageController.php in our image gallery app and dropped these two lines inside our serveImageAction(), just before the return $response line:

// cache for 2 weeks
$response->setSharedMaxAge(1209600);
// (optional) set a custom Cache-Control directive
$response->headers->addCacheControlDirective('must-revalidate', true);

This will modify our dynamic image responses by adding the proper Cache-Control and Expires headers.
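
To confirm the headers are actually being sent, we can inspect a response from the command line. This is just a sanity check; the host and image path below are placeholders for your own Ngrok URL and an actual gallery image, and the exact header ordering may differ:

$ curl -I https://your-subdomain.ngrok.io/raw/some-image.jpg
HTTP/1.1 200 OK
Cache-Control: must-revalidate, public, s-maxage=1209600
...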

Symfony has more comprehensive options for the caching of responses, as documented here.

Having restarted Nginx, we re-tested our app in GTmetrix, and lo and behold:

Browser Caching


How Analytics Can Explain Your Abandoned Checkouts

If you’re in the ecommerce business, you already know that users often abandon their carts at the checkout. In fact, studies indicate that the average rate of cart abandonment is 67.91%, though it can be as high as 80% in some cases. Annoying, right?

Why are users doing that? Well, the answer lies in your analytics.

People abandon checkouts for various reasons, and those reasons can vary by user demographic, geographic location and more. It can even be the shock of unexpected shipping costs, or that users aren’t convinced their sensitive details are secure on your website.

And of course, UX plays a huge role in that too. Users hate:

  • broken functionality
  • confusing checkout flows
  • signing up before checking out.

Reasons for abandonments during checkouts

Three out of four checkout abandonments are due to a sub-optimal user experience, but which of these causes is eating into your revenue?

Tools like Google Analytics, Hotjar, Fullstory, Crazy Egg and Optimizely can help us decipher what’s causing our shopping cart abandonments, and come up with effective solutions to improve UX.

In fact, you can easily recover 5–10% of your abandoned carts. Isn’t that cool?

First we’ll detect where users are leaving our website, then why they’re leaving our website, then how we can improve the UX so that future users don’t leave our website. The result? More revenue!

It’s also worth noting that while 59% of shopping experiences happen on mobile, only 15% of dollars are spent on mobile. This indicates that some of the most common UX shortcomings are mobile-specific.

What Tools Can We Use to Analyze User Behavior?

By analyzing user demographics and user behavior, you can improve your website’s user experience. Even though there are many tools that can allow you to do this, I’ll run through this tutorial using Google Analytics (free) and Crazy Egg (affordable) today.

With a quick setup, Google Analytics will help you to understand your visitors as they interact with your website, and from this we can identify where exactly visitors are abandoning our websites.

Crazy Egg is an advanced heatmap tool that allows you to observe those drop-offs in more detail and establish why users left without converting.

Setting up Conversion Funnels in Google Analytics

A conversion funnel is the journey a customer takes as they convert. We can use Google Analytics to record and analyze these conversion funnels, which begins with you inserting a few lines of JavaScript code on your website (to enable the tracking), although some ecommerce platforms help you to set this up without any code.
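
For reference, the standard analytics.js tracking snippet (current at the time of writing) looks like the following, pasted before the closing </head> tag of every page; UA-XXXXX-Y is a placeholder for your own property ID:

<script>
(function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){
(i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o),
m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
})(window,document,'script','https://www.google-analytics.com/analytics.js','ga');

ga('create', 'UA-XXXXX-Y', 'auto');  // your tracking ID
ga('send', 'pageview');              // record a pageview
</script>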

Now we need to set up a funnel to measure the conversion rate.

Step 1: Goals

First, log in to your Google Analytics dashboard and click on the Admin tab in the left-hand sidebar, then Goals.

Step 2: Creating a New Goal

Click the + New Goal button, and choose the relevant Goal template, which in this case is Place an order.

After that, click on the Next Step button.

Step 3: Describing the Goal

Next, give your Goal a name, and under the Type heading, choose what needs to happen to trigger this Goal. In this case, the Goal is triggered when the user reaches the checkout confirmation screen, so choose Destination as the Goal Type.

Step 4: Goal Details

In Goal Details, you’ll need to reference the destination URL. This is the web page that visitors are taken to after they complete their checkout. If you’re using Shopify as your ecommerce CMS, this will most likely be /checkout/thank_you/, although it’ll vary by CMS.

Leave the monetary Value turned off and the Funnel option on.

Next, list all of the web pages (and their URLs) that shoppers navigate through during the checkout flow. The example below represents a typical setup for an ecommerce website hosted on Shopify.

A typical setup for an ecommerce website
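
As a rough illustration, the funnel steps for a Shopify store often map to page paths like the ones below (these exact paths are an assumption and vary by store and setup):

Step 1: Cart                  /cart
Step 2: Contact information   /checkout/contact_information
Step 3: Shipping method       /checkout/shipping
Step 4: Payment               /checkout/payment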

Click the Verify this Goal button before the Save Goal button.


MySQL Performance Boosting with Indexes and Explain

Techniques to improve application performance can come from a lot of different places, but normally the first thing we look at — the most common bottleneck — is the database. Can it be improved? How can we measure and understand what needs to be, and can be, improved?

One very simple yet very useful tool is query profiling. Enabling profiling is a simple way to get a more accurate time estimate of running a query. This is a two-step process. First, we have to enable profiling. Then, we call show profiles to actually get the query running time.

Let’s imagine we have the following insert in our database (and let’s assume User 1 and Gallery 1 are already created):

INSERT INTO `homestead`.`images` (`id`, `gallery_id`, `original_filename`, `filename`, `description`) VALUES
(1, 1, 'me.jpg', 'me.jpg', 'A photo of me walking down the street'),
(2, 1, 'dog.jpg', 'dog.jpg', 'A photo of my dog on the street'),
(3, 1, 'cat.jpg', 'cat.jpg', 'A photo of my cat walking down the street'),
(4, 1, 'purr.jpg', 'purr.jpg', 'A photo of my cat purring');    

Obviously, this amount of data will not cause any trouble, but let’s use it to do a simple profile. Let’s consider the following query:

SELECT * FROM `homestead`.`images` AS i
WHERE i.description LIKE '%street%';

This query is a good example of one that can become problematic in the future if we get a lot of photo entries.

To get an accurate running time on this query, we would use the following SQL:

set profiling = 1;
SELECT * FROM `homestead`.`images` AS i
WHERE i.description LIKE '%street%';
show profiles;

The result would look like the following:

Query_Id  Duration    Query
1         0.00016950  SHOW WARNINGS
2         0.00039200  SELECT * FROM homestead.images AS i WHERE i.description LIKE '%street%' LIMIT 0, 1000
3         0.00037600  SHOW KEYS FROM homestead.images
4         0.00034625  SHOW DATABASES LIKE 'homestead'
5         0.00027600  SHOW TABLES FROM homestead LIKE 'images'
6         0.00024950  SELECT * FROM homestead.images WHERE 0=1
7         0.00104300  SHOW FULL COLUMNS FROM homestead.images LIKE 'id'

As we can see, the show profiles; command gives us times not only for the original query but also for all the other queries that are made. This way we can accurately profile our queries.

But how can we actually improve them?

We can either rely on our knowledge of SQL and improvise, or we can rely on the MySQL explain command and improve our query performance based on actual information.

Explain is used to obtain a query execution plan, or how MySQL will execute our query. It works with SELECT, DELETE, INSERT, REPLACE, and UPDATE statements, and it displays information from the optimizer about the statement execution plan. The official documentation does a pretty good job of describing how explain can help us:

With the help of EXPLAIN, you can see where you should add indexes to tables so that the statement executes faster by using indexes to find rows. You can also use EXPLAIN to check whether the optimizer joins the tables in an optimal order.
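
To make this concrete, here’s a sketch of how explain might be applied to our example query. The index suggestion is our own illustration rather than something prescribed above: a leading-wildcard LIKE ('%street%') can’t use a regular B-tree index, so a FULLTEXT index is one way out.

EXPLAIN SELECT * FROM `homestead`.`images` AS i
WHERE i.description LIKE '%street%';
-- "type: ALL" with empty possible_keys indicates a full table scan

ALTER TABLE `homestead`.`images` ADD FULLTEXT INDEX `idx_description` (`description`);

-- The FULLTEXT index can then be used via MATCH ... AGAINST
SELECT * FROM `homestead`.`images`
WHERE MATCH(description) AGAINST('street');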


Getting Started With Google Cloud Functions and MongoDB

This article was originally published on Code Barbarian. Thank you for supporting the partners who make SitePoint possible.

Serverless architectures are becoming increasingly popular, and with good reason. In my experience, container-based orchestration frameworks have a steep learning curve and are overkill for most consumer-facing companies. With FaaS architectures, like AWS Lambda and Azure Functions, in theory the only devops you need is bundling and uploading your app.

This article will walk you through setting up a Google Cloud Function in Node.js that connects to MongoDB. However, one major limitation of stateless functions is the need to establish a separate database connection every time the stateless function runs, which incurs a major performance penalty. Unfortunately, I haven’t been able to figure out how to reuse a database connection in Google Cloud Functions: the trick that works for IBM Cloud, Azure Functions, and AWS Lambda does not work here.

“Hello, World” in Google Cloud Functions

Go to the Google Cloud Functions landing page and click “Try it free”.

Click on the hamburger icon in the upper left and find the “Cloud Functions” link in the sidebar, then click “Create function”.

Name your function “hello-world” and leave the rest of the options in the “Create function” form unchanged. Leave “Function to execute” as “helloWorld”, because that needs to match the name of the function your code exports. Below is the code you should enter, which confirms which version of Node.js your function is running on.

exports.helloWorld = (req, res) => {
  res.send('Hello from Node.js ' + process.version);
};

Click “Create” and wait for Google to deploy your cloud function. Once your function is deployed, click on it to display the function’s details.

Click the “Trigger” tab to find your cloud function’s URL.

Copy the URL and use curl to run your cloud function.

$ curl https://us-central1-test21-201718.cloudfunctions.net/hello-world
Hello from Node.js v6.11.5
$

Google Cloud Functions don’t give you any control over which version of Node.js you run: at the time of writing, they run Node.js v6.11.5, which means you can’t use async/await natively.

Connecting to MongoDB Atlas

Click on the “Source” tab in the function details and hit the “Edit” button. You’ll notice there are two files in your source code, one of which is package.json. Edit package.json to match the code below.

{
  "name": "sample-http",
  "version": "0.0.1",
  "dependencies": {
    "co": "4.6.0",
    "mongodb": "3.x"
  }
}

Once you redeploy, Google Cloud will automatically install your npm dependencies for you. Now, change your index.js to match the below code, replacing the uri with your MongoDB Atlas URI.
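
The code listing itself doesn’t appear here, but a minimal sketch of such a function might look like the following. It uses the co and mongodb dependencies declared above (generators stand in for async/await on Node 6); the URI, database, and collection names are placeholder assumptions:

const co = require('co');
const MongoClient = require('mongodb').MongoClient;

// Placeholder: replace with your MongoDB Atlas connection string
const uri = 'mongodb://localhost:27017/test';

exports.helloWorld = (req, res) => {
  co(function * () {
    // A new connection is established on every invocation --
    // see the caveat about connection reuse above
    const client = yield MongoClient.connect(uri);
    const docs = yield client.db('test').collection('tests').find().toArray();
    yield client.close();
    res.send(JSON.stringify(docs));
  }).catch(error => {
    res.send('Error: ' + error.message);
  });
};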


15 Tools and Resources That Will Help You Grow as a Designer

This article was created in partnership with BAWMedia. Thank you for supporting the partners who make SitePoint possible.

It’s hard to stay in your web design comfort zone when trends and technologies are in a continual state of change. Many of your design and development tools might continue to serve you well for some time. The same may be true for the resources you rely on.

There will come a time, however, when a favored tool or resource is no longer up to the task. Investing in new tools or resources is generally the easiest way to keep up with the changing times, especially when a tool or resource is easily affordable or, in some cases, free.

This might be a good time to take stock of what you have in your designer’s toolkit. See whether some changes might be in order. This list of 15 of 2018’s top tools and resources should get you off to a good start.

1. Mason

Mason

Requirements are always subject to change. These changes can be a headache to designers and developers as they usually involve repetitive cycles of work. Many of today’s software tools are equipped to handle requirements changes only to the extent that they can repeat prior tasks.

Mason has a different approach.

Mason is a combined design/development and maintenance/collaboration tool that can put an end to repetitive deployment cycles by relieving designers and developers of changes and fixes they shouldn’t have to be bothered with.

Mason has a wealth of software design features, including pre-packaged building blocks that address common requirements. What sets Mason apart is that it allows downstream users (software maintenance individuals or teams, and even clients) to make changes to these building blocks in response to changing requirements or the need for fixes and product updates.

Mason’s login and user registration protocols ensure that you always have total control over product changes, even though as a team leader or designer you’re no longer required to make them yourself.

2. Mobirise

Mobirise

The ability to create mobile-friendly websites and apps is no longer a nice-to-have; in today’s world it’s mandatory. Some themes still treat device-friendliness as an optional design extra. Mobirise, on the other hand, was created with mobile devices in mind.

Not only does Mobirise contain everything you need to build device-friendly websites and apps, but it does so without any cost to you and without any restrictions whatsoever. Mobirise is free to use for both your personal and commercial pursuits. It’s simply a matter of downloading it now and getting started.

Mobirise is an offline app, so you’ll have total control over product design and hosting. It’s also an excellent choice for smaller projects such as small websites, portfolios, landing pages, and promo sites.

3. Elementor

Elementor

If you don’t believe Elementor is the #1 WordPress page builder on the market, take a close look at the numbers: some 900,000 users have downloaded this free, open-source, feature-rich page-building platform in a little less than two years.

Performance and ease of use account, in part, for Elementor’s popularity, but its users also love its superior workflow features, visual form builder, custom CSS, and the menu builder.

Things are only going to get better for this product’s users — and for you as well if you choose to download it. The Elementor 2.0 release, with a wealth of powerful new tools is already underway and will continue in increments throughout the rest of the year.

New features include enhanced WooCommerce shop product pages, single post page builders, new eCommerce page-building options and more. Users can still enjoy their favorite features of the 1.0 version, too.

4. Goodiewebsite

Goodiewebsite

Goodiewebsite is a platform that has helped hundreds of clients with website development. It specializes in websites of 1–10 pages, design-to-code conversion (PSD, Sketch, Figma, XD, etc.), and simple WordPress sites.

Goodiewebsite services are cost effective and the tasks assigned to them are always performed professionally and reliably.

5. monday.com

monday.com

Whether you’re a team of two or a team of 20,000 scattered around the globe, and whether your work is tech or non-tech oriented, if you’re looking for a high-performance team management tool, monday.com will suit your needs to perfection.

This team management tool allows you to accomplish tasks without spreadsheets or white boards and avoids any need for scheduling an unending series of meetings. monday.com promotes project transparency and empowers team members.

6. A2’s Fully Managed WordPress Hosting

A2

A2 Hosting adjusts to your specific hosting requirements instead of the other way around. You can expect to receive precisely the hosting experience you want and need at an affordable price other services simply cannot match. Site staging, automated backups, blazing fast servers, 24/7 Guru support – it’s all there!

7. The Hanger

The Hanger

Whether the plan is to create an online presence for an existing clothing retailer or open a strictly eCommerce business, you might as well do it with a touch of pizzazz to draw customers in.

The Hanger is a modern-classic WordPress theme that’s just the cup of tea for building a high-quality online store in no time at all and customizing it to fit your brand or your client’s.


PHP-level Performance Optimization with Blackfire

Throughout the past few months, we’ve introduced Blackfire and the ways in which it can be used to detect application performance bottlenecks. In this post, we’ll apply it to our freshly started project to try and find the low points and low-hanging fruit we can pick to improve our app’s performance.

If you’re using Homestead Improved (and you should be), Blackfire is already installed. Blackfire should only ever be installed in development, not in production, so it’s fine to only have it there.

Note: Blackfire can be installed in production, as it doesn’t really trigger for users unless they manually initiate it with the installed Blackfire extension. However, it’s worth noting that defining profile triggers on certain actions or users that don’t need the extension will incur a performance penalty for the end user. When Blackfire-testing live, make the test sessions short and effective, and avoid doing so under heavy load.

While it’s useful to be introduced to Blackfire before diving into this, applying the steps in this post won’t require any prior knowledge; we’ll start from zero.

Setup

The following are useful terms when evaluating graphs produced by Blackfire.

  • Reference Profile: we usually run our first profile as a reference profile. This will be the performance baseline of our application; we can compare any later profile against it to measure performance gains.

  • Exclusive Time: the amount of time spent executing a function/method itself, excluding the time spent in its external calls.

  • Inclusive Time: the total time spent executing a function, including all its external calls.

  • Hot Paths: the parts of our application that were most active during the profile. These could be the parts that consumed the most memory or took the most CPU time.

The first step is registering for an account at Blackfire. The account page will have the tokens and IDs which need to be placed into Homestead.yaml after cloning the project. There’s a placeholder for all those values at the bottom:

# blackfire:
#     - id: foo
#       token: bar
#       client-id: foo
#       client-token: bar

After uncommenting the rows and replacing the values, we need to install the Chrome companion.

The Chrome companion is useful only when needing to trigger profiling manually — which will be the majority of your use cases. There are other integrations available as well, a full list of which can be found here.
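
The browser extension isn’t the only trigger, either: if the Blackfire agent and CLI are set up (as they are in Homestead Improved), a profile can also be started from the terminal, with the URL below standing in for wherever your app is served:

$ blackfire curl http://your-app-url/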

Optimization with Blackfire

We’ll test the home page: the landing page is arguably the most important part of any website, and if that takes too long to load, we’re guaranteed to lose our visitors. They’ll be gone before Google Analytics can kick in to register the bounce! We could test pages on which users add images, but read-only performance is far more important than write performance, so we’ll focus on the former.

This version of the app loads all the galleries and sorts them by age.

Testing is simple. We open the page we want to benchmark, click the extension’s button in the browser, and select “Profile!”.

Here’s the resulting graph:

In fact, we can see here that the ratio of inclusive to exclusive execution time is 100% on the PDO execution. Specifically, this means that the whole dark pink part is spent inside this function, and that this function in particular is not waiting for any other function. This is the function being waited on. Other method calls might have light pink bars far bigger than PDO’s, but those light pink parts are a sum of all the smaller light pink parts of dependent functions, which means that, looked at individually, those functions aren’t the problem. The dark ones need to be handled first; they are the priority.

Also, switching to RAM mode reveals that while the whole call used almost a whopping 40MB of RAM, the vast majority is in the Twig rendering, which makes sense: it is showing a lot of data, after all.

RAM mode

In the diagram, hot paths have thick borders and generally indicate bottlenecks. Intensive nodes can be part of the hot path, but also be completely outside it. Intensive nodes are nodes a lot of time is spent in for some reason, and can be indicative of problems just as much.

By looking at the most problematic methods and clicking around on the relevant nodes, we can identify PDOExecute as the most problematic bottleneck, while unserialize uses the most RAM relative to other methods. If we apply some detective work and follow the flow of methods calling each other, we’ll notice that both of these problems are caused by the fact that we’re loading the whole set of galleries on the home page. PDOExecute takes forever (in both memory and wall time) to find and sort them, and Doctrine takes ages and endless CPU cycles to turn them into renderable entities with unserialize, so they can be looped through in a Twig template. The solution seems simple — add pagination to the home page!

By adding a PER_PAGE constant into the HomeController and setting it to something like 12, and then using that pagination constant in the fetching procedure, we limit the initial load to the newest 12 galleries:

$galleries = $this->em->getRepository(Gallery::class)->findBy([], ['createdAt' => 'DESC'], self::PER_PAGE);
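
For clarity, the constant itself is just a class constant on the controller. A minimal sketch follows; the file path mirrors the ImageController mentioned in an earlier section and is an assumption:

// src/HomeController.php (assumed path)
class HomeController
{
    // Number of galleries fetched per page / per lazy-load batch
    const PER_PAGE = 12;

    // ...
}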

We’ll trigger a lazy load when the user reaches the end of the page when scrolling, so we need to add some JS to the home view:

{% block javascripts %}
    {{ parent() }}

    <script>
        $(function () {
            var nextPage = 2;
            var $galleriesContainer = $('.home__galleries-container');
            var $lazyLoadCta = $('.home__lazy-load-cta');

            function onScroll() {
                var y = $(window).scrollTop() + $(window).outerHeight();
                if (y >= $('body').innerHeight() - 100) {
                    $(window).off('scroll.lazy-load');
                    $lazyLoadCta.click();
                }
            }

            $lazyLoadCta.on('click', function () {
                var url = "{{ url('home.lazy-load') }}";
                $.ajax({
                    url: url,
                    data: {page: nextPage},
                    success: function (data) {
                        if (data.success === true) {
                            $galleriesContainer.append(data.data);
                            nextPage++;
                            $(window).on('scroll.lazy-load', onScroll);
                        }
                    }
                });
            });

            $(window).on('scroll.lazy-load', onScroll);
        });
    </script>
{% endblock %}

Since annotations are being used for routes, it’s easy to just add a new method into the HomeController to lazily load our galleries when triggered:

/**
 * @Route("/galleries-lazy-load", name="home.lazy-load")
 */
public function homeGalleriesLazyLoadAction(Request $request)
{
    $page = $request->get('page', null);
    if (empty($page)) {
        return new JsonResponse([
            'success' => false,
            'msg'     => 'Page param is required',
        ]);
    }

    $offset = ($page - 1) * self::PER_PAGE;
    $galleries = $this->em->getRepository(Gallery::class)->findBy([], ['createdAt' => 'DESC'], self::PER_PAGE, $offset);

    $view = $this->twig->render('partials/home-galleries-lazy-load.html.twig', [
        'galleries' => $galleries,
    ]);

    return new JsonResponse([
        'success' => true,
        'data'    => $view,
    ]);
}


How to Use Analytics to Create Targeted Email Campaigns

When we think UX, we think about the ways we help our users navigate our website. Among various types of conversions (subscriptions, submitting a form, etc.), the primary objectives are the conversions that directly translate to revenue. These conversions could be clicks that lead to advertising revenue, or checkouts that lead to sales.

So we optimize our visual design, we optimize our navigation, we reduce any clutter, and the user generally finds it easier to navigate your website. But … what happens when the user doesn’t find anything they like, despite how fantastic the user experience is? What happens when there’s so much to consume that the thing they need is like a needle in a haystack?

In this article, I’m going to talk to you about targeted user experiences, the careful art of finding out what users want, and delivering it. We can use these concepts to tailor content depending on the users’ needs. While some may consider this marketing, it’s actually a combination of both design and marketing. Design doesn’t always mean visual design.

What We’ll Learn Here

We’ll learn how to find out what users want using analytics, segment them into tailored email lists, then send them content recommendations via email. The same concepts can be applied to users visiting your website (i.e. “Popular content”, “You might also be interested in …”, etc.), but since email UX is a very neglected aspect of design, we’ll be using that as an example.

We’ll also talk about UX in regards to what happens when the user clicks on something they like in an email.

Email Marketing 101 (Optional Reading)

There are two main approaches to email marketing:

  • the traditional “batch-and-blast” approach, in which you lump all subscribers together and email them the same thing
  • the targeted approach, in which relevant emails are sent to segmented subscribers based on their interests.

Sending targeted emails is by far the most effective (although larger websites will benefit the most). In fact, it’s so effective that a study by MarketingSherpa found that it can boost email conversions by up to 208%. We’ll use a combination of email marketing and analytics to deliver this relevant content. (By “content” I mean products or articles.)

Step 1: Isolate Email Traffic

First things first: you’ll want to isolate your email traffic from any other traffic and make it so you can find out how individual email campaigns are doing. You need to know how your email efforts are contributing to your conversions in the grand scheme of things. With Google Analytics, this can be done with Tags and Advanced Segments.

Start by creating link-tracking tags using the Google Analytics Campaign URL Builder. This will help you create URLs with utm_parameters that will help you track clicks and campaigns.

Google campaign URL builder

Fill in the relevant details:

  • Website URL: this should be the exact web page you want to link to
  • Campaign Source: the source of the traffic (which in this case, is the name of your newsletter)
  • Campaign Medium: the medium in this case is “email”
  • Campaign Name: the unique, friendly name of the campaign (used for conducting analysis/identification)
  • Campaign Term: we don’t need this right now
  • Campaign Content: this is for A/B testing reasons—for example, if you wanted to create multiple versions of the same link inside the same email, this can be used to differentiate them (e.g. “text logo”, “image logo”).

Once you’ve filled in all the relevant details, scroll down, where your link will look something like this:

Google UTM link

You’ll send your email subscribers to this link, where the utm_parameters you’ve defined will automatically be tagged as coming from your email marketing efforts. For every email link you want to track, you’ll repeat these steps. Although some email marketing software does this automatically, it’s important to know the underlying concept of how it works.
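
For instance, a link tagged for a weekly newsletter might end up looking something like this (the domain and values are made up for illustration):

https://www.example.com/summer-sale?utm_source=weekly-newsletter&utm_medium=email&utm_campaign=summer-sale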


Don’t Use The Placeholder Attribute


Eric Bailey


Introduced as part of the HTML5 specification, the placeholder attribute “represents a short hint (a word or short phrase) intended to aid the user with data entry when the control has no value. A hint could be a sample value or a brief description of the expected format.”

This seemingly straightforward attribute contains a surprising amount of issues that prevent it from delivering on what it promises. Hopefully, I can convince you to stop using it.

Technically Correct

Inputs are the gates through which nearly all e-commerce has to pass. Regardless of your feelings on the place of empathy in design, unusable inputs leave money on the table.

The presence of a placeholder attribute won’t be flagged by automated accessibility checking software. However, this doesn’t necessarily mean it’s usable. Ultimately, accessibility is about people, not standards, so it is important to think about your interface in terms beyond running through a checklist.

Call it remediation, inclusive design, universal access, whatever. The spirit of all these philosophies boils down to making things that people—all people—can use. Viewed through this lens, placeholder simply doesn’t hold up.

The Problems

Translation

Browsers with auto-translation features such as Chrome skip over attributes when a request to translate the current page is initiated. For many attributes, this is desired behavior, as an updated value may break underlying page logic or structure.

One of the attributes skipped over by browsers is placeholder. Because of this, placeholder content won’t be translated and will remain as the originally authored language.

If a person is requesting a page to be translated, the expectation is that all visible page content will be updated. Placeholders are frequently used to provide important input formatting instructions or are used in place of a more appropriate label element (more on that in a bit). If this content is not updated along with the rest of the translated page, there is a high possibility that a person unfamiliar with the language will not be able to successfully understand and operate the input.

This should be reason enough to not use the attribute.

While we’re on the subject of translation, it’s also worth pointing out that location isn’t the same as language preference. Many people set their devices to use a language that isn’t the official language of the country reported by their browser’s IP address (to say nothing of VPNs), and we should respect that. Make sure to keep your content semantically described—your neighbors will thank you!

Interoperability

Interoperability is the practice of making different systems exchange and understand information. It is a foundational part of both the Internet and assistive technology.

Semantically describing your content makes it interoperable. An interoperable input is created by programmatically associating a label element with it. Labels describe the purpose of an input field, providing the person filling out the form with a prompt that they can take action on. One way to associate a label with an input, is to use the for attribute with a value that matches the input’s id.

Without this for/id pairing, assistive technology will be unable to determine what the input is for. The programmatic association provides an API hook that software such as screen readers or voice recognition can utilize. Without it, people who rely on this specialized software will not be able to read or operate inputs.
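
As a quick illustration of the pairing described above (the id value is arbitrary):

<!-- Clicking the label text focuses the input, and assistive
     technology announces "Your Name" when the input gains focus -->
<label for="your-name">Your Name</label>
<input type="text" id="your-name" name="your-name">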


Diagram: code for a text input with a label reading “Your Name” is rendered visually, while assistive technology reads its computed accessible name (“Your Name”) and role (textbox).
How semantic markup is used for both visual presentation and accessible content.

The reason I am mentioning this is that placeholder is oftentimes used in place of a label element. Although I’m personally baffled by the practice, it seems to have gained traction in the design community. My best guess for its popularity is that the geometrically precise grid effect created when label-less input fields sit next to each other acts like designer catnip.


Facebook’s signup form. A heading reads, “Sign Up. It’s free and always will be.” Placeholders are being used as labels, asking for your first name, last name, mobile number or email, and to create a new password for your account.
An example of input grid fetishization from a certain infamous blue website.

The floating label effect, a close cousin to this phenomenon, oftentimes utilizes the placeholder attribute in place of a label, as well.

A neat thing worth pointing out is that if a label is programmatically associated with an input, clicking or tapping on the label text will place focus on the input. This little trick provides an extra area for interacting with the input, which can be beneficial to people with motor control issues. Placeholders acting as labels, as well as floating labels, cannot do that.

Cognition

The 2016 United States Census lists nearly 15 million people who report having cognitive difficulty — and that’s only counting individuals who choose to self-report. Extrapolating from this, we can assume that cognitive accessibility concerns affect a significant amount of the world’s population.

Self-reporting is worth calling out, in that a person may not know, or feel comfortable sharing that they have a cognitive accessibility condition. Unfortunately, there are still a lot of stigmas attached to disclosing this kind of information, as it oftentimes affects things like job and housing prospects.

Cognition can be inhibited situationally, meaning it can very well happen to you. It can be affected by things like multitasking, sleep deprivation, stress, substance abuse, and depression. I might be a bit jaded here, but that sounds a lot like conditions you’ll find at most office jobs.

Recall

The umbrella of cognitive concerns covers conditions such as short-term memory loss, traumatic brain injury, and Attention Deficit Hyperactivity Disorder. They can all affect a person’s ability to recall information.

When a person enters information into an input, its placeholder content will disappear. The only way to restore it is to remove the information entered. This creates an experience where guiding language is removed as soon as the person attempting to fill out the input interacts with it. Not great!

An input called “Your Birthdate” being filled out. The placeholder reads “MM/DD/YYYY”, and the animation depicts the person filling it out getting to the year portion and having to delete the text to go back and review the proper formatting.
Did they want MM/DD/YY, or MM/DD/YYYY?

When your ability to recall information is inhibited, it makes following these disappearing rules annoying. For inputs with complicated requirements to satisfy—say creating a new password—it transcends annoyance and becomes a difficult barrier to overcome.

An input called “Create a Password” being filled out. The placeholder reads “8-15 characters, including at least 3 numbers and 1 symbol”, and the animation depicts the person filling it out having to delete the text to go back and review the password requirements.
Wait—what’s the minimum length? How many numbers do they want again?

While more technologically-sophisticated people may have learned clever tricks such as cutting entered information, reviewing the placeholder content to refresh their memory, then re-pasting it back in to edit, people who are less technologically literate may not understand why the help content is disappearing or how to bring it back.

Digital Literacy

Considering that more and more of the world’s population is coming online, the onus falls on us as responsible designers and developers to make these people feel welcomed. Your little corner of the Internet (or intranet!) could very well be one of their first experiences online — assuming that the end user “will just know” is simple arrogance.



Building an Image Gallery Blog with Symfony Flex: Data Testing

In the previous article, we demonstrated how to set up a Symfony project from scratch with Flex, and how to create a simple set of fixtures and get the project up and running.

The next step on our journey is to populate the database with a somewhat realistic amount of data to test application performance.

Note: if you did the “Getting started with the app” step in the previous post, you’ve already followed the steps outlined in this post. If that’s the case, use this post as an explainer on how it was done.

As a bonus, we’ll demonstrate how to set up a simple PHPUnit test suite with basic smoke tests.

More Fake Data

Once your entities are polished, and you’ve had your “That’s it! I’m done!” moment, it’s a perfect time to create a more significant dataset that can be used for further testing and preparing the app for production.

Simple fixtures like the ones we created in the previous article are great for the development phase, where loading ~30 entities is done quickly, and it can often be repeated while changing the DB schema.

Testing app performance, simulating real-world traffic and detecting bottlenecks requires bigger datasets (i.e. a larger amount of database entries and image files for this project). Generating thousands of entries takes some time (and computer resources), so we want to do it only once.

We could try increasing the COUNT constant in our fixture classes and seeing what will happen:

// src/DataFixtures/ORM/LoadUsersData.php
class LoadUsersData extends AbstractFixture implements ContainerAwareInterface, OrderedFixtureInterface
{
    const COUNT = 500;
    ...
}

// src/DataFixtures/ORM/LoadGalleriesData.php
class LoadGalleriesData extends AbstractFixture implements ContainerAwareInterface, OrderedFixtureInterface
{
    const COUNT = 1000;
    ...
}

Now, if we run bin/refreshDb.sh, after some time we’ll probably get a not-so-nice message like PHP Fatal error: Allowed memory size of N bytes exhausted.

Apart from slow execution, every error would result in an empty database because EntityManager is flushed only at the very end of the fixture class. Additionally, Faker is downloading a random image for every gallery entry. For 1,000 galleries with 5 to 10 images per gallery that would be 5,000 – 10,000 downloads, which is really slow.

There are excellent resources on optimizing Doctrine and Symfony for batch processing, and we’re going to use some of these tips to optimize fixtures loading.

First, we’ll define a batch size of 100 galleries. After every batch, we’ll flush and clear the EntityManager (i.e., detach persisted entities) and tell the garbage collector to do its job.

To track progress, let’s print out some meta information (batch identifier and memory usage).

Note: After calling $manager->clear(), all persisted entities become detached. The entity manager doesn’t know about them anymore, and you’ll probably get an “entity-not-persisted” error. The key is to merge each such entity back into the manager: $entity = $manager->merge($entity);

Without the optimization, memory usage is increasing while running a LoadGalleriesData fixture class:

> loading [200] App\DataFixtures\ORM\LoadGalleriesData
100 Memory usage (currently) 24MB / (max) 24MB
200 Memory usage (currently) 26MB / (max) 26MB
300 Memory usage (currently) 28MB / (max) 28MB
400 Memory usage (currently) 30MB / (max) 30MB
500 Memory usage (currently) 32MB / (max) 32MB
600 Memory usage (currently) 34MB / (max) 34MB
700 Memory usage (currently) 36MB / (max) 36MB
800 Memory usage (currently) 38MB / (max) 38MB
900 Memory usage (currently) 40MB / (max) 40MB
1000 Memory usage (currently) 42MB / (max) 42MB

Memory usage starts at 24 MB and increases by 2 MB for every batch (100 galleries). If we tried to load 100,000 galleries, we’d need 24 MB + 999 * 2 MB (999 more batches of 100 galleries each, i.e. 99,900 galleries) = ~2 GB of memory.

After adding $manager->flush() and gc_collect_cycles() for every batch, removing SQL logging with $manager->getConnection()->getConfiguration()->setSQLLogger(null) and removing entity references by commenting out $this->addReference('gallery' . $i, $gallery);, memory usage becomes somewhat constant for every batch.

// Define batch size outside of the for loop
$batchSize = 100;

...

for ($i = 1; $i <= self::COUNT; $i++) {
    ...

    // Save the batch at the end of the for loop
    if (($i % $batchSize) == 0 || $i == self::COUNT) {
        $currentMemoryUsage = round(memory_get_usage(true) / 1024 / 1024);
        $maxMemoryUsage = round(memory_get_peak_usage(true) / 1024 / 1024);
        echo sprintf("%s Memory usage (currently) %dMB / (max) %dMB\n", $i, $currentMemoryUsage, $maxMemoryUsage);

        $manager->flush();
        $manager->clear();

        // here you should merge entities you're re-using with the $manager
        // because they aren't managed anymore after calling $manager->clear();
        // e.g. if you've already loaded category or tag entities
        // $category = $manager->merge($category);

        gc_collect_cycles();
    }
}

As expected, memory usage is now stable:

> loading [200] App\DataFixtures\ORM\LoadGalleriesData
100 Memory usage (currently) 24MB / (max) 24MB
200 Memory usage (currently) 26MB / (max) 28MB
300 Memory usage (currently) 26MB / (max) 28MB
400 Memory usage (currently) 26MB / (max) 28MB
500 Memory usage (currently) 26MB / (max) 28MB
600 Memory usage (currently) 26MB / (max) 28MB
700 Memory usage (currently) 26MB / (max) 28MB
800 Memory usage (currently) 26MB / (max) 28MB
900 Memory usage (currently) 26MB / (max) 28MB
1000 Memory usage (currently) 26MB / (max) 28MB

Instead of downloading random images every time, we can prepare 15 random images and update the fixture script to randomly choose one of them instead of using Faker’s $faker->image() method.

Let’s take 15 images from Unsplash and save them in var/demo-data/sample-images.

Then, update the LoadGalleriesData::generateRandomImage method:

private function generateRandomImage($imageName)
    {
        $images = [
            'image1.jpeg',
            'image10.jpeg',
            'image11.jpeg',
            'image12.jpg',
            'image13.jpeg',
            'image14.jpeg',
            'image15.jpeg',
            'image2.jpeg',
            'image3.jpeg',
            'image4.jpeg',
            'image5.jpeg',
            'image6.jpeg',
            'image7.jpeg',
            'image8.jpeg',
            'image9.jpeg',
        ];

        $sourceDirectory = $this->container->getParameter('kernel.project_dir') . '/var/demo-data/sample-images/';
        $targetDirectory = $this->container->getParameter('kernel.project_dir') . '/var/uploads/';

        $randomImage = $images[rand(0, count($images) - 1)];
        $randomImageSourceFilePath = $sourceDirectory . $randomImage;
        $randomImageExtension = explode('.', $randomImage)[1];
        $targetImageFilename = sha1(microtime() . rand()) . '.' . $randomImageExtension;
        copy($randomImageSourceFilePath, $targetDirectory . $targetImageFilename);

        $image = new Image(
            Uuid::getFactory()->uuid4(),
            $randomImage,
            $targetImageFilename
        );

        return $image;
    }

It’s a good idea to remove old files in var/uploads when reloading fixtures, so I’m adding an rm var/uploads/* command to the bin/refreshDb.sh script, immediately after dropping the DB schema.

Loading 500 users and 1000 galleries now takes ~7 minutes and ~28 MB of memory (peak usage).

Dropping database schema...
Database schema dropped successfully!
ATTENTION: This operation should not be executed in a production environment.

Creating database schema...
Database schema created successfully!
  > purging database
  > loading [100] App\DataFixtures\ORM\LoadUsersData
300 Memory usage (currently) 10MB / (max) 10MB
500 Memory usage (currently) 12MB / (max) 12MB
  > loading [200] App\DataFixtures\ORM\LoadGalleriesData
100 Memory usage (currently) 24MB / (max) 26MB
200 Memory usage (currently) 26MB / (max) 28MB
300 Memory usage (currently) 26MB / (max) 28MB
400 Memory usage (currently) 26MB / (max) 28MB
500 Memory usage (currently) 26MB / (max) 28MB
600 Memory usage (currently) 26MB / (max) 28MB
700 Memory usage (currently) 26MB / (max) 28MB
800 Memory usage (currently) 26MB / (max) 28MB
900 Memory usage (currently) 26MB / (max) 28MB
1000 Memory usage (currently) 26MB / (max) 28MB

Take a look at the fixture classes source: LoadUsersData.php and LoadGalleriesData.php.
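
As for the PHPUnit smoke tests promised earlier, a minimal sketch of one might look like the following; the class name and asserted route are assumptions rather than the project’s actual test suite:

// tests/SmokeTest.php (assumed location)
use Symfony\Bundle\FrameworkBundle\Test\WebTestCase;

class SmokeTest extends WebTestCase
{
    public function testHomePageLoads()
    {
        // Boot a kernel-backed test client and request the home page
        $client = static::createClient();
        $client->request('GET', '/');

        // A smoke test asserts only that the page responds successfully
        $this->assertSame(200, $client->getResponse()->getStatusCode());
    }
}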


How to Create a Mall Map with Real-time Data Using WRLD

As a web developer, you sometimes find yourself in a position where you are required to implement a map. Your first choice is to use Google Maps, right?

google-maps

This looks okay. However, you may be required to overlay additional information over the map with the help of markers. You can use this method, or you can find a better solution that allows you to create markers inside an indoor 3D map! How cool is that? With indoor markers, you can provide unique experiences for users where they will be able to access information and interact with UIs right inside the map.

completed-mall-map

In this tutorial, we’ll create two demos illustrating the power of WRLD maps. You’ll learn how to create custom apps that can overlay real-time information over a 3D map. In the first demo, we’ll add interactive markers to an existing indoor map of a mall. In the second demo, we’ll place colored polygons over parking areas, indicating capacity.

You can find the completed project for both demos in this GitHub repository.

Prerequisites

For this article, you only need to have a fundamental understanding of the following topics:

I’ll assume this is your first time using WRLD maps. However, I do recommend you at least have a quick read of the article:

You’ll also need a recent version of Node.js and npm installed on your system (at the time of writing, 8.10 LTS is the latest stable version). For Windows users, I highly recommend you use Git Bash or any other terminal capable of handling basic Linux commands.

This tutorial will use yarn for package installation; if you prefer npm or are unfamiliar with yarn commands, please refer to this guide for the equivalents.

Acquire an API Key

Before you get started, you’ll need to create a free account on WRLD. Once you’ve logged in and verified your email address, you’ll need to acquire an API key. For detailed instructions on how to acquire one, please check out the Getting Started section on Building Dynamic 3D Maps where it’s well documented.

Approach to Building the Map

The creation of WRLD maps is a major technological achievement with great potential benefits for many industries. There are two main ways of expanding the platform’s capabilities:

  • Using built-in tools, e.g. Map Designer and Places Designer
  • Building a custom app

Let me break down how each method can be used to achieve the desired results.

1. Using Map Designer and Places Designer

For our first demo, we can use Places Designer to create Store Cards. This will require us to create a Collection Set where all Point of Interest markers will be held. This set can be accessed both within the WRLD ecosystem, and externally via the API key. We can pass this data to a custom map created using the Map Designer. With this tool, we can share the map with others using its generated link. If you would like to learn more about the process, please watch the video tutorials on this YouTube playlist.

map-tools

The beauty of this method is that no coding is required. However, in our case, it does have limitations:

  • Restrictive UI design – we can only use the UI that comes with Places Designer
  • Restrictive data set – we can’t display additional information beyond what is provided

In order to overcome these limitations, we need to approach our mall map challenge using the second method.

2. Building a Custom App

Building custom apps is the most flexible option. Although it takes some coding effort, it does allow us to comprehensively tap into the wealth of potential provided by the WRLD platform. By building a custom app, we can create our own UI, add more fields and access external databases in real-time. This is the method that we’ll use for this tutorial.

Building the App

Let’s first create a basic map, to which we’ll add more functionality later. Head over to your workspace directory and create a new folder for your project. Let’s call it mall-map.

Open the mall-map folder in your code editor. If you have VSCode, access the terminal using Ctrl + ` and execute the following commands inside the project directory:

# Initialize package.json
npm init -f

# Create project directories
mkdir src
mkdir src/js src/css

# Create project files
touch src/index.html
touch src/js/app.js
touch src/css/app.css
touch env.js

This is how your project structure should look:

 project-structure

Now that we have our project structure in place, we can begin writing code. We’ll start with index.html. Insert this code:

<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <meta http-equiv="X-UA-Compatible" content="ie=edge">
  <link rel="stylesheet" href="./css/app.css" />
  <title>Shopping Mall</title>
</head>
<body>
  <div id="map"></div>
  <script src="js/app.js"></script>
</body>
</html>

Next, let’s work on css/app.css. I’m providing the complete styling for the entire project so that we don’t have to revisit this file again. In due time you’ll understand the contents as you progress with the tutorial.

@import "https://cdnjs.cloudflare.com/ajax/libs/leaflet/1.0.1/leaflet.css";
@import "https://cdn-webgl.wrld3d.com/wrldjs/addons/resources/latest/css/wrld.css";
@import "https://cdnjs.cloudflare.com/ajax/libs/semantic-ui/2.3.0/semantic.min.css";

html,
body {
  margin: 0;
  padding: 0;
  width: 100%;
  height: 100%;
}

#map {
  width: 100%;
  height: 100%;
  background-color: #000000;
}

/* -------- POPUP CONTENT -------- */
.main-wrapper > .segment {
  padding: 0px;
  width: 300px;
}

.contacts > span {
  display: block;
  padding-top: 5px;
}

Now we need to start writing code for app.js. However, we need a couple of node dependencies:

yarn add wrld.js axios

As mentioned earlier, we’ll be taking advantage of modern JavaScript syntax to write our code. Hence, we need to use babel to compile our modern code to a format compatible with most browsers. This requires installing babel dependencies and configuring them via a .babelrc file. Make sure to install them as dev-dependencies.

yarn add babel-core babel-plugin-transform-runtime babel-runtime --dev
touch .babelrc

Copy this code to the .babelrc file:

{
  "plugins": [
    [
      "transform-runtime",
      {
        "polyfill": false,
        "regenerator": true
      }
    ]
  ]
}

We’ll also need the following packages to run our project:

  • Parcel bundler – it’s like a simplified version of webpack with almost zero configuration
  • JSON Server – for creating a dummy API server

Install the packages globally like this:

yarn global add parcel-bundler json-server

# Alternative command for npm users
npm install -g parcel-bundler json-server
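
We won’t need json-server until later, when we build a dummy API from a db.json file, but for reference it’s run along these lines (the file path matches the data folder we create later; json-server serves on port 3000 by default):

# Serve data/db.json as a REST API on http://localhost:3000
json-server --watch data/db.json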

That’s all the node dependencies we need for our project. Let’s now write some JavaScript code. First, supply your WRLD API key in env.js:

module.exports = {
  WRLD_KEY: '<put api key here>',
 };

Then open js/app.js and copy this code:

const Wrld = require('wrld.js');
const env = require('../../env');

const keys = {
  wrld: env.WRLD_KEY,
};

window.addEventListener('load', async () => {
  const map = await Wrld.map('map', keys.wrld, {
    center: [56.459733, -2.973371],
    zoom: 17,
    indoorsEnabled: true,
  });
});

The first three statements are pretty obvious. We’ve put all our code inside the window.addEventListener function. This is to ensure our code is executed after the JavaScript dependencies that we’ll specify later in index.html have loaded. Inside this function, we’ve initialized the map by passing several parameters:

  • map – the ID of the div container we specified in index.html
  • keys.wrld – API key
  • center – latitude and longitude of the Overgate Mall located in Dundee, Scotland
  • zoom – elevation
  • indoorsEnabled – allow users to access indoor maps

Let’s fire up our project. Go to your terminal and execute:

parcel src/index.html

Wait for a few seconds for the project to finish bundling. When it’s done, open your browser and access localhost:1234. Depending on your Internet speed, the map shouldn’t take too long to load.

building-map

Beautiful, isn’t it? Feel free to click the blue icon. It will take you indoors. Navigate around to see the different stores. However, you’ll soon realize that you can’t access other floors. There’s also no button for exiting the indoor map. Let’s fix that in the next chapter.

Create Indoor Controls

To allow users to switch between different floors, we’ll provide them with a control widget. Simply add the following scripts to the head section of the src/index.html file:

<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/jqueryui/1.12.1/jquery-ui.min.js"></script>
<script src="https://cdn-webgl.wrld3d.com/wrldjs/addons/indoor_control/latest/indoor_control.js"></script>

Still within the html file, add this div in the body section, right before the #map div:

<div id="widget-container" class="wrld-widget-container"></div>

Now let’s update js/app.js to initialize the widget. Place this code right after the map initialization section:

const indoorControl = new WrldIndoorControl('widget-container', map);

Now refresh the page and click the ‘Enter Indoors’ icon. You should see a control widget for switching between floors. Just drag the control up and down to move fluidly from one floor to another.

indoor-controls

Amazing, isn’t it? Now let’s see how we can make our map a little bit more convenient for our users.

Enter Indoors Automatically

Don’t you find it a bit annoying that, every time we test our map, we have to click the ‘Indoors’ icon? Users may also start navigating to other locations, which isn’t the intention of this app. To fix this, we’ll navigate indoors automatically when the app starts, without any user interaction. First, we need the indoor map ID, which we can capture from the indoormapenter event. You can find all the indoor-related methods here.

Add the following code to the js/app.js file:

...
// Place this code right after the Wrld.map() statement
map.indoors.on('indoormapenter', async (event) => {
  console.log(event.indoorMap.getIndoorMapId());
});
...

Refresh the page, then check your console. You should see this ID printed out: EIM-e16a94b1-f64f-41ed-a3c6-8397d9cfe607. Let’s now write the code that performs the actual navigation:

const indoorMapId = 'EIM-e16a94b1-f64f-41ed-a3c6-8397d9cfe607';

map.on('initialstreamingcomplete', () => {
  map.indoors.enter(indoorMapId);
});
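
Putting the pieces together, the load handler in js/app.js now looks like this. (Once you’ve captured the ID, the logging listener can be removed.)

window.addEventListener('load', async () => {
  const map = await Wrld.map('map', keys.wrld, {
    center: [56.459733, -2.973371],
    zoom: 17,
    indoorsEnabled: true,
  });

  // Temporary: log the indoor map ID so we can hard-code it below
  map.indoors.on('indoormapenter', (event) => {
    console.log(event.indoorMap.getIndoorMapId());
  });

  const indoorMapId = 'EIM-e16a94b1-f64f-41ed-a3c6-8397d9cfe607';

  // Enter the mall automatically once the initial map data has streamed in
  map.on('initialstreamingcomplete', () => {
    map.indoors.enter(indoorMapId);
  });
});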

After saving the file, refresh the page and see what happens.

The map should now enter the mall’s indoor view automatically. Next, we’ll look at how we can create cards for each store. But first, we need to determine where to source our data.

Mall Map Planning

To create store cards for our map, we need several items:

  • Exact Longitude/Latitude coordinates of a store
  • Store contact information and opening hours
  • Design template for the store card

Store Card Coordinates

To acquire the longitude/latitude coordinates, we need to visit maps.wrld3d.com. Wait for the map to finish loading, then enter the address 56.459733, -2.973371 in the search box. Press enter and the map will quickly navigate to Overgate Mall. Click the blue indoor icon for Overgate Mall and you’ll be taken to the mall’s indoor map. Once it’s loaded, locate the ‘Next’ store and right-click to open the context menu. Click the ‘What is this place?’ option. The coordinate popup should appear.

place-coordinates

Click the ‘Copy Coordinate’ button. This will give you the exact longitude/latitude coordinates of the store. Save these coordinates somewhere temporarily.

Store Card Information

You’ll also need to gather contact information for each store, which includes:

  • image
  • description
  • phone
  • email
  • web
  • Twitter
  • opening times

You can source most of this information from Google. Luckily, I’ve already collected the data for you. For this tutorial, we’ll only deal with four stores on the ground floor. To access the information, just create a folder at the root of the project and call it data. Next, save this file from GitHub into the data folder, making sure to name it db.json. Here’s a sample of the data we’ll be using:

{
  "id": 1,
  "title": "JD Sports",
  "lat": 56.4593425,
  "long": -2.9741433,
  "floor_id": 0,
  "image_url": "https://cdn-03.belfasttelegraph.co.uk/business/news/...image.jpg",
  "description": "Retail chain specialising in training shoes, sportswear & accessories.",
  "phone": "+44 138 221 4545",
  "email": "customercare@jdsports.co.uk",
  "web": "https://www.jdsports.co.uk/",
  "twitter": "@jdhelpteam",
  "tags": "sports shopping",
  "open_time": [
    { "day": "Mon", "time": "9:30am - 6:00pm" }
  ]
}

The data is stored in an array labeled ‘pois’. POI stands for Point of Interest. Now that we have the data available, we can easily make it accessible via a REST API endpoint by running JSON Server. Just open a new terminal and execute the command:

json-server --watch data/db.json

It should take a few seconds for the API to start. Once it’s fully loaded, you can test it in your browser at localhost:3000/pois. You can also fetch a single POI using this syntax:

localhost:3000/pois/{id}

For example, localhost:3000/pois/3 should return the POI record with ID 3 in JSON format.
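
If you’d like to verify the endpoint from Node as well, here’s a minimal sketch using the axios package we installed earlier (it assumes JSON Server is running on its default port, 3000):

const axios = require('axios');

// Quick sanity check: fetch POI #1 and print its title
axios.get('http://localhost:3000/pois/1')
  .then(response => console.log(response.data.title)) // e.g. "JD Sports"
  .catch(error => console.error(error));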

Store Card Design

We’ll use a clean, elegant theme to neatly display contact information and opening times across a couple of tabs. We’ll create markers that display a popup when clicked. This popup will have the following UI:

store-card-template

The code for this HTML design is a bit long to put here. You can view and download the file from this link. The design only has three dependencies:

  • Semantic UI CSS
  • jQuery
  • Semantic UI JS

Now that we have the data required and the design, we should be ready to start working on our indoor map.

Implementing Store Cards in Indoor Map

First, let’s create a service that allows us to access data from the JSON REST API. This data will be used to populate the store cards with the necessary information. Create the file js/api-service.js and copy this code:

const axios = require('axios');

// HTTP client pointed at the local JSON Server
const client = axios.create({
  baseURL: 'http://127.0.0.1:3000',
  timeout: 1000,
});

module.exports = {
  // Fetch all stores; fall back to an empty array on failure
  getPOIs: async () => {
    try {
      const response = await client.get('/pois');
      return response.data;
    } catch (error) {
      console.error(error);
    }
    return [];
  },
  // Fetch a single store by ID; fall back to an empty object on failure
  getPOI: async (id) => {
    try {
      const response = await client.get(`/pois/${id}`);
      return response.data;
    } catch (error) {
      console.error(error);
    }
    return {};
  },
};

Here we’re using the axios library to request data from the JSON server. If a request fails, each function logs the error and returns an empty fallback so callers can degrade gracefully.
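
As a quick illustration, here’s how another module inside js/ could consume this service (a minimal sketch; the real wiring happens in the popup code below):

const { getPOIs, getPOI } = require('./api-service');

(async () => {
  const pois = await getPOIs();     // array of all stores
  const jdSports = await getPOI(1); // a single store by ID
  console.log(pois.length, jdSports.title);
})();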

Next, we’ll convert our static HTML design for the store card into a format that allows us to render data. We’ll be using JsRender for this, breaking the static design down into three templates:

  • Base Template – has containers for menu, info and time tabs.
  • Info Template – tab for store contact information.
  • Time Template – tab for store opening hours.

First, open index.html and add these scripts to the head section, right after the jQuery and indoor control scripts:

<head>
  ...
  <script src="https://cdnjs.cloudflare.com/ajax/libs/jsrender/0.9.90/jsrender.min.js"></script>
  <script src="https://cdnjs.cloudflare.com/ajax/libs/semantic-ui/2.3.0/semantic.min.js"></script>
  ...
</head>

Next, copy this section of code right before the widget-container div:

  ...
  <!-- Menu Tabs UI -->
 <script id="baseTemplate" type="text/x-jsrender">
    <div class="main-wrapper">
      <div class="ui compact basic segment">
        <div class="ui menu tabular"> </div>
        <div id="infoTab" class="ui tab active" data-tab="Info"></div>
        <div id="timeTab" class="ui tab" data-tab="Time"></div>
      </div>
    </div>
  </script>

  <!-- Info Data Tab -->
  <script id="infoTemplate" type="text/x-jsrender">
    <div class="ui card">
      <div class="image">
        <img src="{{:image_url}}">
      </div>
      <div class="content">
        <div class="header">{{:title}}</div>
        <div class="description">
          {{:description}}
        </div>
      </div>
      <div class="extra content contacts">
        <span>
          <i class="globe icon"></i>
          <a href="{{:web}}" target="_blank">{{:web}}</a>
        </span>
        <span>
          <i class="mail icon"></i>
          {{:email}}
        </span>
        <span>
          <i class="phone icon"></i>
          {{:phone}}
        </span>
      </div>
    </div>
  </script>

  <!-- Opening Times Data Tab -->
  <script id="timeTemplate" type="text/x-jsrender">
    <table class="ui celled table">
      <thead>
        <tr>
          <th>Day</th>
          <th>Time</th>
        </tr>
      </thead>
      <tbody>
        {{for open_time}}
        <tr>
          <td>{{:day}}</td>
          <td>{{:time}}</td>
        </tr>
        {{/for}}
      </tbody>
    </table>
  </script>
  ...

This is how the full code for index.html should look.

Next, let’s create another service that will manage the creation of Popups. Create the file js/popup-service.js and copy this code:

const Wrld = require('wrld.js');
const { getPOI } = require('./api-service');

const baseTemplate = $.templates('#baseTemplate');
const infoTemplate = $.templates('#infoTemplate');
const timeTemplate = $.templates('#timeTemplate');

const popupOptions = {
  indoorMapId: 'EIM-e16a94b1-f64f-41ed-a3c6-8397d9cfe607',
  indoorMapFloorIndex: 0,
  autoClose: true,
  closeOnClick: true,
  elevation: 5,
};

Let me explain each block step by step:

  • Block 1: Wrld is required for creating the popup, and the getPOI function for fetching data
  • Block 2: the templates we discussed earlier are loaded using JsRender
  • Block 3: the parameters that will be passed when a popup is instantiated. Here is the reference documentation.
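
To make Block 2 concrete, here’s a sketch of how the three templates can be composed into a single card element for a popup. The buildCardHtml helper is hypothetical, introduced purely for illustration; poi is one record from our db.json:

// Hypothetical helper: render the base shell, then inject the rendered tabs
const buildCardHtml = (poi) => {
  const wrapper = document.createElement('div');
  wrapper.innerHTML = baseTemplate.render(); // menu bar plus empty Info/Time containers
  wrapper.querySelector('#infoTab').innerHTML = infoTemplate.render(poi);
  wrapper.querySelector('#timeTab').innerHTML = timeTemplate.render(poi);
  return wrapper;
};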

Next, let’s add tab menus that will be used for switching tabs. Simply add this code to js/popup-service.js:

const createMenuLink = (linkName, iconClass) => {
  // Build an <a class="item"> element with an icon and a label, e.g. ' Info'
  const link = document.createElement('a');
  link.className = 'item';
  const icon = document.createElement('i');
  icon.className = `${iconClass} icon`;
  link.appendChild(icon);
  link.appendChild(document.createTextNode(` ${linkName}`));
  link.setAttribute('data-tab', linkName);
  link.addEventListener('click', () => {
    // Switch the visible tab and flip the active state of both links
    $.tab('change tab', linkName);
    $('.item').toggleClass('active');
  });
  return link;
};

const createMenu = (menuParent) => {
  const infoLink = createMenuLink('Info', 'info circle');
  infoLink.className += ' active'; // the Info tab is open by default
  menuParent.appendChild(infoLink);
  const timeLink = createMenuLink('Time', 'clock');
  menuParent.appendChild(timeLink);
};

You might be wondering why we’re using such a roundabout way of creating menu links. Ideally, we’d define them in HTML and add a small script to activate the tabs. Unfortunately, this doesn’t work within the context of a popup. Instead, we need to build the clickable elements using DOM manipulation methods, as shown in the example below.
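
For instance, once the base template has been rendered into an element, attaching the menu comes down to this (a sketch under the same assumptions as the earlier snippet):

// Render the base shell and attach the Info/Time menu links to it
const container = document.createElement('div');
container.innerHTML = baseTemplate.render();
createMenu(container.querySelector('.ui.menu'));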

The post How to Create a Mall Map with Real-time Data Using WRLD appeared first on SitePoint.


Source: Sitepoint