Angular Starter Project with Basic Routing & System.js

Angular

An Angular starter project with basic routing, System.js as the loader, and a local Express.js web server.

Starting an Angular project from scratch can be tedious, but the GitHub repo links at the end of this post provide a starting point that you can easily clone, edit and shape as needed, and should save you some work. An important note: this project is built using System.js, and in this approach, the TypeScript code is compiled on the fly. This starter project is not recommended for a production application; it is simply meant to provide a quick and easy way to spin up an Angular application for local testing. You can also accomplish this using the Angular CLI, but I wanted to offer another option.

Note, also, that when you clone the code from GitHub, there is a local web server provided. This server allows you to make true HTTP requests from the local web page (i.e. you don’t want to load this code into your browser using the file:/// protocol; that simply won’t work). And just be aware that the Angular CLI is usually an even quicker and easier way to spin-up an Angular application for local testing.

Example # 1

In Example # 1, we have app.module.ts. Note the RouteXComponent references (i.e. “Route1Component“, “Route2Component” and “Route3Component“). These are the components that make up the application and that need to be defined as routes. There’s not too much more to discuss here; this code exists simply to boot up the application.
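The module code itself lives in the GitHub repo; here is a minimal sketch of what an app.module.ts along these lines might look like (the import paths and the AppComponent name are assumptions, not necessarily what the repo uses):

```typescript
// Hypothetical sketch of app.module.ts; import paths are assumptions.
import { NgModule } from '@angular/core';
import { BrowserModule } from '@angular/platform-browser';
import { RouterModule } from '@angular/router';

import { AppComponent } from './app.component';
import { Route1Component } from './route1.component';
import { Route2Component } from './route2.component';
import { Route3Component } from './route3.component';
import { routes } from './routes';

@NgModule({
  imports: [BrowserModule, RouterModule.forRoot(routes)],
  declarations: [AppComponent, Route1Component, Route2Component, Route3Component],
  bootstrap: [AppComponent]
})
export class AppModule {}
```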

Example # 2

In Example # 2, we have routes.ts, which is where the routing is configured. Right now, the routes are route1, route2, and route3, and they will instantiate the “Route1Component“, “Route2Component” and “Route3Component” components accordingly. You can change them as needed for your project, you’ll just need to rename each component, and its associated files.

Now take a close look at line # 8. This line tells the router what to do when a specific route is not selected (i.e. when the user requests the root of the application: “/”). So, here we are saying to the router: when the user browses to the root of the application, take them to route1.
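As a rough sketch (the actual repo may differ slightly), routes.ts could look something like this, with the last entry handling the redirect from the root path to route1:

```typescript
// Hypothetical sketch of routes.ts; import paths are assumptions.
import { Routes } from '@angular/router';

import { Route1Component } from './route1.component';
import { Route2Component } from './route2.component';
import { Route3Component } from './route3.component';

export const routes: Routes = [
  { path: 'route1', component: Route1Component },
  { path: 'route2', component: Route2Component },
  { path: 'route3', component: Route3Component },
  // When the user requests the root of the application, go to route1.
  { path: '', redirectTo: '/route1', pathMatch: 'full' }
];
```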

Example # 3 – A

Example # 3 – B

In Example # 3, we have our Route1Component component. The other components are identical, and they’re named accordingly. To use this for your project, just rename the route1 references to whatever you want to call your component; this change would need to be made in app.module.ts, routes.ts and then each route. These routes do not do too much, as you can see in Example # 3 – A and Example # 3 – B. They’re just meant to provide an empty shell that you can use to quickly spin up a local test Angular application. The easiest way to do this is to clone the code on GitHub, run npm install, and then npm start.
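For reference, a bare-bones Route1Component might look roughly like this (the selector and inline template are assumptions; the repo uses its own markup):

```typescript
// Hypothetical sketch of route1.component.ts.
import { Component } from '@angular/core';

@Component({
  selector: 'app-route1',
  template: '<p>route1 works!</p>'
})
export class Route1Component {}
```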

Video Example Code

If you want to download the example code, visit this GitHub page, and then follow the instructions: bit.ly/kcv-angular-routing-basic

What is the difference between general sibling and adjacent sibling combinators in CSS?

Combinators

When making the differentiation between general sibling and adjacent sibling combinators, ask yourself whether you want to target every sibling of the target element, or just the very next one.

In CSS, HTML element relationships play an important role in targeting. It’s true that you can use IDs, which means that your CSS selector can potentially be very simple. In most cases, however, IDs are not recommended. So, if you want to write CSS that is expressive and reusable, the relationship between HTML elements starts to matter.

Consequently, the concept of sibling relationships is an important one in CSS. In fact, other than parent-child relationships, the concept of siblings is possibly the one that you will need to consider most. So, with that in mind, let’s begin with the concept that there is more than one kind of sibling. And because HTML elements have order in the markup, you’ll have to decide whether you want to target ALL siblings of an element, or just the very NEXT one. The difference between these two scenarios is this: when targeting ALL siblings of an element, you may be styling one or many HTML elements, but when targeting the adjacent sibling of an element, you are styling exactly one element. This is the difference between general sibling and adjacent sibling combinators in CSS: it’s a question of targeting one sibling or multiple siblings.

Let’s say you have 10 elements, and they all have black text. If you wanted to make every sibling of the first element red text, then there would be nine elements with red text. If you wanted to just target the very next sibling after the first element, then you would have just one element with red text. That is, when you target just the very next sibling, you are targeting only one element.

Example # 1 – General Sibling Combinators

See the Pen CSS General Sibling Combinator by Front End Video (@frontendvideo) on CodePen.

In Example # 1, we have an unordered list of days. We use the general sibling combinator, which targets every sibling of: li:first-child.
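In essence, that pen uses the general sibling combinator (~). A minimal sketch, assuming an unordered list and a highlight style that may differ from the pen:

```css
/* Every li that follows the first li gets styled (general siblings). */
li:first-child ~ li {
  background-color: yellow; /* assumption: the pen may use different styles */
}
```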

Example # 2 – Adjacent Sibling Combinators

See the Pen CSS Adjacent Sibling Combinator by Front End Video (@frontendvideo) on CodePen.

In Example # 2, we use the adjacent sibling combinator. This targets only the very next sibling of li:first-child. As a result, only one of the list items has a blue background.
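A minimal sketch of the adjacent sibling combinator (+) used in that pen; only the list item immediately after the first one is styled:

```css
/* Only the li immediately after the first li gets styled (adjacent sibling). */
li:first-child + li {
  background-color: blue; /* the blue background mentioned above */
  color: white;           /* assumption: keeps the text readable */
}
```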

Example # 1 is a case in which you are styling multiple HTML elements. Keep in mind that there could be as few as one general sibling: for example, there could be two siblings total, and you target all general siblings of the first sibling; in that case, there is only one general sibling. But if there were a total of 10 siblings, and you targeted all general siblings of the first sibling, then you would wind up styling nine HTML elements. In Example # 1, there is a total of seven siblings, so you wind up styling six HTML elements. And if you targeted all general siblings of the second sibling, then you’d wind up styling five HTML elements.

Example # 2 is a case in which you are styling a single HTML element, because there can only be one adjacent sibling of any element. A similar concept is an array: there can only be one element that comes right AFTER a given array element. Likewise, with HTML elements, there can only be one adjacent sibling. Therefore, the adjacent sibling combinator will always style exactly one element.

But it is important to keep in mind that when you use the adjacent sibling combinator, you could wind up styling multiple elements. Let’s say, for example, that you target the adjacent sibling of the first list item in an unordered list. That would result in styling one HTML element. But if you have two unordered lists, the net effect would be that TWO HTML elements are styled. This is because your selector applies in TWO places on your page. In other words, there are TWO places in your HTML code where your selector makes sense. So, while we say that the adjacent sibling combinator results in targeting one HTML element, that effect could take place multiple times in your web page.

Summary

Keep in mind that relationships matter in CSS selectors. For example, while the parent-child relationship is a common one, sibling relationships are as well. General sibling and adjacent sibling combinators both provide a powerful mechanism for targeting HTML elements, and the difference between these sibling combinators is how many HTML elements will be affected. With the general sibling combinator, one or potentially multiple elements will be styled. With the adjacent sibling combinator, only one element will be styled. But, don’t forget, the net effect of your adjacent sibling combinator targeting could wind up affecting multiple HTML elements if your selector connects with multiple locations in your web page.

What is the difference between inline and block in CSS?

CSS

A block-level HTML element will always create a new line after the closing tag, whereas an inline HTML element will not.

Inline vs block is one of the most important factors when choosing which HTML element to use in your markup. Semantics matter as well, and this should always be considered. But the display behavior will have a direct impact on the visual aspect of your page. With a block-level element, there will always be a new line after the closing tag. So, no matter how you organize your HTML, block-level elements always create a new line. With an inline element, there is never a new line. Therefore, no matter how you organize your inline elements in the markup, they will always appear side-by-side.

Okay, so every HTML element that has a visual presence is either inline or block, by default. For example, HTML elements such as “SPAN”, “IMG”, and “LABEL” are inherently inline. On the other hand, HTML elements such as “DIV”, “P”, and “UL” are block by default. This default behavior can be changed, however; i.e., inline elements can be set to display:block, and block-level elements can be set to display:inline. There’s no reason why you can’t apply this kind of reverse display logic; it’s perfectly valid. Just keep in mind, though, that there may be visual ramifications, but, of course, that’s up to you. It’s just important for you to know that if you want to, you can change the default visual behavior of inline and block-level elements.

Example # 1 – Default Behavior

See the Pen CSS Block vs Inline Part 1 by Front End Video (@frontendvideo) on CodePen.

In Example # 1, there are three spans and three divs. As expected, the spans all line up side-by-side. In other words, because they are inline elements, there is no new line after each element. With the div elements, however, each one appears on a new line. This is the default behavior of inline and block elements.
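A minimal sketch of markup along those lines (the text content is an assumption):

```html
<!-- The spans render side by side; each div starts on a new line. -->
<span>span one</span>
<span>span two</span>
<span>span three</span>

<div>div one</div>
<div>div two</div>
<div>div three</div>
```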

Example # 2 – Changing Default Behavior

See the Pen CSS Block vs Inline Part 2 by Front End Video (@frontendvideo) on CodePen.

In Example # 2, we have reversed the behavior of the elements in the page. Even though the spans are inline elements, they now stack on top of each other. This is because in the CSS, we set display:block for the spans. As a result, they behave like block-level elements. Also, the divs now line up side-by-side. This is because in the CSS, we set display:inline for the divs.
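The CSS that reverses the default behavior boils down to something like this:

```css
/* Spans now stack like block-level elements. */
span { display: block; }

/* Divs now line up side by side like inline elements. */
div { display: inline; }
```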

Summary

Now while it may seem like overkill to discuss reversing the default visual behavior of inline and block-level elements, it is not at all unusual, so it’s worth a closer look. There may be semantic reasons, for example, as to why you choose a particular HTML element, but need to change its display behavior. A typical example is a NAV element; you might want to use an unordered list for your web page navigation, but you need the navigation links to line up side-by-side. In this case, you would need to change the default block-level display of the list items to inline. So, this is just one small example of why it’s always nice to know where you have a little wiggle room.
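As a quick, hedged sketch of that navigation scenario (the markup structure is an assumption):

```css
/* List items in the navigation line up side by side instead of stacking. */
nav ul li { display: inline; }
```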

JavaScript Array.prototype.join()

Array.prototype

The JavaScript join method converts an array into a string.

Converting an array to a string in JavaScript may seem like an unlikely scenario at first, since they are dissimilar types, but this is not as uncommon as one might think. I’ve found that when I want to build a complex string, organizing the various parts of that string into array elements can be a helpful approach. For example: I have to build many vanilla JavaScript applications that feature custom CSS. The CSS is completely managed by the JavaScript code, but I ultimately inject a new STYLE element into the page, with the custom CSS as the content of that element. In building these custom styles, I organize each CSS rule into a string that is an element of an array. Then, after all of the custom CSS rules are created, I use the join() method to combine all of those array elements into one long string, and make that string the content of the STYLE element that is injected into the page.
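Here is a rough sketch of that approach; the rules and class names are made up for illustration:

```javascript
// Each CSS rule is an array element; join() combines them into one string,
// which becomes the content of an injected STYLE element.
var cssRules = [
  '.widget { border: 1px solid #ccc; }',
  '.widget h2 { margin: 0; }',
  '.widget p { color: #333; }'
];

var styleElement = document.createElement('style');
styleElement.textContent = cssRules.join(' ');
document.head.appendChild(styleElement);
```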

At first glance, it may seem a bit odd to use a space, hyphen or forward slash as the separator, but as a developer, you are likely to find yourself in many situations in which the business requirements force you to solve unexpected problems. Converting a number of array elements to a string and separating each element with an odd character is a challenge you will run into, so be prepared; if it has not happened yet, it will! Fortunately, the Array.prototype.join() method provides an elegant solution to this problem.

If you pass no arguments to the JavaScript join method, then the elements of the array will be separated by a comma (","), which is the default separator. Or, you can pass an argument that determines how to join the elements of the array. Which character(s) you provide is up to you. The most common practice is to use the default comma, but again, the choice is completely yours.

Try it yourself!

Click the JavaScript tab in the above example. We have an array with six elements: [‘a’, ‘b’, ‘c’, ‘d’, ‘e’, ‘f’]. When we call the join method on that array in the first console.log statement, it returns the string: “a,b,c,d,e,f”. This is the default behavior; when you do not provide a separator argument, the elements in the returned string are separated by a comma (","). In the following examples, we do provide a separator argument, and in each case, you will see that the separator is used to create the returned string.
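If you can’t open the fiddle, the behavior boils down to this:

```javascript
var letters = ['a', 'b', 'c', 'd', 'e', 'f'];

console.log(letters.join());     // "a,b,c,d,e,f"  (default: comma separator)
console.log(letters.join(' '));  // "a b c d e f"
console.log(letters.join('-'));  // "a-b-c-d-e-f"
console.log(letters.join('/'));  // "a/b/c/d/e/f"
```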

Video Example Code

If you want to download the example code, visit this GitHub page, and then follow the instructions: bit.ly/kcv-javascript-array-join-fiddle

Summary

String manipulation in JavaScript can be tedious, as converting an array to a string is a problem that might tempt one to use a for-loop as a solution. But the Array.prototype.join() method was specifically designed to solve this problem, as it negates the need for any kind of for-loop or other iteration patterns. You can simply chain the join() method from your array reference and then pass a character as an argument, which tells the join() method how you want to separate the elements of the array when converting to a string. In the long run, it really is a smooth way to go.

How do I use the Angular ngModel directive?

Angular


The ngModel directive creates a two-way data binding between an HTML form element and the component.

In the most recent version of Angular, data binding is one-way by default. Notably, this is one of the key improvements over Angular 1.x in that it eliminates unnecessary performance issues that can crop up quickly. But the beauty of the ngModel directive is that it provides a way to explicitly give your template and your component’s data a two-way binding. So, when it is changed on one end, it is updated on the other. In other words, your data becomes “live”.

Now the most common case for two-way data binding is HTML forms. Here, as the user makes changes to a form, you want to capture those changes in your data. And conversely, you’ll also want any changes in your data to be reflected in the form. And here’s where Angular’s ngModel directive comes in; it’s the key to this live relationship. You simply assign a data point from your component to this directive in your template and your binding becomes two-way. Take a closer look at the examples below.

Example # 1

In Example # 1, we have our component. There are two properties: content and title. These properties will be connected to our form input elements via the ngModel directive. This two-way data binding means that when the property is changed in one place, it is updated any place else that it is referenced, and vice versa.
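A minimal sketch of such a component (the selector and template file name are assumptions; also note that ngModel requires the FormsModule to be imported in your NgModule):

```typescript
// Hypothetical sketch of the component with title and content properties.
import { Component } from '@angular/core';

@Component({
  selector: 'app-root',
  templateUrl: './app.component.html'
})
export class AppComponent {
  title = '';
  content = '';
}
```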

Example # 2

In Example # 2, we have our Angular template. There is a text-input element and a textarea element. The text-input element has an ngModel directive and its value is: title. This means that two-way data binding is set up for the title property. The textarea element has an ngModel directive and its value is: content. This means that two-way data binding is set up for the content property.

Below the input elements is a div element with two placeholders: {{title}} and {{content}}. The net result of this is that when you enter any text in the text input, that text will update the {{title}} placeholder in the UI. Also, when you enter any text in the textarea, that text will update the {{content}} placeholder in the UI.
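A hedged sketch of that template; the surrounding markup is an assumption:

```html
<!-- Two-way binding via the ngModel directive. -->
<input type="text" [(ngModel)]="title">
<textarea [(ngModel)]="content"></textarea>

<!-- The placeholders update as the user types. -->
<div>
  <h3>{{title}}</h3>
  <p>{{content}}</p>
</div>
```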

Video Example Code

If you want to download the example code, visit this GitHub page, and then follow the instructions: github.com/kevinchisholm/video-code-examples/tree/master/angular/templates-and-data/ngModel

Summary

So, Angular’s default one-way data binding is one of the key improvements introduced in Angular 2 and beyond. And, of course, in most cases, this approach is sufficient. But, as it turns out, in some cases, such as HTML forms, you want a “live” connection between your form and your data source. This is where the ngModel directive comes in: it provides that “live” connection, otherwise known as two-way data binding.

Creating your First Node Module

Node.js

By defining your Node module in package.json, you do not need to be concerned about its physical location when referencing it in your code.

Sometimes your Node.js application depends on functionality that falls outside of Node’s core components. In this case, you’ll need to import that functionality. This can be achieved with a Node module. Organizing your code into modules enforces best practices such as separation of concerns, code reuse and testability. When you create a custom Node module, you can reference that module’s code in package.json.

So, in this article, I’ll demonstrate how to create a custom Node module for local use. We’ll cover how to reference a local custom Node.js module in package.json, how to expose methods from within that module, and how to access the module from a JavaScript file. To get started, why don’t you go ahead and clone the following GitHub repository: Creating your First Node.js Module

You’ll find instructions on how to run the code on the GitHub page.

package.json – Example # 1

Example # 1 is the contents of our package.json file. The name and version properties are for demonstration purposes, but the dependencies property is the one that’s important to us. The dependencies property is an object that contains one or more Node modules needed by an application. So, when running the npm install command, node package manager will download all required modules. And the information in the dependencies object will tell node package manager what to download.

Specifying a local file instead of a remote resource

For this project, we use a special syntax to import a module that’s in the local file system. Notice that the value of the dateTools property is: “file:my_modules/dateTools“. This tells node package manager that the dateTools module is in the my_modules/dateTools folder.
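A minimal sketch of such a package.json (the name and version values are placeholders):

```json
{
  "name": "first-node-module-demo",
  "version": "1.0.0",
  "dependencies": {
    "dateTools": "file:my_modules/dateTools"
  }
}
```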

Our Custom Node Module – Example # 2

In Example # 2, we have the contents of our dateTools module. Now, obviously, this module doesn’t do all that much. It simply shows that there are four methods: getDays, getMonths, getDay, and getMonth, and that there are two arrays: days and months. The idea is that the getDays and getMonths methods return the appropriate arrays, and the getDay, and getMonth methods return the specified day or month, based on the number you pass in as an argument.

So, while this module is not one you would use in a real-world application, it does provide simple methods so that we’ll have some structure to work with.
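For reference, the module’s index.js might look roughly like this (a sketch; the actual repo code may differ):

```javascript
// Hypothetical sketch of my_modules/dateTools/index.js.
var days = ['Sunday', 'Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday'];
var months = ['January', 'February', 'March', 'April', 'May', 'June',
              'July', 'August', 'September', 'October', 'November', 'December'];

function getDays() { return days; }
function getMonths() { return months; }
function getDay(num) { return days[num]; }
function getMonth(num) { return months[num]; }

// Expose the four methods to the outside world.
module.exports = {
  getDays: getDays,
  getMonths: getMonths,
  getDay: getDay,
  getMonth: getMonth
};
```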

File Structure

What I really want to focus on for this article is the architecture of the module. So, when you look at the GitHub repo, you’ll notice that in the dateTools folder, there are two files: index.js and package.json. Now, you may be thinking: “hmmmm… didn’t we already have a package.json file in this application?” Well, yes, we did, but this is the beauty of Node.js: all modules can, in turn, have a package.json file. This way, a module may have its own dependencies, and those dependencies might each have their own dependencies, and so on. So, the idea is that this architecture allows for a highly modular approach to creating software.

package.json in Our Module – Example # 3

In Example # 3, we have the package.json file that sits in the root of our custom module. The private property indicates that we do not wish to publish this to the npm registry, and the main property indicates the name of the module’s JavaScript file. Our module is the result of this file’s execution.
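A sketch of that file (the version value is a placeholder):

```json
{
  "name": "dateTools",
  "version": "1.0.0",
  "private": true,
  "main": "index.js"
}
```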

module.exports

Now, take a look again at Example # 2. On line # 22, you’ll see module.exports = {…}. Every Node module has an object named module.exports, and this object allows the author to make one or more properties and methods available to the outside world. In our case, we provide an object with properties that correspond to four methods. So, this way, when any Node.js code references our dateTools module, it is accessing this module.exports object.

The Demonstration Code – Example # 4

In Example # 4, we have the JavaScript file that demonstrates our module. The most important part of the code is line # 2, where we use the Node.js require method to import our dateTools module. Notice how we reference the module by name: dateTools. This way, we are not concerned with the actual location of the module; the package.json file in the root of our application takes care of that for us. Thus, the name dateTools resolves to the my_modules/dateTools folder, and in that folder, the package.json file resolves the name dateTools to index.js.
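In rough terms, the demonstration file does something like this (the specific console.log calls are illustrative):

```javascript
// Import our custom module by name; package.json resolves its location.
var dateTools = require('dateTools');

console.log(dateTools.getDays());    // the full array of day names
console.log(dateTools.getMonth(0));  // e.g. "January"
```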

Summary

The dateTools module in this article is simple and is designed, primarily, to offer four basic methods as the heart of an introduction to the creation of a custom Node module. The purpose of this custom module was simply to provide a context for the discussion: understanding how your files are organized and how you access your module. The key to much of this discussion was, of course, the file package.json, which allows you to keep your code modular and easy to reference.

I hope that you’ve found this article helpful and that you’re on your way to creating your first custom Node module.

Understanding the difference between scope and context in a JavaScript method

JavaScript

Sometimes the concepts of scope and context are misunderstood in JavaScript. It is important to understand that they are not the same thing. This is particularly important inside of a method.

In JavaScript, the concept of scope refers to the visibility of variables. On the other hand, the concept of context is used to mean: “the object to which a method belongs”. That may sound like an odd statement, but it is accurate. The only time we care about context is inside a function. Period. Inside a function, the “this” keyword is very important. It refers to the object to which that function belongs. In other words, every function is a property of some object. In client-side JavaScript (i.e. in a browser), if you declare a function at the top of your code, then that function is a property of the window object. So, inside of that function, the “this” keyword refers to the window object. If you create a new object (let’s call it: “myObject”) and add a method (i.e. a property that happens to be a function), then inside of that function, the “this” keyword refers to the object (i.e. “myObject”).

So the main issue is that inside of a method, object properties and variables can sometimes be confused. In short, when the JavaScript “var” keyword is used, that is a variable. A variable will not be a property of an object (except in the global scope, which is for another discussion). But inside a method, any variable created using the JavaScript “var” keyword will be private to that method. So this means that it is not possible to access that variable from outside the method. But inside of a method, you have access to all of the properties of the object to which that method belongs. And you access these properties using the JavaScript “this” keyword. So, for example, if myObject.greeting = “Hello” and myObject.greet is a method, then inside myObject.greet, if I reference this.greeting, I should get the string: “Hello”. And if I have declared a variable named “speed” inside of myObject.greet, I would access it simply by referring to “speed” (i.e. I would not use the JavaScript “this” keyword). Also, a big difference between variables and properties in a method is that properties are always public. That is to say: all object properties can be seen and in most cases modified. But a private variable inside of a method is completely hidden from the outside world, and only our code inside of the method has access to that variable.

Try it yourself!

In the above example, we start out by creating a property on the window object named: “foo”. This “foo” object is the result of an immediately invoked function expression (aka an “IIFE“). The reason that we take this approach is so that we can have a private variable: count. Our getCount method has access to that private count variable.

There is also a count property on the “foo” object. This property is available publicly. That is to say: we are able to make changes to the count property, whereas the count variable is not available outside of the IIFE. Our getCount method has access to the count variable, but that is the only way we can reach it.
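If you can’t open the fiddle, here is a sketch of the kind of code being described (details may differ from the original example):

```javascript
// An IIFE gives us a private "count" variable (scope),
// while the returned object has a public "count" property (context).
window.foo = (function () {
  var count = 0; // private variable: only visible inside the IIFE

  return {
    count: 0, // public property: visible to the outside world
    getCount: function (flag) {
      if (flag === 'scope') {
        return ++count;    // increments the private variable (scope)
      }
      return ++this.count; // increments the public property (context)
    }
  };
})();
```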

When we call foo.getCount() without passing any arguments, then it increments the count property and returns it. This is CONTEXT. By using the JavaScript “this” keyword inside of the getCount method, we are leveraging the concept of context. Conversely, when we call foo.getCount(“scope”), then the count variable is incremented and returned. This is SCOPE. It is very important to understand the difference between scope and context in JavaScript.

JavaScript Spread Syntax – Basics

JavaScript

JavaScript spread syntax provides a way to convert an array into a comma-separated list.

In this article, I will cover the basics of JavaScript spread syntax. But first, let’s start by taking a step back and thinking about how functions work in JavaScript. A function expects a comma-separated list of arguments, so, when we call a function, we need to provide zero, one or more arguments, separated by a comma. But what happens when we don’t know exactly what all of these arguments are? Now it may be tempting to simply pass an array, but then this array would be seen by the called function as simply the first argument. In other words, arguments[0] in the function would be an array. But this is not what we want; we want to pass an array to a function and for that array to be interpreted by the function as a comma-separated list of arguments.

Why is JavaScript Spread Syntax So Helpful?

So here’s where the spread syntax comes in: it allows us to put the arguments in an array, and then pass that array to the function we are calling. And, actually, this is only one example of how the JavaScript spread syntax can be helpful, but it certainly is a great way to start the conversation.

Inspect Arguments in a Function – Example # 1 A

The Output from the inspectArguments function – Example # 1 B

Inside every JavaScript function, the “arguments” keyword provides a reference to all arguments that were passed into this execution of the function. The “arguments” keyword is not an array, but an array-like object with a “length” property. Fortunately, however, this “length” property allows us to iterate the “arguments” object as if it were an array. The “inspectArguments” function from Example # 1 A contains a for-loop, which iterates over all the arguments it receives. Inside of that for-loop, we output the value of each argument.

Nothing too special there.

On the last line of Example # 1 A, we call the “inspectArguments” function, passing it: “…letters”. What’s happening here is that instead of passing the letters array, we pass “…letters”, which spreads the letters array out into a comma-separated list. Example # 1 B contains the output from Example # 1 A, and as expected, we see the contents of the letters array.
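A sketch of Example # 1 A (the contents of the letters array are an assumption; the function body is a close approximation):

```javascript
function inspectArguments() {
  // "arguments" is array-like, so we can iterate it with a for-loop.
  for (var i = 0; i < arguments.length; i++) {
    console.log(arguments[i]);
  }
}

var letters = ['a', 'b', 'c', 'd', 'e', 'f'];

// Spread syntax turns the array into a comma-separated list of arguments.
inspectArguments(...letters); // logs: a, b, c, d, e, f (one per line)
```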

Spreading Out the Arguments – Example # 2 A

The Output Has Changed – Example # 2 B

Example # 2 A is similar to Example # 1 A, except in the way that we call the “inspectArguments” function. In other words, instead of passing just “…letters”, we pass “x, y, …letters”. This allows us to specify that the first two arguments that the “inspectArguments” function receives are “x” and “y”, and that the rest of the arguments are the contents of the letters array. The point here is that we can mix the use of literals and the spread syntax. So as expected, Example # 2 B shows the output, which is similar to Example # 1 B, except that “x” and “y” are the first two console.log statements.
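Reusing the inspectArguments function and letters array from the sketch above, the call in Example # 2 A looks something like this:

```javascript
// Literals and spread syntax can be mixed in the same call.
inspectArguments('x', 'y', ...letters); // logs: x, y, then each letter
```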

Using Spread Syntax for Both Arguments – Example # 3 A

The Output – Example # 3 B

Now, in Example # 3 A, we take things a little further. We use the spread syntax twice, which calls the “inspectArguments” function, passing the contents of both the days and letters arrays, spread out into one comma-separated list. Consequently, the output that you see in Example # 3 B is exactly as expected: the contents of the days and letters arrays.
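Again reusing the sketch above, the double spread might look like this (the contents of the days array are an assumption):

```javascript
var days = ['Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday'];

// Both arrays are spread into a single comma-separated list of arguments.
inspectArguments(...days, ...letters); // logs the days, then the letters
```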

How to test HTTP POST with the Node.js request Module

Node.js

Testing HTTP POST requests is usually tedious. But with a few lines of JavaScript, you can spin up your own HTTP POST testing tool.

In web development, GET and POST requests are quite common. GET requests are the ones more frequently seen, and in fact, when you load most web pages, the majority of the requests that make up what you see in the page are GET requests. For example, you request the initial HTML file, CSS files, JavaScript files and images. But sometimes, you need to make a POST request.

Making a GET request is easy, as is testing one. Testing a POST request is not always so simple, though, because the HTTP request body must include the data you want to send. One approach is to create a simple HTML page with a form. The problem here is that you need to create an input element for each data property that you want to send in the POST request, which can be tedious for a simple test. But then there’s Node.js, which can be leveraged to solve this problem.

In this article, we will see how a small JavaScript file can make an HTTP POST request. Now this approach may not be appropriate for use in a production application, but the idea behind this article is to point out that any time you need to test a POST endpoint, you can set up a quick test using Node.js.

Get the example code from GitHub

If you clone this repo: github.com/kevinchisholm/video-code-examples/tree/master/node/testing-http-post-with-request-module, you can run the example code locally and edit it yourself.

package.json

The package.json for this project contains references to the modules needed. We’re using the request module, the body-parser module, and the express module.
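A sketch of those dependencies (the version numbers are illustrative, not necessarily those in the repo):

```json
{
  "dependencies": {
    "body-parser": "^1.18.0",
    "express": "^4.16.0",
    "request": "^2.83.0"
  }
}
```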

Example # 1 – The Web Server

In Example # 1, we have the server code. (Creating the server code is not the focus of this article, but it’s still good to review.) We need the express module and the body-parser module, and once we’ve set the references to those, we set up the POST route. So, when the user sends an HTTP POST request to /form, our code will handle this request. The requestAsJson variable allows us to set up the round-trip – that is, the exact same data from the POST request that we return to the user as JSON. We then set the Content-Type header to be application/json so that the HTTP header will be correct. Note the “log the output” comment; this is just for demonstration purposes. We then send the response using the res.end method.
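A hedged sketch of post-server.js (the port number is an assumption; the repo defines its own):

```javascript
// Hypothetical sketch of the web server described above.
var express = require('express');
var bodyParser = require('body-parser');
var app = express();

// Parse JSON and URL-encoded request bodies.
app.use(bodyParser.json());
app.use(bodyParser.urlencoded({ extended: false }));

app.post('/form', function (req, res) {
  // Round-trip the POST data back to the client as JSON.
  var requestAsJson = JSON.stringify(req.body);
  res.setHeader('Content-Type', 'application/json');

  // log the output
  console.log('The POST data received was ' + requestAsJson);

  res.end(requestAsJson);
});

app.listen(3000);
```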

Example # 2 – Testing the POST Request

In Example # 2, we have the test client, which is the focus of the article. We want an easy way to test POST requests, so instead of mocking up an HTML page with a form, we can use the file test-post.js to test an HTTP POST request. We set a reference to the request module, and no other module is needed in this file.

The postData variable is an object containing the data for the HTTP POST request. The postConfig variable contains the URL for the HTTP POST request, and a reference to the postData variable. The postSuccessHandler variable is a success handler for the HTTP POST request. Inside of that success handler, you can see a console.log statement, which completes the proof of concept. Whatever data is sent in the HTTP POST request should be output in that console.log statement.
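And a hedged sketch of test-post.js (the URL, port and sample data are assumptions):

```javascript
// Hypothetical sketch of the test client described above.
var request = require('request');

// The data for the HTTP POST request.
var postData = {
  firstName: 'Jane',
  lastName: 'Doe'
};

// The URL for the request, plus the data to send as JSON.
var postConfig = {
  url: 'http://localhost:3000/form',
  json: postData
};

// Success handler: log whatever the server echoes back.
var postSuccessHandler = function (error, httpResponse, body) {
  if (error) {
    return console.error(error);
  }
  console.log('JSON response from the server: ' + JSON.stringify(body));
};

request.post(postConfig, postSuccessHandler);
```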

How to test the example code

Open two terminal windows (terminal A and terminal B), and make sure that you are in the root of the repository folder. In terminal A, execute this command: node post-server.js. In terminal B, execute this command: node test-post.js. In terminal A, you should see the message: The POST data received was XXX. In terminal B, you should see the message: JSON response from the server: XXX. (In each case, XXX represents the data from the HTTP POST request.)

NOTE: Go ahead and change the properties of the postData object. You can create more properties if you wish. No matter what you do, you can see the data that you set in that object in the two console.log statements.

Fat Arrow Function Basics – Node Modules

Node.js

JavaScript fat arrow functions solve the “this” problem by maintaining a reference to the object to which a method belongs. This is the case even with nested functions.

One of the most popular aspects of JavaScript is the fact that functions are first-class citizens. So, this aspect of the ECMAScript specification provides a great deal of power. Now when a function is a property of an object, it is also considered a method. That is, it is a method of that object. And inside of a method, the JavaScript “this” keyword is important, because it allows us to access the object to which the method belongs, as well as its other properties.

Now, when nesting functions, the JavaScript “this” keyword, one of the more frustrating aspects of the language, can be a bit tricky to deal with. So, in this article, I will discuss this very problem and how to solve it using fat arrow functions. If you’d like to run the code examples locally on your computer, clone the following GitHub repository: Using fat arrow functions in your Node module.

(Instructions on how to run the code are available on the GitHub page.)

One important note about the code examples: although the title of this article references “…Node Modules”, to keep things simple I did not use a Node module for the code examples. Most Node applications keep the main file code minimal, and taking a modular approach is almost always a best practice, but for this article, I have put the code in the main JavaScript file.

The problem with “this” – Example # 1

Run Example # 1 in your terminal with the following command: node example-1.js. The result of this is: “THE MESSAGE IS: undefined“.

We have created a tools object in Example # 1, and that name, “tools“, is arbitrary. It could have been any name; we just need an object to work with. The “tools” object has a “message” property, and there is also a method named “asyncTask“. The asyncTask method simulates an asynchronous task by using the setTimeout method. There is a reference to the JavaScript “this” keyword inside of the anonymous function passed to the setTimeout method. Now here’s where it gets a little dicey: the anonymous function passed to the setTimeout method is not executed in the context of the “tools” object, and therein lies the problem. The resulting console.log message is: “THE MESSAGE IS: undefined“.
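A sketch of example-1.js (the message text matches the output shown for Example # 2 below; other details are approximations):

```javascript
var tools = {
  message: 'Hello from this.message!',
  asyncTask: function () {
    setTimeout(function () {
      // "this" inside this anonymous function is NOT the tools object,
      // so this.message is undefined.
      console.log('THE MESSAGE IS: ' + this.message);
    }, 500);
  }
};

tools.asyncTask(); // logs: THE MESSAGE IS: undefined
```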

So, we need a way to reference the “tools” object inside of the anonymous function that we passed to the setTimeout method. Well, the best approach is still to reference the “this” keyword. A common and popular approach in the past has been to set a reference to “this” before calling the setTimeout method, for example: “var me = this;”. Okay, so while that is still a possible technique, there now is a far more elegant approach.

Fat arrow functions solve the “this” problem – Example # 2

Run Example # 2 in your terminal with the following command: node example-2.js. The result of this is: “THE MESSAGE IS: Hello from this.message!”

We made a small change in Example # 2. We converted the anonymous function passed to the setTimeout method to a fat arrow function. Fortunately, this action solved our problem. One of the advantages of fat arrow functions is that they preserve the meaning of the JavaScript “this” keyword. Because of this, when we reference this.message we no longer have an error, and we also see the expected message in the console.
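The fix, in sketch form, is a one-line change to the callback:

```javascript
var tools = {
  message: 'Hello from this.message!',
  asyncTask: function () {
    // The fat arrow function preserves "this", so it still refers to tools.
    setTimeout(() => {
      console.log('THE MESSAGE IS: ' + this.message);
    }, 500);
  }
};

tools.asyncTask(); // logs: THE MESSAGE IS: Hello from this.message!
```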

Fat Arrow Function – One Argument – Example # 3A

Fat Arrow Function – Multiple Arguments – Example # 3B

A few things to keep in mind (illustrated in the sketch after this list):

  • In Example # 2, the fat arrow function takes no arguments, but, it still has a pair of opening and closing parentheses. This is because when a fat arrow function takes no arguments, you must include a pair of opening and closing parentheses.
  • In Example # 3A, there are no parentheses in the fat arrow function. This is because when there is one argument, you do not need to include parentheses.
  • In Example # 3B, there are two arguments contained inside of parentheses. This is because when there is more than one argument, you must include parentheses.
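A sketch of those argument-syntax variations (the function names and bodies are made up for illustration):

```javascript
// No arguments: parentheses are required.
const sayHi = () => console.log('hi');

// Exactly one argument: parentheses are optional.
const double = x => x * 2;

// More than one argument: parentheses are required.
const add = (a, b) => a + b;

console.log(double(5)); // 10
console.log(add(2, 3)); // 5
```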

Summary

In this article we saw that fat arrow functions solve the “this” problem because they provide access to the object to which the containing function belongs, and you can access that object at all times by using the “this” keyword. And even when nesting fat arrow functions, the “this” reference is preserved, eliminating the need to set a temporary reference to “this”. Just keep in mind the importance of how the syntax can differ, depending on the number of arguments that the fat arrow function takes. In other words, with zero or multiple arguments, parentheses are required, and with only one argument parentheses are not required. Pretty simple, once you get used to it.

Node.js Templating with EJS – Basics

Node.js Templating

EJS makes templating in your Node.js application a breeze. Just supply a template string or .ejs file and some data.

The moniker says it all: “Effective JavaScript templating.” If you haven’t already discovered it, you’ll soon find that as front-end web developers have been transitioning to more of a full-stack role, templating has quickly become an important topic. In other words, this is no longer an unusual front-end task for JavaScript developers. And when working with Node.js, EJS has become the standard for server-side templating.

In this article, I will cover the bare-bones steps needed to get up and running with EJS, and in doing so, I’ll show you how to render data in an EJS template. First, I’ll explain the vanilla JavaScript approach. Then, we’ll move on to rendering your EJS template when using the Express.js framework. And finally, we’ll cover the syntax for EJS template code as well as how to use “if” logic in your template.

Now the power in EJS templates is the separation of concerns. Your data is defined (and possibly manipulated) in your server-side code, and your template simply declares what data will be rendered. This approach embraces the concept of “loose coupling”. With EJS templates, you can leverage that same “loose coupling” design pattern in your Node application. This scenario is, of course, fairly common to back-end developers, who have experience with languages such as Java, PHP, Python or .NET. For a front-end developer, however, this may be new territory. So, to illustrate, let’s take a look at some examples.

Example # 1-A

Example # 1-B: The Rendered HTML

In Example # 1 – A we first require the ejs module. This will be the case with every example, so I won’t cover that again. Just know that we need the ejs module in order to render our EJS templates, so we set a variable named “ejs” via require first. Next, we set the days variable; it’s just an array that contains the five days of the work week. Here, too, this will be the case in every example, so no need to cover this again. Just know that in each code example, there is a days variable – an array that contains the five days of the work week. We also set a variable named “http” which is an instance of the Node http module. We’ll need this in order to run our web server.

Okay, so let’s take a look at line # 3 in Example # 1. We’re using the ejs.render method here to create HTML that we will send to the user. The ejs.render method takes two arguments: a string template and the data for that template. In this case, our string template has the “<%=” and “%>” delimiters to indicate to EJS the start and end points for our template. And inside of those delimiters, we can write JavaScript code. So, let’s use the join() method of the days array to convert the array to a string. Then, inside of the execution of the http.createServer method, we’ll call the end method of the result object (i.e. res.end), passing the html variable to that method. And since the res.end() will send the response to the client and end the connection, the contents of our html variable will be sent to the user’s browser. Now, in Example # 1 – B, we have the HTML that is rendered in the user’s browser. This HTML happens to be very simple, and in fact, is not markup that we’d want to use in production. But what I wanted to demonstrate here is that rendering HTML in an EJS template is as simple as defining the template, then providing data to that template.

Example # 2-A: Setting the view engine for Express.js

Example # 2-B

Example # 2-C: The Rendered HTML

In Example # 2-A we’re leveraging the Express.js framework, so there’s a new require statement at the top of our code which sets the Express variable. On line # 3, we create the app variable which is an instance of the Express.js framework. And on line # 9, we use the app.set method to tell Express that we’re using EJS as our view engine. Note that this is required when leveraging EJS templates in your Express application. Now, on line # 12, we set up a handler for the “/” route. And inside that handler callback, we use the render method of the response object. This render method is available to use because of what we did on line # 9: using the app.set method to let Express know that EJS is our view engine. Okay, so let’s go back to line # 13, where we’ll pass two arguments to the render method: the string “example-2” and the data that our EJS will consume.

Now, you may be scratching your head as to what the first argument in “Example # 2-A” means. Well, it’s important to note that when you leverage EJS as your view engine, Express.js assumes that you will have view templates. These view templates are text files with an “.ejs” extension. So, it’s also important to note that Express.js assumes that these files will be in a folder named “views” that resides in the same folder as the file that is currently being executed. You can specify a different folder for your views, but the default folder that Express will look for is “views”. And in the “views” folder, Express.js will look for a file named XXX.ejs, where “XXX” represents the string that you pass as the first argument to the render method. So in our example, we want to use a template that resides in the file: “views/example-2.ejs”.

Here in Example # 2-B, we have the contents of the file “views/example-2.ejs”. And in this template file, there are two locations for data; the title tag and the body tag. In the title tag, we have a binding for the headerTitle property. In other words: we’ve provided some data to the res.render() method on line # 13 of Example # 2-A. That data was an object literal, and it had a property named: “headerTitle”. So, on line # 3 of our “views/example-2.ejs” file, we’ve told the template to inject the value of the “headerTitle” property of the data object that was provided to it. And the same thing is happening in line # 6 of our “views/example-2.ejs” file. In other words, we’ve asked EJS to inject the value of the “welcomeMessage” property of the data that was provided to the template. And then in Example # 2-C, you see the HTML that is returned to the user’s browser as a result of our template in Example # 2 B. In this HTML, the “headerTitle” property binding is replaced by the actual value: “EJS Demo Page” and the “welcomeMessage” property binding is replaced by the actual value: “This message was rendered on the server.”
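In sketch form (the port and surrounding markup are assumptions), Example # 2-A and the template in views/example-2.ejs look something like this:

```javascript
var express = require('express');
var app = express();

// Tell Express to use EJS as the view engine.
app.set('view engine', 'ejs');

app.get('/', function (req, res) {
  // "example-2" resolves to views/example-2.ejs.
  res.render('example-2', {
    headerTitle: 'EJS Demo Page',
    welcomeMessage: 'This message was rendered on the server.'
  });
});

app.listen(3000);
```

```html
<!-- Hypothetical sketch of views/example-2.ejs -->
<!DOCTYPE html>
<html>
  <head>
    <title><%= headerTitle %></title>
  </head>
  <body>
    <h1><%= welcomeMessage %></h1>
  </body>
</html>
```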

Now, Example # 3-A is very similar to Example # 2-A, except that the data we provide to the template is an array, instead of just an object literal. If you look at Example # 3-B, you’ll see that the way we bind to the data differs from Example # 2. In Example # 2, we bound to a single property: “welcomeMessage”, but here we are using a loop to iterate over each element in the “days” array. Specifically, we use the forEach() method of the “days” array and in each iteration of the callback function, we have access to a variable named “day”. Then we generate a list item and output the value of “day”. So, if you look at Example # 3-C, you’ll see the HTML that is rendered by the server and sent to the user’s browser. Voila! As expected, we have the HTML with the unordered list rendered with each day of the week (i.e. the “days” array).
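The loop in the Example # 3-B template boils down to something like this (the surrounding markup is an assumption):

```html
<ul>
  <% days.forEach(function (day) { %>
    <li><%= day %></li>
  <% }); %>
</ul>
```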

Example # 4-A is virtually identical to Example # 3-A; the only difference is the value of the “welcomeMessage” property. Take a look at Example # 4-B. You’ll see that on line # 4, we have some custom CSS in a set of style tags. This will make more sense in a few minutes. Now look at line # 20. Here we are looping over the “days” array, just as we did in Example # 3-B. But on line # 22, we use a basic JavaScript “if” block, to determine if this is the fourth element in the array. We do that by using the index variable, which is the 2nd argument passed to the callback function that we provide to the days.forEach() method. So, if index is equal to 3, then we generate the following in our HTML: class=”selected”. What we are doing here is, we are telling our EJS template that the 4th element in the list (i.e. the element with the index of 3) should have the CSS class “selected”. So, in Example # 4-C, you can see in the rendered HTML that the fourth list item has class=”selected“. As a result, the CSS that we added at the top of the EJS template kicks-in and “Thursday” is dark red text with a yellow background.
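And the “if” logic in the Example # 4-B template is roughly this (the CSS rule itself is an assumption based on the colors described above):

```html
<style>
  /* assumption: the selected day gets dark red text on a yellow background */
  .selected { color: darkred; background-color: yellow; }
</style>

<ul>
  <% days.forEach(function (day, index) { %>
    <li <% if (index === 3) { %>class="selected"<% } %>><%= day %></li>
  <% }); %>
</ul>
```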

Summary

So, in this article, you learned the most basic steps needed to leverage an EJS template in your Node.js application. You started by learning how to render data in an EJS template using vanilla JavaScript, and also when using the Express.js framework. Then we went on to cover how to bind a single data property, as well as how to iterate an array in your template. And finally, we wrapped it up by illustrating how to use “if” logic in your EJS template.

Now this article only scratched the surface of what is possible with EJS templates. My goal here was simply to provide the information needed to get up and running quickly, and to illustrate the most basic concepts so that you can dig in further on your own, because, believe me, there is plenty more to discover on this topic!

Node.js – What is the Difference Between response.send(), response.end() and response.write() ?

Express JS

response.send() sends the response and closes the connection, whereas with response.write() you can send multiple responses.

In this article, I will explain the difference between response.send(), response.end() and response.write(), and when to use each one. When you’re working with the Express.js framework, you’ll probably most frequently be sending a response to a user. This means that regardless of which HTTP verb you’re handling, you’ll pass a function as the handler for that endpoint. This function will receive two arguments: the request object and the response object. The response object has a send() method, an end() method and a write() method, and in this article we’ll get to know the differences between them.

So, let’s start with the main issue, which is that the response.send() method is used to send the response to the client. Now this makes sense and in some cases, it’s actually the perfect tool. Problems can arise, though, if you’re not entirely sure what the response.send() method actually does. Well, in a nutshell, it does two things; it writes the response and also closes the connection. So, this seems like a win-win, right? Well, in some cases it is, but if you don’t want to close the connection on your first write, then the response.send() method may not be the right tool. When this happens, you’ll need to use a combination of response.write() and response.end(). So, let’s take a look at a few examples, to see just how this works.

Get the example code from GitHub

If you clone this repo: github.com/kevinchisholm/video-code-examples/tree/master/node-express/response-send-end-write-difference, you can run the example code locally and edit it yourself.

Trying to use the response.send method more than once per request – Example # 1

Run Example # 1 in your terminal with the following command: node example-1.js, then point your browser to: http://localhost:5000/. Now you’ll see this: “This is the response #: 1“. There are two problems here, however, the first of which is that any responses after the first one are never sent. This is because the send method of the Express.js response object ends the response process. As a result, the user never sees the messages “This is the response #: 2” or “This is the response #: 3”, and so forth.

The second problem is that the send method of the Express response object automatically sets the Content-Type header. So, on the first iteration of the for-loop, the Content-Type header is set (i.e. “This is the response #: 1”). Then, on the next iteration of the for-loop, we use the response.send() method again (i.e. “This is the response #: 2”), but we have already set the Content-Type header in the first iteration of the for-loop. Because of this, the send method will throw this error: “Error: Can’t set headers after they are sent”. So, our application is essentially broken; we don’t want users to have an error in their consoles, and more importantly, our back-end logic is not working correctly.
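A sketch of the kind of code that produces this behavior (the loop count and port match the description above, but the repo’s code may differ):

```javascript
var express = require('express');
var app = express();

app.get('/', function (req, res) {
  for (var i = 1; i <= 5; i++) {
    // send() writes the headers AND ends the response, so the second
    // iteration throws: "Error: Can't set headers after they are sent".
    res.send('This is the response #: ' + i);
  }
});

app.listen(5000);
```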

Using the response.write method – Example # 2

So, to see the response.write method in action, run Example # 2 in your terminal with the following command: node example-2.js. Now point your browser to: http://localhost:5000/. As you can see, there is still a problem with our code. Depending on your browser, either you will see only the first message or you will see none of them. This is because the response has not been completed. I’ll just mention here that not every browser handles this case the same way, which is why you may see one message, all of the messages, or none of them. But you should see that the request is “hanging”, as your browser will stay in that “loading” state.

So, open your developer tools (e.g. Firebug or Chrome Dev Tools), and then look at the network tab. You’ll see that all five responses did, in fact, come back to the client. The problem is, the browser is waiting for more responses. At some point, the request should time out and you can see all messages in the browser. This behavior can vary between browsers, but it is not the correct experience.

response.end fixes the problem – Example # 3

Run Example # 3 in your terminal with the following command: node example-3.js, then point your browser to: http://localhost:5000/. You will now see all of the messages in the browser, which means that here, in Example # 3, the problem has been fixed. We see all of the messages generated by the for-loop and the response completes successfully with no console errors. So, we’ve solved the problem by using a combination of response.write() and response.end().
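A sketch of the fixed version (details such as the markup and port are assumptions):

```javascript
var express = require('express');
var app = express();

app.get('/', function (req, res) {
  // Set the Content-Type header once, up front.
  res.setHeader('Content-Type', 'text/html');

  for (var i = 1; i <= 5; i++) {
    // write() sends a chunk, but does not set headers or close the connection.
    res.write('This is the response #: ' + i + '<br>');
  }

  // end() closes the connection; the browser knows the response is complete.
  res.end();
});

app.listen(5000);
```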

First we set the Content-Type header, just to get that task out of the way. Then, in each iteration of the for-loop, we used response.write() to send a message back to the client. And since response.write() does not set any headers or close the connection, we were free to call it repeatedly, sending another message to the client each time. Once the for-loop was completed, we used the response.end() method to end the response process (i.e. we closed the connection). This said to the browser: “we’re done; go ahead and render the response now and don’t expect anything more from me.”

Summary

In this article, we learned about the difference between response.send(), response.end() and response.write(). During this discussion, we found that response.send() is quite helpful in that it sends the response and closes the connection. We saw that this becomes problematic, however, when we want to send more than one response to the client. But, fortunately, we discovered that this is easily solved by using a combination of response.write() and response.end(). We used response.write() to send more than one response, and then used response.end() to manually end the response process and close the HTTP connection. So, useful steps and easily solved problems!

An Introduction to NPM Scripts

NPM

Learn how to leverage npm scripts to create commands that, in turn, execute more than one other npm script command, allowing you to simplify your builds.

As the default package manager for Node.js, npm has seen a rise in popularity because JavaScript is just everywhere! This certainly makes sense – npm is well-designed, well documented, and makes Node.js development more seamless. I think most web developers would have a hard time imagining using Node.js without npm, but they often have to turn to technologies such as grunt and gulp to simplify local development and front-end tooling. But with npm scripts, you have an opportunity to move some of your front-end tooling and local development tasks away from third-party tools. The beauty of this approach is that it allows you to simplify your setup.

In order to explain npm scripts, I have created a simple project that leverages Gulp. So, to run the code locally, clone the following GitHub repository: Getting started with npm scripts.

Instructions on how to run the code are available on the GitHub page.

This project has four features:

  1. It compiles a coffeescript file to JavaScript.
  2. It compiles a SASS file to CSS.
  3. It uglifies a JavaScript file.
  4. It starts a Node.js web server.

This is a very simple example, and it’s mostly Gulp compiling and minifying files. I chose this project because it requires some manual steps. Now, it’s possible to automate these tasks using Gulp, but what if you needed to switch to tools such as Grunt or Broccoli.js? In such a case, your commands would change. For example, “gulp coffee” would become “grunt coffee”. While this is not fatal, it would be nice if we could have a consistent set of commands. So the question is, how can we build our local development assets and start the Node.js server with one command? Also, how can we ensure that this one command never changes? Well, this is where npm scripts come in!

Project Folder Structure – Example # 1

In Example # 1, we have the folder structure for our project. There is an src folder that contains three sub folders:

  • The coffee folder has a coffeescript file.
  • The js folder has a JavaScript file.
  • The sass folder has a SASS file.

These three files are used by our build. The built versions of these files are placed in the build/css and build/js folders accordingly.

package.json (no npm scripts) – Example # 2

The package.json so far allows us to use Gulp. We’re using the gulp-coffee module to compile coffeescript, the gulp-sass module to compile SASS, and the gulp-uglify module to uglify JavaScript. So, we have the following commands available to us:

  • gulp sass: This command will compile the file src/sass/main.scss and create build/css/main.css
  • gulp coffee: This command will compile the file src/coffee/global.coffee and create build/js/global.js
  • gulp uglify: This command will uglify the file src/js/main.js and create build/js/main.js
  • node index.js: This command will start the Node.js web server on port # 3000.
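
Each of the gulp commands above maps to a task in the project’s gulpfile. The gulpfile itself isn’t reproduced here, but a minimal sketch of how such tasks are typically defined with gulp-coffee, gulp-sass and gulp-uglify might look like this (the file paths follow the descriptions above; this is an assumption, not the repo’s actual gulpfile):

    // gulpfile.js – a hypothetical sketch, not the repo’s actual file
    var gulp = require('gulp');
    var coffee = require('gulp-coffee');
    var sass = require('gulp-sass');
    var uglify = require('gulp-uglify');

    // compile src/sass/main.scss and create build/css/main.css
    gulp.task('sass', function () {
      return gulp.src('src/sass/main.scss')
        .pipe(sass())
        .pipe(gulp.dest('build/css'));
    });

    // compile src/coffee/global.coffee and create build/js/global.js
    gulp.task('coffee', function () {
      return gulp.src('src/coffee/global.coffee')
        .pipe(coffee())
        .pipe(gulp.dest('build/js'));
    });

    // uglify src/js/main.js and create build/js/main.js
    gulp.task('uglify', function () {
      return gulp.src('src/js/main.js')
        .pipe(uglify())
        .pipe(gulp.dest('build/js'));
    });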

You can run each command and it will work just fine, but the problem is that each time you change any of the files, you will want to build them again, and then restart the web server.

Adding npm scripts to package.json – Example # 3

In Example # 3, we have added a scripts object to package.json. Here is a breakdown of the script commands:

  • build:sass : This command is a shortcut to: gulp sass.
  • build:coffee : This command is a shortcut to: gulp coffee.
  • build:js : This command is a shortcut to: gulp uglify.
  • build : This command executes the previous three commands; it takes care of three steps in one command.
  • serve : This command is a shortcut to: node ./index.js (it starts the Node.js web server).
  • start : This command builds all three files, and then starts the web server.
  • clean : This command deletes all of the built files (i.e. the files created by the previous commands).
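
To make this concrete, the scripts section of package.json might look roughly like the following (the exact commands in the repo may differ; in particular, the clean implementation shown here is just one possible approach):

    "scripts": {
      "build:sass": "gulp sass",
      "build:coffee": "gulp coffee",
      "build:js": "gulp uglify",
      "build": "npm run build:sass && npm run build:coffee && npm run build:js",
      "serve": "node ./index.js",
      "start": "npm run build && npm run serve",
      "clean": "rm -rf build/css/* build/js/*"
    }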

What to expect when you run the example code locally

  • npm start – The build places the three built files in the build/css and build/js folders accordingly. And then, it starts the Node.js web server. You will see messages in your terminal indicating these outcomes.
  • npm run clean – npm deletes the three built files in the build/css and build/js folders. (This is helpful if you want to “start from scratch” when running the npm start command; this way, you see the built files created each time.)

Summary

This article is a basic introduction to, and high-level overview of, npm scripts and their ability to create commands that, in turn, execute more than one other npm script command. As you can see, there’s a great deal of power here, and depending on your needs, they can streamline your front-end tooling process significantly. There’s much more detail available about npm scripts, and a great place to start is: https://docs.npmjs.com/misc/scripts. In the meantime, I hope that this article has provided you with enough information to get you up and running.

Web Scraping with Node and Cheerio.js

Node.js

Cheerio.js allows you to traverse the DOM of a web page that you fetch behind the scenes, and easily scrape that page.

There are security rules that limit the reach of client-side JavaScript, and if any of these rules are relaxed the user may be susceptible to malicious activity. On the server side, however, JavaScript is not subject to these kinds of limitations, and in their absence there’s a great deal of power. Web scraping, as it turns out, is one of the cool upsides of that freedom.

To get started, clone the following GitHub repository: Basic web scraping with Node.js and Cheerio.js.

You’ll find instructions on how to run this code on the GitHub page.

The page we will target for web scraping

Let’s take a moment to look at the example web page that we will scrape: http://output.jsbin.com/xavuga. Now, if you use your web developer tools to inspect the DOM, you’ll see that there are three main sections to the page. There’s a HEADER element, a SECTION element, and a FOOTER element, and we will target those three sections later, in some of the code examples.

The request NPM module

One of our key tools is the request NPM module, which allows you to make an HTTP request and use the return value as you wish.

The cheerio NPM module

The cheerio NPM module provides a server-side jQuery implementation, and its functionality mirrors the most common tasks associated with jQuery. There isn’t a 1:1 method replication; that was not their goal. The key point is: you can parse HTML with JavaScript on the server-side.

Caching an entire web page – Example # 1

In Example # 1, we set some variables. The fs variable references the file system node module, which provides access to the local file system. We’ll need this to write files to disk. The request variable refers to the request node module, which we discussed earlier, and the cheerio variable refers to that cheerio node module that we also discussed. The pageUrl variable is the URL of the web page that we will scrape. Now, at the highest level, there are two things that happen in this code: we define a function named scrapePage, and then we execute that function. So, now, let’s take a look at what happens inside of this function.

First, we call the request function, passing it two arguments, the first of which is the URL of the request. The second argument is a callback function, which takes three arguments. The first argument is an error object, and this “error first” pattern is common in Node.js. The second argument is the response object, and the third argument is the contents of the request, which is HTML.

Inside of the request callback, we leverage the file-system module’s writeFile method. The first argument we pass is the full path of the file name, which tells the fs module what file to write. For the second argument we pass the responseHtml variable, which is the content that we want to write to the file; this is what was returned by the request function. The third argument is a callback function, which we are using to log a message indicating that the file write to disk was successful. When you run Example # 1, you should see a new file in the HTML folder: content.html. This file contains the entire contents of the web page that we make a request to.
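
As a rough sketch, the function described above might look something like this (the output folder name and a few details are assumptions; this is not a copy of the example code):

    var fs = require('fs');
    var request = require('request');
    var cheerio = require('cheerio');

    var pageUrl = 'http://output.jsbin.com/xavuga';

    function scrapePage() {
      // "error first" callback: error, the response object, and the HTML of the page
      request(pageUrl, function (error, response, responseHtml) {
        if (error) { return console.error(error); }

        // write the entire page to disk (the folder name here is an assumption)
        fs.writeFile('./html/content.html', responseHtml, function (err) {
          if (err) { throw err; }
          console.log('The file was written to disk');
        });
      });
    }

    scrapePage();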

Caching only a part of a web page – Example # 2

In Example # 2, we have an updated version of the scrapePage function, and for the sake of brevity, I have omitted the parts of the code that have not changed. The first change to the scrapePage function is the use of the cheerio.load method, and I assigned it to the $ variable. Now we can use the $ variable much the same way we would jQuery. We create the $header variable, which contains the HTML of the HTML header element. We then use the file-system module’s writeFile method to write the HTML header element to the file: header.html.
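
The cheerio-related portion of that updated function might look roughly like this (again, a sketch rather than the exact example code):

    // inside the request callback from the previous sketch
    var $ = cheerio.load(responseHtml);

    // the HTML of the page's HEADER element
    var $header = $('header').html();

    fs.writeFile('./html/header.html', $header, function (err) {
      if (err) { throw err; }
      console.log('The header was written to disk');
    });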

Now, when you run Example # 2, you should see another new file in the HTML folder called header.html, which contains only the contents of that page’s HEADER element.

Example # 3

In Example # 3, we have updated the scrapePage function again, and the new code follows the same pattern as the one in Example # 2. The difference is that we have also scraped the content and footer sections, and in both cases, we’ve written the associated HTML file to disk. So, now, when you run Example # 3, you should see four files in the HTML folder, and they are entire-page.html, header.html, content.html and footer.html.

Summary

In this article, we took a look at what is possible when scraping web pages. Now, even though we only scratched the surface, we did work in some high-level areas, focusing on making a request and then parsing the HTML of that request. We used the request module to make the HTTP request, and the cheerio module to parse the returned HTML. We also used the fs (file-system) module, in order to write our scraped HTML to disk.

My hope is that this article has opened up some new possibilities in your work, and has pointed you in the right direction for pulling this all off. So, happy web page scraping!

How to Fly to a Location With the Mapbox Maps SDK for React Native

React Native

Using the flyTo() method, you can zoom out, move to a target location, and then zoom back in. To do this, you only need to tell the MapBox API the coordinates of the target location and the animation duration.

Being able to move from one geo-specific location to another is a critical feature for any map application. At first, users may be happy just to know where they are. At some point, though, you’ll want to offer them the opportunity to go to another location. And you’ll be able to do this, since the react-native-mapbox-gl API has a flyTo() method which provides powerful animation for just this purpose!

At first glance, the moveTo() method may seem like a logical choice, but if you try to implement it, you’ll soon notice that it maintains the current zoom level, which is a major drawback. In some contexts, it may suffice, but for the most part, it is not the best solution for moving from one place to another when the distance is more than a few miles, and the zoom level is an issue. Take a look:

package.json

Above we have the contents of package.json, with dependencies on react, react-native and @mapbox/react-native-mapbox-gl. Now take a look at the examples below.

flyTo() method – Example # 1

In Example # 1, we have the basic syntax for the flyTo() method, which belongs to the map instance. In our case, we have an instance property named “_map“. We’ll see how this instance property is declared in Example # 3. The flyTo() method takes two arguments: the target coordinates and the animation duration. The target coordinates argument is an array that should contain the longitude and latitude of the earth location to which you want to “fly” (and they must be in that order). The animation duration tells MapBox how long this “fly to” action should take, because the reality of calling the flyTo() method is that it animates the map. Now fortunately, MapBox takes care of the implementation details for this animation, and all we need to do is let MapBox know what the duration of this animation should be. The value you provide should be in milliseconds (i.e. a value of 1000 would equal one second).
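
In other words, the call looks something like this (the coordinates below are approximate and just for illustration):

    // assumes this._map references the MapboxGL.MapView instance (see Example # 3)
    // first argument: [longitude, latitude]; second argument: duration in milliseconds
    this._map.flyTo([-73.9819, 40.7681], 2500);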

Context for this._map.flyTo() – Example # 2

Putting a little context around the call to this._map.flyTo() in Example # 2, we’ve added a TouchableOpacity component to the application, which, in a nutshell, enables us to make something “touchable.” That is, it can react to a touch and we can add a handler for that touch event. We’ve assigned an anonymous function to the onPress property of the TouchableOpacity, and inside of that anonymous function, we call this._map.flyTo(), passing it the coordinates of the location on earth to which we’d like to “fly”. As a second argument, we provide the number 2500, which means that we want the “flyTo” animation to have a duration of 2.5 seconds. The coordinates that we have provided to this._map.flyTo() are for Columbus Circle in New York City, and the TouchableOpacity has a Text component as a child, with the text: “NYC”. So, by pressing this button, the user “flies to” Columbus Circle in New York City.

Full Working App – Example # 3

The full code for our basic working app is in Example # 3. Everything we covered in Example # 2 is in play here, so I won’t take up time repeating that, but one thing to note is line # 13, where we take advantage of the MapboxGL.MapView “ref” property. By assigning an anonymous function, we take the argument that is passed to that function (“c”), and assign it to the property: “this._map”. This provides a reference to the map instance. Let’s face it, we’d basically be dead in the water without the MapboxGL.MapView “ref” property, so I’m amazed that it’s not mentioned in the documentation (hint hint MapBox folks!).
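
Here is a stripped-down sketch of that ref pattern, together with the TouchableOpacity from Example # 2 (this is just an excerpt of the render() output; the surrounding layout and the MapView props shown here are assumptions for illustration):

    <View style={{ flex: 1 }}>
      <MapboxGL.MapView
        ref={(c) => { this._map = c; }}
        style={{ flex: 1 }}
        zoomLevel={15}
        centerCoordinate={[-73.9819, 40.7681]}
      />
      <TouchableOpacity onPress={() => { this._map.flyTo([-73.9819, 40.7681], 2500); }}>
        <Text>NYC</Text>
      </TouchableOpacity>
    </View>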

TouchableOpacity – Example # 4

I’ve added three more instances of the TouchableOpacity component in Example # 4, giving us four “buttons”: we can fly to New York City, Boston, Paris or Rome. I’ve followed the exact same patterns in the code from Example # 3, so again, I won’t take up time going over all that. The main thing I do want to point out, though, is: instead of assigning an anonymous function to the onPress property of each TouchableOpacity, I’ve created methods for each (i.e. “flyToNyc”, “flyToBoston”, “flyToParis” and “flyToRome”). This makes for much cleaner code, especially for our render() method (lines 31 to 66). In each case, I’ve used this pattern: this.METHOD_NAME.bind(this). This allows us to bind each method to the component instance (i.e. this) so that we have access to the class’s this._map property.
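
That pattern looks roughly like this (the method body and coordinates are illustrative):

    // a class method instead of an inline anonymous function
    flyToNyc() {
      this._map.flyTo([-73.9819, 40.7681], 2500);
    }

    // ...and in render(), the handler is bound to the component instance:
    // <TouchableOpacity onPress={this.flyToNyc.bind(this)}> ... </TouchableOpacity>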

styles.js

Above, I’ve included the styles for the working code, just in case you wanted to quickly do a copy-and-paste, and get this application up and running in your local environment.

Summary

So, it’s obvious that the folks at MapBox really had their thinking-caps on when they developed the .flyTo() method. It is essentially an animation method, but what is so impressive is the minimal level of effort required to leverage it. Overall, the animation is smooth and it really does provide a “fly to” feeling in that it first zooms out, then moves to the target destination, then zooms in. And you’re in control, since all of this is done in the amount of time you specify in the animation duration argument. And the performance is impressive.

MongoDB Shell vs MongoDB Node.JS Driver

MongoDB

The mongo Shell and the MongoDB Node.JS Driver both provide a way to interact with a Mongo database. There are fairly significant differences in how they work, however, as well as the benefits they provide.

There are multiple ways to interact with MongoDB, and two of those are with the mongo shell and the MongoDB Node.js driver. Now at this point it might make sense to ask which approach is best. Well, the answer really depends on the scenario. So, perhaps the first question should be: “What is it that I need to do?” Once that question is answered, you can determine which tool is best suited for the task. In this article, I’ll demonstrate the differences between the mongo shell and the MongoDB Node.js driver when performing basic CRUD operations. My hope is that this will help you to decide which approach works best for what you need to do.

The mongo shell is an interactive JavaScript interface to MongoDB, and it is a component of the MongoDB package. The mongo shell can be used to perform CRUD operations on data, as well as administrative operations. In other words, think of the mongo shell as a way to interact with a MongoDB database without the need to build or interact with an application.

The MongoDB Node.js driver provides a way to interact with a MongoDB database from your Node application code. It supports both callback-based and Promise-based interaction with your mongo database. In this sense it is the opposite of the mongo shell, which is not meant to be used in your Node.js application code.

Inserting One Document Into the Database

Insert One Document with the Mongo Shell – Example # 1A

Insert One Document with the MongoDB Node.JS Driver – Example # 1B

With the mongo shell, we need to specify which database we want to use. We do this by using the “use” command. The syntax is: “use DATABASE_NAME”. So, in Example # 1A, we accomplish two things; we select the madMen database with the use command (i.e. “use madMen”), and then we insert one document into the names collection. Actually, a third step was taken here, although you may not have noticed because it was not explicit; i.e., the names collection was created. With the mongo shell, if we reference a collection that does not already exist when using the insert command, then that collection is created. Note that when we inserted the document, we passed an object to the insert method. This object can have one or more key/value pairs. In this case, we provided just one key/value pair.
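
In the shell, that whole sequence looks something like this (the document’s fields are hypothetical, since the actual example document isn’t reproduced here):

    use madMen
    db.names.insert({ name: "Don Draper" })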

You’ll notice that in Example # 1B, and all of the following MongoDB Node.JS Driver examples, there is more code. The reason for this is that this is application code, so there are some setup steps needed in order to provide dependencies to our application and tell it what we want to do. With the mongo Shell, there is context. That is to say, the mongo Shell understands that you will be working on performing MongoDB-specific tasks, so there is no need to provide dependencies or explain much.

Now here in Example # 1B, we accomplish the same tasks using the MongoDB Node.JS Driver. The first five lines of code provide dependencies and some configuration information. And on line # 8, we establish a connection to the madMen database using the mongoDbClient.connect() method. This method takes a callback, and inside the callback we set references to the madMen database and the names collection. We then use the insert method of the names collection to insert one document. We also add some console.log() statements, just to provide some helpful messages so that we can see that the operation was successful. So far, so good.
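
A stripped-down version of that driver code might look like the following (the connection URL, variable names and the document itself are assumptions, and depending on the driver version the connect callback may hand you a client object rather than the database directly):

    var mongoDbClient = require('mongodb').MongoClient;
    var dbUrl = 'mongodb://localhost:27017/madMen';

    mongoDbClient.connect(dbUrl, function (err, database) {
      if (err) { throw err; }
      var collection = database.collection('names');

      collection.insert({ name: 'Don Draper' }, function (err, result) {
        if (err) { throw err; }
        console.log('Document inserted');
        database.close();
      });
    });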

Inserting Multiple Documents Into the Database

Insert Multiple Documents with the Mongo Shell – Example # 2A

Insert Multiple Documents with the MongoDB Node.JS Driver – Example # 2B

In Example # 2A we insert multiple documents into the madMen database using the mongo Shell, and we do this in two ways. First, we insert the new documents one at a time. There is no need for a for-loop as this is not application code; since we are in the mongo Shell, we can simply run each command manually. Then, we insert three new documents by using the insertMany method. Now, the difference between the insert and insertMany methods is that with the insert method, you pass one document object as an argument, whereas with the insertMany() method, you provide an array of document objects.
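
A sketch of both approaches in the shell (again, with hypothetical documents):

    // one at a time...
    db.names.insert({ name: "Peggy Olson" })
    db.names.insert({ name: "Pete Campbell" })

    // ...or several at once with insertMany
    db.names.insertMany([
      { name: "Roger Sterling" },
      { name: "Joan Holloway" },
      { name: "Betty Draper" }
    ])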

In Example # 2B we insert multiple documents into the madMen database, using the MongoDB Node.JS Driver. The difference between this code and the code found in Example # 2A is that instead of only passing an array of objects to the collection.insertMany() method, we also provide a callback as the second argument. The callback is not required, but it is likely that you will want to provide it because the collection.insertMany() method is asynchronous and you will likely want to act upon the successful insertion of the documents. So, in this example, we’ve shown a couple of console.log() messages to indicate that the database insert was a success. But more importantly, we’ve called the database.close() method, which, as you might expect, closes the database connection. The main thing to keep in mind about leveraging the collection.insertMany() method in your Node application is that it is an asynchronous action, as is often the case in Node.
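
The corresponding driver code might look roughly like this (the connection setup from the previous sketch is omitted for brevity):

    // inside the connect callback, as in the earlier sketch
    collection.insertMany(
      [{ name: 'Roger Sterling' }, { name: 'Joan Holloway' }, { name: 'Betty Draper' }],
      function (err, result) {
        if (err) { throw err; }
        console.log('Documents inserted');
        database.close();
      }
    );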

Viewing All Documents in the Database

View All Documents with the Mongo Shell – Example # 3A

View All Documents with the MongoDB Node.JS Driver – Example # 3B

In Example # 3A, we use the mongo Shell to view all records in the database by simply executing the command: db.names.find(). If we were executing a script file in the shell, we’d need to set a reference to all records, set up a loop, and then in each iteration of the loop we could output the current record over which we are iterating. But because the mongo Shell provides REPL functionality, we can simply execute an expression that results in a value representing every record in the database.
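
In the shell, that is a one-liner (pretty() is optional and simply formats the output):

    db.names.find()
    db.names.find().pretty()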

In Example # 3B, we use the MongoDB Node.JS Driver to view all of the records in the database, and here, we need to roll up our sleeves, because we have a little more work to do. Now once again, this is because this is application code, so we need to explain to Node exactly what we want to do. So, if you’ll take a look at line # 11, you’ll see that we use the find() method to obtain a reference to all records in the database. We then chain the each() method to the return value of this, passing it a callback. In the callback, the second argument is the current document over which we are iterating, so we log that document. If the current document is null, then we close the database connection.
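
A sketch of that pattern (again, the connection setup is omitted):

    // inside the connect callback
    collection.find().each(function (err, doc) {
      if (doc) {
        console.log(doc);   // the current document
      } else {
        database.close();   // a null doc means we have reached the end of the results
      }
    });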

Deleting a Single Document

Delete a Single Document with the Mongo Shell – Example # 4A

Delete a Single Document with the MongoDB Node.JS Driver – Example # 4B

In Example # 4A, we use the mongo Shell to remove one document at a time. Notice that we reference a specific document by providing the key: “_id”, and the ID of the document we wish to remove. But we don’t provide the ID simply as a string; we pass a call to the ObjectId function, and then pass the document ID to that function. The reason for this is that MongoDB expects the wrapper function, which converts that string ID to an object.
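
For example (the ID below is just a placeholder for the _id of the document you want to remove):

    db.names.remove({ _id: ObjectId("SOME_DOCUMENT_ID") })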

In Example # 4B, we use the MongoDB Node.JS Driver to remove one document from the database. Now the main difference here is that we use the deleteOne() method, instead of the remove() method. And similar to the mongo Shell approach, we provide an object that uniquely identifies the document we want to remove. This action returns a promise, so we can chain the then() method to its return value and inside the callback, we close the database (line # 19).
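
A sketch of that call (ObjectID here is assumed to come from require('mongodb').ObjectID, and the ID is a placeholder):

    // inside the connect callback
    collection.deleteOne({ _id: new ObjectID("SOME_DOCUMENT_ID") })
      .then(function () {
        console.log('Document deleted');
        database.close();
      });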

Deleting All Documents

Delete All Documents with the Mongo Shell – Example # 5A

Delete All Documents with the MongoDB Node.JS Driver – Example # 5B

In Example # 5A, we use the mongo Shell to remove all documents from the database. Now this is a fairly simple task because we provided an empty object to the remove() method. This indicates to MongoDB that we want to remove all documents.
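
For example:

    // an empty object matches every document in the collection
    db.names.remove({})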

Example # 5B is somewhat similar. Using the MongoDB Node.JS Driver, we remove all documents in the database by calling the deleteMany() method (as opposed to the “remove()” method). And in a similar fashion, we provide an empty object that signals to MongoDB that we want to remove all documents from the database. Once again, this action returns a promise, so we chain the then() method, passing a callback, and inside of that callback, we close the database.
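
A sketch of that call:

    // inside the connect callback
    collection.deleteMany({})
      .then(function () {
        console.log('All documents deleted');
        database.close();
      });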

Summary

In this article, we walked through a comparison of how to accomplish basic CRUD operations with both the mongo Shell and the MongoDB Node.JS Driver. In each example, we saw that there is a fairly significant difference in the syntax and, in some cases, the method names. The main reason for the differences is that the mongo Shell is a REPL environment; i.e., all actions are synchronous, and the shell understands that we are working with MongoDB databases. The MongoDB Node.JS Driver generally requires more work, because our Node application is vanilla JavaScript, and is not necessarily hosted in a MongoDB-specific environment. So, in this case, we need to establish a database connection, set a reference to the MongoDB client, and set references to the database and collection.

Now, as to which approach works best, it really depends on your needs. Both the mongo Shell and MongoDB Node.JS Driver provide significant power for your work with your MongoDB database. The difference is that the mongo Shell is a terminal-based REPL environment and the commands will tend to be simpler. On the other hand, the MongoDB Node.JS driver provides a way to interact with MongoDB from your Node.js code. So, in this case, you’ll need to take a more low-level approach and write code that takes care of connecting to and from the database, as well as your business logic. But while this will usually require more effort, there is great power in that you are writing application code that can have complex logic and be executed repeatedly.

Getting Started With the MongoDB Node.JS Driver – Basic CRUD Operations

MongoDB

Working with any database always requires some CRUD. Learn how to connect to a MongoDB database and perform basic data transactions.

Database technology is a subject that can quickly become complicated, but here, we’re going to stick to the basics. For example, on a very high level, you’ll usually want to do the same few things repeatedly, that is: connect to a database, read records, insert or update one or more records, or delete one or more records. This is otherwise known as “CRUD” (“create read update delete”). Now even though the exact syntax for these actions will differ from one database technology to the next, the good news is that the general concepts are the same.

In this article, I’ll demonstrate very basic MongoDB CRUD operations using the MongoDB Node.JS Driver. Let me just begin, however, by mentioning the part that I’ll be leaving out: the “U” (“update”) step of our CRUD operations. This is a practical move on my part, because I’m guessing that you no doubt found this article through a web search, and you’re perhaps just getting started with MongoDB. If this is the case, then I think the “create,” “read,” and “delete” steps in this article are the best ones to begin with, and I will follow up with an article dedicated specifically to the more challenging “update” operations in MongoDB. That said, let’s just dive right into some MongoDB CRUD (minus the “U” : – )

Connect to the Database – Example # 1

In Example # 1 we connect to the madMen database. There are just a few steps needed to set up the connection. On line #s 2, 3 and 4 we have the URL of the database server, the name of the database we want to connect to, as well as the name of the collection with which we want to work. On line # 7 we use the mongoDbClient object that was created on line # 1 and we call its connect() method, passing it the database URL. The second argument that we pass to mongoDbClient.connect is a callback which will allow us to act upon a successful connection. Now our reason for needing the callback function is that the mongoDbClient.connect method is asynchronous. So inside of the callback function, we execute a console.log() statement just to let ourselves know that we were able to establish the connection. Now there’s not too much going on here; I just wanted to point out the basics of how to connect to the database. Once again, just keep in mind that connecting to the MongoDB database is an asynchronous operation.
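
As a sketch of that setup (the values below are assumptions, and depending on the driver version the connect callback may hand you a client object from which you get the database, rather than the database itself):

    var mongoDbClient = require('mongodb').MongoClient;

    var dbUrl = 'mongodb://localhost:27017';  // URL of the database server
    var dbName = 'madMen';                    // the database we want to use
    var collectionName = 'names';             // the collection we want to work with

    mongoDbClient.connect(dbUrl + '/' + dbName, function (err, database) {
      if (err) { throw err; }
      console.log('Connected to the database: ' + dbName);
      var collection = database.collection(collectionName);
      // ...CRUD operations go here...
      database.close();
    });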

Insert a New Document – Example # 2

Example # 2 takes us to our next logical step in our CRUD operations by having us insert a new document into the database. The required steps for connecting to the database are exactly the same as those for Example # 1, so let’s save some time, skip over that, and talk about what’s new in Example # 2. Here, we’re using the database variable, which is the second argument passed to the mongoDbClient.connect callback function. Now, in using that database variable, we get ahold of the madMen database, and also set a reference to the names collection. So, using that variable, we call the collection.insert method, passing it the new document that we want to insert, as well as a callback function. Now the hope is that by now, you’ve noticed a pattern, which is that we need to provide a callback function because the collection.insert method is asynchronous. In the callback that we pass to the collection.insert method, we use console.log() to indicate that the document insertion was successful. This, of course, is just for demonstration purposes. We then call the database.close() method, to close the database connection.

Insert Multiple Documents – Example # 3

There is only a small difference between Example #s 2 and 3, and that is in Example # 3 we use the collection.insertMany method instead of collection.insert. And instead of passing one document, we pass an array of documents. Everything else is virtually the same; i.e., we execute a log message for demonstration purposes and then close the database connection.

View All Documents – Example # 4

So, now that we have created a few documents, it’s time to view them. Let’s take a look at Example # 4, and drill down to the collection object. By getting ahold of the collection, we can use its find() method. And by passing no arguments to the find() method, we get all of the documents in the collection. We iterate that list of documents, and output each one in the console. Then, when we have gotten to the end of the list, we close the database connection.

Delete One Document – Example # 5

So here we are at CRUD’s letter “D”, which is what we take care of in Example # 5. The main difference between this one and Example # 4 is that once we drill down to the collection object, we use the deleteOne() method, passing it an object that represents the document that we want to delete. Now, I say “…object that represents” because we do not pass it the exact document that we want to delete; what we actually pass it is an object that contains the ID that matches the document we want to delete. Note here that in this document the value of the _id property is an instance of ObjectID, which we initialized on line # 2. ObjectID is a special object that we need in order to pass around MongoDB document IDs. Now it’s important to point out that while it may be tempting to simply pass the ID of the document that we want to delete, unfortunately, MongoDB does not work like that. You need to actually provide an instance of ObjectID. It’s also important to note that, although the deleteOne() method is asynchronous, we handle it a bit differently. In other words, instead of passing a callback function, we use the then() method and pass a callback to that method. And once again, inside of that callback, we close the database connection.

Delete All Documents – Example # 6

In Example #6 we sort of kill two birds with one stone. We leverage the deleteMany() method and as you may have guessed, this method allows us to delete multiple documents in the database. Now, if we simply wanted to delete two or more documents, we would take an approach similar to the one in Example # 5, and pass an array of objects that contain ObjectIDs which match the documents we want to delete. In Example # 6, we wind up deleting every document in the database because we pass an empty object to the deleteMany() method. As with the deleteOne() method, deleteMany() is asynchronous, so we chain its then() method and pass a callback function to it. Inside of that callback function, we log our success and then close the database.

Summary

I’m hoping that this article has provided enough of a high-level understanding of MongoDB’s basic operations to get you started. The examples are pretty simple, but they should be enough to help you do further digging around into CRUD operations. The main things to keep in mind are: most of the important methods that you will call are asynchronous, and the ObjectID is a critical component when you want to generate one or more matches with documents in the database.

An Introduction to Angular Reactive Forms

Angular

Angular reactive forms provide a model-driven way to build, manage and test your HTML forms. There is also a difference when it comes to setting up event handlers.

In this article, I’ll provide an introduction to Angular Reactive Forms. You’ll learn how to create model-driven forms using the FormBuilder, FormControl and FormGroup modules. This is a high-level explanation and will primarily focus on how to compose forms using these modules, and how reactive forms differ from template-driven forms. Now you may have found that the term “model-driven forms” is often used to describe the same topic, but “reactive forms” seems to be the terminology that Angular uses consistently, so we’ll stick with that.

Angular’s reactive forms provide a way to build and interact with your forms in a more programmatic way. Now, some may get the impression that reactive forms differ from template forms in that you don’t need to create the template, but this is incorrect; you do need to create it. With reactive forms, however, you build a matching model in your component that provides programmatic access to your form. This is significant in two ways: first, your form is now testable; second, your template is less tightly coupled and you can move your business logic to your component, or better yet, to your service.

Differences between Template Driven and Reactive Forms

With template-driven forms, your validation logic lives in the template. Now this is fine if your validation is simple (e.g. “required” or “min-length”), but once your validation logic involves any level of complexity (and when is this not the case : – ), then your template becomes bloated and difficult to read. With reactive forms, your validation logic lives in your component, or it can be proxied to your service. In either case, this is a better approach. Also, with template-driven forms, the only way to check the form is an end-to-end test. Now there’s nothing wrong with E2E tests, but that should be a final layer of defense. The real hardened testing should come via your unit tests, and with reactive forms, this is exactly the approach you can take.

Using FormControl – Example # A – 1

The Change Event Using FormControl – Example # A – 2

The Template Using FormControl – Example # A – 3

VIEW THE FULL CONTENTS OF THIS HTML FILE HERE: https://gist.github.com/kevinchisholm/cfbc4005e0ce7041553f35e6b35a6314#file-home-a-html

Example A contains the component and template for a fairly basic reactive form, but let’s look at Example A-1 first. Notice that I’ve imported FormControl and FormGroup from @angular/forms. These are the ones I’ll need in order to start building our form. Line # 10 is interesting because I’ve added a registerForm property, but just notice this property type, which is FormGroup. So here I’m saying that my registerForm property will be an instance of the FormGroup class. In fact, on Line # 14 that’s just what happens: I instantiate that class: this.registerForm = new FormGroup({…}), and when instantiating the FormGroup class, I pass it an object. In this object, I’ve created seven instances of the FormControl class, and at this point, I’ve done two things: I’ve created an instance of FormGroup, and I’ve passed several instances of FormControl to that constructor. So, what this comes down to is that I have a form “group” and a bunch of form “controls.”

On Line # 24, I have another interesting thing going on, which is a subscription to an Observable: this.registerForm.valueChanges.subscribe. Note that this is a major departure from template-driven Angular forms. My form is now a source of continuous values that I can subscribe to. I’ve passed an anonymous function to the valueChanges.subscribe method; this anonymous function receives an object as its first argument, and that object is a data-driven representation of my form. Essentially, this data contains all of the values for all of the fields in my form. Notice that on Line # 28 I’ve included a console.dir() statement, which allows me to inspect the data that I receive each time my subscription to the form gets an update. Now if you look at Example A-2, you’ll see the contents of this data. Notice that all of the property names match the names of the FormControl instances that I created on Lines 15-21. In Example A-3, you see the actual form template, which is not all that interesting, but there are a couple of things to keep in mind. First, the template is very simple, which is an improvement over the template-driven approach to forms. Second, the form itself has a [formGroup] attribute which matches the registerForm property from my component. Also, each form control has a formControlName attribute that corresponds to a FormControl instance from my registerForm property (which is an instance of the FormGroup class).
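
Pulling those pieces together, the heart of the component looks roughly like this (the field names here are hypothetical and the component is trimmed down; it is a sketch of the approach, not the actual Example A code):

    import { Component, OnInit } from '@angular/core';
    import { FormControl, FormGroup } from '@angular/forms';

    @Component({
      selector: 'app-home',
      templateUrl: './home.component.html'
    })
    export class HomeComponent implements OnInit {
      registerForm: FormGroup;

      ngOnInit() {
        // a form "group" made up of several form "controls"
        this.registerForm = new FormGroup({
          firstname: new FormControl(''),
          lastname: new FormControl(''),
          email: new FormControl('')
        });

        // the form is a source of continuous values that we can subscribe to
        this.registerForm.valueChanges.subscribe((data) => {
          console.dir(data); // e.g. { firstname: '...', lastname: '...', email: '...' }
        });
      }
    }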

Using FormBuilder – Example # B – 1

The Change Event Using FormBuilder – Example # B – 2

The Template Using FormBuilder – Example # B – 3

VIEW THE FULL CONTENTS OF THIS HTML FILE HERE: https://gist.github.com/kevinchisholm/cfbc4005e0ce7041553f35e6b35a6314#file-home-b-html

In Example B I have an alternative approach to my home component. I’ve imported FormBuilder, and on Line # 15, instead of instantiating the FormGroup and FormControl classes directly, I call the FormBuilder’s group() method. Things look a little different when compared to the previous example of this component. I pass an object to the group() method, but instead of providing instances of the FormControl class, I pass property names that match my actual form controls in the template. This, then, enables me to avoid the numerous instantiations of the FormControl class, which, in turn, makes for more concise / readable code.

Notice that on Line # 20, I assign to the address property another call to the group() method. I’m effectively creating a nested form group, and there are a couple of real wins here. First, the syntax is so simple: I create properties, which are form controls, and then when I want to logically group some form controls, I make a call to the group() method, passing it yet another configuration object. In this case, the logical grouping is the address, which makes sense. Note that the street, city and zip code fields are all part of an address.

At this point, you may be wondering why the values for some of the form group’s properties are strings, yet some are arrays. Well, this is one of the real benefits of the reactive form syntax. That is, when you provide an empty string, the form control’s initial value is blank. When you provide an array, the first array element is the initial value of the control, and the remaining elements are the control’s validators. In the case of firstname and lastname, both controls are made mandatory via the Validators.required built-in validator. For street, zip and city, assigning an empty array tells Angular that the initial value is blank and there are no validators. In other words, an empty array is a valid assignment, but it does not provide much value. So, I could have simply provided an empty string.
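
Here is a sketch of that configuration object (fb is assumed to be a FormBuilder instance provided to the component via dependency injection, and Validators is imported from @angular/forms):

    this.registerForm = this.fb.group({
      firstname: ['', Validators.required], // initial value '' plus a "required" validator
      lastname: ['', Validators.required],
      address: this.fb.group({              // a nested form group
        street: [],                         // no initial value, no validators
        city: [],
        zip: []
      })
    });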

On Line # 27, I’ve set up another subscription to the registerForm.valueChanges observable. So, when you look at Example B-2, you’ll notice that the data object that is passed to the subscription callback has an address property, which is an object. This address object has properties for street, zip and city. So, by creating a nested form group, I’ve created an object with another object as one of its properties.

In Example B-3, you’ll see the template for this form. Notice the section with the class name “form-data-container,” which is for demonstration purposes. If you clone the Github repo for this project and run it locally, you’ll see that any values you enter into the form fields will appear in the “form-data-container” section of the UI. The main purpose for this is to demonstrate that the entire form can be represented by a data object, which can then be used to update the UI in some manner.

Summary

Angular Reactive Forms is a major improvement over the Angular 1.x way of doing things. Angular template-driven forms are a continuation of the approach taken in Angular 1.x. So, while the template-driven forms approach can be a bit easier when it comes to creating a simple form, that simplicity is quickly compromised if your form and / or validation logic increases beyond a certain point. The Reactive Forms approach requires a little more up-front effort, but the reward is quickly granted in the form of code that is testable, easier to read, and easier to manage.

JavaScript – For-In vs For-Of

JavaScript

for-in and for-of both provide a way to iterate over an object or array. The difference between them is that for-in provides access to the object’s keys, whereas the for-of operator provides access to the values of those keys.

Iterating over an object or array is a pretty routine task with JavaScript; in fact, it’s hard to imagine a day when you don’t need to perform this action. When iterating over an array, things are a bit easier because you can leverage the array’s “length” property to set up your iteration. But when you need to iterate over the properties of an object, things get a little sticky.

Why For-In vs For-Of

In this article, I will demonstrate the difference between the for-in and for-of JavaScript operators. Now, while these two operators may seem to provide the same functionality, actually, they do not. In fact, you might say that they are polar opposites. The for-in operator returns the keys of an object or array, whereas the for-of operator provides access to the values of those keys.

For a better understanding, let’s take a look at some examples.

for-in – Example # 1

In Example # 1, we use a for-in loop to iterate over the elements of the days array. Now, since we are creating the variable: “day in days”, on each iteration of the loop, we have access to a day variable, which represents the key of the element over which we are currently iterating. The output for this example can be seen in line #s 8-15, and the purpose of this example is to demonstrate that the for-in operator provides the keys of an object, not the values of those keys. It is possible to get ahold of these values, which we will see in a moment, but, for now, I just wanted to point out that for-in provides direct access to the keys of the object over which we are iterating.
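
The gist of that example is something like this (the contents of the days array are assumed here):

    var days = ['Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday'];

    for (var day in days) {
      console.log(day); // logs the keys: "0", "1", "2", "3", "4"
    }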

Using Bracket Notation – Example # 2

Example # 2 is virtually identical to Example # 1, in that we leverage almost the exact same code to iterate over the days array. The difference here is that we manage to get ahold of the key values by using bracket notation. So, instead of outputting console.log(day), we output console.log(days[day]). In a pseudocode kind of way, we are saying: “give me the value of the days property that had this key”. The output for this example can be seen in line #s 10-14, and it is exactly what we wanted: we see the value for each key. This does feel a little hacky though, so let’s see if we can do better than this.
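
In other words:

    for (var day in days) {
      console.log(days[day]); // logs "Monday", "Tuesday", "Wednesday"...
    }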

for-of – Example # 3

In Example # 3, we’re able to achieve our goal by leveraging the for-of operator. Simply by using for-of (instead of for-in), we’re able to access the value of each key. So, not only is this a non-hacky way to approach this problem, it is also cleaner and easier to read.
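
For example:

    for (var day of days) {
      console.log(day); // logs "Monday", "Tuesday", "Wednesday"...
    }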

How to create a React Native side-menu with react-native-drawer

React Native

The react-native-drawer module makes it easy to implement a side-menu in your React Native application.

In the article: “Getting Started with the Mapbox Maps SDK for React Native” I walked you through the basic steps needed to render a Mapbox Map in a React Native application. So, I thought this might be a good time to mention the side-menu, which is a fairly typical component in any mobile application, as it provides a container for critical tasks such as navigation or setting options. In this article, I will explain how to create a side-menu, using the react-native-drawer npm module. I’ll explain the basic syntax and make a few suggestions on how to format your code, and… spoiler alert: the react-native-drawer module is easy to implement and the syntax is very straightforward.

If you are just getting started with React Native, or building a new mobile application from scratch, the react-native CLI is incredibly helpful. By simply executing react-native init APPLICATION_NAME in your terminal, you’ll have a fully functional application in less than a few minutes. But what’s so magical is that for the entire few minutes, the react-native CLI is busy building that application for you. In fact, it even executes a first-time npm install for you. Pretty bad-ass.

But, I will mention that the application that the react-native CLI builds out for you consists of a simple view with some text. Of course we can’t complain; the react-native CLI has done a great deal of work, sparing us the tedious details of staring at a blank text file, wondering where to begin. So, one of the first steps in snazzing-up this “hello world” is to add a side menu, which will pretty much be a “hello world” side-menu, but you can copy-paste this code into your app and be up and running in minutes.

package.json

Above, we have our package.json file. Nothing too special going on here; we just need react, react-native and react-native-drawer. In the devDependencies section, babel-jest, babel-preset-react-native, jest and react-test-renderer were configured for us by the react-native CLI. You should be able to copy and paste this into your application, then fire up your emulator.

Example # 1

In Example # 1 you see the very basics of how the side-menu component is arranged in your code. In fact, it’s so basic that it won’t actually work yet. So what I’ll do is demonstrate how you wrap your application code in the Drawer component. Drawer is an object that we create by requiring the react-native-drawer module. (You’ll see the details for this in Example # 5.) Meanwhile, the main thing to keep in mind is that the Drawer component wraps your application code in your class’s render() method, and your application code would go where you see: “APPLICATION_CONTENT_GOES_HERE”.
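
In skeletal form, that arrangement looks something like this (a sketch, not the working example code):

    import Drawer from 'react-native-drawer';

    // ...inside your component class:
    render() {
      return (
        <Drawer>
          {/* APPLICATION_CONTENT_GOES_HERE */}
        </Drawer>
      );
    }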

Example # 2

Example # 2 is similar to Example # 1. The main difference here is the addition of two properties to the Drawer component. The content property is where you set the content for the Drawer component, which is where you see “CONTENT GOES HERE“. Now in theory, you could put a string there. I feel pretty sure, however, that you’ll never want to do that.

I suggest providing a reference to a method that will render your content. If you think about it, this is ALL of the content for your side-menu. I don’t think there is any chance that the side-menu content will be just a few words or something. So it makes sense to provide a method reference, and in that method you render your content. This way, you can break that method out accordingly into smaller sub-methods. You can then include any logic that is needed to properly render your content.

The styles property is where the styling for the side-menu component goes (i.e. where you see: “YOUR_CUSTOM_STYLES“). Now while you can provide an inline object literal here, again, I recommend you provide a reference to an object that contains your styles for cleaner and more manageable code.

Example # 3

In Example # 3, I added an event handler for the onClose property. This is not a requirement, but I added this piece of code because you may want to know when the user has closed the side-menu. For example, when the menu closes, you may want to emphasize or deemphasize another element. Here too, you do not have to handle this event, but it’s just good to know that you can if needed.

Example # 4

In Example # 4, I’ve added more properties that give the side-menu a bit more of a finished feel. Take a look at line # 4: type=”overlay” tells react-native-drawer that we want this side menu to have an overlay look and feel. And setting tapToClose to true allows the user to close the menu simply by tapping anywhere outside of the menu. I’d recommend leveraging this feature, since you probably don’t want to require the user to click a dedicated button in order to close the menu. But that, of course, is your choice.

So, in an effort to keep things brief, I’ll skip the in-depth discussion of openDrawerOffset, panCloseMask, closedDrawerOffset, panOpenMask, captureGestures and acceptPan. They all have to do with visual aspects of the side-menu and are worth looking into when you have time.
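
For reference, here is roughly how those properties look when set on the Drawer component (the numeric values and method names below are illustrative assumptions, not recommendations):

    <Drawer
      type="overlay"
      tapToClose={true}
      onClose={() => { console.log('side-menu closed'); }}
      openDrawerOffset={0.2}
      panCloseMask={0.2}
      closedDrawerOffset={-3}
      panOpenMask={0.05}
      captureGestures={false}
      acceptPan={false}
      content={this.renderSideMenuContent()}
      styles={drawerStyles}
    >
      {this.renderMainContent()}
    </Drawer>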

Example # 5

There is a lot going on in Example # 5 as it is all the code for the working application. Take a look at the renderSideMenuContent() method on line # 13 and the renderMainContent() method on line # 23. In these methods, I’m rendering the content for both the side-menu and the rest of the application, because I wanted to keep the render() method on line # 39 as clean as possible. This way, you can get a sense of how the application is structured just by looking at the code between line # 39 and line # 66. And then, the implementation details for child components such as the side-menu content and the application content are broken out into their own methods.