Dependency: spring-boot-configuration-processor unknown


When adding spring-boot-configuration-processor as a dependency, Maven > Reimport is needed.

In my Spring Boot project, I was just attempting to add a dependency section to my pom.xml file. The groupId is org.springframework.boot, and the artifactId is spring-boot-configuration-processor. But IntelliJ IDEA highlights spring-boot-configuration-processor in BOLD RED and shows the error: “spring-boot-configuration-processor unknown.” The whole problem started because in my .java file, I was attempting to use the @EnableConfigurationProperties annotation.

A few minutes of searching led me to the page: Maven Repository: org.springframework.boot » spring-boot-configuration-processor » 1.5.4.RELEASE. Since I am using the 1.5.4.RELEASE version of org.springframework.boot:spring-boot-starter-parent, this sure looked like the exact page I needed.
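For reference, the dependency section in question looks roughly like this (the version is inherited from spring-boot-starter-parent, and marking the processor optional is a common convention rather than something required for the fix):

```xml
<dependency>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-configuration-processor</artifactId>
  <optional>true</optional>
</dependency>
```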

Unfortunately, even with what appeared to be the exact and correct config properties, the BOLD RED highlighting and error persisted.

After some searching I found out that Maven > Reimport is needed. The steps to fix this are simple:

  • Right-click your project
  • Click Maven
  • Click Reimport

The above screenshot shows the menu. After a Reimport, the BOLD RED highlighting and error went away. The article Spring Boot Support in Spring Tool Suite 3.6.4 references Spring Tool Suite, but the content of the post led me in the right direction toward solving the problem. Scroll down to the text: "Add this to the pom.xml"; that section might be helpful to you. Just note that instead of Maven >> Update Project, it's Maven >> Reimport.

The Apache Tomcat installation at this directory is version 8.5.15 Tomcat 8.0 installation is expected



I was just attempting to add an Apache Tomcat server to Eclipse Java EE IDE for Web Developers (Version: Mars.2) when I ran into a nasty snag: in the New Server dialog, the Next > button is not enabled. I also noticed an error at the top of the dialog: "The Apache Tomcat installation at this directory is version 8.5.15 Tomcat 8.0 installation is expected". Very annoying.

After some searching I found out that the problem is in the ServerInfo.properties file packaged inside catalina.jar: the version line needs to have the value Apache Tomcat/8.0.0.
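If memory serves from that Stack Overflow answer, the patched line inside catalina.jar (at org/apache/catalina/util/ServerInfo.properties) ends up reading:

```properties
server.info=Apache Tomcat/8.0.0
```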

The Apache Tomcat installation at this directory is version 8.5.15 Tomcat 8.0 installation is expected

The above screenshot shows the issue. If you run into this problem, this Stack Overflow question: How to use Tomcat 8.5.x and TomEE 7.x with Eclipse? has the answer you need. Check out the answer by dexter-meyers, but also note the edit by informatik01. In short, you want to follow the detailed steps to patch catalina.jar, but you only need to change the version in that one line.

For the complete answer to this issue, check out this Stack Overflow page:



How to scrape any web page and return the metadata as JSON


Let Node.js scrape the page and get the title, description, keywords and image tags for you

I recently built a web-based bookmarking application for myself using React. I was happy with the results and use the app every day. However I found myself having to go back-and-forth between the app’s tab and the tab of the page I am bookmarking. First the URL, then the title, then grab an image from the page, and then manually enter keywords that make sense to me. Too much work. I started to think that this was a perfect opportunity for some web page scraping.

On a high-level the game plan was:

  • First, the user makes a POST request to the route /scrape, passing a URL in the request body
  • Secondly, we make a 2nd HTTP request behind the scenes to the URL provided by the user
  • Next, we take the response of that 2nd HTTP request and parse it
  • Finally, we take the various values scraped from the HTML and return them to the user as a JSON response

Example Web Page

For demonstration purposes, I decided to create a simple web page that makes it easy to construct the initial HTTP POST request. Consequently, if you clone the Github repo below and follow the instructions, you can run the web page locally, enter a URL and then see the metadata scraped from the 2nd HTTP request presented in the web page.

Image # 1 – The web-scraping application

If you look at the screenshot above, you'll see the web-scraping application that is available via the Github repo. In this screenshot, I have entered a URL and then clicked the button: "Scrape Page". The result is that the title, description, keywords and a list of images appear in JSON format in the box below. All code examples below are taken from this application so you can see how the code works when you run it locally.

Clone the example code here: (directions can be found on the repo page)


In Example # 1, I set up a route handler for the /scrape route. So when the user makes an HTTP POST request to /scrape, the anonymous function is executed.

Inside the route handler, we use the request object to make another HTTP request. The URL of the request is provided via the user's HTTP POST request. In other words, we look at the req argument that is passed to the route handler, which is the request object, and grab the body.url property. Next, an anonymous function is passed. In similar fashion, that function takes an error object, a response object, and a responseHtml object as its arguments. We do a quick test to see if there is an error object, and if so, we exit (just to keep things simple). I've chopped out the rest of the implementation code so that it is easier to understand how we got this far.


In Example # 2, we have the rest of the code that goes in the handler for the behind-the-scenes request. First of all, the resObj variable represents the object that will be returned to the user as JSON. Furthermore, the $ variable is an important one. It represents a jQuery-like function that allows us to query the DOM of the HTML returned by the 2nd HTTP request.

Creating the metadata for the JSON response

Following the $ variable, we create the variables: $title, $desc, $kwd, $ogTitle, $ogImage, $ogkeywords and $images. The first six variables represent metadata scraped from the HEAD section of the HTML. On the other hand, the $images variable differs a bit in that the values in that HTML collection are scraped from the BODY tag of the page.

View the full code for app.js here:

Over the course of the next few dozen lines of code, we just check to see if each variable has a value. If it does, we add it to the resObj object; we want to avoid any errors as we construct our JSON response. Similarly, for the $images variable, we first make sure that the collection has length. Then, we use a for-loop to gather up all of the image href values and add them to the images property of the resObj object, which is an array.
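The "only add what actually exists" logic can be sketched as follows. The real app queries the DOM with the jQuery-like $; the regex parsing here is just to keep the sketch dependency-free, and buildResObj is an illustrative name, not a function from the repo:

```javascript
// Sketch: only add a property to resObj when the scraped value exists,
// and collect image URLs into an array. (Regexes stand in for the
// jQuery-like $ queries the actual app uses.)
function buildResObj(html) {
  const resObj = {};
  const first = (re) => {
    const m = html.match(re);
    return m ? m[1] : null;
  };
  const title = first(/<title[^>]*>([^<]*)<\/title>/i);
  if (title) resObj.title = title;
  const desc = first(/<meta[^>]+name=["']description["'][^>]+content=["']([^"']*)["']/i);
  if (desc) resObj.description = desc;
  const images = [];
  const imgRe = /<img[^>]+src=["']([^"']+)["']/gi;
  let m;
  while ((m = imgRe.exec(html)) !== null) {
    images.push(m[1]);
  }
  if (images.length > 0) resObj.images = images;
  return resObj;
}
```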


To be sure, there is a fair amount of code that I left out for brevity's sake. Mainly, I did not discuss package.json, the variable declarations at the top of app.js, or the contents of the www folder, because that might have made the article quite long. If you clone this repo, follow the instructions and then run the app, it should be very easy to follow the code and understand what is happening.

For this article, I wanted to focus on how the /scrape route handler is set up to handle an HTTP POST request, and then how the 2nd HTTP request is made and subsequently handled.

The Paradox of JavaScript


Are you getting an ECMA-Headache?

In the book: The Paradox of Choice: Why More Is Less, author Barry Schwartz argues that too many choices can dilute satisfaction. While this title spends much of its time in the context of consumer products, a similar argument can be made about the world of JavaScript. There is so much going on in the wild wild west that is JS, but is that really a good thing?

In short, I’d say yes, it is a good thing. Even though it can be difficult to navigate the maze of libraries and frameworks, the explosion of activity breeds a world of innovation and creativity. But there is no doubt a cost: where to begin? How to keep up? There is a lot of noise associated with the world of JavaScript. I actually feel that most of it is good noise, but it can be overwhelming.

I recently participated in an Aquent Gymnasium webinar titled: "Keeping Up with JavaScript Is a Full-Time Job", and I thought the title was brilliant. Not only are beginners feeling JavaScript anxiety, but experienced developers as well. I’ve heard many people ask the same questions: “Should I learn Angular or React?”, “If few ES-2015 features are currently supported, should I still learn them?”, “Grunt, Gulp or Webpack?” and so on.

ES6 vs ES-2015 vs ES-2016 vs ES-WTF

And speaking of ECMAScript, what is up with the naming scheme? ES6 is AKA ES-2015, and ES7 is AKA ES-2016? Ok, that’s easy to remember. But what to learn? What the hell is a JavaScript symbol? And, what significance does it play in the million-and-fifty-fifth JavaScript slideshow I will have to make in my next Agile Sprint? Is this just like all that cruddy math that we had to learn in 8th grade, knowing perfectly well that we’d never ever need it in adult life?


So many libraries, so little time

This is where the paradox may lie. We have so many JavaScript toys to play with, but who has time to keep up with all of them? First, you have to be aware of changes in the JavaScript jungle. For example, Angular 4 is out, but there is no Angular 3. Okie dokie. Next you have to understand the role of each library or framework. And then at some point, you want to learn how to use it, right?

Sometimes it is really tough to know where to invest your time. I’ve been hearing more and more about Aurelia and Vue.js. Both have enjoyed positive reviews and are gaining traction. But are they really going to take off like Angular? Am I really going to benefit in my next job interview by learning either one of these libraries, or any of the other up-and-coming JavaScript libraries/frameworks?

My answer: Bet on JavaScript every time

I’m not sure it is necessary to learn every single JavaScript framework or library that falls from the tree. We all have lives to live and there are only 24 hours in each day.

Something interesting about all of this craziness is that there is one common thread throughout: JavaScript. JavaScript is the language used in all of these libraries/frameworks/build tools. So, you simply cannot lose by making JavaScript your top priority. If you have a free hour, spend 45 minutes studying JavaScript, and 15 minutes learning a new library. As long as your JavaScript skills continue to improve, you will always have the tools you need to learn any new library/framework/build tool. Not only that, but you will get better at picking them up. In addition, you will start to see the similarities between them and common patterns in the source code.

In short: you simply cannot lose by concentrating on JavaScript.


Not only is it important to focus on JavaScript, but it is also key to learn the new features of the specification.  Most browsers do not support these features, but they will soon, so best to get ahead of the eight-ball. ES-6 and ES-7 features are powerful and when supported, will take much of the pain out of creating sophisticated client-side web applications. More important than Angular, more important than React, learn the newest features of JavaScript. And, Babel is your friend; it allows you to use features that browsers do not yet support. Also, the combination of Typescript/Webpack is another solid solution.

Planning is key

I can only speak to what has worked for me, and that is: always trying to decide where my time is best spent. For example, one of the biggest arguments in the JavaScript world is: “should I learn Angular or React?” Well, I’d say: learn both!

You don’t have to master each one, but learn enough to understand the differences between them as well as their strengths / weaknesses. Since I happen to spend 90% of my professional day working with Angular2, I am a fan. But I was worried that I was falling behind on my knowledge of React, so I spent my last Christmas holiday building an application with React. Now, I am far from a React guru, but in building a simple CRUD application that I actually use each day, I was able to gain an understanding of how it works, how it differs from Angular, and what its strengths are.

I’ve tried to take this approach with every other segment of the JavaScript ecosystem: NPM vs Yarn, Gulp vs Grunt vs Webpack, Typescript vs Vanilla JavaScript, and so on. In each case I ask myself: “What is the most important thing I need to know about this library/framework/build-tool?” and then my goal is to be able to speak intelligently about it. Sometimes it takes a Saturday afternoon, sometimes it takes a month. Sometimes it turns out that I wind up using that particular tool heavily in my daily work. But I try to at least understand what it does, how it differs from its competitor and what it brings to the table.


In my opinion, there will always be a couple of JavaScript libraries or frameworks that you work with on a daily basis, a few that you used to work with, and then a zillion that you have heard of but have not had time to learn yet. The key, from my perspective, is to accept this reality; you can’t have an expert-level knowledge of everything. But you can keep your finger on the pulse of what’s going on out there, and do your best to have a good understanding of the more popular tools and the role they play.

Why the create-react-app Node module is so awesome


This tool is perfect for beginners as well as React experts

The JavaScript ecosystem is the wild wild west of the technology world. It seems like every year there is a new heavyweight champ. Right now, Angular and React are duking it out for the belt. They are both solid and enjoy tremendous corporate backing / community support. But they are as different as chalk and cheese. Learning new technologies can be painful. Chances are you got here because you have decided to take the dive into the world of React. Depending on your level of experience, this can be a challenge. The create-react-app Node module can definitely help.


The create-react-app Node module protects you from all of the pain involved with setting up a React application. Granted, there are tons of JSFiddle links out there that show you how you can spin up a React application simply by adding two script tags to your web page. Yes, but this kind of setup is not going to cut it in the real world. These are examples that help you get up and running and learn.

If you are going to build a real production-ready React application, even a small one, you need some kind of workflow. This is where the pain is: front-end tooling. The create-react-app Node module takes care of all of that for you, literally. Once you have installed the module globally (npm install -g create-react-app), you simply execute the following command: create-react-app YOUR_APP_NAME.

Yep, that’s it!

The create-react-app module takes care of all your Webpack, Babel & ESLint configuration and setup. The funny thing is: you don’t see it. Under the hood there is one main dependency: the react-scripts Node module. This module is like your personal front-end engineer. It sets up Webpack and Babel and configures them. You never have to write one line of configuration code for any of this. After you run that create-react-app YOUR_APP_NAME command, you cd YOUR_APP_NAME, and then npm install. After the npm install is complete, npm start is your last terminal command and your local instance is alive in the browser.

Why this is so amazing

The beauty of all this is: you can get a production-ready React application set up in about 3 minutes. Not only that, this setup was created by the Facebook React team and sanctioned by them. So, you have a great template to start with. The actual application itself is literally a static “Hello World” HTML page. Before you complain, keep in mind that if you are going to learn React, you have to actually write a little code! But the really amazing part is that the most painful aspect of setting up a React application is taken care of for you. You can clone, create, install in a couple of minutes and then start writing code.


Finally, there is the “Eject” command. When you run npm run eject, the create-react-app Node module will un-wrap all of the abstraction. What this means is: all of the front-end tooling remains intact and continues to work perfectly, but you are no longer protected from it. Tools such as Webpack and Babel are now available to you and completely customizable. The advantage to this approach is that you can customize your application however you like. It’s also a great way to learn about front-end tooling: you can really see the recommended ways that these tools are configured.

Down sides

There are a few downsides to the create-react-app Node module. The biggest one is that there is no consideration for CSS pre-processors such as LESS and SASS. Also, you cannot configure your application when creating it. You are stuck with the configuration and tooling that is provided. Of course you can use the eject command to reveal all of that detail and do as you wish, but that brings us to the final downside: when you eject, you can never go back.

Gettin’ Down with BaconIpsum


Sometimes you need Lorem Ipsum, and sometimes you need it in json format via an API call. If you don’t mind bacon references spattered throughout that latin, then Bacon Ipsum can help.

I’ve already written an article about the need to make an API call before that service is ready. And is a perfect tool for that. But if you just need a lot of text and want to generate it via an API, then Bacon Ipsum is a fun tool that can solve that problem.

The overall concept is a bit silly. I mean, lorem ipsum is kind of silly, and having words like  “bacon”, “flank” and “shankle” sprinkled throughout borders on nonsense. But their API works well and is very easy to use. So, if you are in the middle of a dev project and need some dummy text in JSON format, it’s nifty stuff.

JSON API

The landing page allows you to generate some Bacon Ipsum on the fly, but if you want the API calls, then go to their API page. There you’ve got very simple instructions on how to get your Bacon using simple query-string parameters. These parameters allow you to control how many words and paragraphs will be returned.
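As a quick sketch, building such a request URL might look like this. The parameter names type and paras reflect my reading of their API page, and baconIpsumUrl is just an illustrative helper name:

```javascript
// Build a Bacon Ipsum API URL from query-string parameters.
// `type` picks the flavor of filler text and `paras` the number of
// paragraphs returned (names assumed from their API docs).
function baconIpsumUrl(options) {
  const opts = options || {};
  const params = new URLSearchParams({
    type: opts.type || 'all-meat',
    paras: String(opts.paras || 3)
  });
  return 'https://baconipsum.com/api/?' + params.toString();
}
```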

Other Bacon-stuff

If you find this all funny and want more Bacon, they also offer a Chrome extension as well as Android and iOS apps that allow you to… well, I’m sure you can figure out what these do… more Bacon. Although I will note that these apps are pretty outdated, so use at your own risk. There is a jQuery plugin, and three… count ’em, three WordPress plugins for increased Bacon madness. And to top it all off: a baconmockup HTML generator. This might be a bit too much Bacon for me. They had me at the JSON API calls.

Angular2 HTTP Observables in Five Minutes

Observables are the way to stream data in Angular2. Here’s how to get it working.

Managing asynchronous activities in any JavaScript-driven application can be tricky. Every framework / library has an approach, and there are proven design patterns that are worth considering. In Angular 1.x, $q is the engine that drives this. In Angular 2, it’s RxJS. This technology is not for the faint of heart. It’s very cool, and works well, but does take some getting used to.

This article has one focus: providing example code that demonstrates how to create an observable, and then consume it. You can start out by cloning the Github repository below. Once you’ve done that, you can run the example code locally and see it in action.


Making the HTTP request

In the following example, we will make an HTTP request, and then stream the return data to anyone who has subscribed to it.

Example # 1

In Example # 1, we have the code for the PackageService service. This service makes the HTTP request for the JSON data. I’m using so that we don’t have to spend time with implementation details about serving up the JSON. We just want to make the request and then talk about how we can share that data across our application via an RXJS stream.

Let’s talk about this line:

Here we create the packageData property. Although it will be an array, it will be a BehaviorSubject instance. So when we define the property, we specify that it is of type Subject, and instantiate the BehaviorSubject class, passing it an empty array. The reason for the empty array is that we don’t want to stream “undefined”. Somewhere else in our code, there is a consumer who will want to subscribe to this stream. That consumer expects an array. It’s fine if the array is empty at first; we just don’t want the consumer to get “undefined”.

Later in Example # 1, we call the http.get method, map that result to JSON, and then we subscribe to that JSON. I don’t want to get into too many implementation details about the last sentence as I promised that this would take “five minutes”, so let’s focus on the next line: subscribe. By subscribing to this JSON, we are saying: “Hey, any time there is a change in this JSON, I want to know about it”. And when that change occurs, the function you see passed to the subscribe method is executed.

That function will receive the updated JSON data as its first argument. The very next line is critical. What’s happening here is: the packageData object that we created earlier has a “next” method. That method takes an argument, which can be anything. In our case, it is the JSON data. So, anyone who has subscribed to our packageData property will receive a notification that there is new data, and will be passed that data.
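The subscribe/next contract described above can be illustrated with a toy stand-in. This is not the real RxJS BehaviorSubject, just a sketch of the behavior that matters here:

```javascript
// Toy stand-in for RxJS's BehaviorSubject: it remembers the latest
// value and replays it immediately to every new subscriber, which is
// why seeding it with [] means consumers never receive undefined.
class TinyBehaviorSubject {
  constructor(initialValue) {
    this.value = initialValue;
    this.listeners = [];
  }
  subscribe(listener) {
    this.listeners.push(listener);
    listener(this.value); // new subscribers get the current value right away
  }
  next(nextValue) {
    this.value = nextValue;
    this.listeners.forEach((listener) => listener(nextValue));
  }
}

// Mirrors the service: packageData starts as an empty array, and each
// call to next() pushes fresh data to every subscriber.
const packageData = new TinyBehaviorSubject([]);
```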

Example # 2

In Example # 2, we have our PackagesComponent component. Let’s zero in on the critical part: the ngOnInit method. As you probably guessed by the name, this method is executed when our component is initialized. Take a look at this line: this.packageService.packageData.subscribe. There we are subscribing to the packageData property that we created in the packageService service. Because that is an instance of BehaviorSubject, subscribing to it gives us a live connection to anything it streams out. In our case, the http.get request fetched some JSON data, and in that service, the JSON data is streamed out via the “next” call described above. The JSON data comes to the subscribe callback via the variable “packages”, so we set this.destinations = packages. At that point, the UI is updated and we see the list of travel packages in the page.


I promised that this would take less than five minutes. Hopefully it did. RXJS is a deep subject that takes some time to get familiar with. I wrote this article because the first time I needed to stream the result of an HTTP request in an Angular2 application, it was a giant pain in the rump.  Here I have tried to provide an easy example of how to sketch out this scenario and get it working quickly. I hope it was helpful!

Setup a Node / Express Static Web Server in Five Minutes


Setting up Node and Express as a simple, lightweight web server for your single-page application is very easy to do.

Sometimes you just need a local web server. Sure, you could use MAMP, but installing Apache, MySQL and PHP seems like overkill in some cases. I have used MAMP for years and it is terrific. In particular, when you need to run PHP locally and / or connect to a MySQL database, it’s the cat’s pajamas. But that was the standard 10 years ago. Nowadays, it’s very common to build a single-page web application where all of the assets are static, data is pulled in from a REST endpoint and all of the heavy lifting is done in the browser via JavaScript. In these kinds of scenarios, a static Node / Express server is a simple, easy and lightweight approach.

The code samples can be downloaded here:

Example # 1

In Example # 1, we have the contents of package.json. Nothing too special going on here. Just note that our only dependency is the express module. Also, in the scripts property, I’ve set up the start command to execute the app.js file in the node web-server folder. This way, we can simply type npm start in the terminal, instead of node web-server/app.js (just a bit less typing).

Example # 2

In Example # 2, we have the entire contents of our web server: 15 lines of code (and nearly 25% of that is comments!). The magic happens on line # 10: we call the app.use method and pass it express.static. This tells Express that we want to set a static folder. We use the path.join method to tell Express where all static assets should be served from; in our case, it is the www folder. The two arguments passed to the path.join method are __dirname, which gives us the absolute path to the folder within which the current script is found, and “../www”, which is a relative path to the www folder.

Express does all of the heavy lifting

A little earlier, I used the word magic. We both know that none of this is actually magic, but it sure feels like it. If you’ve ever created a Node web server manually, then you know two things: 1) It’s really easy, 2) It’s really tedious once you get past “Hello World”.  Express hides all the tedium and makes serving static assets as easy as 1-2-3.


There is one downside here. Express does not set the appropriate content-type headers for the HTTP requests. This is not fatal in most cases, because this approach is simply meant to provide a very fast and easy way to set up a static web server. The actual web server works just fine, but keep in mind that content-type headers for files such as JPEG, PNG, CSS or JS will not be set accordingly. If that is a problem, then a simple static web server is probably not what you need, and you should consider a more robust approach. Hopefully, if you do need a simple static web server, this article was what you needed to get up and running quickly.


Getting started with the Cloud9 development environment

If you are learning web development, Cloud9 offers a free and low-cost cloud-based environment that provides everything you need to get started.

Every now and then, I’m impressed. I’m not sure how I’ve never heard of this before, but Cloud9 is pretty amazing. This online integrated development environment supports the following languages: C#, C/C++, Clojure, CoffeeScript, ColdFusion, CSS, Groovy, Java, JavaScript, LaTeX, Lua, Markdown, OCaml, PHP, Perl, PowerShell, Python, Ruby, Scala, SCSS, SQL, Textile, X(HTML), XML.  When creating a new project, you can import code from Git, GitHub, Bitbucket or Mercurial.  You can also deploy your projects to Heroku, Joyent, Openshift, Windows Azure, or Google App Engine.


What amazed me right away about Cloud9 is the fact that it is 100% browser-based. There is no software to install or anything to download. You simply fire up your browser and get to work. In your browser, you’ll find an IDE, as well as a console window. You can choose from a number of editor themes, so you can go for the “Monokai” look if that is your thing.


Cloud9 IDE

Impressive Templates

Creating a new application could not be simpler; you can choose from one of about a dozen templates. These include basic HTML5, Node, Python, C++, PHP/Apache, django, Ruby, WordPress, or a blank Ubuntu Linux image. There is even a template specifically for Harvard’s famous CS50 course.


Yep. You can configure a database for your application. Cloud9 supports MongoDB, MySQL, CouchDB or Cassandra.  In each case, the setup is slightly more involved than a simple click or two, but overall it’s not too complicated.


It’s very cool that they have a free tier. Not only can you kick the tires, but for students, it’s a no-brainer.  You get one private workspace, and then the rest are public. The “Individual” plan is $19 per month. This is not a bad deal at all as you get three “Hot Workspaces” (i.e. they don’t spin-down due to inactivity), unlimited private workspaces, and increased performance. There is a “Teams” plan which is even more robust as well. If you are a full-time student or represent a school, look into their “Education” plan, which will run you a whopping $1 per month. Amazing.


While services like Heroku, Cloud Foundry, Dokku, Deis, Flynn all make it easy to spin-up various kinds of web-based stacks, Cloud9 makes it even easier. One of the really key aspects of this is the 100% online approach. You do everything in the browser; create files, edit files, deploy your code, even run terminal commands. For serious / production PAAS, I’d go with AWS, but for learning, quick testing, or prototyping, I highly recommend taking a look at Cloud9.

Yikes! AWS Node / NPM ERR! enoent ENOENT: no such file or directory package.json

AWS’s Node deployment keeps telling me that it cannot find package.json, but it’s there! Fortunately, this problem is easily solved.

AWS makes deploying your application to Elastic Beanstalk easy. Compress your build files, upload the ZIP and then deploy that application version. Lovely. But sometimes your application goes into a “warning” or “degraded” state, and then a visit to the application with a browser yields: “502 Bad Gateway“. Errrggggg…..

At this point, you look in the logs and see a cryptic message that says something like: “enoent ENOENT: no such file or directory package.json“. You double-triple-quadruple-check and yes, package.json is in-fact very much alive and well. So, of course your next thought is: “WTF???“

I have run into this problem a few times and in each case, the problem was me: I zipped-up a folder, instead of the contents of a folder.

Do not compress an entire folder

Compressing the project folder does not fix the package.json problem

Let’s say your Node application is in a folder named: “myProject“. If you are compressing that folder, then this is your problem. You don’t want to compress a folder because when AWS un-zips that file, it will not know to look in the “myProject” folder that is created when the file is un-zipped.

Compress ALL of the items in your project folder

Compressing the root files fixes package.json problem

What you want to do is: select EVERY file in the root of that folder (i.e. your Node application’s root folder), and then compress THOSE files. This will create a ZIP file that when un-zipped, creates the file structure that AWS expects. Now AWS will find package.json. This should solve the problem.

Compressing the root files fixes package.json problem

In the image above, I have zipped up the contents of the “myProject“ folder and created the ZIP file.

Upload the zipped file

Compressing the root files fixes package.json problem

Now, back in your AWS console, you can use the “Upload and Deploy” button to upload your ZIP file, and then deploy it.