AngularJS Directives: Using HTML5 Web Speech


Posted on October 30, 2013, filed under Content – Highlights and Reviews, Programming & Development.

A guest post by Jonnie Spratley, who currently works for GE as a UI Developer on the Industrial Internet Team building user interfaces for next generation products, and teaches a variety of programming courses at AcademyX.

AngularJS is one of the hottest JavaScript frameworks on the Internet, providing a full stack for creating single page applications (SPAs).

Angular Directives are a way to teach HTML new tricks. During DOM compilation, directives are matched against the HTML and executed. This allows directives to register behavior, or transform the DOM.

The Web Speech API provides an alternative input method for web applications (without using a keyboard). Developers can give web applications the ability to transcribe voice to text from the computer’s microphone.

Follow along to see how to implement all three!

Let’s Get Started

To quickly get started creating a custom component for AngularJS, install the AngularJS Component Generator, by executing the following command:
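The install step might look like the following; the generator’s npm package name is an assumption here, so check npm for the exact name:

```shell
# Install Yeoman and the Angular component generator globally.
# The generator package name is assumed; verify it on npm.
npm install -g yo generator-angular-component
```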

Now you’ll be able to scaffold an Angular component project.

Step 1 – Create the project

Proceed to create the project folder and then cd into that directory.

Now use Yeoman to create the project files, by executing the following command:
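The project setup might look like the following; the project name `ng-webspeech` is an assumption:

```shell
# Create the project folder, move into it, then scaffold with Yeoman.
mkdir ng-webspeech && cd ng-webspeech
yo angular-component
```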

Then proceed to answer a few questions about your project.

For distribution, register the new project with Bower (a web library package manager), by executing the following command:
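A sketch of the registration command; the package name and Git endpoint are placeholders you would replace with your own:

```shell
# Register the component with Bower (the <username> placeholder is hypothetical).
bower register ng-webspeech git://github.com/<username>/ng-webspeech.git
```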

Now the component is available to the world via the bower package manager.

Step 2 – Create the Directive

To create a directive with AngularJS, it is best to create a module for the directive, then attach the directive definition to your module instance.

This allows users of the component to easily include the required scripts and declare the component in the existing application’s dependencies array.

2.1 – Module Definition

To define the module, use the angular.module() method to create a module instance; in this case the variable _app is the component’s module.
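A minimal sketch of the module definition. In the browser, `angular` is provided by including angular.js; the tiny stub below only lets the sketch run standalone and is not part of the directive. The module name `ngWebspeech` is an assumption.

```javascript
// Stand-in for angular.js when running this sketch outside a browser.
var angular = (typeof window !== 'undefined' && window.angular) || {
  module: function (name, requires) {
    return { name: name, requires: requires || [] };
  }
};

// The component's module; the directive and factory attach to `_app`.
var _app = angular.module('ngWebspeech', []);
```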

angular.module is a global method for creating, registering, and retrieving Angular modules.

    1. When passed two or more arguments, a new module is created.
    2. If passed only one argument, an existing module (the name passed as the first argument to module) is retrieved.

All modules that should be available to an application must be registered using this method.

2.2 – Factory Definition

A factory is a good way to store methods or properties that can be reused throughout your directive. We create a factory for storing the icons, messages, and some utility methods that the directive will use.

To register a service factory, which will be called to return the service instance, use the following format:
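A sketch of the factory body; the icon paths, message strings, and the `iconFor` helper are assumptions, not the article’s exact code. It would be registered with `_app.factory('speechService', speechServiceFactory);`.

```javascript
// Factory returning the icons, messages and utilities the directive reuses.
function speechServiceFactory() {
  return {
    icons: {
      idle: 'images/mic.gif',
      listening: 'images/mic-animate.gif',
      blocked: 'images/mic-slash.gif'
    },
    messages: {
      info: 'Click the microphone to start speaking.',
      denied: 'Permission to use the microphone was denied.',
      upgrade: 'Your browser does not support the Web Speech API.'
    },
    // Utility: pick the icon for a given recognition state.
    iconFor: function (state) {
      return this.icons[state] || this.icons.idle;
    }
  };
}
```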

2.3 – Directive Definition

The directive definition object options available are as follows:

restrict – Declares how the directive can be used in a template: as an element, attribute, class, comment, or any combination.
priority – Sets the order of execution in the template relative to other directives on the element.
template – Specifies an inline template as a string. Not used if you’re specifying your template as a URL.
templateUrl – Specifies the template to be loaded by URL. Not used if you’ve specified an inline template as a string.
replace – If true, replaces the current element. If false or unspecified, appends this directive to the current element.
transclude – Lets you move the original children of a directive to a location inside the new template.
scope – Creates a new scope for this directive rather than inheriting the parent scope.
controller – Creates a controller that publishes an API for communicating across directives.
require – Requires that another directive be present for this directive to function correctly.
link – Programmatically modifies resulting DOM element instances, adds event listeners, and sets up data binding.
compile – Programmatically modifies the DOM template for features across copies of a directive, as when used in ng-repeat.

The definition object that this directive will use is shown as follows:
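A sketch of such a definition object; the template markup, scope binding, and element name `<webspeech>` are assumptions, not the article’s exact code.

```javascript
// Directive definition object for the speech component.
var definition = {
  restrict: 'E',                 // usable as an element: <webspeech>
  require: '?ngModel',           // optionally bind to ng-model
  replace: true,
  scope: { model: '=ngModel' },  // isolate scope bound to the model
  template:
    '<div class="ng-webspeech">' +
    '<img class="icon"/><span class="message"></span>' +
    '</div>',
  link: function (scope, element, attrs, ngModel) {
    // DOM wiring and event listeners go here (section 2.4).
  }
};
// Registered with:
//   _app.directive('webspeech', function () { return definition; });
```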

2.4 – Directive Logic

Directives that modify the DOM use the link option, which takes a function with the following signature:
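The signature looks like this; the fourth argument is injected because the directive requires ngModel:

```javascript
// The link function's signature.
function link(scope, element, attrs, ngModel) {
  // scope   – the directive's Angular scope
  // element – the jqLite-wrapped element the directive matched
  // attrs   – normalized attribute names and values
  // ngModel – the ngModelController for data-binding
}
```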

Parameter descriptions:

scope – an Angular scope object.
element – the jqLite-wrapped element that this directive matches.
attrs – an object with the normalized attribute names and their corresponding values.
ngModel – the ngModelController object that provides an API for the ng-model directive, with services for data-binding, validation, CSS updates, and value formatting and parsing.

a. Link Function

In order to properly hook into the directive to attach event listeners and manipulate the DOM, provide a link function.

b. Setup default options

Setup the user interface with default options.
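A sketch of setting up defaults inside link(); the option names (`lang`, `continuous`, `interimResults`) mirror the Speech Recognition API but their use here is an assumption:

```javascript
// Apply defaults to the scope, letting element attributes override them.
function setupDefaults(scope, attrs) {
  scope.options = {
    lang: attrs.lang || 'en-US',             // recognition language
    continuous: attrs.continuous === 'true', // keep listening after a result
    interimResults: false                    // only report final results
  };
  scope.recognizing = false;
}
```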

c. Watch the Model

To watch the model for any changes, call the $watch method on the scope.
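A sketch of the watch wiring: `makeModelWatcher` builds the listener that `scope.$watch('model', listener)` would invoke on each change, and `setMessage` stands for the UI utility described in section e. Factoring the listener out this way is an assumption made so the logic can run standalone:

```javascript
// Build a $watch listener that pushes model changes into the UI.
function makeModelWatcher(setMessage) {
  return function (newValue, oldValue) {
    if (newValue !== oldValue) {
      setMessage(newValue);
    }
  };
}
// Inside link():
//   scope.$watch('model', makeModelWatcher(setMessage));
```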

d. Safe $apply

A utility for doing a safe $apply; this method checks whether an $apply or $digest is already in progress before triggering one.
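This is the widely used safeApply pattern; a sketch:

```javascript
// If a $digest/$apply is already in progress, run fn directly;
// otherwise trigger a digest via scope.$apply.
function safeApply(scope, fn) {
  var phase = (scope.$root || scope).$$phase;
  if (phase === '$apply' || phase === '$digest') {
    if (typeof fn === 'function') { fn(); }
  } else {
    scope.$apply(fn);
  }
}
```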

e. Set the message

This is a utility method for setting the message value in the UI.

f. Set the icon

This is a utility method for setting the image icon in the UI.
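Sketches of both UI utilities (sections e and f). The `.message` and `.icon` selectors assume the template shown earlier; note that jqLite’s find() only supports tag-name selectors, so class-based lookups like these assume jQuery is loaded before Angular:

```javascript
// Update the status text in the directive's template.
function setMessage(element, msg) {
  element.find('.message').text(msg);
}
// Swap the microphone image in the directive's template.
function setIcon(element, src) {
  element.find('.icon').attr('src', src);
}
```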

g. Initialize

Now check whether the browser supports the Speech Recognition API.
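A sketch of the feature check; `win` is passed in so it can run outside a browser. At the time of writing, only the webkit-prefixed constructor shipped, but both names are checked:

```javascript
// Feature-detect the (possibly prefixed) Speech Recognition API.
function hasSpeechAPI(win) {
  return 'webkitSpeechRecognition' in win || 'SpeechRecognition' in win;
}
// Inside link():
//   if (!hasSpeechAPI(window)) { /* show the upgrade UI (section h) */ }
//   else { var recognition = new window.webkitSpeechRecognition(); }
```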

h. Show Upgrade UI

Handle changing the UI by setting the message and the icon.

i. Start Handler

Next, handle when the recording starts up.

j. Error Handler

Handle any errors from the Speech Recognition API.
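A sketch that maps SpeechRecognition error codes to user-facing text; the wording is an assumption, though `not-allowed` and `no-speech` are real error codes:

```javascript
// Translate a recognition error code into a message for the UI.
function errorMessage(errorCode) {
  switch (errorCode) {
    case 'not-allowed':
    case 'service-not-allowed':
      return 'Permission to use the microphone was denied.';
    case 'no-speech':
      return 'No speech was detected. Please try again.';
    default:
      return 'Speech recognition error: ' + errorCode;
  }
}
// Inside link():
//   recognition.onerror = function (event) {
//     setMessage(element, errorMessage(event.error));
//   };
```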

k. Result Handler

Now, handle processing the results from the Speech Recognition API.
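A sketch of the result processing: `event.results` is a list of results, each an alternatives list whose first entry carries the transcript. Factoring the concatenation into a pure function is an assumption for clarity:

```javascript
// Concatenate the final transcripts out of a SpeechRecognition result event.
function collectTranscript(event) {
  var transcript = '';
  for (var i = event.resultIndex; i < event.results.length; i++) {
    if (event.results[i].isFinal) {
      transcript += event.results[i][0].transcript;
    }
  }
  return transcript;
}
// Inside link(), using the safeApply utility from section d:
//   recognition.onresult = function (event) {
//     safeApply(scope, function () { scope.model = collectTranscript(event); });
//   };
```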

l. Reset Handler

Handle resetting the UI after recognition is complete.

m. Toggle Button UI

Allow the user to toggle starting and stopping the recognition.
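A sketch of the click handler that toggles recognition; `state` stands in for the scope flag tracking whether we are currently recording:

```javascript
// Build a click handler that starts or stops recognition.
function makeToggleHandler(recognition, state) {
  return function () {
    if (state.recognizing) {
      recognition.stop();
      state.recognizing = false;
    } else {
      recognition.start();
      state.recognizing = true;
    }
  };
}
// Inside link():
//   element.bind('click', makeToggleHandler(recognition, scope));
```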

n. Start the directive

Finally, start the initialization of the directive.

2.5 – Extending

Now that we have the basic structure and logic to get the Web Speech Recognition API working with a custom UI, extending this directive to add additional functionality should be pretty seamless.

The code is available on GitHub, so feel free to contribute more customizable options, keyword event maps, and other logic to make this directive more effective and efficient.


Download the production version or the development version.

Or install via bower:
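The install command might look like the following; the package name is an assumption, so check the project’s README for the exact name:

```shell
bower install ng-webspeech --save
```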

Add to main page:

Add to main script:

Add to view:

Add to controller:
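The four steps above might look like the following sketch; the app name, controller name, script path, and `<webspeech>` element are all assumptions:

```javascript
// Main page:
//   <script src="bower_components/ng-webspeech/dist/ng-webspeech.min.js"></script>
//
// Main script (in the browser):
//   var myApp = angular.module('myApp', ['ngWebspeech']);
//
// View:
//   <div ng-controller="MainCtrl">
//     <webspeech ng-model="message"></webspeech>
//     <p>You said: {{message}}</p>
//   </div>

// Controller: expose the model the directive writes transcripts into.
function MainCtrl($scope) {
  $scope.message = '';
}
// In the browser: myApp.controller('MainCtrl', MainCtrl);
```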


For an example visit the Plunkr.

Be sure to look at the AngularJS resources that you can find in Safari Books Online.

Not a subscriber? Sign up for a free trial.

Safari Books Online has the content you need

Developing an AngularJS Edge is intended for intermediate JavaScript programmers. No attempt has been made to explain the JavaScript syntax used (except in the cases where AngularJS may introduce a peculiarity), nor do we explain concepts such as closures, function chaining, callbacks, or other common patterns. What we do explain are basic AngularJS concepts, components, and their applications. We provide examples along the way, answer questions, and correct common misconceptions. Together, we’ll build a working single-page weblog application using AngularJS, which will help you become proficient with using AngularJS to go out and create your own applications.
Develop smaller, lighter web apps that are simple to create and easy to test, extend, and maintain as they grow. AngularJS is a hands-on guide that introduces you to AngularJS, the open source JavaScript framework that uses Model–view–controller (MVC) architecture, data binding, client-side templates, and dependency injection to create a much-needed structure for building web applications.
Instant AngularJS Starter is designed to get you ramped up on AngularJS as quickly and efficiently as possible. By the end of this book, you’ll possess all of the knowledge you need to make full-featured, real-life applications with AngularJS. The code samples are reusable, and specifically intended to give you a head start on your next project. This book will transform your curiosity about AngularJS into a set of production-ready AngularJS skills, through a broad overview of the framework and deep dives into its key features.

About the author

Jonnie Spratley is currently working for GE as a UI Developer on the Industrial Internet Team building user interfaces for next generation products. He also teaches a variety of programming courses at AcademyX, and can be reached at @jonniespratley.

How To Find The Elusive DevOps Engineer

Since 2007 I’ve been involved in IT projects to build private clouds and other projects for migrating to public clouds.

And what stands out from these experiences is this: a different breed of IT person is needed if you plan to advance your business beyond the typical IaaS (Infrastructure as a Service) model.

In this post, I will share what I have discovered through my trials and tribulations over the last 7 years.

So stick with me and I will share the secret recipe of skills that will help you identify the elusive DevOps Engineer talent who can successfully get your applications running “RIGHT” in the cloud.

I will also warn you up front, it is going to take more than knowing VMware vSphere and Linux to even get you out of the gate…

My Own DevOps Definition


Let’s start with my own clear and concise DevOps definition.


Because if you search the Internet today, you will find there aren’t many clear definitions for DevOps…

…and, your guess is as good as mine which one is right.

So here’s how I like to think of it:

“DevOps is the culmination of behaviors, community, culture and technical talent colliding to improve user experience through tools, technologies, trust and people.”

How was that for a DevOps definition? Share your thoughts in the comments.

Creating A Perfect DevOps Engineer Job Description

Look:

Before we get to the meat of this post, let’s figure out what a DevOps Engineer job description should look like.

First, let’s begin with a DevOps Engineer job search on SimplyHired. Go ahead, check them out then come back.

Do you see what I see?

They’re all the same job description except for a few unique scripting skills.

I plan to give you the secret sauce if you stay with me until the end. Here we go…

10 DevOps Skills To Look for in Job Applicants

#1 – An Impeccable SysAdmin

Must be a senior-level Windows/Linux Administrator (either/or/both depending on your shop) with 5 – 10 years of experience. Why? Because they need to be able to build and administer servers in their sleep. But that’s not the only reason: a lot is riding on someone to automate server deployments, because this is a big problem in most IT shops.

#2 – Virtualization Experience

Must have 3 – 5 years of virtualization experience with VMware, KVM, Xen, Hyper-V, or whichever flavor of hypervisor you are running in your private cloud. Now, they may never get involved in the day-to-day support of the infrastructure work, but they darn well better understand it, because most public clouds are running multiple flavors of virtualization.

#3 – Broad Technical Background

Along with virtualization experience, they must understand storage and networking. Why? Because gone are the days when network and storage are silos. You need people who can design a solution that scales and performs with high availability and uptime. Applicants also need to understand fault tolerance and failure domains so they are not putting all the eggs in one basket.

#4 – Scripting Guru

Have I said they need to be able to script yet? Bash, PowerShell, Perl, Ruby, JavaScript, Python – you name it. They must be able to write code to automate repeatable processes. But we’re not stopping there, because they also need to be able to code against RESTful APIs. That’s right: if you are going to replace manual processes such as assigning IP addresses and DNS reservations, someone needs to write some code.

#5 – Borderline Developer (more is better)

Have I said they need to code in C#, C++, .NET, ASP? No, I am not repeating myself. I am talking about writing scripts that will fire off and orchestrate the complete deployments of DEV, QA, and Production environments via tools such as Chef, Puppet, CFEngine, or other tools of this kind. Why? Because gone are the days when someone installs Windows or Linux from a CD. Nowadays, you fire off a command that shoots out a server build, then triggers another script that installs applications, then licks its lips and shoots off yet other scripts that do configurations and validation checks. Who do you think is going to write all this code? Not a SysAdmin. DevOps Engineers will.

Some would argue he/she doesn’t exist but I disagree. The DevOps Engineer is a new emerging role you soon won’t be able to be without.

#6 – Chef, Puppet or other Automation Tool Experience

I think I already mentioned automation tools such as Chef, but there are others such as Ansible, Fabric, and Git that all have their place on the key chain too. Finding a DevOps Engineer with all this talent will not be easy or cheap. But let’s keep going while I have your attention.

#7 – People Skills

There used to be a free pass for people who were geniuses but just couldn’t get along with anyone. Call them JERKS or other four-letter words, but they were tolerated because nobody else could do what they did. Not the case in today’s world. Fault tolerance and scalability happen at the people level too. And you need people others can go to for assistance without getting their heads taken off with insults. Do your best to find people who can communicate without yelling or fighting. This also segues into the next DevOps skill related to being a human being…

#8 – Customer Service

If you have watched Gene Kim’s video on YouTube, then you have heard how important the feedback loop is. Finding people with all the technical skills I have listed will be hard enough, but now I am adding customer service to the list. Here’s a thought. If applicants have owned a business, then they are probably good at customer service. Finding people who care and can drill down into a conversation with the developer or customer is key to solving problems. It really does take a special person to listen to feedback, especially when the developer or customer is calling someone’s baby ugly. I wish I had a dollar for every time a developer blamed my infrastructure for why they were late on a project, or why their app was slow.

#9 – Real Cloud Experience

We’re almost there. The ninth DevOps skill you want is experience deploying applications in Amazon AWS, Google or Azure. Real stuff that was measured in successes. Why? Because there’s a shortage of people who understand IaaS versus PaaS, stateful versus stateless, and something known as loosely coupled apps. It’s no longer about fork-lifting existing servers and applications to the cloud; now it’s about designing and deploying applications using the “best of the best” that Amazon, Azure, and Google have to offer. We’re talking about doing what the people building clouds are doing: leveraging software-defined data centers to code true PaaS environments. We’re talking compute, network, and storage resources at developers’ fingertips.

Number 10 – Someone Who Cares

So as we come to the final skill, which is dear to my heart, I want to say it’s not common. Why? Because most IT people are – well – IT people, and want to be left alone in a dark corner. Finding someone with all this skill is rare and worth every dollar. But now I am talking about someone who cares and can mentor others. Someone who is willing to share their ideas and scripts with the team. Someone who can lead people and get people thinking together about solving problems. Far too often the real problem with IT is that IT people don’t talk, or should I say, they don’t listen!

The Search is Over or Should I Say is Just Beginning?

These are the 10 DevOps skills to look for in applicants while you screen resumes and people for the elusive DevOps engineer position you have posted on Dice or LinkedIn.

It won’t be easy to find applicants, and you will most likely need a strategy to create the right set of DevOps interview questions.


Because there aren’t many managers or recruiters around with the right mindset to write them…

…and even less who understand what DevOps is.

The other option is to develop DevOps skills in-house which in some cases is less disruptive.

The $ecret $auce (Hint)

Finally, I want to cover what a DevOps Engineer salary may be.

Let’s look at a salary graph from SimplyHired.

On the top end (we’re talking seasoned) the range might be a little low, while on the low-end the range is good.


DevOps Salary Graph Compliments of SimplyHired

Think about what I just covered in this post.

We’re not talking a network or server engineer who might make anywhere from 85 – 110K.


We are talking an elusive skill set not many people in the world currently have.

Finding someone with 6 out of 10 of the skills listed above would be a prize!

So now I am going to share the secret sauce.

How important is it to you to do things right the first time?

Or should I say, how much are you willing to pay to do it a second or third time, or until someone gets it right?

You see, what I have learned in the last 7 years is that businesses can always afford to pay twice, yet they never understand the value of paying enough for the right people to do it right once.

Championships are won by the right people and leaders who leave it all on the field, or court, when it counts most.

Rule of thumb:

A DevOps Engineer Salary is more than enough but less than having to pay twice or three times the amount to do the same work over…

What’s next…

If you’re a Windows SysAdmin and you’re not sure where to start, check out my NEW Ultimate Guide for Microsoft DevOps.

Want to learn where you can get many of the skills I covered?

Check out my new post called DevOps Training for Beginners.

Or Download My Free DevOps eBook!

Get all my DevOps lessons in an easy to read Free DevOps eBook.

Difference between ASP.NET Core MVC and ASP.NET MVC 5


The best way to learn what’s new in any technology is to compare it with its earlier version. Here we will understand the difference between ASP.NET Core MVC and ASP.NET MVC 5 by creating a sample application with each and comparing the project solution structure between them.

We can find many differences between ASP.NET Core MVC and ASP.NET MVC 5 in the solution structure itself; let’s explore them without writing any code.

ASP.NET Core is a lean and composable framework for building web and cloud applications. ASP.NET Core is fully open source.

Being fully open source is no easy task; Microsoft has done some amazing work making it run across Windows, Mac, and Linux. ScottGu’s blog post Introducing ASP.NET 5 is excellent reading to understand its features.

A quick look at ASP.NET Core improvements

  • Build and run cross-platform ASP.NET apps on Windows, Mac and Linux
  • Built on .NET Core, which supports true side-by-side app versioning
  • New tooling that simplifies modern Web development
  • Single aligned web stack for MVC and Web API
  • Cloud-ready environment-based configuration
  • Integrated support for creating and using NuGet packages
  • Built-in support for dependency injection
  • Ability to host on IIS or self-host in your own process

Difference between ASP.NET Core MVC and ASP.NET MVC 5 in 10 points

First, create an ASP.NET Core MVC application and an ASP.NET MVC 5 application using Visual Studio 2015 Community Edition against .NET Framework 4.6.

Difference 1 – Single aligned web stack for ASP.NET Core MVC and Web APIs

ASP.NET MVC 5 gives us the option of choosing MVC or Web API or both while creating a web application. This was because the web stacks for MVC 5 and Web API were not the same.

ASP.NET Core MVC now has a single aligned web stack for MVC and Web API. The image below shows the check boxes greyed out for MVC and Web API, while MVC 5 gives the option to add Web API.


Difference 2 – Project(Solution) Structure Changes

If you look at the ASP.NET Core MVC solution explorer on the right-hand side, there is no Web.config or Global.asax. How, then, does it deal with configuration settings, authentication, and application-startup code?

Project.json and appsettings.json are some of the files that take over the work of the files missing from ASP.NET MVC 5. There are many more changes if we look at it folder by folder.


Difference 3 – ASP.NET Core MVC targets Full .NET and .NET Core

We have been working on the full .NET Framework; it has been an amazing experience so far and will continue to be. So what is .NET Core?

.NET Core is a general purpose development platform maintained by Microsoft and the .NET community on GitHub. It is cross-platform, supporting Windows, macOS and Linux, and can be used in device, cloud, and embedded/IoT scenarios.

Cross-platform! Yes, now we can develop ASP.NET Core web apps against .NET Core and run them on Windows, Linux, or Mac. Take a quick look at the image.

Wait, it’s not over yet: we can develop not only on Windows but also on Linux and Mac, using Visual Studio Code or any other code editor like Vim, Atom, or Sublime Text.


Difference 4 – ASP.NET Core apps don’t need IIS for hosting

Don’t be surprised: the goal of ASP.NET Core is to be cross-platform using the .NET Core framework. With this in mind, Microsoft decided that ASP.NET Core applications would not be tied to IIS; they can be self-hosted or fronted by the nginx web server on Linux. Kestrel is the internal web server for request processing.

Difference 5 – wwwroot is now place for static files

The wwwroot folder represents the actual root of the web app when running on a web server. Files like config.json that are not in wwwroot will never be accessible, and there is no need to create special rules to block access to sensitive files.

These static files might be plain HTML, JavaScript, CSS, images, libraries, etc.


In addition to the security benefits, the wwwroot folder also simplifies common tasks like bundling and minification, which can now be more easily incorporated into a standard build process and automated using tools like Grunt.

The “wwwroot” name can be changed in project.json under “webroot”: “Demowwwroot”.
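For example, a hypothetical project.json fragment renaming the web root (surrounding properties omitted):

```json
{
  "webroot": "Demowwwroot",
  "version": "1.0.0-*"
}
```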




Difference 6 – New approach to server-side and client-side dependency management

Any .NET developer is familiar with the References folder, which holds all the DLLs and NuGet packages for a particular .NET Framework version. In ASP.NET Core development in Visual Studio, we can target DNX 4.5.1 and DNX Core 5.0.

Leverage the experience of working in the Visual Studio IDE and deploy ASP.NET 5 applications on Windows, Linux, or Mac using .NET Core. This is server-side dependency management.

Client-side dependency management is more important because the client side needs very different packages from the server side. The client side will surely have jQuery, Bootstrap, Grunt, JavaScript frameworks like AngularJS or Backbone, images, and style files.

Client-side package management in the open source community has two great names: “Bower” and “npm”. They appear as part of “Dependencies”.


Difference 7 – Server side packages save space in ASP.NET Core

We have been using the NuGet package manager to add references to assemblies, libraries, frameworks, and third-party packages. They are downloaded from NuGet, which creates a “Packages” folder in the project structure.

Imagine 30 sample applications, all of them using NuGet packages to reference dependencies, each costing approximately 70 MB of disk space; we end up using nearly 2 GB of disk space to store packages even though they are all the same.

Some smart developers know this issue and have workarounds of their own.

ASP.NET Core instead stores all the packages related to its development in the Users folder, and while creating ASP.NET Core applications, Visual Studio references them from the Users folder.

Now even if you have 100 sample ASP.NET 5 applications, they all reference the dotnet packages in the Users folder, which is close to 400 MB as of now.


Difference 8 – Inbuilt Dependency Injection (DI) support for ASP.NET Core

Dependency Injection (DI) achieves loosely coupled, more testable code; it is very important because it has become something of a coding standard.

In ASP.NET MVC 5/4 or classic ASPX-based applications, we used to have separate DI containers like Unity, Autofac, or StructureMap. We had to set up our project to use DI, which was additional effort.

Now in ASP.NET Core applications, dependency injection is built in, i.e. there is no setup headache for DI. Just create some services and they are ready to be injected.

In fact, the sample Core MVC application has DI built in; open “Startup.cs” and look for the “ConfigureServices(IServiceCollection services)” method. Its main purpose is the configuration of services like EF, authentication, and MVC, plus hand-written custom services like IEmailSender and ISmsSender.


Difference 9 – User Secrets of ASP.NET Core

Many times we keep sensitive data inside the project tree during development, and we often mistakenly share these secrets with others through sharing code or accidentally adding it to TFS (source control). Most of us have experienced this at one time or another.

ASP.NET Core based applications now have the concept of User Secrets: if we look at the “project.json” file, we see that “userSecretsId” is present, and the Secret Manager tool uses this ID to generate user secrets.

The Secret Manager tool provides a more general mechanism to store sensitive data for development work outside of your project tree.

The Secret Manager tool does not encrypt the stored secrets and should not be treated as a trusted store. It is for development purposes only.

There are many more differences compared to ASP.NET MVC 5/4, but if we can find all these without writing a single line of code, it shows how far Microsoft has moved in making the platform open source.

7 reasons service oriented architecture and REST are perfect together

It’s not a question of either/or with SOA and REST. It’s a matter of how the two design approaches can be brought together.

Service oriented architecture and REST — which is seen more with cloud and social applications — are actually highly compatible approaches. SOA and REST have a lot in common, and it’s time the two started interacting. 

It’s not a question of either/or. Organizations would be best served adopting SOA and REST in tandem. “Both SOA and REST are commonly described as distinct architectural styles, each with its own design approaches that is carried out to attain specific design goals,” relates a new book, SOA With REST: Principles, Patterns & Constraints for Building Enterprise Solutions with REST. “On the surface, it may appear as though we need to choose one architectural style over the other, depending on our own individual preferences and goals. However, that is not the case.”

The book, co-authored by Thomas Erl, Benjamin Carlyle, Cesare Pautasso, and Raj Balasubramanian, makes the point that one is the medium by which the other can be implemented:

“The choice is not between SOA and REST but rather whether REST is the correct implementation medium for a service-oriented technology architecture, or whether service-oriented architecture is the correct architectural model by which a REST architecture should be formalized. The answer to either question depends on the business requirements that need to be fulfilled.”

Erl and his colleagues draw distinctions as well as similarities between the two approaches, while noting that “there are no conflicts between REST and service-oriented computing goals:”

  • Service-oriented computing goals are strategic and business-centric.
  • REST goals are technology-centric and can help achieve strategic or tactical business goals.
  • While not all REST design goals are relevant to each service-oriented computing goal, most REST design goals are directly supportive of service-oriented computing goals.

The book describes common design goals for SOA and REST:

  1. Increased intrinsic interoperability: “All of the REST design goals directly or indirectly support and enhance the interoperability potential of services within a service inventory.”
  2. Increased federation: “Both REST and service oriented architecture have similar effects on federation. While the application of REST constraints can lead to consistency across service contracts with freedom from business context, service-orientation can add architectural layers that can drive an organization to achieve federation over a broader scope.”
  3. Increased vendor diversity options: “Both service-oriented architecture and REST-style architecture advocate abstracting away service implementation details from service consumers to avoid negative forms of coupling that can inhibit vendor product independence.”
  4. Increased business and technology alignment: This is the foundation of SOA, and “the primary means by which REST can support this goal is in its emphasis on building flexibility into technology architecture.”
  5. Increased ROI: SOA “has a primary focus on achieving return on investment through reusability, normalized service inventories, and mechanisms that enable the effective composition and re-composition of services. Reusability is one of the aspects of the modifiability design goal in that REST advocates leveraging reuse as a means of modifying, evolving and adding solution logic.”
  6. Increased organizational agility: While organizational agility (the holy grail of SOA) is a business-centric goal not directly addressed by REST, “each REST design goal can directly contribute to improving an organization’s responsiveness. The REST constraints that directly contribute to agility on an organizational level are those that support abstraction, evolvability, and unforeseen change.”
  7. Reduced IT burden: SOA subdues IT burdens by breaking down departmental silos, reusing and composing services, and decoupling services from their consumers so they can be upgraded independently. “REST constraints make scaling services more efficient, individual services more reliable, and service upgrades more efficient.”


Is REST the future for SOA?



It seems like everywhere we turn we keep hearing that SOA’s future is REST. There are a lot of publications comparing REST to SOAP and WS*[1], but such comparisons seem too simplistic. There are two main approaches that have emerged lately – true REST and REST as a technology approach for services (aka REST Web Services[2]). In this article I will discuss whether either of these approaches can improve SOA implementations.

True REST for SOA

True REST is effectively an implementation of Resource-Oriented Architecture and not a pure technology decision. So the right question to ask when discussing true REST is whether its underpinning – ROA – is a good fit for your SOA implementation.

In order to assess the problem correctly, let’s first recall that the SOA architectural style [2] is based on a functional decomposition of enterprise business architecture and introduces two high-level abstractions: enterprise business services and business processes. Enterprise business services represent existing IT capabilities (aligned with the business functions of the enterprise). Business processes, which orchestrate business services, define the overall functioning of the business.

REST, on the other hand, is a set of architectural guidelines [3] expressed as Resource-Oriented Architecture (ROA). ROA is based upon the concept of resources; each resource is a directly-accessible distributed component that is handled through a standard, common interface. So the foundation of ROA is a resource-based decomposition[3].

In order to assess the applicability of true REST for the implementation of SOA, the real question that we need to answer is “What is the relationship between a service and a resource?”

Services vs. Resources

What is a service?

In the simplest case, a service can be defined as a self-contained, independently developed, deployed, managed, and maintained software implementation supporting specific business-relevant functionality for the enterprise as a whole, and one that is “integratable” by design. A service is defined by a verb (for example, “validate customer’s credit score”) that describes the business function it implements.

A service is not a programming construct or a set of APIs, but rather an architectural artifact (a unit of design, implementation, and maintenance) and a deployment artifact used for the implementation of enterprise solutions. The service functionality is defined by a service interface (specific to a given service), which can be supported by multiple implementations. There are two basic ways of defining a service interface – RPC-style and messaging-style. RPC-style implementations use service invocation semantics, defined through a set of parameters in the service interface. In the messaging style, a service interface is effectively fixed – it essentially performs “execute” – with an XML document as input and output (much like the GoF command pattern). The service semantics, in this case, are defined by the semantics of the input and output messages[4].
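The messaging-style, command-pattern interface described above can be sketched in a few lines of Python (JSON is used here instead of XML for brevity; the command and handler names are hypothetical):

```python
# A minimal sketch of a messaging-style service interface: one fixed
# "execute" entry point, with the command and its data carried in the
# message itself (GoF command pattern). All names are illustrative.
import json

def execute(request_doc: str) -> str:
    """Single fixed entry point; the semantics live in the message, not the API."""
    request = json.loads(request_doc)
    handlers = {
        "validateCreditScore": lambda data: {"approved": data["score"] >= 650},
    }
    handler = handlers[request["command"]]
    return json.dumps({"result": handler(request["data"])})

reply = execute(json.dumps({"command": "validateCreditScore",
                            "data": {"customerId": "42", "score": 700}}))
print(reply)
```

Note that adding a new business operation changes only the message vocabulary, not the service interface itself.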

Historically, services are often defined as a collection of methods, but as explained in [2], these methods are independent of each other[5], and such a collection serves as a namespace, simplifying the management of the services.

What is a resource?

In the simplest case, a resource can be defined as a directly-accessible, independently developed, deployed, managed, and maintained software artifact supporting specific data. A resource is defined by a noun (for example, “doctor’s appointment”) that describes the data provided by the resource. A resource can also relate to other resources and provide references (links) to them. In effect, a resource is similar to an object[6], but with a predefined (CRUDish) interface semantic.

The semantics in REST are based on the set of HTTP operations and look as follows [5]:

  • createResource – Create a new resource (and the corresponding unique identifier) – PUT
  • getResourceRepresentation – Retrieve the representation of the resource – GET
  • deleteResource – Delete the resource (optionally including linked resources) – DELETE (referred resource only) or POST (can be used if the delete includes linked resources)
  • modifyResource – Modify the resource – POST
  • getMetaInformation – Obtain meta information about the resource – HEAD
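The fixed, CRUDish interface listed above can be sketched as follows (illustrative Python, not tied to any framework; the resource type and URL scheme are assumptions):

```python
# Sketch of a resource with the predefined CRUDish interface: every
# operation except create_resource is addressed to an existing URL.
class Resource:
    def __init__(self):
        self._store = {}   # url -> representation
        self._next_id = 0

    def create_resource(self, representation):          # PUT
        self._next_id += 1
        url = f"/appointments/{self._next_id}"
        self._store[url] = representation
        return url

    def get_resource_representation(self, url):         # GET
        return self._store[url]

    def modify_resource(self, url, representation):     # POST
        self._store[url] = representation

    def delete_resource(self, url):                     # DELETE
        del self._store[url]

    def get_meta_information(self, url):                # HEAD
        return {"exists": url in self._store}

appointments = Resource()
url = appointments.create_resource({"doctor": "Smith", "time": "10:00"})
print(appointments.get_resource_representation(url))
```

The shape of the class mirrors the point made below: all operations except create must target the same underlying resource (same URL).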

A resource is defined by its URL and definition of inputs/outputs for every operation supported by a resource[7]. Unlike a service, where methods are completely independent and can be deployed as independent endpoints, methods on a resource follow OO semantics, which means that all of them (except createResource) have to exist on the underlying resource (same URL).

Basic differences between Resources and Services

Based on the above definitions of resources and services, it seems intuitively obvious that they are very different. Let’s delve into these differences first, and then discuss how they can impact the resulting architecture.

As stated in [6]:

“Not only is REST not service oriented, service orientation is irrelevant for REST”

And [7] goes even further, explaining the differences between the two as follows:

“If WS-* is the RPC of the Internet, REST is the DBMS of the internet… Traditional SOA based integration visualizes different software artifacts being able to interact with each other through procedures or methods. REST effectively allows each software artifact to behave as a set of tables, and these artifacts talk to each other using SELECT, INSERT, UPDATE and DELETE (or, if you wish, GET, PUT, POST, DELETE). And where exactly is the business logic? Is it in the stored procedures? Not quite. It’s in the triggers.”

Here we will use a slightly different analogy, one based on J2EE. We can think of services as stateless session beans and resources as entity beans.

Services – session beans – serve as controllers allowing execution of a required operation, regardless of the underlying resource. For example, a debit account service might take the account ID and the amount, and debit the required account. A single service can debit any of the existing accounts.

Resources – aka entity beans – serve as a data access mechanism for a given instance of a given data type. For example, in order to debit a certain account, it is necessary to find the resource representing this account and then update it to debit the required amount. An additional caveat here is that, unlike an entity bean, which can implement any required method, a REST resource has only a single modifyResource method. This means that the actual business operation, the debit, has to be encoded as part of the request.
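The contrast can be made concrete with a small sketch (hypothetical names throughout): the service style exposes the business operation directly, while the resource style has only a generic modify and must carry the operation inside the request body:

```python
# Illustrative contrast between a service-style debit and a resource-style
# debit in which the business operation travels inside the request.
accounts = {"acct-1": 100.0, "acct-2": 250.0}

# Service style (session-bean-like): one controller operation that can
# debit ANY account.
def debit_account(account_id, amount):
    accounts[account_id] -= amount

# Resource style (entity-bean-like): only a generic "modify" exists, so
# the debit operation itself is encoded in the request (command pattern).
def modify_resource(account_id, request):
    if request["operation"] == "debit":
        accounts[account_id] -= request["amount"]

debit_account("acct-1", 30.0)
modify_resource("acct-2", {"operation": "debit", "amount": 50.0})
print(accounts)
```

Both calls achieve the same result, but only the second has to smuggle the verb through a data field.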

What does this mean?

Based on the above, it is impossible to build an SOA system using true REST. It is possible to build a system, but it won’t be SOA. Both start with a business-aligned decomposition, but because they use very different decomposition approaches, they result in completely different architectural styles[8] based on different sets of components and connectors.

Just because both are trying to solve the same problem – business/IT alignment – and both are based on business-driven decomposition does not mean that the results will adhere to the same architectural style.

Another question is whether it is possible to build a complete system using true REST. Based on the above, this is equivalent to asking whether it is possible to build a complete system using only a database or entity beans. Certainly you could, but it would require adding procedural code in the form of stored procedures (overriding the meaning of the methods) or triggers (doing post-processing based on data changes). The same is typically true for a true REST implementation – you have to change the meaning of the modifyResource method (often using a command pattern) to do more than a data update.

As a result, a REST-based implementation is rarely true REST; it typically includes at least some elements of REST Web Services. So what does it mean to be a REST Web Service?

REST Web Services

The REST Web Services approach uses REST purely as a communication technology for building SOA. In this case, services are defined using SOA-style decomposition, and REST-based Web Services[9] are leveraged as a transport.

Although commonly referred to as REST, this approach has nothing to do with true REST; it is similar to POX (plain old XML over HTTP), with the difference that, in addition to XML, it supports multiple other data-marshalling formats, ranging from JavaScript Object Notation (JSON) to Atom to binary blobs, and leverages additional HTTP methods, whereas POX is typically based on GET and PUT.

Using JSON became a very popular approach due to the advances of the Web and the widespread adoption of Ajax technology; the majority of modern browsers have built-in support for JSON. Since it is a non-trivial task to process XML (especially with multiple namespaces) in JavaScript, it is much easier for web-based implementations to use JSON-based REST Web Services. The proliferation of REST Web Services for Web interactions led to the increased popularity and wide spread of these technologies.
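The point about namespaced XML being harder to consume than JSON can be illustrated with Python’s standard library (the payloads and namespace URI are made-up examples):

```python
# Why JSON won for browser-facing services: extracting a value from JSON
# is one call, while namespaced XML needs explicit namespace handling.
import json
import xml.etree.ElementTree as ET

json_payload = '{"appointment": {"doctor": "Smith", "time": "10:00"}}'
doctor_from_json = json.loads(json_payload)["appointment"]["doctor"]

xml_payload = ('<a:appointment xmlns:a="http://example.com/appt">'
               '<a:doctor>Smith</a:doctor></a:appointment>')
root = ET.fromstring(xml_payload)
# The namespace URI must be spelled out to find the element.
doctor_from_xml = root.find("{http://example.com/appt}doctor").text

print(doctor_from_json, doctor_from_xml)  # Smith Smith
```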

What is the real difference?

Publications describing the differences between SOAP and REST typically point to the following advantages of REST Web Services, for example [11]:

  • “Lightweight – not a lot of extra xml markup
  • Human Readable Results
  • Easy to build – no toolkits required”

Although these differences are important (I will discuss them in detail later in the article), the main difference between SOAP and REST is the fact that while REST is implemented directly on top of the HTTP protocol, SOAP introduces an abstraction layer (SOAP messaging) that can be implemented on top of any transport. Standardized SOAP bindings currently exist for HTTP, SMTP, and JMS, but non-standard bindings have been implemented for other transport solutions. This additional abstraction layer, which decouples SOAP-based implementations from existing transports, is the root cause of the major differences between SOAP and REST Web Services.

The opinions about this abstraction layer vary significantly depending on whom you talk to. The REST camp considers it to be over-engineering and claims that it does not provide any real value. They claim that HTTP already provides all of the features required for implementation of services interactions. The SOAP camp, on the other hand, will argue that HTTP is not the only transport that is typically required for service interactions (especially inside the enterprise) and having a portable, extensible[10] abstraction layer is necessary for building robust, feature-rich service interactions.

Although both points of view have their merits, in my experience trying to limit SOA implementation to a single transport – HTTP – rarely works in practice. Yes, HTTP is ubiquitous and its usage typically does not require any additional infrastructure investments, but it is not reliable (HTTP-R is not widely adopted), synchronous only[11] (creating temporal coupling), does not have transactional semantics and so on.

Additionally, even if HTTP is the only transport used in an implementation, the SOAP envelope can come in very handy for a clean separation of business data (SOAP Body) and infrastructure or out-of-band data (SOAP Headers) in SOAP messages. And finally, if your original implementation does not require any infrastructure or out-of-band data, the overhead of the SOAP envelope is minimal – two tags – yet it provides a well-defined way of adding such data as the need arises.
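The separation the envelope buys can be sketched with Python’s standard XML tooling; the element names follow SOAP 1.1, while the header content (a correlation id) and the body payload are hypothetical examples:

```python
# Sketch of the Header/Body split: out-of-band data goes in the Header,
# business payload in the Body. Envelope element names are SOAP 1.1;
# the child elements are illustrative assumptions.
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"
ET.register_namespace("soap", SOAP_NS)

envelope = ET.Element(f"{{{SOAP_NS}}}Envelope")
header = ET.SubElement(envelope, f"{{{SOAP_NS}}}Header")
ET.SubElement(header, "correlationId").text = "abc-123"   # infrastructure data
body = ET.SubElement(envelope, f"{{{SOAP_NS}}}Body")
ET.SubElement(body, "debitAccount").text = "acct-1"       # business data

print(ET.tostring(envelope, encoding="unicode"))
```

Infrastructure concerns can now grow (security tokens, addressing, correlation) without ever touching the business payload.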

So, at the end of the day, data enveloping with separation of business and infrastructure concerns is a very powerful paradigm, which is often used even in the REST Web Services implementations. Whether to use a standardized SOAP or a custom enveloping schema[12] has to be decided by a specific implementation.

Other differentiators

Let’s take a moment to discuss some of the other differentiators between SOAP and REST Web Services often cited in publications.


Simplicity

A popular opinion is that REST is much simpler than SOAP. According to its proponents, REST’s simplicity stems from the fact that REST does not require WSDL or any interface definition. Such statements are naïve at best. No matter which technology is used for communication between service consumer and provider, the two must still agree on both the syntax and the semantics of their message exchange (interface)[13]. This means that in the case of REST, one of two approaches is possible:

  • Defining an interface in a text document and “manually” coding data marshalling/unmarshalling based on the common interface definition described in that document. Although this approach is often promoted by REST advocates, it rarely scales beyond 10–15 elements – far fewer than a typical coarse-grained REST service requires. Besides, such an approach is very error-prone, and as a result most of the available REST frameworks have abandoned it in favor of the next approach.
  • Defining an interface at the XSD level and generating the data marshalling/unmarshalling with a preferred framework (for example, JAXB or Castor in the case of an XML payload, or Jackson in the case of a JSON payload). Such an approach is, in effect, a minimalistic version of WSDL and requires about the same amount of effort as a SOAP-based implementation. In fact, exactly the same approach is often used in SOAP-based implementations, leveraging a single interface and a command pattern for service execution. The extension of this approach is the usage of WSDL 2.0 [13] and/or WADL [14] for REST.
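A Python stand-in for the second approach might look like the sketch below: the interface is a typed data contract, and (un)marshalling is driven by it rather than hand-coded. (In the Java stacks mentioned above, this role is played by JAXB or Jackson; the contract here is a made-up example.)

```python
# The interface as a typed contract; marshalling code is derived from it
# rather than written by hand for every field.
from dataclasses import dataclass, asdict
import json

@dataclass
class CreditScoreRequest:      # the contract agreed between consumer and provider
    customer_id: str
    score: int

def unmarshal(payload: str) -> CreditScoreRequest:
    return CreditScoreRequest(**json.loads(payload))

wire = json.dumps(asdict(CreditScoreRequest(customer_id="42", score=700)))
request = unmarshal(wire)
print(request.score)  # 700
```

Changing the contract changes the marshalling automatically, which is exactly why this approach scales where hand-coding does not.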

Another common complaint about SOAP is the perceived complexity of the WS* standards [15]. Although there is no single spec that lays out the key WS* standards and their interrelationships, there is a standard for the majority of service-interaction use cases. Granted, choosing an appropriate WS* standard and using it might require some extra understanding and implementation time, but [16]:

“Arguing simplicity versus standards is ridiculous in the war between REST and SOA because simplicity without standards is just as detrimental to the costs and manageability of an application “

So, with the exception of the most simplistic examples like “temperature converters”, REST is not any simpler than SOAP.


Lightweight

Another reason why many REST proponents advocate REST as an alternative to SOAP is that in REST both requests and responses can be short. There are two main reasons for this:

  • SOAP requires an XML wrapper around every request and response, which increases the size of a message. Although this is true, the important thing to consider here is not how many bytes the wrapper adds, but rather the percentage of overhead it creates. Because the wrapper size is constant, this percentage decreases as the size of the message grows, eventually becoming negligible. Considering that a typical service is fairly coarse-grained, the size of the request and reply is fairly large and consequently the overhead of a SOAP envelope is rarely an issue.
  • SOAP is XML-based messaging, which uses verbose encoding. REST, on the other hand, provides a more lightweight messaging alternative – JSON[14]. Although this is true, usage of the Message Transmission Optimization Mechanism (MTOM) [17], supported by most SOAP frameworks, allows for splitting messages into a minimal XML-based SOAP Envelope/Header/Body part and additional parts containing message content that can be encoded as any MIME type, including JSON, binary streams, etc.

Although in theory REST is lighter-weight than SOAP, in practice, with some advanced SOAP design techniques, the difference between the sizes of realistic SOAP and REST messages can be made minimal.
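A back-of-the-envelope calculation makes the first bullet’s point concrete: a constant-size wrapper shrinks as a percentage of the message as the payload grows. The 300-byte envelope size is an illustrative assumption, not a measured figure:

```python
# Constant wrapper overhead as a percentage of total message size.
# ENVELOPE_BYTES is an assumed, illustrative envelope size.
ENVELOPE_BYTES = 300

for payload_bytes in (200, 2_000, 20_000, 200_000):
    overhead = ENVELOPE_BYTES / (ENVELOPE_BYTES + payload_bytes) * 100
    print(f"{payload_bytes:>7}-byte payload -> {overhead:.1f}% envelope overhead")
```

For a 200-byte payload the wrapper dominates, but for a coarse-grained 200 KB message it is a rounding error.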

Easy to build – no toolkits required

Because REST is based on HTTP, one of the claims of its proponents is that one can use familiar technologies like the Java servlet API and Java HTTP support to write REST service implementations and clients without any specialized toolkits. This is true if one wants to build input/output messages and implement data marshalling “manually”. The same can be done for SOAP Web Services as well. However, people rarely want to write such boilerplate code and, as a result, use toolkits for both SOAP and REST [18].


Conclusion

REST can be used both as a system design approach leveraging ROA (the true REST approach) and as an SOA implementation approach leveraging REST technologies (REST Web Services). Although both approaches have their merits, neither changes the hardest part – defining business services/resources aligned with the enterprise business model. There are cases for both SOA and ROA, but at the end of the day these are two very different styles.

About the Author

Boris Lublinsky is a principal architect at NAVTEQ, where he works on defining the architecture vision for large-scale data management and processing and for SOA, and on implementing various NAVTEQ projects. He is also an SOA editor for InfoQ and a participant in the SOA RA working group in OASIS. Boris is an author and frequent speaker; his most recent book is “Applied SOA”.


Acknowledgements

I am thankful to my NAVTEQ colleagues, especially Jeffrey Herr, for help in writing this article. Thanks also to Stefan Tilkov and Kevin T. Smith for providing interesting feedback (often negative) that helped me improve the article.


References

1. Cesare Pautasso, Olaf Zimmermann, Frank Leymann. RESTful Web Services vs. “Big” Web Services: Making the Right Architectural Decision

2. Boris Lublinsky. Defining SOA as an architectural style

3. Resource oriented architecture

4. Martin Fowler. Richardson Maturity Model: steps toward the glory of REST

5. Resource Oriented Architecture and REST 

6. Dhananjay Nene. Service oriented REST architecture is an oxymoron. 

7. Dhananjay Nene. REST is the DBMS of the Internet 

8. Dhananjay Nene. Musings on REST. 

9. Jørgen Thelin. A Comparison of Service-oriented, Resource-oriented, and Object-oriented Architecture Styles 

10. Richard Hubert. Convergent Architecture: Building Model Driven J2EE Systems with UML. Wiley, 2001 ISBN: 0471105600

11. Arun Gandhi. SOAP vs. REST – The Best WebService. 

12. Please see this link.

13. Lawrence Mandel. Describe REST Web services with WSDL 2.0 

14. Web Application Description Language

15. Stefan Tilkov. Interview with Sanjiva Weerawarana: Debunking REST/WS-* Myths.

16. Lori MacVittie. SOAP vs REST: The war between simplicity and standards. 

17. Please see this link. 

18. Mark Little. A Comparison of JAX-RS Implementations.

[1] See, for example, an excellent comparison in [1]

[2] I am using here a term that technically makes no sense and is not REST, but it is widely adopted in industry and considered to be REST by many people.

[3] By definition, a resource is any component deserving to be directly represented and accessed

[4] A typical implementation of such a service is based on the “command pattern”. An input document defines both the command itself and the data used by this command.

[5] Method independence stems from the fact that although different methods can be executed on the same data, that data is enterprise data, which exists regardless of whether it is exposed by services – not object-instance-specific data as in OO.

[6] For example, [4] makes a direct analogy between OO and REST

[7] Many of the REST proponents claim that the latter is not necessary. We will return to this point later in the article.

[8] Architectural styles are “like “design patterns” for the structure and interconnection within and between software systems” [9]. A more holistic definition of architectural style, provided by [10], states that “an architectural style is a family of architectures related by common principles and attributes”.

[9] Another misnomer popular in industry – Web Services are SOAP by definition.

[10] All of the WS* implementations rely heavily on SOAP, more specifically on SOAP headers.

[11] You can always implement asynchronous messaging on top of HTTP, but you need an additional abstraction layer, for example SOAP with WS-Addressing, on top of it.

[12] Many REST proponents will argue that HTTP already has a set of standard headers and thus SOAP headers are completely unnecessary. The issue here is that the predefined set of HTTP headers [12] has very well-defined semantics, and any application-specific data requires a custom HTTP header, which creates the same level of complexity as a custom SOAP header.

[13] An interesting example of this is the usage of client APIs in many JAX-RS implementations, where the interface is effectively a Java interface – so much for multi-language support.

[14] As always, simplicity comes at a price – try to implement polymorphism in JSON messages without manually encoding object types.

The Conflict Between Continuous Delivery and Traditional Agile

In working with development teams at organizations that are adopting Continuous Delivery, I have found there can be friction over practices that many developers have come to consider the right way for Agile teams to work. I believe the root of the conflict between what I’ve come to think of as traditional agile and CD is the approach to making software “ready for release”.

Evolution of software delivery

A usefully simplistic view of the evolution of ideas about making software ready for release is this:

  • Waterfall believes a team should only start making its software ready for release when all of the functionality for the release has been developed (i.e. when it is “feature complete”).
  • Agile introduces the idea that the team should get their software ready for release throughout development. Many variations of agile (which I refer to as “traditional agile” in this post) believe this should be done at periodic intervals.
  • Continuous Delivery is another subset of agile, in which the team keeps its software ready for release at all times during development. It differs from “traditional” agile in that it does not involve stopping to make a special effort to create a releasable build.

Continuous Delivery is not about shorter cycles

Going from traditional Agile development to Continuous Delivery is not about adopting a shorter cycle for making the software ready for release. Making releasable builds every night is still not Continuous Delivery. CD is about moving away from making the software ready as a separate activity, and instead developing in a way that means the software is always ready for release.

Ready for release does not mean actually releasing

A common misunderstanding is that Continuous Delivery means releasing into production very frequently. This confusion is made worse by the use of organizations that release software multiple times every day as poster children for CD. Continuous Delivery doesn’t require frequent releases, it only requires ensuring software could be released with very little effort at any point during development. (See Jez Humble’s article on Continuous Delivery vs. Continuous Deployment.) Although developing this capability opens opportunities which may encourage the organization to release more often, many teams find more than enough benefit from CD practices to justify using it even when releases are fairly infrequent.

Friction points between Continuous Delivery and traditional Agile

As I mentioned, there are sometimes conflicts between Continuous Delivery and practices that development teams take for granted as being “proper” Agile.

Friction point: software with unfinished work can still be releasable

One of these points of friction is the requirement that the codebase not include incomplete stories or bugfixes at the end of the iteration. I explored this in my previous post on iterations. This requirement comes from the idea that the end of the iteration is the point where the team stops and does the extra work needed to prepare the software for release. But when a team adopts Continuous Delivery, there is no additional work needed to make the software releasable.

More to the point, the CD team ensures that their code could be released to production even when they have work in progress, using techniques such as feature toggles. This in turn means that the team can meet the requirement that they be ready for release at the end of the iteration even with unfinished stories.
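A minimal sketch of the feature-toggle technique mentioned above (all names are hypothetical): the unfinished code path ships dark, so the build stays releasable while the story is in progress.

```python
# Feature toggle sketch: the new path exists in the codebase but is
# switched off, so every build remains releasable.
FEATURE_TOGGLES = {"new_checkout_flow": False}   # off in production builds

def checkout(cart):
    if FEATURE_TOGGLES["new_checkout_flow"]:
        return new_checkout(cart)        # work in progress, dark in production
    return legacy_checkout(cart)         # the proven path users still get

def legacy_checkout(cart):
    return sum(cart)

def new_checkout(cart):
    raise NotImplementedError("story still in progress")

print(checkout([10, 20]))  # releasable despite unfinished code
```

Flipping the toggle in a test environment exercises the new path long before it is finished, without ever blocking a release.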

This can be a bit difficult for people to swallow. The team can certainly still require all work to be complete at the iteration boundary, but this starts to feel like an arbitrary constraint that breaks the team’s flow. Continuous Delivery doesn’t require non-timeboxed iterations, but the two practices are complementary.

Friction point: snapshot/release builds

Many development teams divide software builds into two types: “snapshot” builds and “release” builds. This is not specific to Agile, but it has become strongly embedded in the Java world due to the rise of Maven, which puts the snapshot/release build concept at the core of its design. This approach divides the development cycle into two phases, with snapshot builds being used while software is in development and a release build being created only when the software is deemed ready for release.

This division of the release cycle clearly conflicts with the Continuous Delivery philosophy that software should always be ready for release. The way CD is typically implemented involves only creating a build once, and then promoting it through multiple stages of a pipeline for testing and validation activities, which doesn’t work if software is built in two different ways as with Maven.
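The build-once-then-promote idea can be sketched as follows (the stage names and digest check are illustrative assumptions, not any particular tool’s API):

```python
# "Build once, promote": every pipeline stage validates the SAME artifact,
# identified by its content digest, rather than rebuilding the software.
import hashlib

def build(source: bytes) -> tuple[bytes, str]:
    artifact = source                               # stand-in for a real build step
    return artifact, hashlib.sha256(artifact).hexdigest()

artifact, digest = build(b"application source")

for stage in ("commit-tests", "acceptance-tests", "staging-deploy"):
    # Each stage re-verifies that it is promoting the identical artifact.
    assert hashlib.sha256(artifact).hexdigest() == digest
    print(f"{stage}: promoted artifact {digest[:12]}")
```

The snapshot/release split breaks this model precisely because the artifact that was tested is not the artifact that gets released.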

It’s entirely possible to use Maven with Continuous Delivery, for example by creating a release build for every build in the pipeline. However, this leads to friction with Maven tools and infrastructure that assume release builds are infrequent and intended for production deployment. For example, artefact repositories such as Nexus and Artifactory have housekeeping features to delete old snapshot builds but don’t allow release builds to be deleted. So an active CD team, which may produce dozens of builds a day, can easily chew through gigabytes or even terabytes of disk space on the repository.

Friction point: heavier focus on testing deployability

A standard practice with Continuous Delivery is automatically deploying every build that passes basic Continuous Integration to an environment that emulates production as closely as possible, using the same deployment process and tooling. This is essential to proving on every commit whether the code is ready for release, but it is more rigorous than many development teams are used to having in their CI.

For example, pre-CD Continuous Integration might run automated functional tests against the application by deploying it to an embedded application server using a build tool like Ant or Maven. This is easier for developers to use and maintain, but is probably not how the application will be deployed in production.

So a CD team will typically add an automated deployment to an environment that more fully replicates production, including separated web/app/data tiers and the deployment tooling that will be used in production. However, this more production-like deployment stage is more likely to fail due to its added complexity, and may be more difficult for developers to maintain and fix, since it uses tooling more familiar to system administrators than to developers.

This can be an opportunity to work more closely with the operations team to create a more reliable, easily supported deployment process. But implementing and stabilizing this process is likely to involve a steep learning curve, which may impact development productivity.

Is CD worth it?

Given these friction points, what makes moving from traditional Agile to Continuous Delivery worthwhile, especially for a team that is unlikely to actually release into production more often than every iteration? Continuous Delivery can:

  • Decrease risk by uncovering deployment issues earlier,
  • Increase flexibility by giving the organization the option to release at any point with minimal added cost or risk,
  • Involve everyone who takes part in production releases – such as QA, operations, etc. – in making the full process more efficient. The entire organization must identify difficult areas of the process and find ways to fix them through automation, better collaboration, and improved working practices,
  • Make releasing routine: by continuously rehearsing the release process, the organization becomes more competent at doing it, so that releasing becomes autonomic, like breathing, rather than traumatic, like giving birth,
  • Improve the quality of the software by forcing the team to fix problems as they are found, rather than being able to leave things for later.

Dealing with the friction

The friction points I’ve described seem to come up fairly often when Continuous Delivery is being introduced. My hope is that understanding the source of this friction will be helpful in discussing it when it comes up and working through the issues. If developers who are initially uncomfortable breaking with the “proper” way of doing things, or who find a CD pipeline overly complex or difficult, understand the aims and value of these practices, hopefully they will be more open to giving them a chance. Once these practices become embedded and mature in an organization, team members often find it difficult to go back to the old ways of doing things.

Edit: I’ve rephrased the definition of the “traditional agile” approach to making software ready for release. This definition is not meant to apply to all agile practices, but rather applies to what seems to me to be a fairly mainstream belief that agile means stopping work to make the software releasable.