Design Patterns for Functional Programming


In software circles, a design pattern is a documented approach to a problem and its solution that turns up repeatedly, as a stumbling block, across many projects. Software engineers customize these patterns to their own problem and form a solution for their respective applications. Patterns follow a formal structure: they explain a problem, then walk through a proposed answer along with the key points related to either the problem or the solution. A good pattern is one that is well known in the industry and widely used by practitioners. For functional programming, there are several popular design patterns. Let's go over some of these.

Monad

A monad is a design pattern that lets several functions be chained together and treated as a single computation. It can be seen as a kind of combinator and is a core component of functional programming. With a monad, a value is wrapped in a box; rather than unwrapping it directly, you pass in a function that operates on the wrapped value.

To go into more technicalities, a monad can be described as operating on three basic principles.

·         A parameterized type M<T>

According to this rule, M<T> wraps a value of some type T, where T can be any type, such as String or Integer. In Java, Optional<T> is an example of such a parameterized type.

·         A unit function T -> M<T>

According to this rule, there is a function that takes a plain value of type T and wraps it in the monadic type M<T>. For instance, Optional.of(String) returns an Optional<String>.

·         A bind operation: M<T> bind (T -> M<U>) = M<U>

This rule describes the bind operator, often written >>= in other languages and exposed as flatMap in Java. Bind is called on the monad, for instance on an Optional<Integer>; it takes a function as an argument, for instance Integer -> Optional<String>, applies it to the wrapped value, and returns a monad of the new type, here Optional<String>.
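To make these three pieces concrete, here is a minimal Java sketch that treats Optional as the monadic type (its flatMap plays the role of bind); the parseAge helper is a hypothetical example function, not part of any library.

import java.util.Optional;

public class MonadSketch {

    // A unit-like function: T -> M<T>. It wraps a plain String in an Optional.
    static Optional<String> unit(String value) {
        return Optional.of(value);
    }

    // A function of type T -> M<U>: String -> Optional<Integer>.
    static Optional<Integer> parseAge(String text) {
        try {
            return Optional.of(Integer.parseInt(text));
        } catch (NumberFormatException e) {
            return Optional.empty();
        }
    }

    public static void main(String[] args) {
        // bind (flatMap): M<T> combined with T -> M<U> yields M<U>.
        Optional<Integer> age = unit("27").flatMap(MonadSketch::parseAge);
        System.out.println(age); // Optional[27]

        // A failed computation stays inside the wrapper instead of blowing up.
        System.out.println(unit("not a number").flatMap(MonadSketch::parseAge)); // Optional.empty
    }
}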

Persistent Data Structures

In computer science, there is a concept known as a persistent data structure. Persistent data structures at their essence work like normal data structures, but they preserve their older versions after modification. This means that these data structures are effectively immutable, because operations performed on them do not modify the structure in place. Persistent data structures are divided into three types:

  • When all the versions of a data structure can be accessed and only the latest version can be changed, then it is a partially persistent data structure.
  • When all the versions of a data structure can be accessed as well as changed, then it is a fully persistent data structure.
  • Sometimes due to a merge operation, a new version can be generated from two prior versions; such type of data structure is known as confluently persistent.

A data structure that does not offer any persistence is called "ephemeral."

As you may have figured out by now, since persistent data structures enforce immutability, they are used heavily in functional programming. You can find persistent data structure implementations in all major functional programming languages. For instance, in JavaScript, Immutable.js is a library for working with persistent data structures. For example:

import { Map } from 'immutable';

let employee = Map({
  employeeName: 'Brad',
  age: 27
});

employee.employeeName;        // -> undefined
employee.get('employeeName'); // -> 'Brad'
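To see the same idea without a library, here is a minimal Java sketch of a persistent singly linked list (a hypothetical class, not a standard API): prepending an element produces a new version, while the old version remains intact and usable.

// A minimal persistent (immutable) singly linked list.
final class PList<T> {
    final T head;        // value at the front, null for the empty list
    final PList<T> tail; // the rest of the list

    private PList(T head, PList<T> tail) {
        this.head = head;
        this.tail = tail;
    }

    static <T> PList<T> empty() {
        return new PList<>(null, null);
    }

    // "Modification" returns a new version; the old list is untouched and still valid.
    PList<T> prepend(T value) {
        return new PList<>(value, this);
    }
}

class PListDemo {
    public static void main(String[] args) {
        PList<String> v1 = PList.<String>empty().prepend("a");
        PList<String> v2 = v1.prepend("b");   // new version that shares v1 as its tail
        System.out.println(v1.head);          // a  (the old version is preserved)
        System.out.println(v2.head);          // b
    }
}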

Functors

In programming, containers are used to store data without attaching any particular methods or properties to them. We simply put a value inside a container, which is then passed around by our functional code. A container only has to store the value safely and hand it back when needed; the value inside it cannot be modified. In functional programming, these containers are a real advantage because they form the foundation of functional constructs and assist with asynchronous actions and purely functional error handling.

So why are we talking about containers? Because functors are a particular kind of container: a functor is a container that comes with a "map" function.

Among the simplest types of containers, we have arrays. Consider the following line of JavaScript.

const a1 = [10, 20, 30, 40, 50];

Now, to read one of its values, we can write:

const x = a1[1];

In functional programming, the array should not be mutated in place, as in:

a1.push(90)

However, new arrays can be created from an existing array. Technically, whenever a container can have a unary function mapped over it, it is a functor. Here "mapped" means that the container provides a special function that applies a given unary function to its contents. For arrays, that special function is map. The map function walks through the contents of an array, applies the supplied unary function to each element step by step, and returns a new array with the results.
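The same idea carries over to Java: any container that offers a map operation, such as a Stream or an Optional, behaves as a functor. The sketch below mirrors the JavaScript array example.

import java.util.List;
import java.util.Optional;
import java.util.stream.Collectors;

public class FunctorSketch {
    public static void main(String[] args) {
        List<Integer> a1 = List.of(10, 20, 30, 40, 50);

        // map applies a unary function to every element and builds a new list;
        // the original list is left unchanged.
        List<Integer> doubled = a1.stream()
                                  .map(x -> x * 2)
                                  .collect(Collectors.toList());
        System.out.println(doubled); // [20, 40, 60, 80, 100]

        // Optional is also a functor: map transforms the wrapped value, if there is one.
        Optional<String> name = Optional.of("brad").map(String::toUpperCase);
        System.out.println(name); // Optional[BRAD]
    }
}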

Zipper

A zipper is a design pattern used for representing an aggregate data structure together with a current position in it. The pattern suits code where arbitrary traversal is common and the contents can be modified, so it is usually found in purely functional programming environments. The zipper dates back to 1997, when Gérard Huet described it; it can be seen as a generalization of the "gap buffer" technique.

The zipper is a general concept and can be adapted to data structures such as trees and lists. It is especially convenient for recursive data structures. When combined with a zipper, these data structures are referred to as "a list with zipper" or "a tree with zipper" to make it apparent that their implementation uses the zipper pattern.

In simple terms, a zipper pairs a data structure with a "hole". The hole marks the present focus of traversal, and the surrounding context records everything needed to rebuild the rest of the structure. Zippers let developers move around and update the data structure easily.
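As a rough illustration, here is a minimal Java sketch of a list zipper (a hypothetical class, simplified and without bounds checking). The focus field plays the role of the hole, and the two deques record the context needed to move left or right; a purely functional version would return a new zipper from every move instead of mutating in place.

import java.util.ArrayDeque;
import java.util.Deque;

// A minimal list zipper: the elements to the left of the focus (nearest first),
// the focused element itself, and the elements to the right.
class ListZipper<T> {
    private final Deque<T> left = new ArrayDeque<>();
    private final Deque<T> right = new ArrayDeque<>();
    private T focus;

    ListZipper(T focus, T... rest) {
        this.focus = focus;
        for (T t : rest) {
            right.addLast(t);
        }
    }

    void moveRight() {            // shift the hole one position to the right
        left.push(focus);
        focus = right.pollFirst();
    }

    void moveLeft() {             // shift the hole one position to the left
        right.addFirst(focus);
        focus = left.pop();
    }

    T focus() {
        return focus;
    }
}

class ZipperDemo {
    public static void main(String[] args) {
        ListZipper<String> z = new ListZipper<>("a", "b", "c");
        z.moveRight();
        System.out.println(z.focus()); // b
        z.moveLeft();
        System.out.println(z.focus()); // a
    }
}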


Java Lambdas


So far we have talked a lot about functional programming. We discussed the basics and even experimented with some coding of functional interfaces. Now is the right time to touch on one of the most popular Java features for the functional paradigm, known as lambda expressions, or simply lambdas.

What Is a Lambda Expression?

A lambda expression provides a "concrete implementation" of a functional interface's single abstract method. Lambdas do not require a separate class for their use. Importantly, these expressions can be treated as objects: like objects, a lambda expression can be passed around or executed. The basic style for writing a lambda expression requires the use of an "arrow". See below:

(parameters) -> { expression body }

On the left side, we have the parameters. We can write a single parameter or multiple parameters for our program. Likewise, it is not mandatory to specify the parameter types, because the compiler can infer them. If you are using a single parameter, you may omit the round brackets.

However, if you intend to use multiple parameters, make sure to use round brackets (). Sometimes a lambda expression needs no parameters at all; such cases are written with empty round brackets. To stay on the safe side, use round brackets whether you have zero, one, or more parameters.

On the right side of the lambda expression, we have the body. A body with multiple statements is enclosed in curly brackets, while a single expression does not require them. Unlike the parameters, the return type is not written at all: it is inferred from the body expression.
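To summarize these rules, here is a short sketch using standard functional interfaces from java.util.function, showing zero, one, and multiple parameters, and a single-expression body versus a block body.

import java.util.function.BiFunction;
import java.util.function.Function;

public class LambdaSyntax {
    public static void main(String[] args) {
        // Zero parameters: empty round brackets are required.
        Runnable r = () -> System.out.println("no parameters");

        // One parameter: the brackets and the type may be omitted.
        Function<String, Integer> length = s -> s.length();

        // Multiple parameters: round brackets are required; a block body needs
        // curly brackets and an explicit return.
        BiFunction<Integer, Integer, Integer> add = (a, b) -> {
            int sum = a + b;
            return sum;
        };

        r.run();
        System.out.println(length.apply("lambda")); // 6
        System.out.println(add.apply(2, 3));        // 5
    }
}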

Without Lambdas

To understand lambdas, check this simple example.

package fp;

public class withoutLambdas {

    public static void main(String[] args) {
        withoutLambdas wl = new withoutLambdas(); // creating an instance of our class
        String lText2 = "Working without lambda expressions"; // the string we pass to the object's method as a parameter
        wl.printing(lText2);
    }

    public void printing(String lText) { // a method that takes a string
        System.out.println(lText);       // and prints it
    }
}

 

The output of the program is "Working without lambda expressions". If you are familiar with OOP, you can see that the caller was unaware of the method's implementation; it was hidden from the caller. What happens here is that the caller supplies a variable, which is then used by the "printing" method to print to the console. In other words, we are dealing with a side effect here, a concept we explained in our previous posts.

Now let’s see another program in which we go one step ahead, from a variable to a behavior.

package fp;

public class withoutLambdas2 {

    interface printingInfo {
        void letsPrint(String someText); // a functional interface
    }

    public void printingInfo2(String lText, printingInfo pi) {
        pi.letsPrint(lText);
    }

    public static void main(String[] args) {
        withoutLambdas2 wl2 = new withoutLambdas2(); // initializing instance
        String lText = "So this is what a lambda expression is"; // Setting a value for the variable
        printingInfo pi = new printingInfo() {
            @Override // annotation for overriding and introducing new behavior for our interface method
            public void letsPrint(String someText) {
                System.out.println(someText);
            }
        };
        wl2.printingInfo2(lText, pi);
    }
}

In this example, the actual work of printing the text was done through the interface: we wrote an anonymous class that implements it. Now let's use a lambda to see the advantage it provides.

 

package fp;

public class firstLambda {

    interface printingInfo {
        void letsPrint(String someText); // a functional interface
    }

    public void printingInfo2(String lText, printingInfo pi) {
        pi.letsPrint(lText);
    }

    public static void main(String[] args) {
        firstLambda fl = new firstLambda();
        String lText = "This is what Lambda expressions are";
        printingInfo pi = (String letsPrint) -> { System.out.println(letsPrint); };
        fl.printingInfo2(lText, pi);
    }
}

See how we improved the code by replacing the anonymous class with a single line containing a lambda expression, removing the boilerplate along the way. The expression takes the parameter and processes it to produce the output. The part after the arrow is what we call the "concrete implementation".

Core Concepts of Functional Programming


Now that you have learned about the paradigm shift to functional programming, let's dig into the fundamental concepts that power it. Understanding these basic concepts is important for creating high-quality functional programs. You may be tempted to begin coding right away, but these concepts will help you become a better coder.

1- Pure Functions

In functional programming, everything is seen as a function, and each function has to be "pure". Purity refers to two basic properties:

No Side Effects

A function can never be pure if it carries even a single side effect. A side effect occurs when a function reads or modifies state outside its own scope: variables, data structures, memory, or I/O. Pure functions carry no side effects, so they touch neither shared memory nor I/O. You might wonder why side effects are considered bad. The reason is that they make functions "unpredictable": the function's result comes to depend on the state of the system around it.

On the contrary, if a function does not depend on or change any external state, the same output is generated for a given input. Note that writing to disk, or turning a control of your front-end UI on or off, also counts as a side effect.
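As a small illustration of the contrast (hypothetical methods, sketched only for this purpose), the first method below depends only on its arguments, while the second reads and writes state outside itself and is therefore impure.

public class PurityExample {
    static int counter = 0; // external state

    // Pure: the result depends only on the arguments; nothing outside is touched.
    static int multiply(int a, int b) {
        return a * b;
    }

    // Impure: it modifies external state, and its result depends on that state,
    // so calling it twice with the same argument gives different answers.
    static int addToCounter(int value) {
        counter += value;
        return counter;
    }

    public static void main(String[] args) {
        System.out.println(multiply(4, 5));  // 20, every time
        System.out.println(addToCounter(5)); // 5
        System.out.println(addToCounter(5)); // 10: same input, different output
    }
}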

Same Result with Multiple Calls

Whenever a pure function is called with the same arguments, it produces the same result. Consider a simple function multiply(4, 5): it is expected to return the same result, 20, for every invocation. In other paradigms, however, reliance on random values or global variables means the result may not stay the same.

Pure functions also enable "memoization". Memoization is a technique in which a pure function's output (always the same for the same input) is saved in a cache. Whenever such a function is invoked again, the cache helps enhance the performance and speed of the application.
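Here is a minimal memoization sketch in Java (a hypothetical fibonacci example): because the function is pure, a result that has been computed once can safely be served from the cache on every later call.

import java.util.HashMap;
import java.util.Map;

public class Memoization {
    private static final Map<Integer, Long> cache = new HashMap<>();

    // fibonacci is pure, so its results can be cached and reused safely.
    static long fibonacci(int n) {
        if (n <= 1) {
            return n;
        }
        Long cached = cache.get(n);
        if (cached != null) {
            return cached;         // answer straight from the cache
        }
        long result = fibonacci(n - 1) + fibonacci(n - 2);
        cache.put(n, result);
        return result;
    }

    public static void main(String[] args) {
        System.out.println(fibonacci(40)); // computed once, sub-results cached along the way
        System.out.println(fibonacci(40)); // the second call is answered from the cache
    }
}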

2- Higher Order Function

The concept of a higher-order function is known in mathematics as well as computer science. Generally, a higher-order function has at least one of two fundamental characteristics: it returns a function, or it accepts a function as an argument.

Return Type

One way for a function to be higher order is for its return type to be a function. For example, review the following code in Java.

package fp;

import java.util.function.IntSupplier;

public class higherOrderfunction {

    public static void main(String args[]) {
        // A(4) returns a function; calling getAsInt() runs it and produces the value
        System.out.println(A(4).getAsInt());
    }

    static int marks() {
        int a = 5;
        return a;
    }

    // A is higher order because its return type is itself a function (an IntSupplier)
    static IntSupplier A(int total) {
        return () -> total + marks();
    }
}

To keep things simple, we first construct a small function marks, which holds an integer value a and returns it. Next we have a function A, which takes an integer argument total. Now comes the actual part: instead of returning a plain number, A returns a function (an IntSupplier) which, when invoked, adds total to the result of marks(). Since A returns another function, it is a higher-order function.

Arguments

Another characteristic of a higher-order function is that it can take functions as input parameters. For instance, see this pseudo-code:

public areaRect(lb) // areaRect is a function that takes another function, lb, as an argument; lb supplies the length and breadth
{
    l = lb.length();
    b = lb.breadth();
    area = l * b;
    return area;
}

This function is a higher-order function because it uses another function as a parameter.
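In real Java, the same idea is usually expressed with the java.util.function interfaces. The sketch below shows a hypothetical areaRect that is higher order because one of its parameters is the function supplying the rectangle's sides.

import java.util.function.Supplier;

public class HigherOrderArgument {

    // areaRect is higher order: one of its parameters is itself a function.
    static int areaRect(Supplier<int[]> sides) {
        int[] lb = sides.get();   // the supplied function produces {length, breadth}
        return lb[0] * lb[1];
    }

    public static void main(String[] args) {
        Supplier<int[]> lb = () -> new int[] {4, 5};
        System.out.println(areaRect(lb)); // 20
    }
}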

3- First Class Functions

After higher-order functions, we have first-class functions. The two ideas are closely related: saying that functions are first class means they can be stored in variables, passed as arguments, and returned from other functions, which is exactly what higher-order functions rely on. So what exactly is the difference between the two terms? Well, context matters!

When we say a language has support for first-class functions, we generally mean that the language treats its functions as values that can be easily passed around.

On the other hand, the term higher-order comes more from a mathematical outlook and describes an individual function that takes or returns other functions; it is the more theoretical and general perspective on the problem.
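In Java terms, treating functions as first class simply means a function can be stored in a variable and handed around like any other value, roughly as in this sketch.

import java.util.function.Function;

public class FirstClassFunctions {

    static int apply(Function<Integer, Integer> f, int value) {
        return f.apply(value);
    }

    static Function<Integer, Integer> makeCube() {
        return x -> x * x * x;
    }

    public static void main(String[] args) {
        // The function is a value: it can be stored in a variable...
        Function<Integer, Integer> square = x -> x * x;

        // ...passed to another function (which is therefore higher order)...
        System.out.println(apply(square, 6)); // 36

        // ...and returned from a function.
        Function<Integer, Integer> cube = makeCube();
        System.out.println(cube.apply(3));    // 27
    }
}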

4- Evaluation

Some languages use strict evaluation while others offer non-strict (lazy) evaluation. Evaluation strategy determines how a language processes the expressions passed as function parameters or arguments. To familiarize yourself with the concept, check this simple example:

print length([4-3, 4+6, 1/0, 7*7, 3-8])

When a programming language uses non-strict evaluation to process this expression, it simply returns 5, i.e. the total number of elements. Such evaluation does not bother computing the values of the elements themselves.

On the other hand, the same expression raises an error under strict evaluation, because every element is evaluated first and the third element, 1/0, is a division by zero. Strict evaluation computes an expression's arguments fully before the function is applied.

In a few scenarios, a non-strict language has to force strict evaluation, when a particular invocation requires a value to be computed on a "stricter" basis.
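Java evaluates arguments strictly, but non-strict behavior can be simulated with Supplier, as in this small sketch: the problematic element is only evaluated if someone actually asks for its value.

import java.util.List;
import java.util.function.Supplier;

public class EvaluationSketch {

    // "Non-strict": we only count the elements; none of them is evaluated.
    static int length(List<Supplier<Integer>> elements) {
        return elements.size();
    }

    public static void main(String[] args) {
        List<Supplier<Integer>> list = List.of(
            () -> 4 - 3,
            () -> 4 + 6,
            () -> 1 / 0,   // would throw ArithmeticException if evaluated
            () -> 7 * 7,
            () -> 3 - 8
        );

        System.out.println(length(list)); // 5: the division by zero never runs

        // Forcing "strict" evaluation of every element triggers the error.
        for (Supplier<Integer> s : list) {
            System.out.println(s.get()); // fails at the third element
        }
    }
}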

5- Referential Transparency

In functional programming, the usual destructive assignment of values is not offered. Variables are immutable: a value, once defined, is final and cannot be changed later. This property is what keeps functions free of side effects. To understand this further, consider the following example, where the value of x changes after each evaluation.

x=x*15

At the start of the program, x was assigned a value of 1. The first evaluation made it 15.

Now, in the second evaluation, the value of x changes to 225.

Now, this function is not referentially transparent because the value of x is continuously changing.

So now, if we follow the functional concepts, our example can be rewritten as this pseudocode:

int sum(int x)
{
    return x + 2;   // computes a new value; x itself is never modified
}

This kind of function is referentially transparent: the value of x stays constant, it cannot be altered implicitly, and the same call always yields the same result.

What is RabbitMQ?



The concept of messaging in software is similar to processes from daily life. For example, suppose you go out for a morning coffee. After taking your order, the manager enters it into the system. If there is no rush in the coffee shop, your order does not need to wait in a queue. However, if there are earlier orders, the system puts yours behind them, and your order becomes part of the queue.

However, what if there are countless orders and the server cannot manage them all because of a hardware issue? What then? In such cases, a service like RabbitMQ can be the game changer. RabbitMQ will take all the orders and forward them to the server only when it can manage the workload.

Before understanding RabbitMQ, it is essential to know what a message broker is. A message broker is an intermediary program that translates messages between the messaging protocols of the sender and the receiver. Message brokers are used as a middleware solution for a wide variety of software applications.

RabbitMQ is message broker software used for queuing messages. There are three main actors in the RabbitMQ lifecycle. First, we have a 'publisher' or 'producer': the party that creates a message and sends it. Second, we have an 'exchange': the exchange receives the message, along with a routing key, from the producer and routes it to a queue, where it is stored. Third, we have a 'consumer': the party for which the message was intended. A consumer can either be a third party or the publisher itself, consuming the message after getting it from the broker's queue.

In the description above we used a single queue, but real-world applications typically have multiple queues. An exchange is connected to a queue through a binding key. The exchange uses the routing key and the binding key to decide which queue, and hence which consumer, should get a message. It is worth noting that sometimes an exchange matches the routing key against the name of a queue instead of using a binding key (this is how the default exchange works). There are mainly four types of exchanges: direct, topic, headers, and fanout.
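As a small sketch of this routing model (using the RabbitMQ Java client, with hypothetical exchange and queue names), a direct exchange is declared, a queue is bound to it with a binding key, and a message is published with a matching routing key:

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class RoutingSketch {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");
        Connection connection = factory.newConnection();
        Channel channel = connection.createChannel();

        // A direct exchange delivers a message to the queues whose binding key
        // exactly matches the message's routing key.
        channel.exchangeDeclare("orders", "direct");
        channel.queueDeclare("coffee-orders", false, false, false, null);
        channel.queueBind("coffee-orders", "orders", "coffee");

        // The routing key "coffee" matches the binding key above, so this
        // message ends up in the coffee-orders queue.
        channel.basicPublish("orders", "coffee", null, "1 latte".getBytes("UTF-8"));

        channel.close();
        connection.close();
    }
}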

Whenever messages go to a consumer, RabbitMQ makes certain they are received in the correct order, and the queues keep messages from getting lost.

RabbitMQ speaks a protocol known as AMQP (Advanced Message Queuing Protocol). AMQP defines rules for three major concerns.

  • Where should the message go?
  • How will it get delivered?
  • What goes in must also come out.

AMQP does not require a steep learning curve and is easy to program against thanks to its flexibility. A developer who already works with HTTP and TCP requests and responses will adapt to the protocol easily.

RabbitMQ provides client libraries for all the popular programming languages, including Java, .NET, Python, PHP, JavaScript, and more.

Example

For a practical explanation, we will write a simple application in Java with RabbitMQ. The application will consist of a producer, which sends a message, as well as a consumer, which receives that message. For sending, we have a file named Send.java. You will require the following imports.

import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.Channel;

Now, set up the class.

public class Send {
    private final static String QUEUE_NAME = "hello";

    public static void main(String[] argv) throws java.io.IOException {
        ...
    }
}

Now we will have to link our class with the server.

ConnectionFactory factory = new ConnectionFactory();
factory.setHost("localhost");
Connection connection = factory.newConnection();
Channel channel = connection.createChannel();

 

This code abstracts away the socket connection details and creates a channel on top of the connection. The next step is to declare a queue on that channel and publish a message to it.

channel.queueDeclare(QUEUE_NAME, false, false, false, null);
String message = "Hello World!";
channel.basicPublish("", QUEUE_NAME, null, message.getBytes());
System.out.println(" [x] Sent '" + message + "'");

Lastly, we close the channel and the connection:

channel.close();
connection.close();

This ends the code for the sender.

Here is the complete Send.java class.

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class Send {

    private final static String QUEUE_NAME = "hello";

    public static void main(String[] argv) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");
        Connection connection = factory.newConnection();
        Channel channel = connection.createChannel();
        channel.queueDeclare(QUEUE_NAME, false, false, false, null);
        String message = "Hello World!";
        channel.basicPublish("", QUEUE_NAME, null, message.getBytes("UTF-8"));
        System.out.println(" [x] Sent '" + message + "'");
        channel.close();
        connection.close();
    }
}

Now you will have to write the code for the consumer. For this purpose, create a Recv.java class. Use the following imports.

import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Consumer;
import com.rabbitmq.client.DefaultConsumer;

 

Now we will open a connection here too.

public class Recv {

    private final static String QUEUE_NAME = "hello";

    public static void main(String[] argv)
            throws java.io.IOException, java.lang.InterruptedException {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");
        Connection connection = factory.newConnection();
        Channel channel = connection.createChannel();
        channel.queueDeclare(QUEUE_NAME, false, false, false, null);
        System.out.println(" [*] Waiting for messages. To exit press CTRL+C");

        ...
    }
}

 

Now you have to tell the server to deliver the messages accumulating in the queue to our consumer.

Consumer consumer = new DefaultConsumer(channel) {
    @Override
    public void handleDelivery(String consumerTag, Envelope envelope,
                               AMQP.BasicProperties properties, byte[] body)
            throws IOException {
        String message = new String(body, "UTF-8");
        System.out.println(" [x] Received '" + message + "'");
    }
};
channel.basicConsume(QUEUE_NAME, true, consumer);

Now run both the consumer and the producer, and you will have your RabbitMQ hello world application.

Complete Recv.java

import com.rabbitmq.client.*;
import java.io.IOException;

public class Recv {

    private final static String QUEUE_NAME = "hello";

    public static void main(String[] argv) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");
        Connection connection = factory.newConnection();
        Channel channel = connection.createChannel();
        channel.queueDeclare(QUEUE_NAME, false, false, false, null);
        System.out.println(" [*] Waiting for messages. To exit press CTRL+C");
        Consumer consumer = new DefaultConsumer(channel) {
            @Override
            public void handleDelivery(String consumerTag, Envelope envelope,
                                       AMQP.BasicProperties properties, byte[] body)
                    throws IOException {
                String message = new String(body, "UTF-8");
                System.out.println(" [x] Received '" + message + "'");
            }
        };
        channel.basicConsume(QUEUE_NAME, true, consumer);
    }
}

Final Thoughts

RabbitMQ has been adopted in thousands of deployment environments. It provides a significant boost to the scalability and loose coupling of applications, and today it is by far the most popular message broker. It also works well in the cloud and supports various messaging protocols, making it a desirable option for your development toolbox.

Serverless Computing: An Introduction to Amazon Lambda


AWS Lambda is a compute service provided by AWS. It runs your application code without requiring you to provision or manage a physical server, and it scales up when the need arises.

Whether your application deals with a few user requests or needs to handle thousands, Lambda can be handy. Lambda runs your code on Amazon's cloud infrastructure, delegating the hardware and OS intricacies to AWS.

Considerations for Writing Lambda Functions

Irrespective of your choice of programming language, understanding and using the following components is important for creating a function.

Handler

The handler is the entry point Lambda uses to execute your function; you specify it when the function is created. Whenever the function is invoked, execution starts at the handler.

Context Object

The handler also receives a context object from AWS Lambda, passed as the second parameter. The context object provides a channel of communication between AWS Lambda and your code.

Logging

Lambda functions can contain logging statements, and Lambda keeps track of these logs for you.

Exceptions

The result of your Lambda function's execution needs to be conveyed back to AWS Lambda. This can be done through various strategies so that a request's lifecycle can come to an end. Likewise, the occurrence of an error can also be reported to AWS Lambda. If a function is invoked synchronously, AWS Lambda passes the result of the execution back to the client.
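For instance, in Java these pieces come together roughly as follows (a minimal sketch based on the aws-lambda-java-core library; the greeting logic and names are hypothetical): the handler method is the entry point, the Context object arrives as the second parameter, logging goes through the context's logger, and throwing an exception reports a failed execution.

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;

// The handler class AWS Lambda invokes, configured for example as "example.HelloHandler::handleRequest".
public class HelloHandler implements RequestHandler<String, String> {

    @Override
    public String handleRequest(String name, Context context) {
        // The context object gives access to logging and metadata about the invocation.
        context.getLogger().log("Invoked function: " + context.getFunctionName());

        if (name == null || name.isEmpty()) {
            // Throwing an exception signals a failed execution to AWS Lambda.
            throw new IllegalArgumentException("A name must be provided");
        }

        // The return value is the execution result, passed back to the client
        // when the function is invoked synchronously.
        return "Hello, " + name;
    }
}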

Writing a Simple AWS Lambda Function

Let's see an example of our traditional "hello world", where we run the code of an AWS Lambda function without managing a server, purely on the cloud!

1-     Lambda Console

Open the AWS Management Console. Look for the Lambda option that appears under the Compute section and click it to open the Lambda console.

2-     Choosing a Blueprint

Now, you have to choose a blueprint. Blueprints provide pre-existing code to speed things up and can handle events from different sources. In the console, click the "Create a Function" button, then click the "Blueprint" option. Find the filter box and enter the following:

“hello-user-python”.

Choose the associated blueprint.

Now, press the Configure option.

3-     Configuring the Function

A Lambda function consists of the code you write together with its dependencies and configuration. Configuration details include the compute resources to allocate, such as memory, the execution timeout, and so on. Lambda takes these details as input and does the required provisioning in return.

Now, you have to enter the details of your function. These include the following.

  • Name – Select a name for your function. For this article, we can use the “hello-user-python”.
  • Role – An execution role can be created which carries certain authorizations. Lambda uses the role for the invocation of a function. Choose the option of “Create a new role from template”.
  • Role Name – Select a role name for your function. For this example, we can use lambda_simple_processing.
  • Policy Templates – Before the generation of a function, a role is assigned after selecting an appropriate template.

Sample code is provided under the label Lambda Function Code. Scroll toward the lower part of your screen and choose the "Create Function" option. Lambda functions can be written in all the popular programming languages, such as C#, Node.js, Java, and Python; in this blueprint, Python is used as the runtime.

Within the code, a handler method is defined. Lambda passes event data to this handler, which then initiates the processing of the event. By scrolling down through the options, the execution timeout or memory can also be configured, though we will not modify them in this example.

Invoking the Function and Checking the Results

The hello-user-python Lambda function now appears on the console. The function can be tested, its results reviewed, and its logs viewed. Find the dropdown menu named "Select a test event" and click "Configure Test Event".

A textbox appears for testing the function with an event. Go through the list of sample event templates and select "HelloWorld". You can assign any name to your event, such as "HelloUser". The field values of the JSON text can be modified, but the event structure should not be changed. Change the "value1" field to "hello user". Finish by clicking the "Create" button, then click the "Test" button.

If everything goes alright, you can view your results from the console. These results are classified into the following:

  • Execution – Confirms whether the execution was successful or not.
  • Summary – Presents the crucial details from the log output.
  • Log Output – Shows all the logs produced by the execution of your Lambda function.

Metrics

Amazon CloudWatch monitors and reports your Lambda function's metrics. To help you manage the code's execution effectively, Lambda keeps a record of the following and publishes them:

  • Count of requests.
  • A request’s latency.
  • Requests concluding with an error.

Click the "Test" button a few times so that metrics are produced, then choose the "Monitoring" option to display the results. As you scroll down, you will find various metrics for the Lambda function.

Since Lambda follows the "pay-as-you-go" model, users pay according to how their Lambda functions are invoked. To be specific, pricing is based on two factors: invocation count and duration.

Removing the Function

A Lambda function that is not running does not incur any charges, but it can still be deleted through the console. Go to the "Actions" button and choose "Delete Function". A pop-up will appear for confirmation; choose "Delete".

Well, now you have successfully created, managed and deleted a simple AWS Lambda Function.

Micro Service part 3 with an example


Problem

Before going into the details of microservices, it is important to understand the background of another architecture, known as the monolithic architecture. A monolithic application is generally a large one with tightly coupled components. The challenges of a monolithic application include the following:

  • It is hard to scale a monolithic application. Since all the components are tightly coupled, scaling requires large modifications across the codebase.
  • If the business wants to adopt a different technology, doing so is often not viable because of constraints imposed by the existing stack.
  • Automation is a hard nut to crack with monolithic applications.
  • Modern coding conventions and solutions are also difficult to introduce.

What Are Microservices?

One of the simplest definitions of microservices comes from Sam Newman:

Small autonomous services that work together

Each microservice can be seen as a self-contained unit that provides a distinct business functionality for the application. The individual microservices may appear separate, but combined as a whole they run the entire application smoothly.

For instance, suppose there is an e-commerce website. For simplicity purposes, we will divide its business processes into two modules. Firstly, we have the order module that helps customers to order a product by selecting customized options. Secondly, we have the processing module that will communicate with the back-end and verify the banking and other relevant details of the customer. In a monolithic application, a change in the order module means a change in the processing module too.

However, if we are talking about microservices, then essentially we separate these mini processes. In a microservices architecture, we have an order microservice and a processing microservice. These microservices exchange information through a protocol or interface like REST. Generally, this communication is stateless, which means there is no dependence on the state of a component. Additionally, each microservice is independent and manages its own data.

Why Use Microservices?

Work on the Immediate Problem

In the case of a monolithic application, an upgrade, repair, or modification means tinkering with the entire codebase. This dilemma is solved by microservices, which let you focus on and modify only the relevant component. For example, if the above-mentioned order microservice needs a change in its business logic, only that microservice needs to be worked on. Restructuring or recoding might not seem like a difficult problem for small applications, but for enterprise applications it consumes a great deal of time and resources.

Organization of Teams

Often IT managers are unable to utilize their developers properly because they are unsure how to divide the tasks of different modules, and developers from different teams struggle to debug and modify the code. With microservices, each service can be allocated to a small team. Thanks to the loose coupling, developers are able to focus on their own services, and they are saved from having to consult other teams just to update a single business functionality.

Different Languages

Often a problem in an enterprise application is the selection of a programming language and framework. Sometimes PHP is good for certain business functionality, while at other times Java's security is the need of the hour. Luckily, the microservices architecture allows the code for each service to be written in the language of the developer's choice. Since all the services communicate through standardized protocols, microservices provide this flexibility.

How Do Microservices Improve on Previous Architectures?

There is a misunderstanding regarding the nature of the microservices architecture. Some people believe that microservices simply divide an application into web, business, and data layers, an approach not dissimilar to the vision behind previous, outdated architectures. This is a faulty analysis.

Instead, each microservice manages its own data model. Hence, only the team of a specific microservice can change its behavior. Another feature that separates microservices from others is stateless communication. Stateless communication helps in the scalability of the application as each pair of the request and response is handled independently.

Example

For a practical implementation, let's take a look at a Hello World application built with microservices in Java. We will create a HelloWorldService class.

class HelloWorldService {

    public String greet() {
        return "Hello, World!";
    }
}

The above-mentioned service can be used from different Java environments. For example, in a console application, we can write the following.

class Starter {

    static HelloWorldService helloWorldService = new HelloWorldService();

    public static void main(String[] args) {
        String message = helloWorldService.greet();
        System.out.println(message);
    }
}

For Java servlets, we can write the following lines of code.

class HelloWorldServlet extends HttpServlet {

  HelloWorldService helloWorldService = new HelloWorldService();

  public void doPost(HttpServletRequest request,

    HttpServletResponse response) throws ServletException, IOException {

    String message = helloWorldService.greet();

    response.getWriter().println(message);

  }

}

For coding the controllers of Spring MVC applications, we will have to write the following piece of code.

@Controller

class HelloWorldController {

  HelloWorldService helloWorldService = new HelloWorldService();

  @RequestMapping("/helloWorld")

  public String greet() {

    String message = helloWorldService.greet();

    return message;

  }

}

Now, we have to handle the service issues, which can be related either to the entire service being unavailable or to its failure to return an appropriate response. Service unavailability is mainly the client's headache to deal with. With code written with the help of frameworks like unirest.io, clients are able to manage service unavailability better.

Future<HttpResponse<JsonNode>> future = Unirest.post("http://helloworld.myservices.local/greet")
    .header("accept", "application/json")
    .asJsonAsync(new Callback<JsonNode>() {

        public void failed(UnirestException e) {
            // tell them UI folks that the request went south
        }

        public void completed(HttpResponse<JsonNode> response) {
            // extract data from response and fulfill its destiny
        }

        public void cancelled() {
            // shoot a note to UI dept that the request got cancelled
        }
    });

For the case where the service itself must signal success or failure, the response can carry a status attribute. For example:

{
  "status": "ok",
  "message": "Hello, World!"
}

The status attribute indicates the true nature of the response.

Final Thoughts

Since their emergence, microservices have reinvented several enterprise applications and saved developers from a great deal of complexity. Developers no longer need to duplicate their codebases, and entire projects run far more smoothly as a result.

Microservice through my lens (Part2)


Have you finally taken a leap of faith and resolved to adopt the microservices architecture? Hopefully, it is for the best. The outdated architecture you leave behind might have been compromising the performance, security, and customer satisfaction of your applications, websites, or mobile apps.

However, as you delve deeper into the microservices world, you will face certain hiccups. Thankfully, a number of design patterns and best practices have been introduced in the software community that can help you tackle these challenges and move forward with your application.

API Gateway

When using the microservices architecture, the client side often faces a particular issue: it is hard for clients to reach the many different services directly. By applying the API gateway pattern, the client side is provided with a single access point.

This point acts as the center through which the client interacts with the other services of the application. Sometimes the gateway forwards a request from the client to a single suitable receiver; other times more than one service receives the request. As a result, the client side does not need to deal with the complexities of how the application has been split into microservices.

Service Registry

Microservices in an application often struggle to locate available instances of the services they depend on. In this case, a service registry can be helpful. A service registry can basically be considered a database of all of your services.

It holds the instances of the services, which are registered and de-registered on each startup and shutdown. Clients can then look up any available or free instance through the service registry. However, there are a few problems related to this pattern.

One of them is that it needs constant configuration and management. If the service registry falters, crucial data can be lost, so it has to be ensured that the registry is always available to the application.

Circuit Breaker

In a microservices application, services often have to communicate and execute tasks together. A service may receive a request for which it has to call another service. However, the called service can sometimes be unavailable due to an issue. In such a predicament, valuable resources are wasted while waiting, and the calling service becomes unable to process other incoming requests. One service's failure thus inflicts a significant loss on the entire application's resources.

In such events, the 'circuit breaker' is the need of the hour. A circuit breaker is a proxy placed on the communication between two services. If a service fails to respond more than a certain number of times, the circuit breaker trips, and the called service cannot be contacted any further during a timeout interval. After the interval, a few requests are let through by the circuit breaker to see whether the service has recovered. If the service is working, communication is restored; if it is still unavailable, another timeout interval follows.
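A very small sketch of the idea in Java (a hypothetical helper, ignoring thread safety and a proper half-open probe count) could look like this: failures are counted, and once the threshold is reached the breaker rejects calls until the timeout interval has passed.

import java.util.function.Supplier;

// A toy circuit breaker, only meant to illustrate the state changes.
class CircuitBreaker {
    private final int failureThreshold;
    private final long timeoutMillis;
    private int failures = 0;
    private long openedAt = 0;

    CircuitBreaker(int failureThreshold, long timeoutMillis) {
        this.failureThreshold = failureThreshold;
        this.timeoutMillis = timeoutMillis;
    }

    <T> T call(Supplier<T> remoteCall, T fallback) {
        boolean open = failures >= failureThreshold
                && System.currentTimeMillis() - openedAt < timeoutMillis;
        if (open) {
            return fallback;                 // breaker is open: fail fast, spare the resources
        }
        try {
            T result = remoteCall.get();     // breaker closed (or half-open): try the service
            failures = 0;                    // success: reset the failure count
            return result;
        } catch (RuntimeException e) {
            failures++;
            if (failures >= failureThreshold) {
                openedAt = System.currentTimeMillis(); // trip the breaker
            }
            return fallback;
        }
    }
}

A call to a flaky service would then be wrapped as something like breaker.call(() -> paymentClient.charge(order), fallbackResponse), so that repeated failures stop consuming resources.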

DB per Service

Let's suppose you are working on an e-commerce application. For ordering products you have an 'order' service, while for payment there is a 'payment' service. Since the majority of services need to store data, how will you manage their data storage? Since it is a microservices application, you cannot use a single shared DB. Instead, you can give each microservice's API its own DB.

As a result, each service keeps its data private. This can be done with separate tables, schemas, or entire DB servers for your services. With every service having a dedicated DB, one of the primary requirements of the microservices architecture, loose coupling, is achieved. Additionally, each service is free to use the kind of DB that fits its needs.

Health Check API

Often a service instance is running fine but is unable to process requests, for example when its DB connections are unavailable. For this scenario, a monitoring mechanism is required that can serve as a warning tool. To be alerted about a running service that is struggling to process requests, we can use a health check API. As the name suggests, it examines and reports on the health of a microservice: things like application-specific logic, disk space, connection status, and so on. It must be noted, however, that a service instance can still fail right after a health check, so the pattern should not be considered 100% reliable.

Messaging

For handling the requests from clients, as well as working together with other microservices, a standard communication mechanism is needed that improves the performance of the application through effective communication. For this purpose, asynchronous messaging can be the solution to your problems. It enables inter-service communication through which microservices pass all types of messages to each other. Kafka and RabbitMQ are among the most popular messaging tools available. Thanks to the message broker mechanism, requests are handled better and do not get lost. However, the message broker has to be available 24/7, and the client also needs to know the message broker's address.

Log Aggregation

When dealing with a large application built on the microservices architecture, you will have to handle a number of instances spawned from each service. A continuous stream of requests has to be handled by all these instances, and each instance writes information about its workings to a log file.

The log file will contain debug messages, warnings, errors, and other information. Hence, to better understand the application's overall behavior, a centralized logging service can be used. This service accumulates the log data from all the service instances, so that IT professionals can find and understand the logs and configure alerts, ensuring important messages are surfaced on a priority basis.

Best Java Web Frameworks


Java is easily the most popular language of the last two decades. Due to its wide range of features, including cross-platform compatibility, a strong community, an extensive list of libraries, and high security, it has been the first choice of developers coding business and enterprise systems for both the public and private sectors.

However, Java web development used to be overly complex, as the ecosystem and tools were confusing for many coders. As a result, many developers had to scratch their heads while reading through hundreds of pages of official documentation for bugs that could originate from a single line of code in a class. Luckily, today Java's ecosystem has been bolstered by the arrival of several frameworks that have made web programming in Java easier, and Java no longer bears the tag of the most difficult language for the web. Some of these frameworks are the following.

Spring

Spring was and still is one of the most popular web frameworks in Java. Spring is a lightweight framework that integrates with various technologies like Hibernate, Tapestry, and others, so it can be used for a wide range of web applications. Spring employs a software engineering concept known as dependency injection, through either constructor injection or setter injection. Through Spring's container, the hard coupling between Java objects is reduced. Moreover, Spring supports Aspect Oriented Programming, which focuses on the modularization of cross-cutting concerns, making middleware development easier to deal with.

Spring helps greatly in separating presentation logic from business logic and minimizes the complexities that existed with the earlier J2EE frameworks. Spring is flexible and does not force coders to extend a framework-specific base class. With its Model View Controller module, it allows data binding and efficient management of data models.

JSF (Java Server Faces)

Design and development are not the only challenges of a web back-end project: for enterprises, continuous updates and maintenance are a frequent requirement. With JSF, corporate developers can easily maintain their code with the support of modern software architectures. In a JSF web application, component-specific event handling can be mapped to HTTP requests, and the server can treat the components as stateful objects. Java Server Faces eases back-end development by introducing a component-centric approach to coding web UIs. This is made possible by JSF's Facelets, which help design the views of web projects with integrated HTML. Moreover, JSF has built-in support for AJAX.

JSF is chosen for enterprise systems because it is handy for corporate development. For beginners, the drag-and-drop tooling facilitates the design of sleek and elegant user interfaces, while for senior developers the JSF API offers deep customization.

Play 2

If you desire a speedy framework without any compromise on scalability, then Play Framework 2 is a good option. It supports hot reloading, which means you can edit your code and refresh the page to see instant results. Moreover, with support for non-blocking I/O, the performance of an application is greatly improved when making remote calls in parallel.

Unlike previous Java web frameworks, Play rescues developers from the complexity of Servlets and provides the modern components of web development frameworks, including REST, JSON, NoSQL, and ORM support. Furthermore, because it runs on the JVM, developers transitioning from Java to Scala find it convenient thanks to the shared community and libraries. Additionally, with its integration with front-end technologies like CoffeeScript and Less, it has received considerable praise as one of the most promising new Java frameworks of the last few years.

Google Web Toolkit (GWT)

Are you a full stack web developer working with React and Vue JS? Or do you focus solely on the back-end logic?

For full stack developers, Google Web Toolkit provides a great advantage for design and development of both front-end and back-end development. GWT was released in 2006 by Google for its own use. Seven years later, Google made it open source and it gained popularity quickly due to Google’s extensive documentation and support for the framework for a variety of development environments and technologies.

Its platform advantages include generating JavaScript, compatibility with all the popular web browsers, as well as coding advantages like refactoring, syntax highlighting, and a dynamic UI component library. Thus, if you are incorporating front-end controls like a radio button or a checkbox in your project and are linking it with the back-end in Java code, GWT serves as a leading option for full stack development.

Grails

If you are familiar with the JVM ecosystem and write code in Groovy, then Grails provides an easy learning curve for a shift into Java web development. Grails also has extensive support for Java libraries and boasts 700+ available plugins. Grails also employs the modern programming ideology of 'convention over configuration', limiting the lines of code you need to write.

Moreover, if CRUD functionality is a recurrent theme in your development, then Grails' scaffolding makes it a breeze. Furthermore, if you are also involved in the search engine optimization of your website, websites built on Grails are easy to optimize for better search engine results. Additionally, with Grails' GORM, developers have access to a reliable data tool for working with relational databases and NoSQL stores, including MongoDB.

With such powerful tools at your disposal, web development in Java has never been easier. If you have a wide range of web projects, then Spring MVC is the go-to option, while JSF can assist in the upgrade and maintenance of enterprise systems. If you require a framework for full stack development for working with both the front-end and back-end and also require sufficient documentation, then Google Web Toolkit is quite powerful, while for a stateless and non-blocking project, Play 2 can be the best solution.

What Is NoSQL? How Is It Different Than the Traditional Databases and What Are the Types of NoSQL Databases?


As computers progressed through different stages, computer scientists realized that real-life information could be stored in a digital solution called a database. With rising processing and storage demands, pen and paper were no longer enough to handle the requirements of mankind. As a result, relational databases made their entry in the 1970s. Information was saved to and retrieved from these databases through a query language called SQL (Structured Query Language).

Relational databases were a massive hit, and businesses of all scales began designing schemas, relations, and database designs to manage their clients and customer bases. However, data continued increasing at a rapid pace.

As more technologies emerged and the web and mobile platforms accelerated, discussions were held about the feasibility of relational databases in today's world. Similarly, the ability of DBMS systems to handle a massive number of database requests became part of the discussion, as real-time interactions like those of social media platforms have increased the data requirements.

What Is NoSQL?

NoSQL stands for 'Not Only SQL'. A NoSQL database is an approach in which the traditional concepts of database management systems are not followed. NoSQL databases are called for when enterprise systems face issues with the performance and scalability of their applications.

NoSQL has been deemed especially suitable for the requirements of modern web applications, where it is part of the MEAN (MongoDB, Express, Angular, Node.js) and MERN (MongoDB, Express, React, Node.js) technology stacks. These web development stacks have been hugely popular in recent years, with NoSQL database models either used gradually as an additional tool or used to completely replace RDBMS systems.

Types of NoSQL Databases

There are four key types of NoSQL databases that store information in different ways.

Key-Value Data Store

In key-value DBs, data is stored as a distinct series of key-value pairs, similar to the concept of a dictionary. This means that a key is linked to exactly one value in a collection. As a result, the application stays simple: a single query with a key fetches the required value from the data store.

Additionally, it negates the need for a query language like SQL. Now the question arises: when should you use a key-value data store? Well, if you are developing a social media platform, you can save the users' profiles in a key-value data store.

Similarly, if you are managing a blog, the comments from your fan following can be stored in a key-value data store. Social media platforms like Twitter and Pinterest already use key-value data stores for their newsfeeds and user profiles.

See the following example, where the keys are countries and the values are lists of cities.

Key      Value
USA      {"New York, San Diego, Seattle"}
India    {"Mumbai, Chennai, Delhi"}
Canada   {"Toronto, Vancouver, Montreal"}

Example of a key-value store

These are some key-value store options:

Aerospike, Apache Ignite, ArangoDB, Berkeley DB, Couchbase, Dynamo, FoundationDB, InfinityDB, MemcacheDB, MUMPS, Oracle NoSQL Database, OrientDB, Redis, Riak, SciDB, ZooKeeper

Column Store

Traditional DBMSs store and process data horizontally, using rows. However, the values of any one column are spread across many rows on disk, so queries that only need a few columns can be slow to answer.

In the case of a column store, this horizontal approach to storing and retrieving data is turned on its side: data is stored in columns, and the columns are grouped together into column families. A column family can have countless columns, which may be created at runtime or when the schema is defined. A column store is useful because the data of a column is saved contiguously on disk, which makes searching for and retrieving it easier and faster.

Column DBs are used by Spotify to manage metadata-related information about the music it serves. Facebook also uses column stores for its search mechanisms and personalization.


These are some column store options:

Accumulo, Cassandra, Druid, HBase, Vertica.

 

Document Store

In a document store DB, data can be saved in different formats, including the common JSON (JavaScript Object Notation), XML (Extensible Markup Language), and BSON (a binary encoding of JSON).

Hence, the document store provides a level of flexibility that was previously unimaginable with the relational database approach. It is similar to the key-value store in its storage mechanism of key-value pairs, but here all the values are called ‘documents’.

Additionally, the values carry information about their own encoding and structure. For example, see the following instances, where the values are stored in the form of documents. All of them describe the addresses of offices, yet you can see how each is represented in a different format.

 

{ "officeName": "ABC 1",
  "address": { "street": "B-329", "city": "Mumbai", "pincode": "806923" }
}

{ "officeName": "ABC 2",
  "address": { "block": "A, 2nd Floor", "city": "Chennai", "pincode": "400452" }
}

{ "officeName": "ABC 3",
  "address": { "latitude": "60.257314", "longitude": "-80.495292" }
}

We can also query by the details inside the documents. For instance, in the above example, we can search by the city "Mumbai" to retrieve the data of the matching ABC office.

The popular gaming franchise SEGA utilizes the document store approach through MongoDB for its management of more than 10 million gaming accounts. Similarly, the real-time weather functionalities in Android and iOS are provided through the document store model.

Example of Document store

These are some document store options:

Apache CouchDB, ArangoDB, BaseX, Clusterpoint, Couchbase, Cosmos DB, IBM Domino, MarkLogic, MongoDB, OrientDB, Qizx, RethinkDB

Graph Database

As the name suggests, graph databases use a graph representation. Like the graph data structure in computer science, a graph database has nodes, edges, and related components. Graph databases are useful for tackling the scalability issues of applications. In a graph database, entities are represented by nodes, while the associations between them are represented by relationships (the edges). Each node keeps a record of its relationships. For example, think of a social media network where users know each other.

 


These are some graph database options:

AllegroGraph, ArangoDB, InfiniteGraph, Apache Giraph, MarkLogic, Neo4J, OrientDB, Virtuoso

Final Thoughts

With the rising application development needs and demands, NoSQL databases serve as an excellent solution. Whether you are managing an e-commerce store or you are intending to create the next big app, the traditional databases may not always fulfill your needs. With the help of a NoSQL database, you can get an instant performance boost, and your applications will be able to handle requests with enhanced optimization.