Risks of Cloud APIs


An application programming interface (API) allows developers to establish a connection with services. In cloud computing, these services assist with tasks such as updating a database, storing data, pushing data into a queue, and moving data.

APIs play an important role in cloud computing. Because each cloud provider depends on its own API types, consumers continue to debate portability and vendor lock-in, while AWS leads the market and holds its position as the de facto standard.

A cloud API is a special category of API that lets developers build the services and applications needed to provision software, hardware, and platforms. It acts as an interface or gateway that is directly or indirectly responsible for the cloud infrastructure.

Risk of Insecure APIs

Cloud APIs help simplify many types of cloud computing processes and automate complex business functionality, such as configuring a wide range of cloud instances.

However, these APIs need thorough inspection by both cloud customers and cloud providers. If a cloud API has a security loophole, it can create a number of risks pertaining to availability, accountability, integrity, and confidentiality. Cloud service providers have to be vigilant about securing their APIs, but their diligence varies, so it is necessary to learn how to assess cloud API security yourself. To evaluate cloud APIs, this report discusses several areas of concern: What cyber threats do they represent? How do the risks operate? How can companies evaluate and protect these APIs? Some of the areas where customers have to be vigilant are as follows.

Transport Security

Generally, APIs are provided through a wide range of channels. APIs that hold and store private and sensitive data require a greater level of protection via a secure channel like IPsec or SSL/TLS. Designing IPsec tunnels between a customer and a CSP (cloud service provider) is often complex and resource-intensive, so many eventually select SSL/TLS. That choice opens a can of worms of its own: producing and managing legitimate certificates from an internal or external CA (certificate authority), problems with end-to-end protection, platform service and configuration conundrums, and software integration.
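
As a rough illustration, here is a minimal Python sketch of the client side of such a channel, assuming the third-party requests package; the endpoint URL, CA bundle path, and header are hypothetical.

```python
# Minimal sketch: call a hypothetical cloud API over TLS, validating the
# server certificate against an internal CA bundle rather than skipping
# verification.
import requests

API_URL = "https://api.example-csp.com/v1/instances"  # hypothetical endpoint
CA_BUNDLE = "/etc/ssl/certs/internal-ca.pem"          # bundle from your internal CA

def list_instances(token: str) -> dict:
    resp = requests.get(
        API_URL,
        headers={"Authorization": f"Bearer {token}"},
        verify=CA_BUNDLE,  # fail closed if the certificate chain does not validate
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()
```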

Authorization and Authentication

Most cloud APIs emphasize authorization and authentication, so these are major considerations for many clients. You can ask the cloud service provider questions like the following; a minimal authorization sketch appears after the list.

  • How easily can the APIs handle two-factor authentication?
  • Are the APIs capable of encrypting username and password combinations?
  • Is it possible to create and maintain fine-grained authorization policies?
  • How do internal IMS (identity management systems) and their attributes connect with those offered by the cloud service providers’ APIs?
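
To make fine-grained authorization concrete, here is a minimal, default-deny policy check in Python; the roles and action names are invented for illustration, and real CSPs express such policies in richer policy languages.

```python
# Minimal sketch of a fine-grained, default-deny authorization check.
# Roles and action names are illustrative only.
POLICIES = {
    "analyst":  {"storage:Read"},
    "operator": {"storage:Read", "compute:StartInstance"},
    "admin":    {"storage:Read", "storage:Write", "compute:StartInstance"},
}

def is_authorized(role: str, action: str) -> bool:
    """Allow an action only if the role's policy explicitly grants it."""
    return action in POLICIES.get(role, set())

assert is_authorized("operator", "compute:StartInstance")
assert not is_authorized("analyst", "storage:Write")  # unknown grants are denied
```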

Code Practices

If your cloud API processes XML or JSON messages or receives user and application input, it is integral to test it properly for routine injection flaws, schema validation, input and output encoding, and CSRF (cross-site request forgery) attacks.
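
As one example of schema validation, a sketch using the third-party jsonschema package follows; the field names and constraints are hypothetical.

```python
# Minimal sketch: validate inbound JSON against a strict schema before
# any further processing. Field names and constraints are illustrative.
from jsonschema import validate, ValidationError

REQUEST_SCHEMA = {
    "type": "object",
    "properties": {
        "instance_id": {"type": "string", "pattern": "^[a-z0-9-]{1,64}$"},
        "action":      {"type": "string", "enum": ["start", "stop", "reboot"]},
    },
    "required": ["instance_id", "action"],
    "additionalProperties": False,  # reject unexpected fields outright
}

def parse_request(payload: dict) -> dict:
    try:
        validate(instance=payload, schema=REQUEST_SCHEMA)
    except ValidationError as exc:
        raise ValueError(f"rejected request: {exc.message}") from exc
    return payload
```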

Protection of Messages

Apart from ensuring that standard coding practices are used, other cloud API factors to examine include encryption, encoding, integrity validation, and the message structure.
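
For instance, encryption and integrity validation can be handled together with authenticated encryption; a minimal sketch using the third-party cryptography package follows, with an in-memory key standing in for a proper key store.

```python
# Minimal sketch: Fernet provides authenticated encryption, so a message
# is both encrypted and integrity-protected; tampering is detected on decrypt.
from cryptography.fernet import Fernet, InvalidToken

key = Fernet.generate_key()  # in practice, fetch this from a key store
f = Fernet(key)

token = f.encrypt(b'{"reading": 42}')  # confidentiality plus integrity
try:
    plaintext = f.decrypt(token)       # raises InvalidToken if altered in transit
except InvalidToken:
    raise SystemExit("message failed integrity validation")
```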

Securing the Cloud APIs

After a company analyzes the concerns that insecure cloud APIs can cause, it has to consider which practices and solutions to implement to safeguard them. First, assess the cloud service provider’s API security: request API documentation, which can include reports and assessment results for existing applications that highlight audit findings and best practices. The Dasein Cloud API, for instance, offers a comprehensive case study in extensive, open cloud API documentation.

Additionally, beyond documentation, clients can request that their cloud service providers run vulnerability assessments and penetration tests against the cloud APIs. Sometimes CSPs engage third-party providers to conduct these tests. The final results are then shared with clients under an NDA, which helps clients assess the current security posture of the APIs.

Web service APIs must be secured against the OWASP (Open Web Application Security Project) Top 10 security loopholes via application and network controls. They should also be covered by sound QA testing and development practices.

Several cloud service providers offer an authentication and access mechanism via encryption keys so their customers can benefit from the APIs. It is necessary for both the CSP and the customers to safeguard these keys. There should be clear-cut policies to manage the production, storage, dissemination, and disposal of these encryption keys; they should be stored in a hardware security module or in a protected, encrypted file store. Do not embed keys in configuration files or similar scripts. On a similar note, if keys are embedded directly into the code, developers will have a tough time with updates.
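
A minimal sketch of the “no keys in code” rule, assuming the key is delivered through an environment variable populated by a secrets manager; the variable name is hypothetical.

```python
# Minimal sketch: load an API key from the environment at startup instead
# of embedding it in source code or configuration scripts.
import os

def load_api_key() -> str:
    key = os.environ.get("CSP_API_KEY")  # hypothetical variable name
    if not key:
        raise RuntimeError("CSP_API_KEY is not set; refusing to start")
    return key
```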

Look to cloud providers like Microsoft Azure and Amazon: they offer symmetric keys and hash-based message authentication codes to provide integrity and to keep shared information from spreading across untrusted networks. If a third party intends to work with a cloud service provider’s API, it must abide by these considerations and treat API security and key protection as a major priority.
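
The idea behind such hash-based message authentication codes can be sketched with Python’s standard hmac module; this is a generic HMAC-SHA256 signature over a request, not any provider’s actual signing protocol.

```python
# Minimal sketch: sign a request with HMAC-SHA256 using a shared symmetric
# key, and verify it with a constant-time comparison on the other side.
import hashlib
import hmac

def sign_request(secret_key: bytes, method: str, path: str, body: bytes) -> str:
    message = method.encode() + b"\n" + path.encode() + b"\n" + body
    return hmac.new(secret_key, message, hashlib.sha256).hexdigest()

def verify_request(secret_key: bytes, method: str, path: str,
                   body: bytes, signature: str) -> bool:
    expected = sign_request(secret_key, method, path, body)
    return hmac.compare_digest(expected, signature)  # resists timing attacks
```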

IoT Challenges and Solutions


The Internet of Things (IoT) remains a relatively new technology for companies around the world. It offers businesses a lucrative opportunity to thrive and prosper in the coming world of “Things”. However, implementing IoT is easier said than done: deployments are complex, which means you need not only the IT team but also the business units and operations team to implement your IoT solutions. Some IoT challenges and their solutions are listed below.

Cost

The expenses incurred in migrating from a traditional architecture to IoT are significantly high, and companies should refrain from making this leap in a single one-off effort. While there is nothing wrong with the overall vision of adopting IoT, it is difficult for management to ignore the costs.

To handle and mitigate these expenses, favor “bite-sized” IoT projects: cost-friendly implementations with well-defined goals. Start your adoption slowly with pilot technologies and spread the spending across a series of phases. To manage additional costs, consider SaaS (software as a service) as an alternative to robust on-premises installations.

Moreover, evaluate IoT projects that provide good value for money and review their business case documentation.

Security

Sending and receiving data on the Web is always one of the riskiest activities an IT team faces. A major reason is the recent onslaught of hacking that has engulfed the modern world, with cybercriminals attacking governments and businesses left and right. In IoT, however, the issue is more complex: not only do you have to facilitate online data communication, you also have to connect a large number of devices, creating more endpoints for cybercriminals to attack. While assessing the security of your IoT application, consider the following.

Data at Rest

When databases and software store data in the cloud or in on-premises architecture, that data is commonly known as “at rest”. To protect it, companies rely on perimeter-based defenses such as firewalls and anti-virus software. However, cybercriminals are hard to deter; to them this data offers lucrative opportunities for crimes like identity theft. Cybersecurity experts recommend addressing the issue with encryption strategies for both hardware and software, ensuring the data is safe from unauthorized access.
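
As a software-level illustration, here is a minimal sketch of encrypting records before they reach disk, assuming the third-party cryptography package; paths and key handling are simplified for illustration.

```python
# Minimal sketch: encrypt data before writing it to disk, and verify it on
# read; decrypt raises InvalidToken if the stored bytes were altered.
from pathlib import Path
from cryptography.fernet import Fernet

def write_encrypted(path: Path, data: bytes, key: bytes) -> None:
    path.write_bytes(Fernet(key).encrypt(data))

def read_encrypted(path: Path, key: bytes) -> bytes:
    return Fernet(key).decrypt(path.read_bytes())

key = Fernet.generate_key()  # in practice, store this in an HSM or key vault
write_encrypted(Path("readings.enc"), b'{"pump": 7, "vibration": 0.42}', key)
```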

Data in Use

When an IoT application or a gateway uses data, access to it is often available to different users and devices; such data is referred to as data in use. Security analysts consider data in use the toughest to safeguard. When a security solution is designed for this type of data, the main security considerations concern the authentication mechanisms and how to secure them so that only authorized users gain access.

Data in Flight

Data that is currently being moved is referred to as data in flight. To protect it, communication protocols are designed around the latest and most effective cryptographic algorithms, which keep cybercriminals from decoding the data in flight. A wide range of Internet of Things equipment ships with an extensive list of security protocols, many of them enabled by default. At a minimum, ensure that IoT devices linked to mobile apps or remote gateways use HTTPS, TLS, SFTP, DNS security extensions, and similar encryption protocols.
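
A minimal sketch using Python’s standard ssl module shows the principle: wrap the device’s socket in TLS, with certificate validation on, before sending anything; the gateway hostname and port are hypothetical.

```python
# Minimal sketch: send a sensor reading to a gateway over TLS. The default
# context validates the server certificate; we also refuse pre-1.2 TLS.
import socket
import ssl

context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2

with socket.create_connection(("gateway.example.com", 8883)) as raw:
    with context.wrap_socket(raw, server_hostname="gateway.example.com") as tls:
        tls.sendall(b'{"sensor": "pump-7", "vibration": 0.42}')
```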

Technology Infrastructure

Often, clients have IoT equipment directly linked to SCADA (supervisory control and data acquisition) systems, which are ultimately responsible for producing the data behind analytics and insights. Where power-monitoring equipment is lacking, the SCADA network can provide the system to connect newly added instrumentation.

Secure networks often rely on one-way, outbound-only communication. SCADA can facilitate the management of the equipment’s control signals.

You can use two methods to protect data for the APM data transmission. First, you can link the APM system to the SCADA historian. The historian is a component that stores instrument readings and control actions; it resides in a demilitarized zone, where applications reach it through the Internet. These applications can only read the historian’s stored data.

Note that only SCADA is permitted to write to the historian’s database; the historian then sends data outbound on a set interval. Many EAM (enterprise asset management) systems use the SCADA historian’s data to populate dashboards.
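
A minimal sketch of that outbound-only pattern, assuming the requests package and a hypothetical APM ingest endpoint; the historian query is a placeholder.

```python
# Minimal sketch: the historian pushes readings out to the APM endpoint on
# a fixed interval; nothing outside initiates writes into the SCADA network.
import time
import requests

APM_ENDPOINT = "https://apm.example.com/ingest"  # hypothetical endpoint
INTERVAL_SECONDS = 60

def read_latest_from_historian() -> dict:
    # Placeholder: query the historian's local store for the newest reading.
    return {"tag": "pump-7/vibration", "value": 0.42, "ts": time.time()}

while True:
    requests.post(APM_ENDPOINT, json=read_latest_from_historian(), timeout=10)
    time.sleep(INTERVAL_SECONDS)  # outbound push only, once per interval
```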

Another handy solution is to adopt a cellular service or other independent infrastructure, which lets you power your data communication without depending on a SCADA connection. Uploading data over cellular is a wise choice in facilities that have issues with the networking infrastructure. In such a setup, users can connect a cellular gateway device to five or more devices while a 120-V outlet powers it. Today, different companies offer pre-configured cellular equipment, helping businesses connect and deploy their IoT solutions within a few days.

Communication Infrastructure

The idea of using a cellular gateway to connect Internet of Things equipment is a smart one. However, users in remote areas can struggle with reception, and in such cases you would need to invest an enormous amount of money in the required infrastructure. LTE-M and LTE-NB use the existing cellular towers, yet they still deliver broader coverage.

This means that even where a user cannot get a good enough 4G-LTE signal for voice calling, LTE-M remains a formidable option for data.

Four-Layer and Seven-Layer Architecture in the Internet of Things


Nowadays, companies are aggressively incorporating the Internet of Things into their existing IT infrastructures. However, planning a shift to IoT is one thing; an effective implementation is an entirely different ball game. Developing an IoT application requires several considerations. For starters, you need a robust, dependable architecture to guide the implementation. Organizations have adopted a wide range of architectures over the years, but the four-layer and seven-layer architectures are especially notable. Here is how they work.

Four Layer Architecture

The four-layer architecture is composed of the following four layers.

1.    Actuators and Sensors

Sensors and actuators are the components of IoT devices. Sensors collect all the necessary data from an environment or object; that data is then processed into something meaningful for the IoT ecosystem. Actuators are the components of IoT devices that can modify an object’s physical state. In the standard four-layer architecture, data is computed at every layer, including in the sensors and actuators themselves, subject to the processing limits of the IoT equipment.

2.    The Internet Gateway

All the data collected by sensors is analog, so a mechanism is required to translate it into the relevant digital format. For this purpose, the second layer contains data acquisition systems: they connect to the sensor network, aggregate the output, and complete the analog-to-digital conversion. The result then goes to the gateway, which routes it over Wi-Fi, wired LANs, and the Internet.

For example, suppose a pump is fitted with many sensors and actuators; they send data to an IoT component that aggregates it and converts it into digital format. A gateway can then process it and route it to the next layers.

Preprocessing plays an important role at this stage. Sensors produce voluminous amounts of data in no time, covering vibration, motion, voltage, and similar real-world physical values, and so create datasets whose contents change constantly.
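
As a toy illustration of gateway-stage preprocessing, the sketch below aggregates a burst of raw samples into one summarized record before it is routed onward; the field names are invented.

```python
# Minimal sketch: condense a burst of raw sensor samples into a single
# summary record, reducing the volume passed to the next layers.
from statistics import mean

def aggregate(samples: list[float], sensor_id: str) -> dict:
    return {
        "sensor_id": sensor_id,
        "count": len(samples),
        "mean": round(mean(samples), 3),
        "min": min(samples),
        "max": max(samples),
    }

print(aggregate([0.41, 0.44, 0.39, 0.47], "pump-7/vibration"))
```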

3.    Edge IT

After the data has been converted into a digital format, it needs additional computation before it can be passed on to data centers. At this phase, edge IT systems execute that additional analysis. They are usually installed at remote locations, mostly in the areas closest to the sensors.

IoT data consumes heavy resources, such as network bandwidth, in the data centers. Hence, edge systems are used for analytics, which decreases the reliance on central computing resources. Visualization tools are often used in this phase to monitor the data with graphs, maps, and dashboards.

4.    Cloud

If there is no need for immediate feedback and the data must undergo more strenuous processing, it is routed to a physical data center or a cloud-based system. These have powerful systems that analyze, supervise, and store data while ensuring maximum security.

It should be noted that the output arrives after a considerable period of time; in return, you get a highly comprehensive analysis of your IoT data. Moreover, you can combine other data sources with the sensor and actuator data to extract useful insights. Whether an on-premises, cloud, or hybrid system is used, the processing basics do not change.

Seven Layer Architecture

The following are the layers of the seven-layer architecture.

1.    Physical Devices

Physical equipment such as controllers makes up the first layer of the seven-layer architecture. These physical devices are the “things” in “internet of things”, as they are responsible for sending and receiving data, for example sensor readings or device status descriptions. A local controller can compute this data and use NFC for transmission.

2.    Connectivity

The following tasks are associated with the second layer.

  • It connects with the devices of the first layer.
  • It implements protocols compatible with the various devices.
  • It helps translate between protocols.
  • It assists with network-related analytics.

3.    Edge Computing

Edge computing formats data so that the succeeding layer can make sense of the data sets. To do this, it performs data filtering, cleaning, and aggregation. Its other tasks include the following; a brief sketch appears after the list.

  • It evaluates data so it can be validated and processed by the next layer.
  • It reformats data to ease high-level, complex processing.
  • It assists with decoding and expanding data.
  • It assists with data compression, thereby decreasing the traffic load on the network.
  • It creates event alerts.
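
Here is a minimal sketch of these edge tasks in Python: filtering out-of-range readings, aggregating what remains, compressing the result, and raising event alerts; the valid range, thresholds, and field names are invented.

```python
# Minimal sketch of edge-layer processing: filter, aggregate, compress,
# and alert. The valid range and alert threshold are illustrative.
import json
import zlib

ALERT_THRESHOLD = 0.9

def process_batch(readings: list[float]) -> tuple[bytes, list[str]]:
    valid = [r for r in readings if 0.0 <= r <= 1.0]   # filtering/cleaning
    if not valid:
        return b"", []
    alerts = [f"high reading: {r}" for r in valid if r > ALERT_THRESHOLD]
    summary = json.dumps({"n": len(valid), "avg": sum(valid) / len(valid)})
    return zlib.compress(summary.encode()), alerts     # compression cuts traffic

compressed, alerts = process_batch([0.2, 0.95, 1.7, 0.4])  # 1.7 is filtered out
```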

4.    Data Accumulation

Sensor data is ever-changing, so the fourth layer is responsible for the required conversion. This layer keeps data in a state that other IoT components and modules can easily access. When data filtering is applied in this phase, a significant portion of the data is eliminated.

5.    Data Abstraction

In this layer, the relevant data is processed to adhere to specific properties of the stored data; afterward, it is provided to the application layer for further processing. The primary purpose of the data abstraction layer is to render data with its storage in mind, using an approach that lets IoT developers code applications easily.

6.    Application

The purpose of the application layer is data processing, so that all the IoT modules can access the data. Both software and hardware are linked with this layer. Data interpretation is carried out here to generate reports, so business intelligence makes up a major part of this layer.

7.    Collaboration

In this layer, a response or action is offered based on the given data. For instance, an action may be an electromechanical device’s actuator firing after a controller’s trigger.