How Has Google Improved Its Data Center Management Through Artificial Intelligence


Historically, the staff at data centers adjusted cooling system settings manually to save on energy costs. Times have changed: in the age of AI, intelligent systems are on guard 24/7 and adjust these settings automatically to cut costs.

Last year, a tornado watch prompted the AI system managing the cooling plant at one of Google's data centers to take control and modify the plant's settings. Google's staff were initially perplexed because the changes did not make sense at the time. On closer inspection, however, the AI system turned out to have chosen a course of action that reduced energy consumption.

Changes in temperature, humidity, and atmospheric pressure drive shifts in weather conditions and can stir up a storm. Google's AI uses this weather data to adjust the cooling system accordingly.

Joe Kava, Google’s Vice President of data centers, first revealed Google’s use of AI in its data centers back in 2014. At the time, Kava explained that the company had designed a neural network to assess the data collected from its data centers and to suggest strategies for running them more efficiently. These suggestions were later served up through a recommendation engine.

Kava explained that they had a single solution that provided recommendations and suggestions. The qualified staff at Google would then modify the settings of the pumps, heat exchangers, and chillers according to those AI-based recommendations. In the four years since, Google’s use of AI has evolved beyond Kava’s original vision.

Presently, Google is adopting a more aggressive approach. Instead of only dishing out recommendations for human operators to act on, the new system adjusts the cooling settings itself. Jim Gao, a data engineer at Google, said that the previous system cut energy costs by 20 percent, and that the newer version could cut energy consumption by up to 40 percent.

Little Adjustments

The tornado watch is only a single real-world instance of Google’s AI delivering energy savings to an extent that was impossible with manual processes. At first glance, the minor adjustments made by the AI-enabled system might not seem like much, but the sum of these small savings adds up to a huge total.

Kava explains that the AI system’s attention to detail makes it matchless. For instance, if the temperature in the surroundings of the data center rises from 60 degrees Fahrenheit to 64 degrees Fahrenheit while the wet-bulb temperature is unaffected, a member of the data center staff would not think much about updating the cooling system’s settings. The AI-based system is not so negligent: whether the difference is 4 degrees or 40 degrees, it keeps adjusting.

One interesting observation was the system’s noticeably better performance during the launch of new data centers. New data centers are generally inefficient because they are unable to make the most of their available capacity.

From Semi to Full Automation

Handing critical infrastructure tasks over to an AI system comes with its own implications and considerations.

As data accumulates and runtime grows, the AI system becomes more and more capable, and management in turn gains enough faith in it to hand over some control. Kava explained that, after experimentation bears results, the semi-automated tools and equipment are slowly and gradually replaced by fully automated ones.

Uniformity is the key to Google’s AI exploits; it is not possible to implement AI at such a massive scale without it. Yet each data center is designed to be distinct, so a single AI system cannot simply be integrated across all of them at the same time.

The cooling systems of the data centers are each optimized for their geographical locations. Google has tasked its data engineering team with continuously looking for any possible energy-saving techniques.

Additionally, the ML-based models are trained per site and have to be tailored to that site’s architecture. This process takes time, but Google is confident that the time spent will pay off in better results down the road.

The Fear of Automation

One major discussion point with this rapid AI automation and similar AI-based ventures is the future of “humans”, or rather their replacement. Are Google’s data center engineers going to lose their jobs? This question touches on one of mankind’s biggest fears regarding AI, and as AI progresses, the uncertainty has crept into the minds of workers. Kava, however, is not worried. He stated that Google still keeps staff at its data centers who are responsible for maintenance. While AI may have taken over some of their “chores”, the staff still performs corrective repairs and preventative maintenance.

Kava also shed some light on AI’s weaknesses. For instance, he explained that whenever the AI system finds itself in uncharted territory, it struggles to choose the best course of action; it cannot mimic the brilliance of humans in making astute observations. Kava concluded that AI is recommended for cooling and other data center tasks, though he cautioned that there must be some “human” presence to ensure that nothing goes amiss.

Final Thoughts

Google’s vision, planning, and execution of AI in its data centers are promising for other industries too. Gao believes his model is applicable to manufacturing plants with similar setups, such as cooling and heating systems. A chemical plant or a petroleum refinery could likewise take advantage of AI in much the same way. The real takeaway is that, in the end, such AI-based systems can be adopted by other companies to enhance their own operations.


How Is Machine Learning Assisting Organizations to Tackle Sophisticated Cyberattacks?


Previously, cybercriminals were limited in their approach. With the passage of time, they evolved and firmed their grasp on newer technologies, which allowed them to launch highly sophisticated campaigns against businesses and individuals alike. One such example is the attack on LabCorp, one of the prominent names in the US healthcare industry.

Over the past few years, the trend has worsened, with cybercriminals directly challenging governments through attacks on city and town administrations. Just a few months ago, for instance, the American cities of Atlanta and Baltimore faced city-wide cyberattacks that halted public services. More worrisome still, authorities have discovered that some of these cyberattacks were backed by other countries, shifting the face of modern warfare toward cyber warfare.

In such challenging times for the cybersecurity industry, recent advancements in machine learning have made it highly useful against cyberattacks. The entire purpose of machine learning is to “learn” from the past and improve with the passage of time. This is perfectly suited to cybersecurity: models can learn from the historical data of cyberattacks, such as information about their victims, target industries, and attack patterns, and then use it to prevent future attacks while continuing to evolve. Following are some of the areas where machine learning has proven impressive against major threats.

Classification

Traditionally, burglars and robbers would research their targets and plan their crimes accordingly. Today, the situation is the same but the battleground is different, as criminals have transformed into cybercriminals. These cybercriminals target specific businesses or individuals with a technique called spear phishing, using tailored emails to trick victims into compromising their own systems.

To combat these attacks, several phishing detection solutions have been released, albeit with limited success, because they fall short on both the precision and the speed of their response. As a consequence, users are often left to fight off such attacks on their own.

Machine learning is providing a breakthrough by using classification to assess recurring attack patterns and sender characteristics. ML-based models are trained to pinpoint anomalies in punctuation, email headers, body text, and other relevant signals. The purpose of these models is to identify whether or not an email carries a malicious phishing threat.
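As a minimal sketch of the classification idea, the toy model below trains a multinomial Naive Bayes classifier on a handful of labeled emails and predicts whether a new message looks like phishing. The training examples, labels, and class names are illustrative assumptions; production systems train on far larger corpora and richer features such as headers and metadata.

```python
import math
from collections import Counter

def tokenize(text):
    # Lowercase and split on non-alphanumeric characters
    cleaned = ''.join(c if c.isalnum() else ' ' for c in text.lower())
    return cleaned.split()

class NaiveBayesPhishingClassifier:
    """Multinomial Naive Bayes with add-one smoothing over email text."""

    def fit(self, emails, labels):
        self.classes = sorted(set(labels))
        self.word_counts = {c: Counter() for c in self.classes}
        self.class_counts = Counter(labels)
        self.vocab = set()
        for text, label in zip(emails, labels):
            tokens = tokenize(text)
            self.word_counts[label].update(tokens)
            self.vocab.update(tokens)
        return self

    def predict(self, text):
        tokens = tokenize(text)
        total = sum(self.class_counts.values())
        best, best_score = None, float('-inf')
        for c in self.classes:
            # log prior plus the sum of smoothed log likelihoods
            score = math.log(self.class_counts[c] / total)
            denom = sum(self.word_counts[c].values()) + len(self.vocab)
            for t in tokens:
                score += math.log((self.word_counts[c][t] + 1) / denom)
            if score > best_score:
                best, best_score = c, score
        return best
```

In practice, such a model is only as good as its training data; the bag-of-words features here stand in for the header, punctuation, and body-text signals the article describes.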

Traversal Detection Algorithms

Cybercriminals increasingly study digital users: which websites they visit most, and the network around those websites. For instance, consider a restaurant whose customers all order food through its website. Hackers compromise such a site, gain access to private customer data such as credit card details, and misuse the visitors’ credentials. This type of attack is known as a watering hole attack.

In these types of attacks, machine learning (ML) can be a game-changer by improving traditional web security. For instance, it can determine whether users are about to be forwarded to a dangerous link by analyzing the destination path’s traversal; traversal detection algorithms are integrated into ML for this purpose. Likewise, ML can watch for any sudden or unusual redirect from a page on the host server.
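The two rule-based checks below sketch what such a detector looks for before any learning is layered on top: a path that climbs above the web root via `..` segments, and a redirect whose destination host falls outside an allow-list. The function names, hosts, and allow-list are illustrative assumptions, not part of any particular product.

```python
from urllib.parse import unquote, urlparse

def escapes_web_root(requested_path):
    """Return True if a URL path climbs above the web root once '..' segments resolve."""
    depth = 0
    for segment in unquote(requested_path).split('/'):
        if segment in ('', '.'):
            continue
        if segment == '..':
            depth -= 1
            if depth < 0:
                return True  # the path escaped the web root
        else:
            depth += 1
    return False

def suspicious_redirect(current_host, location_header, allowed_hosts):
    """Flag a redirect whose destination host is neither the current host nor allow-listed."""
    host = urlparse(location_header).hostname
    return host is not None and host != current_host and host not in allowed_hosts
```

An ML system would consume signals like these (plus request history) as features rather than hard rules, which lets it catch traversal and redirect tricks that simple pattern checks miss.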

Deep Learning

Ransomware is a type of cyberthreat that effectively locks up its victim’s data; the cybercriminals then demand a ransom in exchange for restoring access. The data is encrypted with cryptographic algorithms that generate an encryption key and send it to the cybercriminals’ command-and-control center.

In such scenarios, a branch of machine learning called deep learning is used to recognize fresh ransomware threats. Models are trained on datasets of common ransomware behaviors in order to predict an upcoming ransomware attack.

To make the system learn, a huge number of ransomware files, along with an even larger number of non-malicious files, are needed to train the model. ML-based algorithms search for and identify the key features in the dataset; these attributes are then used to train the model. Afterward, whenever a ransomware strain attempts to infect a system, the ML tool runs it against the trained model and computes a set of actions to respond to the attack, saving the computer from being locked.
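One concrete feature that such models commonly extract is byte entropy: encrypted output is close to the 8-bit maximum, while ordinary documents sit well below it. The sketch below computes Shannon entropy and applies a simple threshold; the threshold value is an illustrative assumption, and a real detector would combine many such features in a trained model rather than rely on one heuristic.

```python
import math
from collections import Counter

def byte_entropy(data):
    """Shannon entropy of a byte string, in bits per byte (0.0 to 8.0)."""
    if not data:
        return 0.0
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def looks_encrypted(data, threshold=7.5):
    # Encrypted or compressed data is close to the 8-bit maximum;
    # plain text and typical documents score far lower.
    return byte_entropy(data) >= threshold
```

Files that a process suddenly rewrites with near-maximal entropy are a classic behavioral signal of in-progress ransomware encryption.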

Remote Attacks

When one or more computers are targeted by a cybercriminal over a network, it is known as a remote attack. The hacker searches for loopholes in the network or the machine in order to enter the system. Usually, such attacks are carried out to copy sensitive data or to completely ravage a network through a malware infection.

Remote attacks can also take the form of a DDoS attack, in which a server is overwhelmed by a repeated flood of fake requests. Once the servers freeze up, the cybercriminals make their move.

With machine learning algorithms, these attacks can be thwarted through thorough analysis of system behavior, pinpointing any unusual activity that does not fit the standard network patterns. ML algorithms can be empowered to monitor for and detect a malicious payload before it is too late.
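A minimal version of "pinpointing unusual activity" is statistical anomaly detection over the request rate: compare each time step against a rolling baseline and flag large deviations. The window size and z-score threshold below are illustrative assumptions; real systems learn baselines from many signals, not just one counter.

```python
import statistics

def flag_anomalies(requests_per_second, window=10, z_threshold=3.0):
    """Flag time steps whose request rate deviates sharply from the recent baseline."""
    flags = []
    for i, rate in enumerate(requests_per_second):
        history = requests_per_second[max(0, i - window):i]
        if len(history) < 3:
            flags.append(False)  # not enough history to judge
            continue
        mean = statistics.fmean(history)
        stdev = statistics.pstdev(history)
        # Guard against a perfectly flat baseline producing a zero stdev
        if stdev == 0:
            z = 0.0 if rate == mean else float('inf')
        else:
            z = (rate - mean) / stdev
        flags.append(z > z_threshold)
    return flags
```

A sudden thousandfold spike against a steady baseline is flagged immediately, which is exactly the shape a volumetric DDoS flood produces.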

Webshell

A webshell is a malware threat that lets a hacker access and change the settings of a website from the server’s web root directory. The cybercriminal thus gets his or her hands on the complete database.

For e-commerce websites, cybercriminals can even get hold of financial details like credit card data, which can be exploited in a wide range of crimes. This is the major reason webshells are mostly used against e-commerce websites.

By using machine learning, the data flowing through shopping carts can be analyzed so that ML-based models learn to differentiate between malicious actions and standard ones. Known malicious files can be fed to the models to enhance their training and capability. This training then helps ML-based systems pick out webshells and quarantine them before they can harm the system.
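The feature-extraction side of that training can be sketched as a simple scan for constructs that are common in PHP webshells but rare in legitimate pages. The pattern list, filenames, and threshold below are hypothetical examples; a trained model would weight many such features learned from the malicious files fed to it, rather than apply fixed rules.

```python
import re

# Hypothetical signature list: constructs common in PHP webshells
SUSPICIOUS_PATTERNS = [
    r'\beval\s*\(',
    r'\bbase64_decode\s*\(',
    r'\bshell_exec\s*\(',
    r'\bsystem\s*\(',
    r'\$_(GET|POST|REQUEST)\s*\[',
]

def webshell_score(source_code):
    """Count how many suspicious constructs appear in a server-side script."""
    return sum(1 for pattern in SUSPICIOUS_PATTERNS
               if re.search(pattern, source_code))

def quarantine_candidates(files, threshold=2):
    """Return filenames whose score meets the threshold, as candidates for review."""
    return [name for name, code in files.items()
            if webshell_score(code) >= threshold]
```

Scores like these become input features for a classifier, which is what allows the system to quarantine a previously unseen webshell instead of only matching known samples.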