What are the most popular Amazon cloud services?

The most widely used offerings, the ones with the fastest-growing spending: here are the services currently riding high on AWS.

Athena sits in pole position among Amazon's fastest-growing cloud building blocks in 2018, according to the latest 2nd Watch barometer of the most popular AWS services. To compile its ranking, the US provider analyzed a fleet of 400 managed enterprise workloads and more than 200,000 instances running in a managed public cloud environment. "It's no surprise: Athena equips the Amazon S3 storage service with a SQL layer that makes it easy to query and visualize data," says Jérémie Rodon, cloud architect at D2SI, Devoteam's AWS-focused subsidiary. From 2017 to 2018, spending on Athena rose 68% among 2nd Watch customers.
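For instance, a minimal sketch of the kind of SQL-over-S3 query Athena enables, here submitted through the boto3 SDK (the database, table and result bucket names are hypothetical):

```python
import boto3

athena = boto3.client("athena", region_name="eu-west-1")

# Run a standard SQL query over data already sitting in S3.
# "sales_db", "orders" and the results bucket are placeholder names.
query = athena.start_query_execution(
    QueryString="SELECT country, COUNT(*) AS orders FROM orders GROUP BY country",
    QueryExecutionContext={"Database": "sales_db"},
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},
)
print("Query execution id:", query["QueryExecutionId"])
```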

Not far behind, Amazon EKS (Elastic Container Service for Kubernetes) is making a major breakthrough, with spending on it jumping 53% from one year to the next. "For years, engineering teams have been running Docker containers in-house, which largely explains the enthusiasm for Amazon's container orchestrator, not to mention the advantage it brings for decoupled, microservices-based applications," analyzes Jérémie Rodon.
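As a rough illustration of the API side, a sketch of spinning up an EKS control plane with boto3 (the cluster name, IAM role and subnets are placeholders; in practice, teams often rely on eksctl or infrastructure-as-code tooling instead):

```python
import boto3

eks = boto3.client("eks", region_name="eu-west-1")

# Placeholder IAM role and subnets: an existing cluster role and VPC are required.
cluster = eks.create_cluster(
    name="demo-cluster",
    roleArn="arn:aws:iam::123456789012:role/eksClusterRole",
    resourcesVpcConfig={"subnetIds": ["subnet-aaaa1111", "subnet-bbbb2222"]},
)
print(cluster["cluster"]["status"])  # typically "CREATING"
```

Once the control plane is ready, workloads are deployed onto it with the usual Kubernetes tooling (kubectl, Helm, and so on).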

Also in 2nd Watch's top ranking, the D2SI consultant notes the presence of SageMaker, the AWS solution designed for machine learning. Ranked sixth, "SageMaker records a 21% increase over one year, again in terms of spending," according to 2nd Watch. Jérémie Rodon confirms: "Our customers are adopting this service more and more, and it would not be surprising to see it top the list next year." It must be said that SageMaker casts a wide net: it covers everything from the preparation of training datasets to the deployment of models, including the delicate training phase in between. "Its hyperparameter tuning features let you fine-tune the configuration of the algorithms automatically."
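A minimal sketch of that automatic hyperparameter tuning, using the SageMaker Python SDK (the training image, IAM role, metric name and S3 path are all placeholders):

```python
import sagemaker
from sagemaker.estimator import Estimator
from sagemaker.tuner import ContinuousParameter, HyperparameterTuner

session = sagemaker.Session()

# Placeholder training container, IAM role and instance type.
estimator = Estimator(
    image_uri="123456789012.dkr.ecr.eu-west-1.amazonaws.com/my-training-image:latest",
    role="arn:aws:iam::123456789012:role/SageMakerRole",
    instance_count=1,
    instance_type="ml.m5.xlarge",
    sagemaker_session=session,
)

# Let SageMaker search the learning-rate range automatically,
# optimizing a metric parsed from the training job's logs.
tuner = HyperparameterTuner(
    estimator=estimator,
    objective_metric_name="validation:accuracy",
    hyperparameter_ranges={"learning_rate": ContinuousParameter(0.001, 0.1)},
    metric_definitions=[{"Name": "validation:accuracy", "Regex": "val_acc=([0-9\\.]+)"}],
    max_jobs=10,
    max_parallel_jobs=2,
)
tuner.fit({"train": "s3://my-bucket/train/"})
```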

Among the most widely used AWS building blocks, EC2 instances and S3 storage unsurprisingly take the lead, alongside AWS Data Transfer, which covers the billing of data leaving Amazon's cloud for the Internet. "These are key elements for managing application deployments on AWS and getting the most out of it in terms of horizontal scaling of resources based on traffic," says Jérémie Rodon of D2SI. Paving the way for a fully isolated network infrastructure, Amazon Virtual Private Cloud (VPC) is also one of Amazon's most-consumed cloud products, according to 2nd Watch. Providing the security layer, it lets customers define their own range of IP addresses.
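A minimal sketch with boto3 of the isolation VPC provides: a VPC with its own private address range and a subnet carved out of it (the CIDR blocks are arbitrary examples):

```python
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")

# Define a private address space for the VPC, then a subnet inside it.
vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")
vpc_id = vpc["Vpc"]["VpcId"]

subnet = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.1.0/24")
print("VPC:", vpc_id, "subnet:", subnet["Subnet"]["SubnetId"])
```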

“Amazon SNS, SQS and SES are typical cloud components for building microservices-based applications”

Just behind this leading pack, CloudWatch stands out: Amazon's monitoring console is used by 98% of 2nd Watch's customers. Again, this is no surprise, as CloudWatch has become the de facto central solution for supervising systems on AWS. The same level of adoption (98%) is reached by AWS KMS, the key management service designed to secure data.
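Two short boto3 calls illustrating these bricks: publishing a custom CloudWatch metric and encrypting a payload with a KMS key (the metric namespace and key alias are placeholders):

```python
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="eu-west-1")
kms = boto3.client("kms", region_name="eu-west-1")

# Push a custom metric that CloudWatch can graph and alarm on.
cloudwatch.put_metric_data(
    Namespace="MyApp",  # placeholder namespace
    MetricData=[{"MetricName": "QueueDepth", "Value": 42, "Unit": "Count"}],
)

# Encrypt a small payload with a customer-managed key (placeholder alias).
result = kms.encrypt(KeyId="alias/my-app-key", Plaintext=b"secret-value")
print(len(result["CiphertextBlob"]), "encrypted bytes")
```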

Other offers in the race include Amazon SNS (96%), AWS SQS (84%) and Amazon SES (80%), designed respectively to push notifications, orchestrate the exchange of messages between applications, and send emails. "These are typical cloud components for building microservices-based applications; it's not surprising to see them in this ranking," acknowledges Jérémie Rodon. Used by the vast majority of D2SI customers to power software in serverless mode, AWS Lambda is not far behind (72%). "Lambda is nestled in architectures to manage the…," explains Jérémie Rodon.
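A sketch of how these messaging bricks are typically driven from code, again with boto3 (the topic ARN, queue URL and email addresses are placeholders; SES requires verified identities):

```python
import boto3

sns = boto3.client("sns", region_name="eu-west-1")
sqs = boto3.client("sqs", region_name="eu-west-1")
ses = boto3.client("ses", region_name="eu-west-1")

# Publish a notification to a (placeholder) SNS topic.
sns.publish(
    TopicArn="arn:aws:sns:eu-west-1:123456789012:order-events",
    Message="order #1234 created",
)

# Queue a message for another service to consume asynchronously.
sqs.send_message(
    QueueUrl="https://sqs.eu-west-1.amazonaws.com/123456789012/order-queue",
    MessageBody="order #1234 created",
)

# Send a transactional email through SES (both addresses must be verified).
ses.send_email(
    Source="noreply@example.com",
    Destination={"ToAddresses": ["customer@example.com"]},
    Message={
        "Subject": {"Data": "Order confirmation"},
        "Body": {"Text": {"Data": "Your order #1234 has been received."}},
    },
)
```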

The top 10 most-used AWS building blocks also include two database services. Leading them: DynamoDB. Amazon's flagship managed NoSQL database is implemented by all the customers analyzed by 2nd Watch. The second is none other than Amazon RDS (Relational Database Service).
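A minimal DynamoDB sketch with boto3, writing then reading an item from a hypothetical "orders" table keyed on "order_id":

```python
import boto3

dynamodb = boto3.resource("dynamodb", region_name="eu-west-1")
table = dynamodb.Table("orders")  # hypothetical table with "order_id" as partition key

table.put_item(Item={"order_id": "1234", "status": "created", "amount": 42})
item = table.get_item(Key={"order_id": "1234"})["Item"]
print(item["status"])  # "created"
```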

OVH shapes a cloud infrastructure for AI

Optimized for machine and deep learning, the IaaS is meant to come with a whole range of complementary managed services for handling training and deployment pipelines.

OVH's artificial intelligence strategy is taking shape. First stage of the rocket: the French cloud provider intends to build a computing infrastructure tailored to machine learning, an IaaS optimized for network performance, CPU compute and GPU acceleration alike. Second stage: design virtual or bare-metal servers that address the most common AI use cases. Lastly, OVH plans to offer a series of managed cloud services to ease the deployment of machine learning pipelines on its infrastructure. A strategy that has the merit of being clear and precise.

As early as 2017, OVH delivered the first GPU instances on its public cloud (OpenStack), with machine learning among the main targeted use cases. At its latest customer event in October 2018, the Roubaix-based group rounded out the line-up with Nvidia Tesla V100 GPU virtual machines shaped to accelerate the training phases of neural networks. "In the coming days, we will also be marketing NVMe flash-based storage offerings aimed at intensive applications," says Alain Fiocco, OVH's chief technical officer. For companies preferring a dedicated training environment, the group also has bare-metal options (see below).

To top it all off, OVH has just announced support for Nvidia GPU Cloud (NGC) on its Nvidia Tesla V100 GPU instances. This gives its customers access to a catalog of machine learning frameworks (Caffe2, MXNet, PyTorch, TensorFlow…), all optimized for the American chipmaker's graphics processors. Delivered as containers, these pre-integrated frameworks embed everything needed to run them, from the Nvidia CUDA environment and Nvidia libraries down to the OS.
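A minimal sketch, assuming one of the NGC PyTorch containers is running on such a V100 instance, to check that the framework sees the GPU stack shipped with the image:

```python
import torch

# Inside an NGC PyTorch container, CUDA and the Nvidia libraries ship with the image,
# so the framework should see the Tesla V100 exposed by the instance.
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
    print("CUDA version the framework was built against:", torch.version.cuda)
```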

Best of all, the NGC software is also compatible with OVH's DGX-1 dedicated server offer, currently in beta. Equipped with eight graphics processors, this Nvidia multi-GPU machine targets the intensive training needs of deep learning. "This offer lets us test the market's appetite for this type of configuration. If there are takers, we could consider building our own multi-GPU machine," says Alain Fiocco.
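A sketch of how a training script might spread work across the eight GPUs of such a machine, here using PyTorch's DataParallel wrapper (the model and batch are purely illustrative):

```python
import torch
import torch.nn as nn

# Toy model; on an 8-GPU machine, DataParallel splits each batch across the devices.
model = nn.Sequential(nn.Linear(1024, 256), nn.ReLU(), nn.Linear(256, 10))
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model)

device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)

batch = torch.randn(256, 1024, device=device)
logits = model(batch)
print(logits.shape)  # torch.Size([256, 10])
```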

Asked whether OVH could go as far as designing its own graphics processors for deep learning, as Google did with its TPU, OVH's technical director answers in the negative. "Our mission is not to manufacture chips, but rather to assemble servers from off-the-shelf components to achieve a price/performance/density ratio that makes the difference." A path Facebook already takes for its internal needs, with home-built eight-GPU physical machines. As with the rest of its infrastructure, OVH already bases its AI virtual machines and bare-metal offerings on servers designed by its Roubaix R&D teams and assembled in its Croix factory a few kilometers away.

Alain Fiocco is CTO of OVH. © OVH

In parallel, OVH intends to capitalize on its internal AI developments to offer its customers new products. One example of this approach: the machine learning platform available in its Labs (in alpha) grew out of an internal project focused on predictive analysis of the life cycle of its IT infrastructure. "We decided to extend it to make it more general and address use cases from other business units; since then, we have also been using this application for fraud detection," explains Alain Fiocco.

From there to packaging and marketing it as a cloud service, there is only one step. "In the same vein, we could in the future let our customers benefit from our predictive models for IT capacity management," the CTO adds.

A Spark service tested in the Labs

Another illustration of this logic of turning internal building blocks into products: FPGAs (Field-Programmable Gate Arrays). Historically, OVH has used these reprogrammable chips in its system for fighting denial-of-service attacks (read OVH's post on the subject), which relies on FPGA servers assembled, once again, by the group's own teams. "We could definitely consider marketing them if the need arises among our customers," says Alain Fiocco. In its Labs, OVH also offers (in beta) a PostgreSQL database acceleration service that already takes advantage of these FPGA machines.

In total, OVH has a team of about twenty people dedicated to its R&D projects in data science and artificial intelligence (excluding business intelligence). Alongside the initiatives mentioned above, it is working on other experimental AI projects available in the OVH Labs. This is the case, for example, with an image recognition engine and an Apache Spark computing cluster cloud service. Built directly on the company's OpenStack public cloud infrastructure, the latter lets users train machine learning models by drawing on the SparkML library. On the pricing side, these managed cloud solutions will initially be made available free of charge; only the underlying machine resources (virtual or bare-metal) actually consumed by the customer will be billed.
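A minimal sketch of the kind of SparkML training job such a cluster would run, here in PySpark with a tiny in-memory dataset standing in for real data:

```python
from pyspark.sql import SparkSession
from pyspark.ml.classification import LogisticRegression
from pyspark.ml.feature import VectorAssembler

spark = SparkSession.builder.appName("sparkml-demo").getOrCreate()

# Tiny illustrative dataset; in practice the data would come from object storage.
df = spark.createDataFrame(
    [(0.0, 1.0, 0.0), (1.0, 0.0, 1.0), (0.5, 0.8, 0.0), (0.9, 0.1, 1.0)],
    ["f1", "f2", "label"],
)

# Assemble the feature columns and fit a simple logistic regression.
features = VectorAssembler(inputCols=["f1", "f2"], outputCol="features").transform(df)
model = LogisticRegression(featuresCol="features", labelCol="label").fit(features)
print(model.coefficients)

spark.stop()
```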

Among its first AI customer references, Octave Klaba's company highlights Systran. The text translation specialist uses these Nvidia DGX-1 servers to orchestrate the intensive neural network computations it applies to language processing.