Privacy and Telemetry in Windows 10: How to limit what is sent to Microsoft

An illustrated tutorial on limiting, as far as possible, the information a Windows 10 machine sends to Microsoft. The options differ according to the edition.

The release of Windows 10 this summer raised concerns: with the latest Microsoft OS equipped with new telemetry systems, some users felt a little too spied on. The media were quick to echo these fears, sometimes at length, which may have amplified them.

Microsoft has since tried to calm things down. The publisher now offers settings that let users choose what information is shared. Here is how to minimize telemetry on Windows 10, whether under a Home, Pro or Enterprise license.

Choose your telemetry level in 3 clicks

From the desktop, first open the notification area, accessible from the taskbar at the bottom right of the screen. Then click on the "All settings" button (boxed in red in the screenshot below).

The first click:

You must first click on “All Settings” at the top right of this screenshot.

The second click:

Once the settings are open, go to the "Privacy" menu, the one with the padlock icon, at the bottom.

A scroll, then the third click, and a final scroll to reach a small drop-down menu:


Next, scroll down the side menu to reach and click on "Feedback and diagnostics". On the right, under "Diagnostic and usage data", a drop-down menu appears (with the options "Basic", "Enhanced" and "Full" in the screenshot). This is where the telemetry level is set.

Several telemetry levels exist. By default, on Home and Pro licenses, the level is set to "Full". This is level 4, the highest, the one that shares the most information with Microsoft. "It allows solving the most complex problems, but it also collects more information than the other levels. Microsoft will not exploit the personal data that may be collected: it will not be used for advertising purposes, and users will not be contacted. This is strictly controlled," says Arnaud Jumelet, Cloud and Security Advisor in the technical and security division of Microsoft France, in a very instructive video on the subject.

Level 3 is the "Enhanced" level, the default setting for the Enterprise edition of Windows 10. This level "collects performance information, the type and number of apps that are running, and it will also capture memory snapshots, 'memory dumps', just like Windows Error Reporting in previous versions of Windows," recalls Arnaud Jumelet in the same video, broadcast on Channel 9, Microsoft's official web TV. Again, if personal data ends up in the memory dumps, Microsoft promises not to exploit it.

Windows 10 Pro: The minimum amount of information sent

From a Windows 10 Home or Pro client, it is ultimately possible to lower the telemetry to the "Basic" level. This is level 2, the minimum for these licenses. It sends back to Microsoft "basic information about the functioning of the OS", and is meant to allow the Redmond giant to "ensure that there are no problems with slowdowns, application compatibility, or applications that fail to install or crash," says Jean-Yves Grasset, a security architect at Microsoft who also appears in the video. This basic level "also makes it possible to know whether the OS is virtualized, and which version of Internet Explorer is used", but "no personal information is sent, only information about the system," adds Florent Pélissier, Windows 10 Product Manager at Microsoft France, interviewed by JDN.

Finally, there is a level 1: the security level. "It's the lowest possible," says Florent Pélissier. But it is only available on Windows 10 Enterprise, Windows 10 Education and Windows 10 IoT Standard, and can only be set through Group Policy (GPO) or MDM, not from the interface of a Windows 10 client, unlike the other levels shown in the screenshots above.
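For administrators who script their deployments, the same policy can be written directly to the registry, which is what a GPO does under the hood. Below is a minimal sketch in Python, assuming an elevated prompt and the documented DataCollection policy key; the value mapping (0 for Security up to 3 for Full, corresponding to the article's levels 1 to 4) is worth verifying against your build:

```python
# Hedged sketch: write the AllowTelemetry policy value as a GPO would.
# Requires an elevated (administrator) prompt; winreg is Windows-only.
import winreg

POLICY_KEY = r"SOFTWARE\Policies\Microsoft\Windows\DataCollection"

def set_telemetry_level(level: int) -> None:
    """Set AllowTelemetry: 0=Security, 1=Basic, 2=Enhanced, 3=Full."""
    if level not in range(4):
        raise ValueError("level must be 0-3")
    with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, POLICY_KEY,
                            0, winreg.KEY_SET_VALUE) as key:
        winreg.SetValueEx(key, "AllowTelemetry", 0, winreg.REG_DWORD, level)

# 0 (Security) is honored only on Enterprise, Education and IoT editions;
# other editions fall back to Basic.
set_telemetry_level(0)
```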

According to Microsoft, this level 1 sends back "only information related to the security of the system". It covers, for example, the MSRT tool (Malicious Software Removal Tool), "which scans the machine every month and reports information about possible infections. Other parameters are also transmitted. The OS version is part of the data sent to Microsoft at this level, as is the identifier of the terminal, which avoids duplicates," says Jean-Yves Grasset in the Channel 9 broadcast.

Setting a machine to this security level is not enough: "to send nothing to Microsoft, you also have to disable certain services such as Cortana or geolocation, among others," adds Florent Pélissier.

Disable everything in "Privacy"


This, too, can be done in a few clicks from the Windows client interface. Here again, start by clicking on "Privacy" in "All settings", as in the first two screenshots above.

Then disable everything that appears, starting with the "General" menu: the advertising ID, SmartScreen (even though Microsoft advises against it), the information about typing, and the language list.

That is not all: next, go through the tabs (Location, Camera, Microphone, Contacts, Calendar...) and disable everything (the first toggle at the top of each menu entry is sufficient in most cases). Cortana's great curiosity must also be curbed, which can be done from its settings, quickly found by typing "Cortana" in the Start menu search field. Again, everything has to be disabled, though obviously some services will then no longer work.
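For those who script their setup, the advertising ID toggle in the "General" menu has a per-user registry counterpart. A minimal sketch, assuming the usual key path on a standard Windows 10 build (an assumption worth verifying on yours):

```python
# Hedged sketch: switch off the advertising ID for the current user,
# mirroring the toggle in Settings > Privacy > General.
import winreg

AD_KEY = r"Software\Microsoft\Windows\CurrentVersion\AdvertisingInfo"

with winreg.CreateKeyEx(winreg.HKEY_CURRENT_USER, AD_KEY, 0,
                        winreg.KEY_SET_VALUE) as key:
    winreg.SetValueEx(key, "Enabled", 0, winreg.REG_DWORD, 0)  # 0 = off
```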


With AWS IoT, Amazon Web Services Positions Itself in the Internet of Things

The AWS IoT platform is now generally available. It is designed to manage networks of connected objects from the cloud.

Amazon Web Services' (AWS) cloud platform for driving networks of connected objects (IoT), in beta since October, is now available in its final version (see the official announcement post). The new service was unveiled on October 8 at the company's global cloud event in Las Vegas. Called AWS IoT, "it is sized to handle the consolidation and processing of data from billions of objects," says Amazon, which cites possible IoT applications in automotive, industrial turbines, and urban sensor networks.

A service compatible with IoT standards

Upstream, AWS IoT provides the AWS IoT Device SDK, designed to connect objects to its various cloud services. It makes it possible to create a virtual image of each object, giving permanent visibility into its state and the ability to update it via API. Amazon says it has signed agreements with several semiconductor manufacturers to make its kit compatible with their technology, notably Broadcom, Intel, Qualcomm and Texas Instruments.
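To give an idea of what this virtual image (a "device shadow" in AWS terms) looks like in practice, here is a minimal server-side sketch using the boto3 "iot-data" client. The thing name "demo-pump", the region and the credentials are assumptions for illustration:

```python
# Hedged sketch: update and read a device shadow, the JSON document that
# mirrors a connected object's state in AWS IoT.
import json
import boto3

client = boto3.client("iot-data", region_name="us-east-1")

# Declare a desired state for a hypothetical connected pump.
client.update_thing_shadow(
    thingName="demo-pump",
    payload=json.dumps({"state": {"desired": {"valve": "open"}}}),
)

# Read the shadow back; the payload comes as a streaming JSON body.
response = client.get_thing_shadow(thingName="demo-pump")
print(json.loads(response["payload"].read())["state"])
```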

To connect devices to its cloud, AWS has developed a gateway implementing, alongside HTTP, the Message Queue Telemetry Transport (MQTT) protocol, an industry standard for managing communication with connected objects. In terms of network performance, the throughput Amazon makes available scales automatically with the growth of the object network and its level of activity. On the security side, AWS IoT authenticates objects, notably through AWS Identity and Access Management (IAM), and can also encrypt the transmitted data.
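As an illustration of the device side, the sketch below publishes a reading over MQTT with mutual TLS using the paho-mqtt library. The endpoint hostname, topic and certificate paths are placeholders to replace with your own:

```python
# Hedged sketch: publish one sensor reading to the AWS IoT gateway over
# MQTT on port 8883, authenticating with an X.509 device certificate.
import json
import ssl
import paho.mqtt.client as mqtt

ENDPOINT = "example.iot.us-east-1.amazonaws.com"  # placeholder

client = mqtt.Client()
client.tls_set(ca_certs="root-ca.pem",       # placeholder file paths
               certfile="device-cert.pem",
               keyfile="device-key.pem",
               tls_version=ssl.PROTOCOL_TLSv1_2)
client.connect(ENDPOINT, port=8883)
client.loop_start()
client.publish("sensors/pump/pressure", json.dumps({"pressure": 7.2}), qos=1)
client.loop_stop()
client.disconnect()
```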

An engine to deliver feeds to the right AWS application

Once the data is uploaded to AWS, a rules engine takes over. It is responsible for routing it, according to its business interest and level of criticality, to the appropriate service (Amazon S3, Amazon Machine Learning, Amazon DynamoDB...). "Among the many technical readings a connected industrial pump can provide, it can, for example, automatically route the pressure level to Amazon Kinesis Firehose, which will load this information into an Amazon Redshift warehouse for analysis," says Amazon.
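Amazon's pump example can be expressed as a topic rule: an SQL-like statement that selects fields from messages on a topic and hands them to an action. A minimal sketch with boto3, where the rule name, role ARN and delivery stream name are placeholders:

```python
# Hedged sketch: route the pressure field of pump messages to a Kinesis
# Firehose delivery stream (which could feed a Redshift warehouse).
import boto3

iot = boto3.client("iot", region_name="us-east-1")

iot.create_topic_rule(
    ruleName="route_pump_pressure",
    topicRulePayload={
        # The rules engine filters messages with an SQL-like syntax.
        "sql": "SELECT pressure FROM 'sensors/pump/pressure' WHERE pressure > 5",
        "actions": [{
            "firehose": {
                "roleArn": "arn:aws:iam::123456789012:role/iot-firehose",  # placeholder
                "deliveryStreamName": "pump-pressure-stream",              # placeholder
            }
        }],
    },
)
```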



AWS IoT Rules flows can also be orchestrated with AWS Lambda, notably to handle technical processing (data compression, sending a push notification when an anomaly is detected...). © Amazon Web Services

Microsoft's Azure IoT Suite in the crosshairs

On the pricing side, AWS IoT follows a pay-as-you-go price list: customers pay only for what they consume (machine power, storage capacity, network transit). Amazon offers the service free for 12 months, with a limit of 250,000 incoming or outgoing messages per month. Beyond that, AWS IoT is priced at $5 per million messages exchanged, except when the service runs in Amazon's Asian data centers, where the price rises to $8 per million messages. Amazon counts a message as a 512-byte block.
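A back-of-the-envelope calculation shows what these figures mean in practice. The sketch below applies the article's prices to an invented fleet; the traffic numbers are purely illustrative:

```python
# Hedged illustration: monthly AWS IoT bill at $5 per million messages
# ($8 in Asian regions), one message per started 512-byte block.
import math

def monthly_cost(messages: int, price_per_million: float = 5.0) -> float:
    return messages / 1_000_000 * price_per_million

# 10,000 devices, one reading per minute, payloads under 512 bytes
# (so one message each): about 432 million messages per month.
messages = 10_000 * 60 * 24 * 30
print(f"${monthly_cost(messages):,.2f}")       # $2,160.00 at the $5 tier
print(f"${monthly_cost(messages, 8.0):,.2f}")  # $3,456.00 in Asia

# Larger payloads are billed per started 512-byte block:
print(math.ceil(1500 / 512), "messages for a 1,500-byte payload")  # 3
```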

AWS already claims several references in the IoT field. One is Philips, which relies on the American company's cloud to drive more than 7 million connected objects (medical sensors, consumer terminals and mobile apps), with 15 petabytes of patient data stored on AWS. NASA's Jet Propulsion Laboratory (JPL) also relies on AWS to manage data coming back from various sensors across the solar system, and the US space agency uses AWS to aggregate data from mobile devices.

Offering a very comparable solution, Microsoft's Azure IoT Suite is clearly in Amazon's sights here.

The first website put online 25 years ago

On December 20, 1990, Tim Berners-Lee put the first ever website online, after developing a browser at CERN, the European nuclear research center in Switzerland.

The very first website went online on December 20, 1990, put there by Tim Berners-Lee. At the time, the engineer was a researcher at CERN (the European center for nuclear research) in Switzerland. The site, Info.cern.ch, was hosted on a NeXT computer at the research center. In parallel, Tim Berners-Lee published the source code of the embryonic HTTP, as well as the first browser (WorldWideWeb), which he developed with the help of a CERN engineer, Robert Cailliau. The site gathered documentation on the project and information on how to create one's own site.

Tim Berners-Lee's server technology then spread to other research centers across Europe and around the world: there were 26 web servers in November 1992, and 200 by October 1993.

Manage scientific information

Originally, Tim Berners-Lee developed this technology, which he called Mesh, to manage the large amounts of scientific information handled within CERN. The engineer had conceptualized it a year and a half earlier. His goal was to find a method that would let researchers browse the information, but also collaborate through shared publication spaces.

The first website, published on Info.cern.ch (a reconstruction is visible on the W3C website at this address; the original site no longer exists).

More than 900 million websites today

In April 1993, the National Center for Supercomputing Applications (NCSA) at the University of Illinois in the United States released version 1.0 of the first graphical web browser: Mosaic. It would give rise to Netscape in 1994, and starting in 2004, Firefox would build on the latest generation of that lineage's engine.

Appointed a researcher at the Massachusetts Institute of Technology (MIT) in 1994, Tim Berners-Lee founded the World Wide Web Consortium (W3C) the same year. The organization brings together private and public research stakeholders committed to standardizing web technologies and improving their quality, with a view to generalizing and promoting the web. Tim Berners-Lee was knighted by Elizabeth II in 2004 for his role in the development of the Internet.

Cloud: historical publishers have to adapt

Traditional software publishers have not yet fully made the move to the cloud, particularly in their pricing models, which are ill-suited to the principles of on-demand capacity and elasticity.

The cloud has been revolutionizing IT for several years now, driven by established pure players (Salesforce.com, Amazon Web Services, Google...) and by historical players who have decided to move to the cloud to enrich their service offerings, and above all to avoid ending up on the losing side of this revolution in their sector. But a problem remains as soon as the proposed solution enters a hybrid logic: an on-premise application installed on a cloud environment.
Today, the fact is that most historical players have not yet adapted their business model, and therefore their pricing, to their customers' expectations. More and more companies have IaaS-type infrastructures and want to take solutions from the traditionally on-premise market and install them there. Of course, publishers are always quick to answer that they can provide different delivery options to meet such needs. Whether they supply the software setup or a virtual appliance, both to be installed by the customer on its IaaS environments, the publisher believes it has solved the customer's problem. On the surface it seems to; underneath, there is still some way to go.

What are the benefits of IaaS solutions? 
Let's go back to the basics of what "Infrastructure as a Service", or IaaS, means. These infrastructures bring a level of flexibility and quality of service that is almost impossible to achieve without oversized resources, something the major players can afford by pooling costs across all their customers and building innovative new infrastructures. By doing so, the pure players can price their high-value services fairly and offer true on-demand pricing models: you pay only for the exact usage time of the services you consume (server, RAM, processor, bandwidth, disk space...).
Take a concrete example: your business needs to run an annual batch job that requires a great deal of computing power. If you own your infrastructure, you need that power only at the moment the job runs, but you will have invested in it, so it sits available all year even if, every other day, you use only a small fraction of it. You therefore carry the costs of this occasional need year-round, whether for hardware (servers, RAM, processors in particular) or for the software running the job, which is licensed according to the power of the machines it is installed on (number of processors...).
Now take the same case on IaaS: when the job runs, you will have temporarily provisioned, in a few clicks on your supplier's web interface, the power needed to execute it. You benefit from what is called elasticity, your infrastructure's ability to adapt quickly, even dynamically, to your needs, with costs to match. The cost of your occasional peak is calculated from your usage of the infrastructure at that precise moment; the rest of the year, you return to pricing that matches your actual use. But when we talk about IaaS, it is only the infrastructure that you pay for on demand, not the software used to perform the job.
As you will have understood, with IaaS you decide when your infrastructure adapts to your needs (or program that decision in advance), and above all you pay only for the actual use you make of it. As a bonus, you no longer have to worry about hardware obsolescence or keeping equipment in operational condition, all in highly secure environments.
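To make the argument concrete, here is a hedged sketch comparing the two models; every price and quantity in it is invented for illustration:

```python
# Owning peak capacity year-round vs. renting it only while the job runs.
HOURS_PER_YEAR = 24 * 365

def on_prem_cost(peak_servers: int, yearly_cost_per_server: float) -> float:
    # You buy for the peak and carry the cost all year.
    return peak_servers * yearly_cost_per_server

def iaas_cost(base_servers: int, peak_servers: int,
              burst_hours: int, hourly_rate: float) -> float:
    # Baseline runs all year; extra servers are billed only during the burst.
    base = base_servers * HOURS_PER_YEAR * hourly_rate
    burst = (peak_servers - base_servers) * burst_hours * hourly_rate
    return base + burst

print(on_prem_cost(peak_servers=50, yearly_cost_per_server=3_000))  # 150000
print(iaas_cost(base_servers=5, peak_servers=50,
                burst_hours=72, hourly_rate=0.20))                  # 9408.0
```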

Now let's talk about software vendors

That leaves the historical publishers of on-premise solutions, able, according to them, to run in cloud mode, letting you benefit from the power of on-demand capacity... but unfortunately not the pricing that goes with it. Take the example of the job that runs once a year, for which you use the power and elasticity of your IaaS to get through this difficult stretch: your favorite publisher will not be able to adapt its pricing to the maximum power you need for only a few hours a year.

At the end of the day, whatever you have saved on your infrastructure, most publishers do not know how to pass the saving on to licensing costs. Of course, some will, after long hours of negotiation, offer to help you through this complicated stretch when it is an isolated job with a single yearly peak. But if, like many companies, you have several jobs of this type, monthly or even more frequent rather than just annual, the negotiation immediately becomes much more difficult.

Let's move on from the previous example, which may seem anecdotal, and take the case of an intermediation platform (ESB, API gateway...). Facing the same kinds of issues, with activity peaks tied to seasonality or to marketing campaigns that place heavy, short-lived demand on your infrastructure, you will run into the same problem with your publishers. Today, there is virtually no way to adapt your software's pricing to your actual usage.

Threats looming for these publishers

The challenge for all these publishers is to take into account a real threat: new entrants to the software market, pure players whose service offering is natively compatible with the elasticity and on-demand usage of IaaS platforms. Just look at Amazon's marketplace, for example, to find publishers offering software solutions priced in line with the level of service provided by AWS. Worse still, AWS services are growing at a tremendous rate: every month new services are created, and existing services evolve strongly and challenge the position of well-established players. Competition is getting tough.

Take API gateway-type solutions, where today's leaders are historical software-market players offering no pricing adapted to an IaaS environment. In this niche, Amazon has released a new API Gateway service whose pricing model is based on your web-service call volume. Granted, this service, still young in the AWS environment, does not yet match the leaders' functionality, but at the speed Amazon is advancing, it is a safe bet that within a few months this option will become more than relevant, and above all far more economical than choosing a traditional publisher.

In the end, this subject will become ever more critical for software publishers who still trade on their name and their loyal customers to avoid moving too fast. Yet even companies as famous as Kodak, which did not believe in a major evolution of its sector, ended up left behind. The difficulty lies in the major players' ability to reform themselves and come up with a service offering that meets the expectations of demanding customers who no longer want to be locked into multi-year contracts at prohibitive prices.

These same customers are now looking for purchasing models that let software costs vary with usage, rather than fixed-cost models that also lock firms into long-term commitments. In a world in the midst of digital transformation, where no one knows what tomorrow will bring, let alone which solution will best meet tomorrow's business goals, large software publishers must contemplate a profound paradigm shift. That is the price of their survival, and of letting their customers keep innovating and transforming to stay competitive.