Google Makes It Easy for AI Developers to Keep Users’ Data Private

Tech giant Google has announced a new module for its popular machine learning framework, TensorFlow. The module will help developers improve the privacy of their AI models and projects: with just a few lines of code, developers can now train models that better protect user data.

Google makes it easy for AI developers

TensorFlow has long been one of the most popular frameworks for building machine learning models and applications, used by developers all around the world; models for text, audio, and image recognition can all be built with it. Google’s new module, TensorFlow Privacy, will help developers keep users’ data more secure. The company uses a statistical technique known as differential privacy in TensorFlow Privacy to safeguard users’ data.

Not only will it keep users’ data private, it will also make AI development more reliable and trustworthy. “If we don’t get something like differential privacy into TensorFlow, then we just know it won’t be as easy for teams inside and outside of Google to make use of it,” says Carey Radebaugh, a product manager at Google. “So for us, it’s important to get it into TensorFlow, to open source it, and to start to create this community around it.” Radebaugh also said, “To use TensorFlow Privacy, no expertise in privacy or its underlying mathematics should be required: those using standard TensorFlow mechanisms should not have to change their model architectures, training procedures, or processes.”

Differential privacy itself is a fairly complex mathematical approach, but it makes AI models both smarter and more secure: it trains models on user data in a way that cannot encode any individual user’s personal information. Apple introduced a similar technique in iOS 10 for its AI services, and Google has been using it in features such as Gmail’s Smart Reply.
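To make the idea concrete, here is a minimal sketch of differential privacy in plain Python (an illustration of the underlying technique, not the TensorFlow Privacy API): the classic Laplace mechanism, which adds noise calibrated to a query’s sensitivity so that any single user’s presence barely changes the answer.

```python
import math
import random

def laplace_noise(scale):
    """Sample Laplace(0, scale) noise via inverse transform sampling."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_count(records, predicate, epsilon):
    """Answer a counting query with epsilon-differential privacy.

    Adding or removing one record changes a count by at most 1,
    so the query's sensitivity is 1 and the noise scale is 1/epsilon.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

# Toy dataset: user ages; query = how many users are over 30.
ages = [23, 45, 31, 62, 28, 35, 41, 19, 57, 30]
noisy = private_count(ages, lambda a: a > 30, epsilon=1.0)
```

Smaller values of `epsilon` mean more noise and stronger privacy; the answer stays useful in aggregate while revealing almost nothing about any one record.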

Have you ever thought about how much data and useful information Smart Reply collects in order to make its suggestions? If those patterns could be extracted, they could be misused. What if Smart Reply suggested to you the exact text of an email someone else had written? That would be dangerous for users: trust between the brand and its users would be destroyed, and personal information could leak easily.

Differential privacy prevents that. It provides mathematical certainty that the model is secure. “You have an outcome that is independent of any one person’s [data] but that is still a good outcome,” says Úlfar Erlingsson, a Google research scientist who has worked in data privacy for the last 20 years. Identifiable outliers can be removed from the data without changing the overall meaning of the context. “Instead, to train models that protect privacy for their training data, it is often sufficient for you to make some simple code changes and tune the hyperparameters relevant to privacy,” he added.

Of course, every pro has a con attached, and with differential privacy there is one major concern. As per Erlingsson, “By masking outliers, it can sometimes remove relevant or interesting data, especially in varied datasets, like those involving language. Differential privacy literally means that it’s impossible for the system to learn about anything that happens just once in the dataset, and so you have this tension. Do you have to go get more data of a certain type? How relevant or useful are those unique properties in the dataset?”

All said and done, Google believes that by releasing TensorFlow Privacy, developers around the world can start working with this technique and, as per Radebaugh, these problems can be ameliorated: “There’s work to do to make it easier to figure out this tradeoff.”

Overall, the new tools will nurture the new generation of developers just entering the field, and integrating differential privacy into an AI model will lift its security. As per Erlingsson, it can be done “by adding four or five lines [of code] and some hyper-parameter tuning. This is a very different sort of world to what we were in even just a few months ago, so we’re quite proud of that.”
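Under the hood, those few lines typically swap a standard optimizer for a differentially private one that clips each example’s gradient and adds noise before updating the model. Here is a library-free sketch of that idea (a toy illustration of differentially private gradient descent, not Google’s actual implementation) on a 1-D regression:

```python
import random

def dp_sgd_step(w, batch, l2_norm_clip, noise_multiplier, lr):
    """One differentially private SGD step for 1-D least squares:
    clip each example's gradient, average, add Gaussian noise, descend."""
    clipped = []
    for x, y in batch:
        g = 2.0 * (w * x - y) * x  # gradient of (w*x - y)^2 w.r.t. w
        norm = abs(g)
        if norm > l2_norm_clip:
            g *= l2_norm_clip / norm  # bound any one example's influence
        clipped.append(g)
    avg = sum(clipped) / len(clipped)
    noise = random.gauss(0.0, noise_multiplier * l2_norm_clip / len(batch))
    return w - lr * (avg + noise)

# Toy data drawn from y = 3x; training should move w toward 3.
data = [(x, 3.0 * x) for x in [0.5, 1.0, 1.5, 2.0, 2.5]]
w = 0.0
for _ in range(200):
    w = dp_sgd_step(w, data, l2_norm_clip=1.0, noise_multiplier=0.1, lr=0.05)
```

Clipping bounds how much any single example can influence a training step, and the added Gaussian noise masks what remains; together, these are the mechanics that TensorFlow Privacy’s optimizers automate for standard TensorFlow models.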
