How Amazon makes Machine Learning Trustworthy

With all the discussion around Bias in ChatGPT and Machine Learning, these techniques might be very helpful

Devansh
Geek Culture


Join 31K+ people keeping in touch with the most important ideas and developments in AI and Machine Learning through my free newsletter over here

Machine Learning has swept the world recently. Thanks to all the amazing results, companies have been rushing to adopt data-driven decision-making in their processes. Given all the amazing demos from DALL-E, Stable Diffusion, and now ChatGPT, more and more people are waking up to the potential of AI. However, some people have been raising concerns about the potential for harm that these models carry. Recently, ChatGPT has gained attention because users have discovered that it can generate some spicy outputs. Take a look at how ChatGPT identifies good scientists based on their race and gender.

For a non-clickbait, non-inflammatory analysis of this output and why this matters, check out this post. Too many commentators have been focusing on the wrong things.

Bias in Data Science and Deep Learning is nothing new. Neither is the susceptibility of large models to replicating the prejudice encoded in their datasets. I’ve been talking about it for 2 years now. But understanding how to handle this is now more important than ever. And luckily, the tech giant Amazon has made some great strides in this area. In this article, I will be breaking down their publication, Advances in trustworthy machine learning at Alexa AI, where they share some of the techniques they use to create AI that is safer and fairer.

One example of trying to get rid of bias in the dataset. Amazon’s model ignores the gendered language in the prompt. FROM “MITIGATING GENDER BIAS IN DISTILLED LANGUAGE MODELS VIA COUNTERFACTUAL ROLE REVERSAL”

If you are interested in creating better ML pipelines, then this is not a topic you want to miss out on. Let’s get right into it.

Technique 1: Privacy-preserving ML

To understand this technique and why it is important, let’s first establish a fundamental fact about machine learning. ML models take some input and generate some output. The outputs generated depend on the rules that the model discovered during the training phase. This goes without saying, but those rules depend on the input we feed the model. Pretty obvious to most people. So why am I talking about it?

It turns out this carries some privacy risks. There is a chance that outputs can be used to infer details about the inputs, so your data can end up in the hands of people you never consented to share it with. This is where the idea of differential privacy comes in. To quote the publication, “The intuition behind differential privacy (DP) is that access to the outputs of a model should not provide any hint about what inputs were used to train the model.”

How is this calculated? “DP quantifies that intuition as a difference (in probabilities) between the outputs of a model trained on a given dataset and the outputs of the same model trained on the same dataset after a single input is removed.” In this manner, it reminds me of permutation-based feature importance, but instead of shuffling features, we are dropping a sample. This is an interesting way to quantify the impact of a single sample on your training process.
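To make that intuition concrete, here is a minimal sketch of the leave-one-out comparison, using a toy scikit-learn classifier and synthetic data. None of this is Amazon's actual setup; it only illustrates the idea of measuring how much one training sample can shift a model's outputs.

```python
# Minimal sketch of the leave-one-out intuition behind differential privacy.
# Dataset, model, and the chosen "target" row are all illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=10, random_state=0)

# Model trained on the full dataset.
full_model = LogisticRegression(max_iter=1000).fit(X, y)

# Same model trained with a single input removed.
X_loo, y_loo = np.delete(X, 0, axis=0), np.delete(y, 0)
loo_model = LogisticRegression(max_iter=1000).fit(X_loo, y_loo)

# How much do the output probabilities shift because of that one sample?
# The tighter this gap can be bounded, the less the outputs reveal about any input.
gap = np.abs(full_model.predict_proba(X) - loo_model.predict_proba(X)).max()
print(f"Max shift in predicted probabilities: {gap:.4f}")
```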

One of the primary ways Amazon accomplishes this is through the use of input noise. I’ve covered the greatness of adding randomness to your Deep Learning pipelines extensively. However, based on their writing, it seems Amazon takes a slightly different direction. Instead of using noise purely as a means of adding chaos, they use the noise to hide the relationship of the parameters to the training data. The image below gives an example.

Taken from their publication Improving the accuracy of privacy-preserving neural networks

The approach I normally recommend is meant to improve the generalization and robustness of pipelines. And obviously, adding completely random noise would help with privacy. However, it can lead to a drop in performance. Amazon’s approach is better at maintaining higher performance. However, remember that to make the most of Amazon’s approach, you need to make sensible substitutions. You can’t replace Boston with Sausage. Finding the right substitutions might add to your computing costs. This is acknowledged by the authors-

Another side effect of adding a DP mechanism is increased training time.
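To illustrate what a “sensible substitution” might look like in practice, here is a rough sketch that perturbs a word’s embedding with noise and then snaps it to the nearest real word, so the replacement stays plausible. The tiny vocabulary and 2-D vectors below are invented purely for illustration and are not from Amazon’s paper.

```python
# Rough sketch of noise-driven word substitution: perturb a word's embedding,
# then return the closest real word so the substitute remains sensible.
# The vocabulary and 2-D vectors are made up for illustration only.
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical embeddings: cities cluster together, foods sit far away.
vocab = {
    "boston":  np.array([1.0, 1.1]),
    "seattle": np.array([1.1, 0.9]),
    "denver":  np.array([0.9, 1.0]),
    "sausage": np.array([-2.0, -1.8]),
}

def noisy_substitute(word: str, scale: float = 0.3) -> str:
    """Add Gaussian noise to the word's vector, then snap to the nearest vocab word."""
    noisy = vocab[word] + rng.normal(0.0, scale, size=2)
    return min(vocab, key=lambda w: np.linalg.norm(vocab[w] - noisy))

print(noisy_substitute("boston"))  # usually another city, almost never "sausage"
```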

Amazon also did some pretty interesting research into how people could reconstruct training samples using various techniques. To learn more about it, and the defensive countermeasures, check out their publication Canary extraction in natural language understanding models.

AlphaSignal is a free weekly summary of the top developments in Machine Learning. They use AI to rank and send you the top developments in the field. If you’re looking for something to help you stay in touch with the pace of Machine Learning, check them out. Reading them is a great way to stay in touch with the field and support my writing at no cost to yourself.

Technique 2: Federated Learning

Think of all the Alexa devices, Prime Video apps, and other devices people use with their Amazon accounts. If Amazon directly sent the raw data back to its data centers, its costs would spiral out of control. Not to mention the huge privacy red flag of Amazon data centers storing your conversations, shopping habits, etc. Clearly, this is not a good idea. But then, how would you update the models based on new user interactions?

What if you just let the models be updated on the local device? Say, one day I watch a lot of horror movies on Prime on my phone. So the recommendation system on my phone gets updated to account for my new tastes. Once these updates have been made, I share just the updates with the Amazon centers. You, my love, have just learned about federated learning.

This has several benefits. Firstly, the model updates are much smaller than the raw data, which makes them much cheaper to process and store. Secondly, this comes with a huge benefit when it comes to privacy. Even if someone did gain access to this data, all they’d see is mumbo jumbo. The model update data is not human-readable, so no one can see what shows you’ve been binging. And without knowing the exact architecture, it can’t be plugged into models to reconstruct your habits.
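Here is a bare-bones sketch of the federated averaging idea, with a toy linear model and made-up “device” data standing in for real Alexa or Prime Video clients. Only the weight deltas ever leave each device.

```python
# Bare-bones sketch of federated averaging: each device trains locally and only
# the model updates (weight deltas) travel back to the server. The linear model
# and random per-device data are placeholders, not Amazon's production setup.
import numpy as np

rng = np.random.default_rng(0)
global_weights = np.zeros(5)

def local_update(weights, X, y, lr=0.1, epochs=5):
    """Run a few gradient descent steps on one device's private data."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w - weights  # only the delta leaves the device

# Simulate one round with three devices, each holding its own data.
deltas = []
for _ in range(3):
    X = rng.normal(size=(20, 5))
    y = X @ rng.normal(size=5) + rng.normal(scale=0.1, size=20)
    deltas.append(local_update(global_weights, X, y))

# The server only sees (and averages) the deltas, never the raw data.
global_weights += np.mean(deltas, axis=0)
print(global_weights)
```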

Photo by Mick Haupt on Unsplash

Technique 3: Fairness in ML

An overlooked problem in Machine Learning is the presence of biased datasets. Biased datasets typically occur when you sample data from sources that don’t accurately represent the underlying population. For example, imagine you wanted to get the national opinion on a new policy, but in your survey, you get responses mostly from college kids. In this case, your analysis will be biased because your nation is not mostly college kids. Biased datasets are a bigger problem than most people realize.
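A quick simulation makes the survey example concrete. All of the percentages below are invented purely for illustration.

```python
# Toy simulation of sampling bias: the population is 15% college students, but the
# survey sample is 80% students, so the estimated support for the policy is skewed.
import numpy as np

rng = np.random.default_rng(1)

def support(is_student):
    # Hypothetical: students support the policy 70% of the time, others 40%.
    return rng.random(is_student.shape) < np.where(is_student, 0.7, 0.4)

population = rng.random(100_000) < 0.15      # 15% students nationally
biased_sample = rng.random(1_000) < 0.80     # 80% students in the survey

print("True national support:  ", support(population).mean())
print("Biased survey estimate: ", support(biased_sample).mean())
```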

Take the racist ChatGPT example I shared at the start of this article. Most big-brained LinkedIn influencers were happy just calling it racist and ending their analysis there. However, that is inaccurate. In reality, this is most likely a case of biased data samples. ChatGPT was probably trained on datasets that were predominantly American, and thus based its analysis on that. I’m basing this on the fact that my race (Indian/South-East Asian) wasn’t even mentioned in its ranking of races based on intellect (we were put in the “other” category). Given how many SE Asians there are, it doesn’t make sense to not have them as their own race. Unless you consider the fact that in American datasets, Latinos are mentioned a lot more than SE Asians (or Aboriginals, etc.). Here is the aforementioned ranking-

The problem of LLMs having biased datasets was mentioned by Amazon-

Natural-language-processing applications’ increased reliance on large language models trained on intrinsically biased web-scale corpora has amplified the importance of accurate fairness metrics and procedures for building more robust models.

Their publication “Mitigating social bias in knowledge graph embeddings” goes into this in a lot more detail. It covers several interesting ways these biases manifest.

They use various techniques like attribute substitution to counter the biases that would otherwise become encoded in the knowledge graphs.

Aside from this, they also studied the metrics used to quantify fairness. In the paper “On the intrinsic and extrinsic fairness evaluation metrics for contextualized language representations”, they showed that the usual metrics used to measure fairness reflect the biases of their datasets.

To combat this, Amazon created a few metrics of its own.

FROM “MEASURING FAIRNESS OF TEXT CLASSIFIERS VIA PREDICTION SENSITIVITY”
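As a simple illustration of the kind of extrinsic fairness check these metrics build on, here is a generic demographic parity gap computed on synthetic predictions. To be clear, this is not Amazon’s prediction sensitivity metric, just a common baseline for comparing a classifier’s behavior across groups.

```python
# Illustrative fairness check (demographic parity gap), not Amazon's metric:
# compare a classifier's positive-prediction rate across two groups.
# Group labels and predictions are synthetic.
import numpy as np

rng = np.random.default_rng(7)
group = rng.integers(0, 2, size=1000)                             # 0/1 protected attribute
preds = (rng.random(1000) < np.where(group == 1, 0.6, 0.4)).astype(int)

rate_g0 = preds[group == 0].mean()
rate_g1 = preds[group == 1].mean()
print(f"Positive rate (group 0): {rate_g0:.2f}")
print(f"Positive rate (group 1): {rate_g1:.2f}")
print(f"Demographic parity gap:  {abs(rate_g0 - rate_g1):.2f}")   # 0 means parity
```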

To overcome the problem of gendered biases in public datasets, Amazon implements the following procedure-

We propose two modifications to the base knowledge distillation based on counterfactual role reversal — modifying teacher probabilities and augmenting the training set.

-Source, Mitigating gender bias in distilled language models via counterfactual role reversal

Once again, Data Augmentation seems to be a very important element. In this case, it is used to balance the underlying data distributions. By doing so, they are able to create much fairer models.
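As a toy illustration of the augmentation half of that idea (the teacher-probability modification isn’t shown), here is a naive word-level role reversal on made-up sentences. The swap list is deliberately tiny; real pipelines handle ambiguous words and occupation terms far more carefully.

```python
# Toy counterfactual data augmentation: for each training sentence, also add a
# copy with gendered pronouns swapped, so the model sees both roles equally.
# Naive word-level swapping; the sentences and swap list are made up.
SWAPS = {"he": "she", "she": "he", "his": "her", "her": "his"}

def counterfactual(sentence: str) -> str:
    return " ".join(SWAPS.get(tok, tok) for tok in sentence.lower().split())

train = ["He is a doctor", "She is a nurse"]
augmented = train + [counterfactual(s) for s in train]
print(augmented)
# ['He is a doctor', 'She is a nurse', 'she is a doctor', 'he is a nurse']
```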

I’m going to end this article with an interesting observation. Most of the procedures that Amazon uses to achieve trustworthy ML are nothing special. There are no gimmicks, nothing that really makes you scratch your head. Instead, the majority of techniques mentioned here (and in their papers) are just reasonable solutions executed at very high levels. Yes, AI is a rapidly changing field. However, many of these improvements are based on good, solid fundamentals. Learning about them will allow you to stay in touch with the most important developments.

If you liked this write-up, you would like my daily email newsletter Technology Made Simple. It covers topics in Algorithm Design, Math, AI, Data Science, Recent Events in Tech, Software Engineering, and much more to make you a better developer. I am currently running a 20% discount for a WHOLE YEAR, so make sure to check it out. Using this discount will drop the prices-

800 INR (10 USD) → 533 INR (8 USD) per Month

8000 INR (100 USD) → 6400 INR (80 USD) per year

You can learn more about the newsletter here. If you’d like to talk to me about your project/company/organization, scroll below and use my contact links to reach out to me.

Reach out to me

Use the links below to check out my other content, learn more about tutoring, reach out to me about projects, or just to say hi.

Free Weekly Summary of the important updates in Machine Learning (sponsored)- https://lnkd.in/gCFTuivn

To help me understand you, fill out this survey (anonymous)

Check out my other articles on Medium: https://rb.gy/zn1aiu

My YouTube: https://rb.gy/88iwdd

Reach out to me on LinkedIn. Let’s connect: https://rb.gy/m5ok2y

My Instagram: https://rb.gy/gmvuy9

My Twitter: https://twitter.com/Machine01776819

If you’re looking to build a career in tech: https://codinginterviewsmadesimple.substack.com/

Get a free stock on Robinhood: https://join.robinhood.com/fnud75

