Why you should implement Evolutionary Algorithms in your Machine Learning Projects

Evolutionary Algorithms are seriously underutilized

Devansh
6 min read · Dec 22, 2021

Join 31K+ AI people keeping in touch with the most important ideas in Machine Learning through my free newsletter over here

If you’re into Machine Learning research, you’ve come across all kinds of interesting objective functions and optimizers. They are crucial to how your model learns. The optimizers and the different network architectures all share one goal: to traverse the search space in the best way possible, given some input.

However, you will often come across situations where the features in your domain are hard to extract or model. In such cases, writing effective machine learning agents to traverse the space is hard. This is often where we might turn to Reinforcement Learning (RL), which can be a very powerful tool for solving many complex problems.

RL has a lot in common with EAs. Both rely on feedback from the environment to get better. RL learns from both good and bad actions, while EAs focus on the good ones.

RL, however, comes with its own set of problems. Its simulations rest on assumptions, it can be very costly, and it hasn’t been great at multi-objective optimization. For a more comprehensive understanding of reinforcement learning, check out this video. So what do we do here?

Enter Evolutionary Algorithms (EAs)

Evolutionary Algorithms are relatively straightforward. They are based on the process of evolution in biology and follow these steps:

1. Create an initial set of candidate solutions. These solutions are treated as individuals and are iteratively updated.
2. Produce each new generation by stochastically removing the less desired solutions and (often) mixing between the viable candidates.
3. Apply small random changes (mutations) to some of the remaining “fit” candidates.

As a result, the population will gradually evolve to increase in fitness.
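The loop above can be sketched in a few lines of Python. This is a minimal illustration, not code from any particular library: the bitstring encoding, population size, elite fraction, and mutation rate are all arbitrary assumptions, and the fitness function is simply the number of 1-bits (the classic “OneMax” toy problem).

```python
import random

def evolve(fitness, n_bits=20, pop_size=50, generations=100,
           mutation_rate=0.02, elite_frac=0.2):
    """Minimal genetic algorithm: selection, crossover, mutation."""
    # 1. Create an initial set of candidate solutions.
    pop = [[random.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]
    n_elite = max(2, int(elite_frac * pop_size))
    for _ in range(generations):
        # 2. Stochastically remove less desired solutions (keep the fittest).
        pop.sort(key=fitness, reverse=True)
        parents = pop[:n_elite]
        # 3. Mix viable candidates via single-point crossover...
        children = []
        while len(children) < pop_size - n_elite:
            a, b = random.sample(parents, 2)
            cut = random.randint(1, n_bits - 1)
            child = a[:cut] + b[cut:]
            # 4. ...with small random changes (mutation).
            child = [bit ^ 1 if random.random() < mutation_rate else bit
                     for bit in child]
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

# OneMax: fitness is simply the number of 1-bits in the candidate.
best = evolve(fitness=sum)
```

Swapping in a different problem only means changing the encoding and the fitness function; the selection/crossover/mutation loop stays the same.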

This might seem like simple trial and error, and it is. But remember that we have criteria for selection, so we filter out weak solutions. The methodology is remarkably similar to Gradient Descent and other loss-minimization methods that explore the search space to reduce loss/maximize likelihood. Viewed in that context, you can see why EAs are very powerful.

Starting with multiple candidate solutions lets you sample larger portions of the search space. This is especially true when we work recombination and mutation of fit solutions into the process. That much should not be controversial. However, there are several benefits that might not be obvious; we’ll cover them next.

Selling Points of Evolutionary Algorithms

Range

Here a, b, and c show 3 ways a function can fail to be differentiable

The biggest benefit of EAs comes from their flexibility. Since they don’t evaluate the gradient at a point, they don’t need differentiable functions. This is not to be overlooked. For a function to be differentiable, it needs to have a derivative at every point in its domain. That requires a well-behaved function, without sharp bends, gaps, etc. EAs don’t care about the nature of these functions. They work well on continuous and discrete functions alike, and can thus be (and have been) used to optimize many real-world problems with fantastic results. I will elaborate on this in the next section.
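To see this concretely, here is a toy sketch of my own (not from any paper): a (1+1) evolution strategy minimizing a step function. The function’s gradient is zero almost everywhere, so gradient descent would stall immediately, but the EA only ever needs function evaluations.

```python
import random

def es_minimize(f, x0, sigma=1.0, iters=1000):
    """(1+1) evolution strategy: no gradients, only function evaluations."""
    x, fx = x0, f(x0)
    for _ in range(iters):
        # Mutate the current candidate with Gaussian noise.
        cand = x + random.gauss(0.0, sigma)
        fc = f(cand)
        # Keep the mutant only if it is at least as fit.
        if fc <= fx:
            x, fx = cand, fc
    return x, fx

# A step function: piecewise constant, so its gradient is zero almost
# everywhere and gradient descent gets no signal -- the ES still descends.
step = lambda x: abs(int(x))
x_best, f_best = es_minimize(step, x0=10.0)
```

Accepting equal-fitness mutants lets the search drift across each flat plateau until it falls onto a lower one.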

RL and Genetic Algorithms combined

This flexibility also makes them easy to integrate with other methods. You will often see EAs being used in one of the optimization loops of a larger system. The paper “Population-Based Evolution Optimizes a Meta-Learning Objective” is an interesting read for those interested. I will be breaking it down soon, so make sure you’re connected with me to stay updated.

We can use EAs here. Since they are flexible, they will be able to traverse the landscape without issues.

Performance

Keep in mind that Deep Neural Networks take millions of times more resources.

Range means nothing if it is not backed by solid performance. And EAs can even outperform more expensive gradient-based methods. Take the fantastic One Pixel Attack paper (article coming soon). It is able to fool Deep Neural Networks trained to classify images by changing only one pixel in the image (see the figure above). The team uses Differential Evolution (DE) to optimize, since DE “Can attack more types of DNNs (e.g. networks that are not differentiable or when the gradient calculation is difficult).” And the results speak for themselves: “On Kaggle CIFAR-10 dataset, being able to launch non-targeted attacks by only modifying one pixel on three common deep neural network structures with 68.71%, 71.66% and 63.53% success rates.”
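The sketch below is not the One Pixel Attack itself, just an illustration of Differential Evolution on the Rastrigin function (a standard, highly multimodal benchmark I chose for the example) using SciPy’s built-in implementation. Note that no gradient of the objective is ever computed.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Rastrigin: many local minima, global minimum of 0 at the origin.
def rastrigin(x):
    x = np.asarray(x)
    return 10 * len(x) + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

# DE maintains a population of candidate vectors and combines them by
# mutation and crossover -- gradient-free, like the One Pixel Attack's DE.
result = differential_evolution(rastrigin,
                                bounds=[(-5.12, 5.12)] * 2,
                                seed=0)
```

`result.x` holds the best candidate found and `result.fun` its objective value; a plain gradient-based optimizer started at a random point would typically get trapped in one of the many local minima instead.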

Google AI Blog, Evolutionary AutoML

Google’s AI blog has an article called “AutoML-Zero: Evolving Code that Learns” that uses EAs to create ML algorithms. The results were impressive, with evolutionary methods even outperforming Reinforcement Learning. Following is a quote from the authors:

“The approach we propose, called AutoML-Zero, starts from empty programs and, using only basic mathematical operations as building blocks, applies evolutionary methods to automatically find the code for complete ML algorithms. Given small image classification problems, our method rediscovered fundamental ML techniques, such as 2-layer neural networks with backpropagation, linear regression and the like, which have been invented by researchers throughout the years. This result demonstrates the plausibility of automatically discovering more novel ML algorithms to address harder problems in the future.”

From the blog

Ease of Bootstrapping

Evolutionary Algorithms are generally pretty easy to write. Once you have enough domain knowledge to write the fitness evaluations and the initialization/recombination/mutation protocols, the implementation is straightforward. This makes them a good fit when you need a solution that is good enough (most use cases) and you don’t have the resources to fully transform the input into something more compatible with traditional ML.
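To make that concrete, here is a hypothetical example: a toy 0/1 knapsack problem where all of the domain knowledge lives in a short fitness function (infeasible solutions simply score zero) and a one-line mutation protocol, while the surrounding evolutionary loop is generic boilerplate. The weights, values, and hyperparameters are made up for illustration.

```python
import random

# Hypothetical domain problem: 0/1 knapsack with 5 items.
WEIGHTS = [12, 7, 11, 8, 9]
VALUES = [24, 13, 23, 15, 16]
CAPACITY = 26

def fitness(bits):
    """All the domain knowledge: total value, zero if over capacity."""
    weight = sum(w for w, b in zip(WEIGHTS, bits) if b)
    value = sum(v for v, b in zip(VALUES, bits) if b)
    return value if weight <= CAPACITY else 0

def mutate(bits, rate=0.2):
    """Domain-specific mutation: flip each item in/out of the knapsack."""
    return [b ^ 1 if random.random() < rate else b for b in bits]

# Generic evolutionary loop: initialize, select survivors, mutate.
pop = [[random.randint(0, 1) for _ in WEIGHTS] for _ in range(30)]
for _ in range(60):
    pop.sort(key=fitness, reverse=True)
    survivors = pop[:10]
    pop = survivors + [mutate(random.choice(survivors)) for _ in range(20)]
best = max(pop, key=fitness)
```

Swapping this toy problem for a real one means rewriting only `fitness` and `mutate`; the loop itself does not change.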

Hopefully, this article convinced you of some of the benefits of working Evolutionary Algorithms into your Machine Learning pipelines. I find that a lot of people overlook them, which is a shame because they can be so powerful.

If you liked this article, check out my other content. I post regularly on Medium, YouTube, Twitter, and Substack (all linked below). I focus on Artificial Intelligence, Machine Learning, Technology, and Software Development. If you’re preparing for coding interviews check out: Coding Interviews Made Simple.

For one-time support of my work following are my Venmo and Paypal. Any amount is appreciated and helps a lot:

Venmo: https://account.venmo.com/u/FNU-Devansh

Paypal: paypal.me/ISeeThings

Reach out to me

If this article got you interested in reaching out to me, then this section is for you. You can reach out to me on any of the platforms, or check out any of my other content. If you’d like to discuss tutoring, text me on LinkedIn, IG, or Twitter. If you’d like to support my work, use my free Robinhood referral link. We both get a free stock, and there is no risk to you. So not using it is just losing free money.

Check out my other articles on Medium: https://rb.gy/zn1aiu

My YouTube: https://rb.gy/88iwdd

Reach out to me on LinkedIn. Let’s connect: https://rb.gy/m5ok2y

My Instagram: https://rb.gy/gmvuy9

My Twitter: https://twitter.com/Machine01776819

My Substack: https://codinginterviewsmadesimple.substack.com/

Get a free stock on Robinhood: https://join.robinhood.com/fnud75
