The framework is simpler AND outperforms everything


Machine Learning is a diverse field with many different aspects. One of its chief concerns is the learning of visual representations (the features a model extracts from images). This has applications in all kinds of problems, ranging from Computer Vision and Object Detection to more futuristic applications like learning from and developing schematics. Recently, people have started to look into Contrastive Learning as an alternative to supervised and unsupervised learning. It involves having the model learn the general features of a dataset without labels, by teaching it which data points are similar and which are different. The focus thus shifts…
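To make the idea concrete, here is a minimal NumPy sketch of a contrastive (InfoNCE/NT-Xent-style) loss over paired embeddings. The toy data and the temperature value are illustrative placeholders, not taken from any particular paper:

```python
# A minimal sketch of the contrastive idea: pull paired ("similar")
# embeddings together, push all other pairings apart.
import numpy as np

def info_nce_loss(z_a, z_b, temperature=0.5):
    """InfoNCE-style loss for two batches of paired embeddings.

    z_a[i] and z_b[i] are embeddings of two views of the same data
    point (a positive pair); every other pairing is a negative.
    """
    # L2-normalize so the dot product is cosine similarity
    z_a = z_a / np.linalg.norm(z_a, axis=1, keepdims=True)
    z_b = z_b / np.linalg.norm(z_b, axis=1, keepdims=True)

    sim = z_a @ z_b.T / temperature  # (N, N) similarity matrix
    # For row i, the positive sits at (i, i); take a softmax over the row
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))

rng = np.random.default_rng(0)
anchor = rng.normal(size=(8, 32))
positive = anchor + 0.1 * rng.normal(size=(8, 32))  # "similar" views
print(info_nce_loss(anchor, positive))
```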


Hint: Did not bribe the hiring team

Software Engineering at one of the big firms is an extremely lucrative job. According to Glassdoor, a Level 3 (entry-level) Software Engineer at Facebook earns … (take a guess. Then take a sip of water and sit down, because the number is higher than what you were expecting)

The typical Facebook Software Engineer III salary is $120,261. (glassdoor.com)

That’s for entry-level. Even the interns at Facebook are said to earn around $8.5K a month with benefits. It’s only natural that these jobs are very prestigious and very competitive (a Google interview is 10 times harder to get than acceptance…


Robust Models, Stronger Performance, Less Data Needed

Recently my AI class went over Hill Climbing and different modifications to the protocol. We had the opportunity to implement several improvements and tweaks like stochastic hill climbing (we don’t always pick the best successor, just something better) and simultaneous hill climbing (we track k states at once and keep the k best successors at every step). What was most interesting to me, however, was random-restart hill climbing: once we reach a local maximum, we restart from a new random state. We keep track of the best performer, and once we finish (run out of…
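For the curious, here is a minimal Python sketch of random-restart hill climbing on a toy 1-D objective. The objective function, step size, and restart count are all illustrative assumptions, not anything from the class:

```python
# A minimal sketch of random-restart hill climbing on a bumpy
# 1-D function with many local maxima.
import random
from math import sin

def objective(x):
    # sin(5x) creates many local maxima; the -x^2 term keeps the
    # global maximum near x = 0
    return sin(5 * x) - x * x

def hill_climb(x, step=0.05, max_iters=1000):
    """Greedy ascent: move to a better neighbor until none exists."""
    for _ in range(max_iters):
        best = max([x - step, x + step], key=objective)
        if objective(best) <= objective(x):
            return x  # local maximum reached
        x = best
    return x

def random_restart_hill_climb(n_restarts=20):
    """Restart from random states; keep the best local maximum found."""
    best_x = None
    for _ in range(n_restarts):
        x = hill_climb(random.uniform(-3, 3))
        if best_x is None or objective(x) > objective(best_x):
            best_x = x
    return best_x

x = random_restart_hill_climb()
print(f"best x = {x:.3f}, f(x) = {objective(x):.3f}")
```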


Why am I doing what I am doing?

Recently, I picked up new research work in Deepfake Detection. I am interested in this field for a number of reasons, a major one being that there is a lot of potential for exploration here. Unlike fields like Customer Segmentation, Health System Analysis, or Parkinson’s Disease detection (my prior work experience), the rules of engagement are messy.

With the ever-changing nature of Deepfakes, we don’t have a truly representative dataset that one can just analyze.

People keep finding new ways to create Deepfakes, which throws off detection algorithms. The addition of more types of videos (especially as the world…


You mean that we can use the best of both worlds?

Recently I’ve been learning about Discriminative and Generative Modeling ahead of a breakdown of a very special paper, coming soon. Learning about these topics, I was fascinated by the nuances behind the two approaches and how they are implemented. As I learned more, I came across the fact that models now tend to combine the best of both worlds. In this article, I will go over some of these hybrid approaches and the problems they solve. By the end, you will hopefully have some knowledge of these approaches and may even choose to implement one of these…


This way of measuring Digital Strategy can be huge for investors

I am someone interested in both Investing and Machine Learning. Both fields have a rich complexity that, once understood, can be leveraged to drastically improve quality of life. The authors of the groundbreaking “Deep Learning Framework for Measuring the Digital Strategy of Companies from Earnings Calls” combine the two fields in an interesting way: they take the earnings calls of Fortune 500 companies and apply Natural Language Processing (NLP) with Deep Learning to classify each company’s strategy into various labels.

In this article, I will talk about the paper, breaking down some interesting points to note. I…
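To make the task concrete, here is a rough baseline sketch, not the authors’ actual model, that maps the text of a call to a strategy label using TF-IDF and logistic regression. The transcripts and labels below are hypothetical placeholders:

```python
# A toy earnings-call classifier: call text -> strategy label.
# The paper uses Deep Learning; this is only a simple stand-in baseline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

calls = [
    "We are investing heavily in our cloud platform and mobile apps.",
    "Our focus remains on expanding our traditional retail footprint.",
]
labels = ["digital", "non-digital"]  # hypothetical strategy labels

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(calls, labels)
print(clf.predict(["This quarter we doubled our e-commerce engineering team."]))
```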


Apparently, we can now detect viruses by converting them into images

As the world goes digital, cybersecurity becomes exceedingly important. Manually programming rules to catch viruses is slow and will never keep up with ever-changing malware. This is why the idea of representing malware binaries as images was groundbreaking. The procedure is relatively simple:

How to catch malware

This approach comes with several benefits. Accurate conversion to images allows us to use the various deep learning-based CNNs built for image classification. Visualizing the malware lets us spot patterns. And all of this can be done without having to run the malware. This is a big plus…
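As a rough illustration of the conversion step, here is a minimal Python sketch that reads a binary’s raw bytes and reshapes them into a grayscale image. The width of 256 is a common heuristic, not something prescribed by the article:

```python
# A minimal sketch: interpret each byte of a binary as one grayscale pixel.
import numpy as np
from PIL import Image

def binary_to_image(path, width=256):
    with open(path, "rb") as f:
        data = np.frombuffer(f.read(), dtype=np.uint8)
    height = len(data) // width            # drop the ragged final row
    pixels = data[: height * width].reshape(height, width)
    return Image.fromarray(pixels, mode="L")

# Hypothetical usage: binary_to_image("sample.exe").save("sample.png")
```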


The paper that shook costly Deep Learning Image Classifiers

Results across models. Stick around to understand the terms.

If you have been alive recently, you are likely aware of the Deep Learning hype. People are convinced that with enough computing power and data, DNNs will solve any problem. By this point, DNNs have established themselves as reliable image classifiers. This is why the paper “One Pixel Attack for Fooling Deep Neural Networks” by Jiawei Su et al. should not be ignored. By changing only one pixel, they are able to reliably fool costly DNNs across different architectures. The image above shows the results on various metrics across the different models. As we can see, the results…
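For a sense of how such an attack can be searched for, here is a rough sketch using differential evolution, the optimizer the paper employs, via SciPy’s implementation. The model and its predict_proba interface are assumed placeholders, and the image is assumed to be an (h, w, 3) uint8 NumPy array:

```python
# A minimal sketch of a one-pixel attack: search for a single pixel
# (x, y, r, g, b) that minimizes the model's confidence in the true label.
from scipy.optimize import differential_evolution

def one_pixel_attack(model, image, true_label):
    h, w, _ = image.shape

    def perturb(params):
        x, y, r, g, b = params
        adv = image.copy()
        adv[int(x), int(y)] = [r, g, b]  # overwrite exactly one pixel
        return adv

    def confidence(params):
        # Assumed interface: predict_proba(batch) -> (N, n_classes)
        return model.predict_proba(perturb(params)[None])[0, true_label]

    bounds = [(0, h - 1), (0, w - 1), (0, 255), (0, 255), (0, 255)]
    result = differential_evolution(confidence, bounds, maxiter=30, seed=0)
    return perturb(result.x)
```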


Will it be the key to a happy life? Will you need to buy crystals?

My parents love reading. This means that from a young age, I’ve lived in houses filled with books. Every time I come home for vacations, I like to scroll through their collections and pick up whatever I find interesting. This time, I picked up Ikigai: The Japanese Secret to a Long and Happy Life, by Hector Garcia and Albert Liebermann. I knew it was a famous book, and something a lot of people thought profound. So I thought I’d give it a whirl. Maybe I would find something in it that would change my life. Maybe it would validate my views…


What does this mean for their performance?

Neural Architecture Search (NAS) is being touted as one of Machine Learning’s big breakthroughs. It is a technique for automating the design of neural networks. As someone interested in automation and machine learning, I’ve been following it for a while. Recently a paper titled “Understanding the wiring evolution in differentiable neural architecture search” by Sirui Xie et al. caught my attention. It delves into the question of whether “neural architecture search methods discover wiring topology effectively”. This paper provides a framework for evaluating bias by proposing “a unified view on searching algorithms of existing frameworks, transferring the global…

Devansh

Data Analyst @Johns Hopkins University, Student, and Sports Enthusiast. 4 years of Machine Learning experience, including 1 commercialized patent.
