Why am I doing what I am doing

Recently, I picked up new research work in Deepfake Detection. I am interested in this field for a number of reasons, a major one being that there is a lot of potential for exploration here. Unlike fields like Customer Segmentation, Health System Analysis, or Parkinson’s Disease detection (my prior work experience), here the rules of engagement are messy.

With the ever-changing nature of Deepfakes, there is no truly representative dataset that I can just analyze.

People keep finding new ways to create Deepfakes, which throws off detection algorithms. The addition of more types of videos (especially as the world…


You mean that we can use the best of both worlds?

Recently I’ve been learning about Discriminative and Generative Modeling in preparation for a breakdown of a very special paper, coming soon. Learning about these topics, I was fascinated by the nuances behind the two approaches and how they are implemented. As I dug deeper, I found that models now tend to combine the best of both worlds. In this article, I will go over some of these hybrid approaches and the problems they solve. By the end, you will hopefully have some knowledge of these approaches and may even choose to implement one of these…
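Before the hybrids, a quick hedged illustration of the two baselines (my own toy example, not from the upcoming paper breakdown): a discriminative model learns p(y|x) directly, while a generative model learns p(x|y) and p(y) and uses Bayes' rule to classify. The data and model choices below are assumptions for illustration only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression  # discriminative: learns p(y|x) directly
from sklearn.naive_bayes import GaussianNB            # generative: learns p(x|y) and p(y)

rng = np.random.default_rng(0)
# Toy two-class data: two Gaussian blobs (purely illustrative).
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

for model in (LogisticRegression(), GaussianNB()):
    model.fit(X, y)
    # Both expose class probabilities, but they arrive at them very differently.
    print(type(model).__name__, model.predict_proba([[1.5, 1.5]]))
```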


This way of measuring Digital Strategy can be huge for investors

I am into Investing and Machine Learning. Both are fields of rich complexity that, once understood, can be leveraged to drastically improve quality of life. The authors of the groundbreaking “Deep Learning Framework for Measuring the Digital Strategy of Companies from Earnings Calls” combine the two fields in an interesting way: they take the earnings calls of Fortune 500 companies and apply Natural Language Processing (NLP) with Deep Learning to classify each company’s strategy into various labels.
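The paper itself uses a deep-learning pipeline; purely as a simplified stand-in (my own toy sketch, not the authors' method), the task shape is: transcript text in, strategy label out. The snippets and labels below are hypothetical.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical toy snippets standing in for earnings-call transcripts.
calls = [
    "We are investing heavily in our cloud platform and developer APIs.",
    "Our focus this quarter was reducing costs across our retail stores.",
]
labels = ["digital-first", "traditional"]

# Simplified stand-in for the paper's deep model: TF-IDF features + logistic regression.
classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(calls, labels)
print(classifier.predict(["We launched a new e-commerce app this year."]))
```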

In this article, I will talk about the paper, breaking down some interesting points to note. I…


Apparently, we can now detect viruses by converting them into images

As the world goes digital, cybersecurity becomes exceedingly important. Manually programming rules to catch viruses is slow and will never keep up with ever-changing malware. This is why the idea of representing malware binaries as images was groundbreaking. The conversion itself is relatively simple and works roughly as follows:

How to catch malware
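A minimal sketch of the idea (my own illustration of the general recipe, not the exact procedure from the paper): read the binary's raw bytes, treat each byte as a grayscale pixel value, and reshape the stream into a 2-D image. The fixed width and file names are assumptions.

```python
import numpy as np
from PIL import Image

def binary_to_grayscale(path, width=256):
    """Turn a binary file into a grayscale image: one byte becomes one pixel."""
    raw = np.fromfile(path, dtype=np.uint8)
    height = int(np.ceil(len(raw) / width))
    # Pad the byte stream so it fills a full rectangle, then reshape to 2-D.
    padded = np.zeros(width * height, dtype=np.uint8)
    padded[: len(raw)] = raw
    return Image.fromarray(padded.reshape(height, width), mode="L")

# Hypothetical usage: save the image for inspection, or feed the array to a CNN.
binary_to_grayscale("suspicious_sample.bin").save("suspicious_sample.png")
```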

This approach comes with several benefits. Accurate conversion to images lets us reuse the deep-learning-based CNNs built for image classification. Visualizing the malware lets us spot patterns. And all of this can be done without ever running the malware. This is a big plus…


The paper that shook costly Deep Learning Image Classifiers

Results across models. Stick around to understand the terms.

If you have been alive recently, you are likely aware of the Deep Learning Hype. People are convinced that with enough Computing Power and Data, DNNs will solve any problem. By this point, DNNs have established themselves as reliable Image Classifiers. This is why the paper “One Pixel Attack for Fooling Deep Neural Networks” by Jiawei Su et al. should not be ignored. By changing only one pixel, the authors are able to reliably fool expensive DNNs across different architectures. The figure above shows the results across the different architectures and metrics. As we can see, the results…
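For context before the full breakdown (my own rough sketch, not the authors' code): the paper searches over a single pixel's position and colour using differential evolution, and the check being repeated is simply whether overwriting one pixel moves the model's prediction off the true class. `predict_fn` below is a hypothetical classifier returning class probabilities.

```python
import numpy as np

def perturb_one_pixel(image, x, y, rgb):
    """Return a copy of the image with exactly one pixel overwritten."""
    candidate = image.copy()
    candidate[y, x] = rgb
    return candidate

def fools_model(predict_fn, image, true_label, x, y, rgb):
    """True if the single-pixel change moves the top prediction away from the true label."""
    probs = predict_fn(perturb_one_pixel(image, x, y, rgb))
    return int(np.argmax(probs)) != true_label
```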


Will it be the key to a happy life? Will you need to buy crystals?

My parents love reading. This means that from a young age, I’ve lived in houses filled with books. Every time I come home for vacations, I like to scroll through their collections and pick whatever I find interesting. This time, I picked up Ikigai: The Japanese secret to a long and happy life, by Hector Garcia and Albert Liebermann. I knew it was a famous book, and something a lot of people thought profound. So I thought I’d give it a whirl. Maybe I would find something in it that would change my life. Maybe it would validate my views…


What does this mean for their performance?

Neural Architecture Search (NAS) is being touted as one of Machine Learning’s big breakthroughs. It is a technique for automating the design of neural networks. As someone interested in automation and machine learning, I have been following it for a while. Recently, a paper titled “Understanding the wiring evolution in differentiable neural architecture search” by Sirui Xie et al. caught my attention. It delves into the question of whether “neural architecture search methods discover wiring topology effectively”. The paper provides a framework for evaluating bias by proposing “a unified view on searching algorithms of existing frameworks, transferring the global…


Why FB is the best Tech Stock. Period.

My Stock Performance September 11, 2020. Shoutout to the Federal Reserve

I write code and build tools. I have demonstrated my machine learning and math abilities through my work experience. I am not a financial guru. I read about the markets. I have money invested, but my approach is as simple as it comes: Buy and Hold solid companies. No charts, no hours spent staring at trends. It’s worked out pretty well. Now I shall integrate my tech background with my interest in investing to present two of Facebook’s verticals that are overlooked by traditional financial gurus, mostly out of ignorance. Read on to find out.

What are these secrets that everyone overlooks when analyzing Facebook?

Stock pickers will…


What is this sorcery?

I’ve been reading about different optimization techniques, and was introduced to Differential Evolution, a kind of evolutionary algorithm. It didn’t strike me as something revolutionary. And therein lies its greatest strength: it’s so simple. Because it never needs gradients, it can optimize a much larger class of functions than gradient-based methods such as Gradient Descent, including noisy, discontinuous, and non-differentiable ones.
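To make “so simple” concrete, here is a minimal from-scratch sketch of the classic DE/rand/1/bin loop (my own illustrative implementation, with arbitrary hyperparameters): mutate with scaled differences, recombine, and keep whichever vector scores better.

```python
import numpy as np

def differential_evolution(objective, bounds, pop_size=20, mutation=0.8,
                           crossover=0.7, generations=200, seed=0):
    """Minimal DE/rand/1/bin: mutation, binomial crossover, greedy selection."""
    rng = np.random.default_rng(seed)
    lower, upper = np.array(bounds, dtype=float).T
    dims = len(bounds)
    # Random initial population inside the bounds.
    pop = lower + rng.random((pop_size, dims)) * (upper - lower)
    scores = np.array([objective(ind) for ind in pop])
    for _ in range(generations):
        for i in range(pop_size):
            # Pick three distinct population members other than the current one.
            others = [j for j in range(pop_size) if j != i]
            a, b, c = pop[rng.choice(others, 3, replace=False)]
            # Mutation: perturb a with the scaled difference of b and c.
            mutant = np.clip(a + mutation * (b - c), lower, upper)
            # Binomial crossover: mix mutant and current vector, keep at least one mutant gene.
            mask = rng.random(dims) < crossover
            mask[rng.integers(dims)] = True
            trial = np.where(mask, mutant, pop[i])
            # Greedy selection: the better of trial and current survives.
            trial_score = objective(trial)
            if trial_score < scores[i]:
                pop[i], scores[i] = trial, trial_score
    best = int(np.argmin(scores))
    return pop[best], scores[best]

# Toy usage: minimize a simple quadratic bowl in 3 dimensions.
best_x, best_f = differential_evolution(lambda v: float(np.sum(v ** 2)),
                                        bounds=[(-5, 5)] * 3)
print(best_x, best_f)
```

Notice that the loop only ever asks for objective values, never gradients, which is exactly why it handles objectives that Gradient Descent cannot.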

In this article, I will break down what Differential Evolution is, along with its strengths and weaknesses. After reading, you will know the kinds of problems you can solve with it. As always, if you find this article useful, be sure to clap and share (it really helps). …


Two words, Timothy: Performance and Cost-Efficiency

In my last article (check it out here), I broke down how researchers at Google substantially outperformed current Image Classification systems while using exponentially fewer resources (spoiler: magic). The researchers mentioned using RandAugment to create input noise. Naturally, I was curious about this. And what I learnt was amazing. So read on to find out why RandAugment is the best in the game. Through this article, we will understand how the GOAT works. Presenting “RandAugment: Practical automated data augmentation with a reduced search space” by Cubuk et al.
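As a quick preview of the mechanism before the breakdown: RandAugment collapses the expensive augmentation-policy search down to two knobs, the number of transforms applied per image (N) and a shared magnitude (M). A minimal sketch using torchvision’s built-in RandAugment transform follows; the parameter values and image path are illustrative assumptions, not settings from the paper.

```python
from PIL import Image
from torchvision import transforms

# RandAugment's whole search space is two integers:
# num_ops (N): how many random transforms to apply per image,
# magnitude (M): how strongly to apply each one.
augment = transforms.Compose([
    transforms.RandAugment(num_ops=2, magnitude=9),  # illustrative values
    transforms.ToTensor(),
])

image = Image.open("example.jpg").convert("RGB")  # hypothetical input image
augmented = augment(image)
print(augmented.shape)  # e.g. torch.Size([3, H, W])
```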

Background Information

Because my content has crossed 1000 views, I thought I’d add…

Devansh

Data Analyst @Johns Hopkins University, Student, and Sports Enthusiast. 4 Years of Machine Learning, including 1 commercialized patent.
