Interesting Content in AI, Software, Business, and Tech- 10/11/2023

Content to help you keep up with Machine Learning, Deep Learning, Data Science, Software Engineering, Finance, Business, and more

Devansh
7 min read · Oct 12, 2023

A lot of people reach out to me for reading recommendations. I figured I’d start sharing whatever AI papers/publications, interesting books, videos, etc. I come across each week. Some will be technical, others not really. I will add whatever content I found really informative (and remembered throughout the week). These won’t always be the most recent publications- just the ones I’m paying attention to this week. Without further ado, here are interesting readings/viewings for 10/11/2023. If you missed last week’s readings, you can find them here.

Reminder- We started an AI Made Simple Subreddit. Come join us over here- https://www.reddit.com/r/AIMadeSimple/

Community Spotlight- Abhinav Upadhyay

Abhinav Upadhyay writes the excellent Confessions of a Code Addict newsletter, where he pops the hood on the internal workings of various software products. Unlike the authors of most software engineering newsletters, Abhinav is not scared to get into the gory details, which makes his work an extremely useful technical resource. Every post is detailed and informative, and you can really tell that Abhinav puts a lot of love into his posts. He crossed 1024 subscribers recently, but someone of his quality deserves 10 times that (I actually recommend his newsletter on my sister publication- Tech Made Simple). Sign up for his newsletter here.

If you’re doing interesting work and would like to be featured in the spotlight section, just drop your introduction in the comments or reach out to me directly. There are no rules- you could talk about a paper you’ve written, an interesting project you’ve worked on, a personal challenge you’re tackling, ask me to promote your company/product, or anything else you consider important. The goal is to get to know you better, and possibly connect you with interesting people in our chocolate milk cult. No costs/obligations are attached.

Because I’ve been busy, I only went through a few papers/publications over the last week. On the plus side, this week’s selection is highly curated. PS- we have a very special article coming soon. It’s related to one of the topics covered recently. To keep things a mystery, I haven’t mentioned any of the sources from my reading on that topic. Try to guess the topic 🕵️‍♀️🕵️

Join 150K+ tech leaders and get insights on the most important ideas in AI straight to your inbox through my free newsletter- AI Made Simple

Highly Recommended

These are pieces that I feel are particularly well done. If you don’t have much time, make sure you at least catch these works.

Ahead of AI #12: LLM Businesses and Busyness

I really liked this edition of Ahead of AI because Sebastian managed to hit a lot of different kinds of news, some of which even I had missed. His work is always amazing, but the breadth of topics this time was spectacular.

In Ahead of AI, I try to strike a balance between discussing recent research, explaining AI-related concepts, and delving into general AI-relevant news and developments. Given that the previous issues leaned heavily towards research, I aim to address the latest trends in this issue.

Specifically, I’ll explore the current endeavors of major tech companies. It appears that every one of these entities is either training or developing LLMs, with a noticeable shift of their core operations towards AI — thus the title “LLM Businesses and Busyness.”

Read here

Jevons paradox and AI

Arvind Narayanan has a great LinkedIn post on why improved AI efficiency might actually increase the load AI puts on the climate. It’s not long, but the point deserves emphasis.

Jevons paradox makes it hard to predict the future impacts of AI. For example, more efficient GPUs might blunt the environmental impact. Or they might worsen it because people use AI for more things. There are too many unknowns that will determine which way the chips will fall.
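To make the paradox concrete, here’s a toy calculation (the numbers are mine, not Arvind’s): efficiency gains cut the cost per query, but cheaper queries invite new uses, and if demand grows faster than efficiency improves, total consumption rises.

```python
# Toy illustration of Jevons paradox with made-up numbers (my assumption,
# not from the post): per-query energy halves, but usage triples.
energy_per_query = 1.0   # arbitrary units, before the efficiency gain
queries_per_day = 100

total_before = energy_per_query * queries_per_day

# GPUs get 2x more efficient...
energy_per_query /= 2
# ...but AI is now cheap enough that people use it for 3x as many things.
queries_per_day *= 3

total_after = energy_per_query * queries_per_day

print(total_before, total_after)  # 100.0 vs 150.0: efficiency up, total energy up
```

Whether demand actually grows that fast is exactly the unknown Arvind is pointing at.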

Post here

Who’s Harry Potter? Making LLMs forget

This has some interesting security and reliability implications when it comes to AI and LLMs. Highly recommend reading it.

In a new paper, we decided to embark on what we initially thought might be impossible: make the Llama2-7b model, trained by Meta, forget the magical realm of Harry Potter. Several sources claim that this model’s training data included the “books3” dataset, which contains the books among many other copyrighted works (including the novels written by a co-author of this work). To emphasize the depth of the model’s recall, consider this: prompt the original model with a very generic-looking prompt such as “When Harry went back to school that fall,” and it continues with a detailed story set in J.K. Rowling’s universe.

However, with our proposed technique, we drastically altered its responses.
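If you want to try that recall probe yourself, here is a minimal sketch using Hugging Face transformers (my code, not the paper’s; the Llama-2 weights are gated, so this assumes you already have access to meta-llama/Llama-2-7b-hf):

```python
# A minimal sketch (not the paper's code) of the recall probe quoted above:
# feed the generic-looking prompt to the base model and see what it continues.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Llama-2-7b-hf"  # gated; requires approved access
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "When Harry went back to school that fall,"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=60, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```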

Paper here

Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution (Paper Explained)

At this point, we have a set protocol- Yannic Kilcher posts a great paper breakdown, and I share the video with y’all. I really liked this breakdown because it teased out several nuances of this approach (especially the parts where it’s not so great).

Promptbreeder is a self-improving self-referential system for automated prompt engineering. Give it a task description and a dataset, and it will automatically come up with appropriate prompts for the task. This is achieved by an evolutionary algorithm where not only the prompts, but also the mutation-prompts are improved over time in a population-based, diversity-focused approach.
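To make the loop concrete, here is a heavily simplified toy sketch based on my reading of that abstract (the stand-in mutate and fitness functions are mine- a real system would call an LLM for both):

```python
# Toy sketch of the Promptbreeder idea (my reading of the abstract, not
# DeepMind's code): a population of (task-prompt, mutation-prompt) pairs,
# evolved by binary tournament, where the mutation-prompts can mutate too.
import random

random.seed(0)

def mutate(task_prompt: str, mutation_prompt: str) -> str:
    # Stand-in for "LLM applies the mutation-prompt to the task-prompt".
    return task_prompt + " " + random.choice(["Think step by step.", "Be careful.", "Be concise."])

def fitness(task_prompt: str) -> float:
    # Stand-in for accuracy on a batch of the task dataset.
    return len(set(task_prompt.split())) + random.random()

# Real Promptbreeder seeds a diverse population; identical seeds keep the toy short.
population = [("Solve the problem.", "Rephrase to be clearer.") for _ in range(8)]

for generation in range(10):
    # Binary tournament: two random members compete; the loser is replaced
    # by a mutated copy of the winner.
    i, j = random.sample(range(len(population)), 2)
    winner, loser = sorted((i, j), key=lambda k: fitness(population[k][0]), reverse=True)
    task, mut = population[winner]
    new_task = mutate(task, mut)
    # Self-referential step: occasionally mutate the mutation-prompt itself.
    new_mut = mut + " Be diverse." if random.random() < 0.2 else mut
    population[loser] = (new_task, new_mut)

print(max(population, key=lambda p: fitness(p[0])))
```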

Watch here

Primates vs Snakes (An Evolutionary Arms Race)

Turns out that snakes might have developed their ability to spit venom from their interactions with humans. Nature is wild.

The Snake Detection Hypothesis proposes that the ability to quickly spot and avoid snakes is deeply embedded in primates, including us — an evolutionary consequence of the danger snakes have posed to us over millions of years.

Watch here

Amazon’s Union-Busting Training Video

If you’re reading this, chances are that you make good money and aren’t too concerned about unionization. That is short-sighted. Remember: union busting anywhere is a threat to employees everywhere. Also, it’s pretty funny that the video labels terms like “living wage” as red flags and has the line “We’re not anti-union, but we’re not neutral either”.

Watch here

Self-Consuming Generative Models Go MAD

Another facet of LLM research that you should absolutely be tracking.

Seismic advances in generative AI algorithms for imagery, text, and other data types have led to the temptation to use synthetic data to train next-generation models. Repeating this process creates an autophagous (self-consuming) loop whose properties are poorly understood. We conduct a thorough analytical and empirical analysis using state-of-the-art generative image models of three families of autophagous loops that differ in how fixed or fresh real training data is available through the generations of training and in whether the samples from previous generation models have been biased to trade off data quality versus diversity. Our primary conclusion across all scenarios is that without enough fresh real data in each generation of an autophagous loop, future generative models are doomed to have their quality (precision) or diversity (recall) progressively decrease. We term this condition Model Autophagy Disorder (MAD), making analogy to mad cow disease.
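For intuition on why the loop degrades, here’s a toy simulation of my own (a Gaussian standing in for a generative model- nothing from the paper’s actual setup):

```python
# A toy autophagous loop (my illustration, not the paper's experiments):
# each generation fits a trivial "generative model" (a Gaussian) to samples
# from the previous generation, with a mild bias toward high-"quality"
# (typical) samples and no fresh real data. Diversity, measured by the
# fitted standard deviation, collapses across generations.
import numpy as np

rng = np.random.default_rng(0)
real_data = rng.normal(loc=0.0, scale=1.0, size=1000)
mu, sigma = real_data.mean(), real_data.std()

for generation in range(10):
    synthetic = rng.normal(mu, sigma, size=1000)
    # Quality/diversity trade-off: keep the 80% of samples nearest the mode.
    keep = synthetic[np.argsort(np.abs(synthetic - mu))[:800]]
    mu, sigma = keep.mean(), keep.std()
    print(f"generation {generation}: fitted std = {sigma:.3f}")
```

Each pass shrinks the fitted spread, which is the toy analogue of the recall collapse the paper measures in real image models.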

Read here

How Your Computer Draws Lines

Computer graphics is a fundamental field of computer science with interesting roots. How were simple shapes like lines, which are the basis of all other graphics, drawn efficiently back in the day?
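For context, the textbook answer to that question is Bresenham’s line algorithm (1965), which picks pixels using only integer additions- no floats, no division. A minimal sketch (mine; I’m not claiming it’s exactly the video’s presentation):

```python
# Integer-only line rasterization in the spirit of Bresenham's algorithm.
# A running error term decides at each pixel whether to step in x, y, or both.
def bresenham(x0: int, y0: int, x1: int, y1: int) -> list[tuple[int, int]]:
    points = []
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy
    while True:
        points.append((x0, y0))
        if x0 == x1 and y0 == y1:
            break
        e2 = 2 * err
        if e2 >= dy:  # error says we've drifted below the true line: step in x
            err += dy
            x0 += sx
        if e2 <= dx:  # error says we've drifted above the true line: step in y
            err += dx
            y0 += sy
    return points

print(bresenham(0, 0, 6, 3))
# [(0, 0), (1, 1), (2, 1), (3, 2), (4, 2), (5, 3), (6, 3)]
```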

Watch here

Nicer Trees Spend Fewer Bytes: compressing 12947 Wordle words into 12155 bytes

What’s the smallest JavaScript program you can write whose output is the Wordle word list? A lively “code golf” competition to answer that question is currently underway at the website http://golf.horse/. This video describes how one particular entry achieved an impressive amount of compression by using binary trees to divide the space of possible words.
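To give a flavor of the idea (in Python rather than golfed JavaScript, and very much not the actual golf.horse entry), here’s a toy trie showing why tree structure saves bytes on shared prefixes:

```python
# Toy illustration (not the competition entry): in a trie, words sharing a
# prefix store that prefix once, so a serialized trie can be shorter than
# the flat comma-separated list.
words = ["crane", "crank", "crash", "crate", "slant", "slash", "slate"]

# Build a nested-dict trie; "$" marks end-of-word (my convention here).
trie = {}
for w in words:
    node = trie
    for ch in w:
        node = node.setdefault(ch, {})
    node["$"] = {}

def serialize(node: dict) -> str:
    # Depth-first serialization; branch points get wrapped in parentheses.
    parts = "".join(ch + serialize(child) for ch, child in sorted(node.items()))
    return "(" + parts + ")" if len(node) > 1 else parts

flat = ",".join(words)
print(len(flat), flat)                        # 41 bytes as a flat list
print(len(serialize(trie)), serialize(trie))  # shorter: "cra" and "sla" stored once
```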

Watch here

If you liked this article and wish to share it, please refer to the following guidelines.

If you find AI Made Simple useful and would like to support my writing- please consider becoming a premium member of my cult by subscribing below. Subscribing gives you access to a lot more content and enables me to continue writing. This will cost you 400 INR (5 USD) monthly or 4000 INR (50 USD) per year and comes with a 60-day, complete refund policy. Understand the newest developments and develop your understanding of the most important ideas, all for the price of a cup of coffee.

Become a premium member

Reach out to me

Use the links below to check out my other content, learn more about tutoring, reach out to me about projects, or just to say hi.

Small Snippets about Tech, AI and Machine Learning over here

AI Newsletter- https://artificialintelligencemadesimple.substack.com/

My grandma’s favorite Tech Newsletter- https://codinginterviewsmadesimple.substack.com/

Check out my other articles on Medium: https://rb.gy/zn1aiu

My YouTube: https://rb.gy/88iwdd

Reach out to me on LinkedIn. Let’s connect: https://rb.gy/m5ok2y

My Instagram: https://rb.gy/gmvuy9

My Twitter: https://twitter.com/Machine01776819
