Google’s High-Performance Computing Expert shares his thoughts on how to use AI

Partnering with AI to reimagine problem-solving: the ‘intelligence’ that is artificial may be our own

13 min read · Feb 15, 2024

Barak Epstein has been a senior technology leader for over a decade. He has led efforts in Cloud Computing and Infrastructure at Dell and now at Google. Currently, he is leading efforts to leverage the ambitious DAOS open-source project for Google Cloud’s High-Performance Computing initiatives. Barak and I have had several interesting conversations about infrastructure, strategy, and how investments in large-scale computing can introduce new paradigms for next-gen AI (instead of just enabling more of the same, which has been the current approach). This piece summarizes some of our conversations around strategy, around how what we choose to solve signals our own priorities and those of the organizations we work in, and around navigating changing human-computer interaction dynamics.

Disclosure: I am currently in conversations with two members of the DAOS Foundation (Intel and HPE) exploring partnerships to speed up open-source adoption of DAOS. This post has not been sponsored by anyone and has very little to do with those partnerships, but given Barak’s role in the DAOS community, I wanted to disclose that relationship.

I work in tech and live in Brooklyn. I help shape cloud storage products that support AI/ML use cases (among others) and have extensively studied data science as a professional, but I studied history in college, have a master’s in education (before my MBA), and read philosophy in my free time. My digital life and working hours are tied in with Silicon Valley, AI researchers, and, generally, techno-optimists. During weekends and evenings, and at parties, though, I am more likely to hear friends give voice to dyspepsia about the pace of development and (lack of) governance of AI and other rapidly advancing technologies.

As such, I’ve spent a lot of time thinking about how to engage with technology generally, and AI specifically, in a way that is dynamic, safeguards humanistic values, and leads to a sense of well-being and productive partnership.

The article below is a first attempt to weave together the human-centric values of the traditional, East Coast cultural elite with the technocentrism (often technophilia) of the Silicon Valley Tribe. I hope that my day-to-day experience in the tech industry grounds this piece enough to make it interesting to tech professionals and AI experts, and that my background in “Letters” will make this piece interesting to the ‘tech-anxious’.

Join 150K+ tech leaders and get insights on the most important ideas in AI straight to your inbox through my free newsletter, AI Made Simple.

Ultimately, we will need a more fluid and generative way to think about how human and machine intelligence interact and inform each other; and a more courageous way to think about how our identities and goals are shaped by tech, and shape it.

“The Future We Simulate is the One We Create” grabbed my attention, partly because the headline suggests that the problems we choose to investigate shape not only our computing investments but also who we become. In my mind’s eye, I draw a straight line from early human cave paintings to computer simulation: hasn’t the attempt to simulate our world been an ongoing obsession since, at least, the Cognitive Revolution?

More concretely, the article argues for increased investment in High-Performance Computing (HPC), positing that major advances in cancer and climate research, weather prediction, and nuclear fusion are worth the investment and the risk. The author knowledgeably and practically includes nuclear bomb simulation as one focal point for expanded HPC investment, understanding that past HPC investment was disproportionately motivated by this target, and that governments are likely to be motivated by a similar focus in the future (for those to whom this causes moral qualms, please consider that such simulations helped to replace real-world nuclear testing).

The author states, “HPC is a bit like machine learning back in the 1980s, when all of the groundwork was laid for success in the 2010s and beyond.”

This argument, that HPC development is in its infancy, is transparently attractive to those of us whose careers are invested in this domain, and it may also be true. As noted, though, the title of the piece suggests a deeper philosophical argument than the article explicitly presents: the simulations we choose to invest in not only improve our odds of solving problems such as those identified above, but also help define who we are as humans, at least on a generational time scale. By investing deeply in addressing climate change or cancer or nuclear bomb simulation, we state to ourselves, even before any particular problem is solved, “Yes! These are the problems worth solving!”

This is all to say: humans are remarkable not only because we sometimes solve grand problems, but also (even more so?) because we select the problems we wish to solve. This perspective is often overlooked in the breathless conversations ongoing in HPC’s (now) ‘sister field’ of Artificial Intelligence. Fears (and promises) of Artificial General Intelligence generally(!) ignore the question of how the goals of any calculation are chosen.

We are not talking about the famous AI Paperclip Problem, since that thought experiment problematizes the methods that an AI may choose to pursue a human-determined goal. We are talking instead about how the goals of an AI are selected. If humans select the goal, then we are still ‘in the loop’ and the AI is not, in fact, acting Generally. AGI proponents and opponents are then missing the critical question of “What should the AI be used for?” — in other words, “The AI We Select is the Future We Create.”

Our interaction with AI models has accordingly changed us already, and will continue to do so. For example, our collective sense of what a conversation is was upended in late 2022/early 2023 and our sense of how decisions are made (i.e. through a hybrid carbon-silicon substrate) will be the next to change. Those interested in preserving a continuous sense of human purpose would be best served by engaging in the accelerating vortex of human-computer interaction and identifying those critical points at which humans can determine and/or influence the goals of the AIs that are destined to become our partners. Yes, we should be (very) concerned about AIs that run amok, but the more persistent (and more fruitful) challenge to take on has to do with governing the goals of AIs.

A focus on our human agency over the ‘goal-setting’ challenge would, to take one specific example, encourage more careful thinking about how the low-fidelity, “parametric identification” methods of AI could be married with the higher-fidelity, outcome-centric approach of HPC to deliver more useful output. Such a combination of low-precision and high-precision methods has been demonstrated where LLMs have been integrated with math libraries to better leverage the strengths of each tool. Human judgment will continue to be relevant as we strike a balance between methods with varying strengths and inevitably incorporate normative evaluations about which types of output are useful and meaningful.
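A minimal sketch of this low-precision/high-precision pairing (the proposal function below is a crude stand-in for an LLM, not any real integration): a cheap, approximate guess is handed off to an exact numerical method for refinement.

```python
# Sketch: a low-fidelity "proposal" step followed by a high-fidelity
# refinement step. In the scenario above, the proposal would come from
# an LLM; here it is a deliberately crude heuristic.

def propose_root_guess(x: float) -> float:
    """Low-fidelity proposal: a rough, cheap estimate of sqrt(x)."""
    return x / 2 if x > 1 else x  # deliberately imprecise

def refine(x: float, guess: float, iters: int = 20) -> float:
    """High-fidelity refinement: Newton's method for sqrt(x)."""
    for _ in range(iters):
        guess = 0.5 * (guess + x / guess)
    return guess

rough = propose_root_guess(2.0)   # 1.0, off by roughly 30%
precise = refine(2.0, rough)      # converges to sqrt(2)
print(f"proposal={rough:.3f}, refined={precise:.12f}")
```

The division of labor mirrors the argument in the text: the imprecise step supplies a starting point cheaply, and the precise step does the heavy, rule-bound work.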

Let’s focus more deeply on two of the broader ideas from the section above, before returning to more grounded examples:

Idea 1: Humans “select the problems we wish to solve.”

  • Translated into AI Techspeak, we would say that “humans select the objective function that the model must optimize for.”

Idea 2: “Our interaction with AI models has changed us, and will continue to do so.”

  • The objective functions we choose are, in turn, influenced by the tools we have at our disposal.
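As a toy sketch of Idea 1 (the data here is made up for illustration): the same data, handed to two different human-chosen objective functions, yields two different “best” answers. The optimizer does not decide what counts as good; the chosen objective does.

```python
import numpy as np

# The same data under two human-chosen objective functions.
data = np.array([1.0, 2.0, 3.0, 100.0])  # note the outlier

# Objective 1: minimize mean squared error -> optimum is the mean.
mse_optimum = data.mean()      # 26.5, dragged toward the outlier

# Objective 2: minimize mean absolute error -> optimum is the median.
mae_optimum = np.median(data)  # 2.5, robust to the outlier

print(f"MSE-optimal summary: {mse_optimum}")
print(f"MAE-optimal summary: {mae_optimum}")
```

Which answer is “right” is not a mathematical question; it is a judgment about whether the outlier should count, which is exactly the kind of choice that stays with the human.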

To be more concrete: before my high-school-aged daughter had the use of an AI Chat tool, her objective function was to “write a good essay.” Now that she has an AI Chat tool, her objective function is to “use the AI Chat tool without getting into trouble.” As a parent, my objective function used to be “make sure she writes a good essay”; it is now “make sure she reads the output of the AI Chat, understands it, edits it, and uses it to build her understanding,” and also to ensure that she “follows the rules of the school in letter and in spirit.” The AI Chat tool has forced me to think more deeply about what I want her to learn and what I think she will need in order to survive in the future. The use of AI Chat will, in her specific case, be part of the scaffolding for her to become a better writer, but I must also acknowledge that the world and its tools have changed, and that the skills necessary for survival and happiness change with them.

The philosopher José Ortega y Gasset (Man the Technician, p. 92) contrasts humans with animals and expounds on how humans reshape and determine their goals according to the tools available in their environment.

If, for lack of fire or a cave, he is unable to perform the act of warming himself . . . man mobilizes a second line of activities . . . he lights a fire . . . the animal, when it cannot satisfy its vital needs — when there is neither fire nor a cave, for example — does nothing about it and lets itself die. Man, on the other hand, comes forward with a new type of activity; he produces what he does not find in nature . . . Thus he lights a fire . . . Be it well noted: making a fire is an act very different from keeping warm.

In this sense, the development of the “AI Tool” falls into accord with millions of years of human history, as well as with Ortega y Gasset’s discourse: tool-building activities have (increasingly) often displaced direct reward-collection activities. But AI is more revolutionary than most other New Tools, since it intervenes, so far as we can tell, in the goal-setting and meaning-making that Ortega y Gasset defines as fundamentally human.

The fact that AI participates in our decision process ‘so far as we can tell’ is what makes it appear intelligent. In that sense, it doesn’t matter whether we say that AI is intelligent or that it just appears intelligent. Once AI appears intelligent, it becomes part of our thought process.

One of my favorite (ok, my favorite) Digital Age Philosophers, Venkatesh Rao, quotes the well-known (in some circles) saying that “computers are ‘rocks we tricked into thinking with lightning.’ ” He extends the thought:

While lithography is a more complex transformation process than simple cooling, there’s a deep thought lurking there. “Tricking” rocks with suspiciously simple physical/chemical processes and structural patterns (compared to CPUs, GPUs and AI accelerator chips have remarkably simple physical layouts; more like crystal structure patterns than complicated machinery) doesn’t seem like it should be enough to spark “thinking” but apparently it is . . .

So in 2023, we discovered that “intelligence,” far from being the culmination of an evolutionary ascent construed in linear terms, is simply a natural phenomenon that can emerge in more than one way through relatively simple transformations of matter . . .

To me, it is actually kinda exciting that “intelligence” appears to be a latent property of data [emph. mine], which can be transformed into an explicit property, rather than an attribute of a processing technology.

The takeaway is that AI is not only competing with our intelligence or participating in our process of thought. More deeply, it is upsetting our sense of what intelligence is.


  • being human is (in part) about choosing our own objective functions → we must think smarter
  • AI (now and increasingly) participates in that choice, that thinking → we must think in partnership with AI
  • AI upsets our foundational assumptions about what thinking is → we must have a sense of self that does not depend on our intelligence

The last point is the most challenging, but it’s also the one that opens up the most creativity in our relationship to AI and to nature and experience more broadly. By choosing the future we wish to simulate, we are not just making a functional choice, but a spiritual choice, about who we wish to be as we partner with AI in World Design. That choice is fraught but also expansive.

The perspectives above will not only help us keep our sanity and sense of purpose as we interact with AI; they will also help us think more flexibly about how to manipulate, intertwine, and extend AI models. I addressed this opportunity lightly above, but here is a deeper look at some ‘meta-AI’ problems that a less-technical person could help us all think about:

Combining low-precision and high-precision methods to solve problems

Google DeepMind’s AlphaGeometry recently demonstrated the great strides that AIs have taken in solving problems from the Mathematical Olympiad. From SingularityHub:

In a way, solving geometry problems is a bit like playing chess. Given some rules — called theorems and proofs — there’s a limited number of solutions to each step, but finding which one makes sense relies on flexible reasoning conforming to stringent mathematical rules.

In other words, tackling geometry requires both creativity and structure. While humans develop these mental acrobatic skills through years of practice, AI has always struggled.

AlphaGeometry cleverly combines both features into a single system. It has two main components: A rule-bound logical model that attempts to find an answer, and a large language model to generate out-of-the-box ideas. If the AI fails to find a solution based on logical reasoning alone, the language model kicks in to provide new angles. The result is an AI with both creativity and reasoning skills that can explain its solution.

Conceptually, the idea of combining language-based reasoning and mathematical reasoning is also represented by the Wolfram plugin for ChatGPT. On the hardware side of things, HPC engineers are leveraging their human judgment to optimally apply various levels of operational precision to different parts of their calculations.
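To make the precision point concrete, here is a small illustration (not any specific HPC code) of how half precision loses information that double precision retains:

```python
import numpy as np

# Sequentially accumulating many small values in half precision
# (float16) stalls once the running total grows so large that the
# next addend is smaller than half the floating-point spacing at
# that magnitude. Double precision (float64) has no such problem here.
sum64 = np.float64(0.0)
sum16 = np.float16(0.0)
for _ in range(10_000):
    sum64 = np.float64(sum64 + np.float64(0.1))
    sum16 = np.float16(sum16 + np.float16(0.1))

print(f"float64 sum: {float(sum64):.4f}")  # very close to 1000
print(f"float16 sum: {float(sum16):.4f}")  # stalls far below 1000
```

This is why engineers hand-assign precision levels per calculation stage: the cheap format is fine for some steps and silently wrong for others, and deciding which is which is a human judgment call.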

Developing greater and greater levels of abstraction in learning methods

This article is about how AI models can develop the “systematic generalization” that characterizes human learning. In the process of thinking about this problem, you may also gain insight into how your mind works. In other words, developing AI is not just an opportunity to build new tools. It’s also an opportunity to think more deeply about what makes us human, and how our brains work:

Lake & Baroni wanted to teach neural networks to solve a more general task: performing systematic generalization from just a few examples on tasks generated from different grammars.

To automatically generate tasks from different grammars, Lake & Baroni needed an automatic way to generate different grammars — namely, a “meta-grammar.” The meta-grammar had simple rules for generating grammars [detail follows].
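As a toy illustration only (this is not Lake & Baroni’s actual setup; the words and meanings are invented), a “meta-grammar” can be sketched as a sampler over mini-grammars, each mapping made-up words to primitive meanings plus one modifier word:

```python
import random

PRIMITIVES = ["RED", "BLUE", "GREEN"]
WORDS = ["dax", "wif", "lug", "zup", "fep", "blicket"]

def sample_grammar(rng: random.Random) -> dict:
    """Meta-grammar step: sample one mini-grammar (a word->meaning map)."""
    words = rng.sample(WORDS, k=len(PRIMITIVES) + 1)
    grammar = dict(zip(words[:-1], PRIMITIVES))
    grammar[words[-1]] = "TWICE"  # the sampled modifier word
    return grammar

def interpret(sentence: str, grammar: dict) -> list:
    """Interpret a sentence under one sampled grammar."""
    out = []
    for word in sentence.split():
        meaning = grammar[word]
        if meaning == "TWICE" and out:
            out.append(out[-1])   # modifier: repeat the last primitive
        elif meaning != "TWICE":
            out.append(meaning)
    return out

grammar = sample_grammar(random.Random(0))
print(grammar)
```

Each call to `sample_grammar` produces a fresh task with the same underlying structure, which is the essence of generating many tasks from one set of meta-rules.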

How much are we interested in solving problems vs just “looking in the mirror”?

The author of the same article on meta-learning asked why AI models were specifically being trained to make mistakes in patterns similar to those made by humans:

One thing that confused me in this paper was the explicit training to make the system act more “human-like.” As I described above, after cataloging the frequency and kinds of errors made by humans on these tasks, Lake & Baroni trained their network explicitly on examples having the same frequency and kinds of errors.

This last observation points to an interesting aspect of our research in, and discussion of, AI. We aren’t always interested in how AI can help us be “Faster, Higher, Stronger”; sometimes we are just obsessively interested in how (and whether) it can be “more like us.”

The pursuit of Artificial Intelligence (as well as the fear of it) is often motivated by fascination and/or discomfort with our ability to make “carbon” copies of ourselves.

Ultimately, the invitation here is to think more deeply about what AI is, how to use it, and how we might partner with it. Non-mathy and non-computery types should not be overwhelmed; rather, they should engage more deeply, seeking out the subtle junctures at which their reflection and insight can help guide us and our Machines of Growing Capability. There is so much about this interaction that is non-obvious, open-ended, and dynamic. It turns out that there is one constant in this whirling and refractory universe: the greatest thing to fear is fear itself.

If you liked this article and wish to share it, please refer to the following guidelines.

That is it for this piece. I appreciate your time. As always, if you’re interested in working with me or checking out my other work, my links will be at the end of this email/post. And if you found value in this write-up, I would appreciate you sharing it with more people. It is word-of-mouth referrals like yours that help me grow.

If you find AI Made Simple useful and would like to support my writing, please consider becoming a premium member of my cult by subscribing below. Subscribing gives you access to a lot more content and enables me to continue writing. This will cost you 400 INR (5 USD) monthly or 4000 INR (50 USD) per year and comes with a 60-day, complete refund policy. Understand the newest developments and develop your understanding of the most important ideas, all for the price of a cup of coffee.

Become a premium member

I regularly share mini-updates on what I read on the microblogging sites X, Threads, and TikTok, so follow me there if you’re interested in keeping up with my learnings.




Writing about AI, Math, the Tech Industry and whatever else interests me. Join my cult to gain inner peace and to support my crippling chocolate milk addiction