Hi there 👋 Welcome to the first issue of our AI for Science newsletter. In the noise of the internet, our goal is to provide you with an insightful signal into AI and emerging technologies that are reshaping scientific discovery. Ready to dive in? Let’s go.
Last month, Dario Amodei, CEO of the major AI lab Anthropic, penned an in-depth post on the upsides of continued AI development. An early member of the OpenAI team, Dario co-founded Anthropic with his sister Daniela in part to more seriously research AI safety and alignment. His post – Dario explains – serves as a counterweight to the risk-focused conversations he and Anthropic prioritize.
The first area Dario called out? Biology and health. Amodei takes pains to distinguish between AI models that merely conduct better data analysis and AI scientists that perform all of the tasks biologists do – including controlling labs or giving instructions to scientists – much in the same way a PI might direct their graduate students.
Amodei – once a postdoc at Stanford University School of Medicine – writes that much of our progress in biology has come from a handful of fundamental breakthroughs, such as CRISPR for gene editing, genomic sequencing and synthesis, optogenetic techniques for firing neurons, and mRNA vaccines. “There’s perhaps ~1 of these major discoveries per year and collectively they arguably drive >50% of progress in biology.”
Further, Dario writes that many of these discoveries didn’t hinge on new technology; rather, they were waiting in the wings, until a scientist connected insights from various fields. An AI biologist, he claims, would be effective at precisely these sorts of discoveries, contributing a foundational set of technologies upon which the rest of modern life science could accelerate its work.
Why This Matters
Major AI labs such as OpenAI and Anthropic have gone all-in on creating productized enterprise platforms that provide end-user capabilities in areas such as marketing copy, market analysis, and integrated in-app chatbots. The strategy, in some ways, makes sense. These models – trained on large text-based datasets – lend themselves more easily to these applications, which are in turn easier to package and scale for revenue growth – important considerations when AI talent and compute are exceedingly expensive.
That said, it's also clear that scientific discovery lies close to the hearts of their chief executives. When asked what insight he would be most excited for an AGI system to provide, OpenAI’s CEO shared: a grand unified theory of physics. In their most recent “o1” model launch, OpenAI showcased the way their model helped Catherine Brownstein, a geneticist, conduct her research more effectively. Last year, news organizations claimed that a major reason for Amazon’s $4B investment in Anthropic was to strengthen Bedrock, Amazon’s generative AI platform, for applications such as drug development.
Google’s DeepMind has been playing in this field for over a decade, recently winning the Nobel Prize in Chemistry for its protein-folding prediction model AlphaFold, and even spinning out Isomorphic Labs, a startup focused on drug discovery. Could this post from Amodei, along with recent science-focused marketing clips from OpenAI, signal an increasing focus on joining Google DeepMind in developing and productizing models to accelerate scientific discovery? Would these major AI labs become partners or competitors to existing players, especially in the biotech sector, which has been applying AI to the life sciences for the better part of a decade? We’re eager to watch where this goes.
Roei Herzig and team share RoboPrompt, a framework that enables off-the-shelf text-only LLMs to directly predict robot actions. #autonomouslabs
Insilico Medicine – two years into their partnership with Sanofi – announces an AI-facilitated lead with first-in-class (FIC) potential against an undruggable transcription factor target in oncology.
Siemens augments its mechanical and electromagnetic simulation capabilities by purchasing US industrial software maker Altair for $10B.
A Quanta Magazine article discusses an exciting proposal to detect gravitons – hypothetical particles thought to carry the force of gravity. #physics
MIT’s Schwarzman College of Computing announces the Tayebati Postdoctoral Fellowship to support researchers accelerating AI for Materials Science and Engineering, among other science domains.
The Meta FAIR team is seeking research interns for their AI for Chemistry work.
The NYU Center for Data Science (CDS) seeks applicants for a Faculty Fellow position focused on working at the boundaries between data science and domain sciences such as biology, physics, and chemistry.
SF Bay Area, November 14: FutureHouse, the autonomous science nonprofit, hosts experts from Berkeley's Autonomous Labs of the Future among others for a talk, demo and pitch night. Tickets here.
SF Bay Area, November 20: The MIT Club of Northern California hosts experts from NVIDIA, Toyota, the DOE, and UC Berkeley to explore recent efforts in AI for Autonomous Science. Tickets here.
Online (Zoom), November 22: The NSF’s Institute for AI & Fundamental Interactions hosts Yuan-Sen Ting, Associate Professor of Astrophysics at Ohio State, for a public colloquium on Expediting Astronomical Discovery with Large Language Models. Watch live here.
What happens when physicist Philip Moriarty of the University of Nottingham takes an early version of ChatGPT out for a spin? Lots of quirks and features. Take a look:
Find this newsletter useful? Subscribe for regular insights, and share it with friends who are passionate about AI for Science!
RLHF! Like a good neural network, this newsletter is only as effective as the data it’s trained on. We’d love your feedback—drop us a note on how we can make it better for you.
See you next week. –Nabil 🙏🙌