Artificial scientists
AI companies frequently invoke the possibility of AI-enabled scientific discovery as a justification for their existence: If the technology eventually cures cancer and solves climate change, then all the carbon emissions and slop videos will have been well worth it.
Already, LLMs can assist scientists in all sorts of ways. They can point people to relevant studies in the literature, draft journal articles, and, of course, write code. But AI companies and academic researchers alike have a much more ambitious vision for AI co-scientists. They want to develop systems that can act as a full member of a scientific team or, even more ambitiously, initiate and carry out research projects with limited human guidance.
Google DeepMind has invested heavily in scientific AI for years, and the bet paid off in 2024, when Demis Hassabis, the company’s CEO, and John Jumper, one of its directors, won the Nobel Prize in chemistry for AlphaFold, a specialized system that predicts the three-dimensional structure of proteins.
Now its competitors are working to catch up. In October 2025, OpenAI launched a team devoted to AI for science, and Anthropic announced several Claude features geared toward the biological sciences around the same time. OpenAI in particular has called building an autonomous researcher its “North Star,” and it just announced GPT‑Rosalind, the first in a planned series of specialized scientific models. Google, meanwhile, released its own AI co-scientist tool last February.
Under the hood, many of these AI-for-science systems are in fact multiple specialized AI agents working in concert. Google’s co-scientist uses a supervisor agent, a generation agent, and a ranking agent, among several others, to generate potential hypotheses and research plans in response to a goal provided by a human scientist. More recently, researchers at Stanford’s AI for Science Lab, led by James Zou, devised a “virtual lab” made up of agents that took on the roles of specialists in different scientific fields. They found that their system could design new antibody fragments that bind to SARS-CoV-2, the virus that causes covid.
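Google has described its co-scientist’s ranking agent as running an Elo-style tournament, pitting candidate hypotheses against one another in pairwise comparisons. A minimal Python sketch of that supervisor/generation/ranking pattern might look like the following; each agent is stubbed out where a real system would call an LLM, and the goal, prompts, and scoring constants are illustrative, not Google’s:

```python
import random
from dataclasses import dataclass

# Minimal sketch of the supervisor/generation/ranking pattern.
# The agent bodies are hypothetical stand-ins; a real system would
# route each step through an LLM with its own prompt and tools.

@dataclass
class Hypothesis:
    text: str
    elo: float = 1200.0  # tournament-style rating maintained by the ranking agent

def generation_agent(goal: str, n: int = 4) -> list[Hypothesis]:
    # Placeholder: a real generation agent would prompt an LLM with the
    # research goal plus retrieved literature and parse out hypotheses.
    return [Hypothesis(f"Candidate hypothesis {i} for: {goal}") for i in range(n)]

def ranking_agent(a: Hypothesis, b: Hypothesis) -> None:
    # Placeholder for a pairwise LLM "debate" judging which hypothesis
    # better serves the goal; random here just to keep the sketch runnable.
    winner, loser = (a, b) if random.random() < 0.5 else (b, a)
    winner.elo += 16
    loser.elo -= 16

def supervisor(goal: str, rounds: int = 10) -> list[Hypothesis]:
    # The supervisor orchestrates the other agents: generate candidates,
    # then run a tournament of pairwise comparisons to surface the best.
    pool = generation_agent(goal)
    for _ in range(rounds):
        a, b = random.sample(pool, 2)
        ranking_agent(a, b)
    return sorted(pool, key=lambda h: h.elo, reverse=True)

if __name__ == "__main__":
    for h in supervisor("reduce off-target effects in CRISPR base editors"):
        print(f"{h.elo:7.1f}  {h.text}")
```

In a production system, each stub would be its own model call with dedicated prompts, tools, and memory; only the orchestration skeleton would look this simple.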
Unlike human scientists, however, those teams of agents can’t yet go out and test their ideas in the lab. To overcome that limitation, some researchers are plugging LLMs into experiment-running robots. In February, OpenAI announced that it had connected GPT-5 directly with automated biological laboratories built by the company Ginkgo Bioworks so that the AI system could iteratively propose experiments and interpret the results with limited human involvement. This approach allowed the system to run a gargantuan number of experiments and create a recipe that reduced the cost of synthesizing a particular protein by 40%.
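Neither company has published the interface between the model and the robots, but the closed loop itself is straightforward to sketch. In the hypothetical Python below, both the model’s proposal step and the robotic assay are simulated stand-ins, and the protocol parameters and cost model are invented for illustration:

```python
import random

# Hypothetical sketch of the closed experiment loop described above:
# a model proposes protocol parameters, a robotic lab (simulated here)
# returns a measured result, and the loop keeps the cheapest recipe.

def propose_protocol(history: list[dict]) -> dict:
    # Stand-in for the model: in the real system an LLM would read the
    # history of past runs and propose the next protocol to try.
    return {
        "temperature_c": random.uniform(25, 40),
        "inducer_mM": random.uniform(0.1, 1.0),
        "duration_h": random.choice([12, 18, 24]),
    }

def run_in_automated_lab(protocol: dict) -> float:
    # Stand-in for dispatching the protocol to lab robots and assaying
    # the product; returns a simulated cost per milligram of protein.
    penalty = abs(protocol["temperature_c"] - 32) * 0.2
    return 10.0 + penalty + random.uniform(-1, 1)

history: list[dict] = []
best = None
for _ in range(50):  # each iteration stands in for one robot-executed experiment
    protocol = propose_protocol(history)
    cost = run_in_automated_lab(protocol)
    history.append({**protocol, "cost": cost})
    if best is None or cost < best["cost"]:
        best = history[-1]

print(f"Best recipe found: {best}")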
AI-powered science seems like a win for frontier labs and for society at large. But research suggests it could have unintended consequences. A recent Nature study found that while individual scientists see professional advantages from adopting AI, science as a whole may suffer, because AI narrows the scope of what the scientific community investigates. That may be because AI is especially good at analyzing preexisting data sets and literature, so scientists who use it gravitate toward established topics where large-scale data is available, leaving fewer researchers to study problems less amenable to AI. Integrating AI effectively into science is more than a technical problem: Maintaining the vibrancy and diversity of science in the AI era may require concerted effort from the scientific community.