NASA's James Webb Space Telescope is revealing the universe with
spectacular, unprecedented clarity. The observatory's ultrasharp infrared
vision has cut through the cosmic dust to illuminate some of the earliest
structures in the universe, along with previously obscured stellar nurseries
and spinning galaxies lying hundreds of millions of light years away.
In addition to seeing farther into the universe than ever before, Webb will
capture the most comprehensive view of objects in our own galaxy—namely,
some of the 5,000 planets that have been discovered in the Milky Way.
Astronomers are harnessing the telescope's light-parsing precision to decode
the atmospheres surrounding some of these nearby worlds. The properties of
their atmospheres could give clues to how a planet formed and whether it
harbors signs of life.
But a new MIT study suggests that the tools astronomers typically use to
decode light-based signals may not be good enough to accurately interpret
the new telescope's data. Specifically, opacity models—the tools that model
how light interacts with matter as a function of the matter's properties—may
need significant retuning in order to match the precision of Webb's data,
the researchers say.
If these models are not refined, the researchers predict that properties of
planetary atmospheres, such as their temperature, pressure, and elemental
composition, could be off by an order of magnitude.
"There is a scientifically significant difference between a compound like
water being present at 5% versus 25%, which current models cannot
differentiate," says study co-leader Julien de Wit, assistant professor in
MIT's Department of Earth, Atmospheric, and Planetary Sciences (EAPS).
"Currently, the model we use to decrypt spectral information is not up to
par with the precision and quality of data we have from the James Webb
telescope," adds EAPS graduate student Prajwal Niraula. "We need to up our
game and tackle together the opacity problem."
De Wit, Niraula, and their colleagues have published their study in Nature
Astronomy. Co-authors include spectroscopy experts Iouli Gordon, Robert
Hargreaves, Clara Sousa-Silva, and Roman Kochanov of the Harvard-Smithsonian
Center for Astrophysics.
Leveling up
Opacity is a measure of how easily photons pass through a material. Photons
of certain wavelengths can pass straight through a material, be absorbed, or
be reflected back out depending on whether and how they interact with
certain molecules within a material. This interaction also depends on a
material's temperature and pressure.
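As a rough illustration of the underlying physics, the fraction of light that passes straight through a uniform slab of gas follows the Beer-Lambert law, with the absorption cross-section playing the role of the opacity. The sketch below uses invented numbers and is not the model from the study:

```python
import numpy as np

# Beer-Lambert law: the fraction of photons that pass straight through a
# uniform slab falls off exponentially with the optical depth
# tau = sigma * n * L, where sigma is the absorption cross-section (the
# opacity), n the number density of absorbers, and L the path length.
def transmittance(sigma_cm2, n_per_cm3, path_cm):
    tau = sigma_cm2 * n_per_cm3 * path_cm  # dimensionless optical depth
    return np.exp(-tau)

# Invented, purely illustrative numbers at a single wavelength:
sigma = 1e-24  # cm^2 per molecule; real cross-sections vary with
               # wavelength, temperature, and pressure
n = 1e18       # molecules per cm^3
L = 1e5        # path length of 1 km, in cm
print(f"Fraction of light transmitted: {transmittance(sigma, n, L):.3f}")
```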
An opacity model works on the basis of various assumptions of how light
interacts with matter. Astronomers use opacity models to derive certain
properties of a material, given the spectrum of light that the material
emits. In the context of exoplanets, an opacity model can decode the type
and abundance of chemicals in a planet's atmosphere, based on the light from
the planet that a telescope captures.
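In code, that decoding step can be sketched as inverting a "forward model." The toy model below, a single Gaussian absorption band whose depth grows with gas abundance, is an invented stand-in for the radiative-transfer models the field actually uses; the fit then recovers the abundance from a noisy spectrum:

```python
import numpy as np
from scipy.optimize import curve_fit

# Toy forward model: transit depth vs. wavelength, with one Gaussian
# absorption band whose strength scales linearly with gas abundance.
# This is an invented stand-in for a real radiative-transfer model.
def forward_model(wavelength_um, abundance):
    band = np.exp(-0.5 * ((wavelength_um - 1.4) / 0.05) ** 2)
    return 0.01 + 1e-4 * abundance * band  # baseline depth + feature

rng = np.random.default_rng(0)
wl = np.linspace(1.1, 1.7, 200)
observed = forward_model(wl, abundance=5.0)   # the "true" spectrum
observed += rng.normal(0.0, 2e-5, wl.size)    # instrument noise

# "Retrieval": invert the observed spectrum to recover the abundance.
(ab,), cov = curve_fit(forward_model, wl, observed, p0=[1.0])
print(f"Retrieved abundance: {ab:.2f} +/- {np.sqrt(cov[0, 0]):.2f}")
```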
De Wit says that the current state-of-the-art opacity model, which he likens
to a classical language translation tool, has done a decent job of decoding
spectral data taken by instruments such as those on the Hubble Space
Telescope.
"So far, this Rosetta Stone has been doing OK," de Wit says. "But now that
we're going to the next level with Webb's precision, our translation process
will prevent us from catching important subtleties, such as those making the
difference between a planet being habitable or not."
Light, perturbed
He and his colleagues make this point in their study, in which they put the
most commonly used opacity model to the test. The team looked to see what
atmospheric properties the model would derive if it were tweaked to assume
certain limitations in our understanding of how light and matter interact.
The researchers created eight such "perturbed" models. They then fed each
model, including the original, unperturbed version, "synthetic spectra"—patterns
of light simulated by the group, with precision similar to what the James
Webb telescope would measure.
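A schematic version of that experiment, under the same toy assumptions as the sketch above: generate a synthetic spectrum at a Webb-like noise level using the "true" opacity, then retrieve the abundance with a model whose opacity is deliberately biased. The 50% bias and all numbers here are invented for illustration:

```python
import numpy as np
from scipy.optimize import curve_fit

# Toy version of the perturbation experiment: build a Webb-precision
# synthetic spectrum with the "true" opacity, then retrieve with a model
# whose opacity is deliberately biased. All numbers are illustrative.
def spectrum(wl_um, abundance, opacity_scale):
    band = np.exp(-0.5 * ((wl_um - 1.4) / 0.05) ** 2)
    return 0.01 + 1e-4 * abundance * opacity_scale * band

rng = np.random.default_rng(1)
wl = np.linspace(1.1, 1.7, 300)
synthetic = spectrum(wl, abundance=5.0, opacity_scale=1.0)
synthetic += rng.normal(0.0, 1e-5, wl.size)  # Webb-like noise floor

# The perturbed model assumes a cross-section 50% too strong.
perturbed = lambda wl_um, ab: spectrum(wl_um, ab, opacity_scale=1.5)
(ab,), _ = curve_fit(perturbed, wl, synthetic, p0=[1.0])
print(f"True abundance: 5.0, retrieved by perturbed model: {ab:.2f}")
```

In this toy setup the retrieved abundance comes out near 3.3 rather than the true 5.0, echoing the kind of bias the study reports.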
They found that, based on the same light spectra, each perturbed model
produced wide-ranging predictions for the properties of a planet's
atmosphere. Based on their analysis, the team concludes that, if existing
opacity models are applied to light spectra taken by the Webb telescope,
they will hit an "accuracy wall." That is, they won't be sensitive enough to
tell whether a planet has an atmospheric temperature of 300 Kelvin or 600
Kelvin, or whether a certain gas takes up 5% or 25% of an atmospheric layer.
"That difference matters in order for us to constrain planetary formation
mechanisms and reliably identify biosignatures," Niraula says.
The team also found that every model produced a "good fit" with the data:
even when a perturbed model returned a chemical composition that the
researchers knew to be incorrect, the light spectrum it generated from that
composition was close enough to, or "fit" with, the original spectrum.
"We found that there are enough parameters to tweak, even with a wrong
model, to still get a good fit, meaning you wouldn't know that your model is
wrong and what it's telling you is wrong," de Wit explains.
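A standalone numerical illustration of that degeneracy, again with invented numbers: pairing a wrong opacity with a compensating abundance yields a spectrum that differs from the correct one by far less than the assumed noise floor, so both versions "fit" equally well:

```python
import numpy as np

# Two different (abundance, opacity) pairs whose products nearly match
# produce spectra that differ by much less than the assumed noise floor,
# so a fit cannot tell them apart. Numbers are invented for illustration.
wl = np.linspace(1.1, 1.7, 300)
band = np.exp(-0.5 * ((wl - 1.4) / 0.05) ** 2)

def depth(abundance, opacity_scale):
    return 0.01 + 1e-4 * abundance * opacity_scale * band

spec_true = depth(abundance=5.0, opacity_scale=1.0)    # correct opacity
spec_wrong = depth(abundance=3.33, opacity_scale=1.5)  # wrong opacity with
                                                       # compensated abundance
print(f"Max spectral difference: {np.abs(spec_true - spec_wrong).max():.1e}")
print("Assumed noise floor:     1.0e-05")
```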
He and his colleagues offer some ideas for improving existing opacity
models, including more laboratory measurements and theoretical calculations
to refine the models' assumptions about how light and various molecules
interact, as well as collaborations across disciplines, particularly
between astronomy and spectroscopy.
"There is so much that could be done if we knew perfectly how light and
matter interact," Niraula says. "We know that well enough around the Earth's
conditions, but as soon as we move to different types of atmospheres, things
change, and that's a lot of data, with increasing quality, that we risk
misinterpreting."
Reference:
Julien de Wit et al., "The impending opacity challenge in exoplanet
atmospheric characterization," Nature Astronomy (2022). DOI:
10.1038/s41550-022-01773-1.
Material source: MIT News