by Evan Ackerman on (#70TYW)
IEEE Spectrum
Link: https://spectrum.ieee.org/
Feed: http://feeds.feedburner.com/IeeeSpectrum
Updated: 2025-10-18 07:45
by IEEE Foundation on (#70T87)
by Margo Anderson on (#70T38)
by Jacob Balma on (#70T39)
by Perri Thaler on (#70S42)
by Joe Jones on (#70S41)
by Alexandru Voica on (#70S17)
This is a sponsored article brought to you by MBZUAI.

If you've ever tried to guess how a cell will change shape after a drug or a gene edit, you know it's part science, part art, and mostly expensive trial and error. Imaging thousands of conditions is slow; exploring millions is impossible.

A new paper in Nature Communications proposes a different route: simulate those cellular "after" images directly from molecular readouts, so you can preview the morphology before you pick up a pipette. The team calls their model MorphDiff, and it's a diffusion model guided by the transcriptome, the pattern of genes turned up or down after a perturbation.

At a high level, the idea flips a familiar workflow. High-throughput imaging is a proven way to discover a compound's mechanism or spot bioactivity, but profiling every candidate drug or CRISPR target isn't feasible. MorphDiff learns from cases where both gene expression and cell morphology are known, then uses only the L1000 gene expression profile as a condition to generate realistic post-perturbation images, either from scratch or by transforming a control image into its perturbed counterpart. The claim: competitive fidelity on held-out (unseen) perturbations across large drug and genetic datasets, plus gains on mechanism-of-action (MOA) retrieval that approach what real images deliver.

This research, led by MBZUAI researchers, starts from a biological observation: gene expression ultimately drives the proteins and pathways that shape what a cell looks like under the microscope. The mapping isn't one-to-one, but there's enough shared signal for learning. Conditioning on the transcriptome offers a practical bonus, too: there's simply far more publicly accessible L1000 data than paired morphology, making it easier to cover a wide swath of perturbation space. In other words, when a new compound arrives, you're likely to find its gene signature, which MorphDiff can then leverage.

Under the hood, MorphDiff blends two pieces. First, a Morphology Variational Autoencoder (MVAE) compresses five-channel microscope images into a compact latent space and learns to reconstruct them with high perceptual fidelity. Second, a latent diffusion model learns to denoise samples in that latent space, steering each denoising step with the L1000 vector via attention.

[Figure: Wang et al., Nature Communications (2025), CC BY 4.0]

Diffusion is a good fit here: it's intrinsically robust to noise, and the latent-space variant is efficient enough to train while preserving image detail. The team implements both gene-to-image (G2I) generation (start from noise, condition on the transcriptome) and image-to-image (I2I) transformation (push a control image toward its perturbed state using the same transcriptomic condition). The latter requires no retraining thanks to an SDEdit-style procedure, which is handy when you want to explain changes relative to a control; both moves are sketched below.
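To make that two-stage recipe concrete, here is a minimal PyTorch sketch of the data flow the article describes: a denoiser that conditions on the L1000 vector through cross-attention, and an SDEdit-style transform that partially noises a control latent before denoising it under the new condition. Everything here, the module names, the latent and hidden sizes, and the simplified deterministic sampler, is an illustrative assumption rather than the paper's implementation; only the 978-gene L1000 landmark panel is a fixed fact.

```python
# A minimal, hypothetical sketch -- not the authors' code. Shapes, module
# names, and the simplified DDIM loop are illustrative assumptions.
import torch
import torch.nn as nn

LATENT_DIM = 64   # assumed size of the MVAE latent, flattened for simplicity
GENE_DIM = 978    # the L1000 landmark-gene panel
HIDDEN = 128

class GeneConditionedDenoiser(nn.Module):
    """Toy denoiser: the noisy latent queries the L1000 vector via attention."""
    def __init__(self):
        super().__init__()
        self.latent_proj = nn.Linear(LATENT_DIM, HIDDEN)
        self.gene_proj = nn.Linear(GENE_DIM, HIDDEN)
        self.attn = nn.MultiheadAttention(HIDDEN, num_heads=4, batch_first=True)
        self.out = nn.Linear(HIDDEN, LATENT_DIM)

    def forward(self, z_t, genes):
        # z_t: (B, LATENT_DIM) noisy latent; genes: (B, GENE_DIM) L1000 profile.
        # A real model would also embed the diffusion timestep; omitted here.
        q = self.latent_proj(z_t).unsqueeze(1)    # query comes from the latent
        kv = self.gene_proj(genes).unsqueeze(1)   # keys/values from the genes
        h, _ = self.attn(q, kv, kv)               # conditioning via cross-attention
        return self.out(h.squeeze(1))             # predicted noise

def sdedit_transform(denoiser, z_control, genes, alphas_cumprod, t_start=400):
    """SDEdit-style I2I: partially noise a control latent, then denoise it
    under the perturbation's transcriptome condition (deterministic DDIM)."""
    a = alphas_cumprod[t_start]
    z = a.sqrt() * z_control + (1 - a).sqrt() * torch.randn_like(z_control)
    for t in reversed(range(t_start)):
        eps = denoiser(z, genes)
        a_t = alphas_cumprod[t]
        a_prev = alphas_cumprod[t - 1] if t > 0 else torch.tensor(1.0)
        z0 = (z - (1 - a_t).sqrt() * eps) / a_t.sqrt()   # predicted clean latent
        z = a_prev.sqrt() * z0 + (1 - a_prev).sqrt() * eps
    return z  # decode with the MVAE decoder to get the predicted image

denoiser = GeneConditionedDenoiser()
alphas = torch.linspace(0.9999, 0.98, 1000).cumprod(dim=0)
z_perturbed = sdedit_transform(denoiser, torch.randn(4, LATENT_DIM),
                               torch.randn(4, GENE_DIM), alphas)
```

In a full latent diffusion model the latent would be a spatial feature map and the denoiser a far larger network; the sketch keeps only the essential flow: noisy latent in, L1000 condition in via attention, noise estimate out. G2I is the same loop started from pure noise instead of a partially noised control latent.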
It's one thing to generate photogenic pictures; it's another to generate biologically faithful ones. The paper leans into both. On the generative side, MorphDiff is benchmarked against GAN and diffusion baselines using standard metrics such as FID, Inception Score, coverage, density, and the CLIP-based CMMD. Across the JUMP (genetic) and CDRP/LINCS (drug) test splits, MorphDiff's two modes typically land first and second, with significance tests run across multiple random seeds or independent control plates. The result is consistent: better fidelity and diversity, especially on the out-of-distribution perturbations where the practical value lives.

More interesting for biologists, the authors step beyond image aesthetics to morphology features. They extract hundreds of CellProfiler features (textures, intensities, granularity, cross-channel correlations) and ask whether the generated distributions match the ground truth. In side-by-side comparisons, MorphDiff's feature clouds line up with real data more closely than baselines like IMPA. Statistical tests show that over 70 percent of generated feature distributions are indistinguishable from real ones, and feature-wise scatter plots show the model correctly captures differences from control on the most perturbed features. Crucially, the model also preserves the correlation structure between gene expression and morphology features, with higher agreement with ground truth than prior methods: evidence that it's modeling more than surface style.

[Figure: Wang et al., Nature Communications (2025), CC BY 4.0]

The drug results scale that story up to thousands of treatments. Using DeepProfiler embeddings as a compact morphology fingerprint, the team demonstrates that MorphDiff's generated profiles are discriminative: classifiers trained on real embeddings also separate generated ones by perturbation, and pairwise distances between drug effects are preserved.

[Figure: Wang et al., Nature Communications (2025), CC BY 4.0]

That matters for the downstream task everyone cares about: MOA retrieval. Given a query profile, can you find reference drugs with the same mechanism? MorphDiff's generated morphologies not only beat prior image-generation baselines but also outperform retrieval using gene expression alone, and they approach the accuracy you get from real images. In top-k retrieval experiments, the average improvement over the strongest baseline is 16.9 percent, and 8.0 percent over transcriptome-only retrieval, with robustness shown across several values of k and metrics such as mean average precision and folds-of-enrichment. That's a strong signal that simulated morphology carries information complementary to chemical structure and transcriptomics: enough to help find look-alike mechanisms even when the molecules themselves look nothing alike. A toy version of this retrieval evaluation is sketched below.
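To ground that metric, here is a toy top-k MOA retrieval loop over morphology embeddings using cosine similarity. The random arrays stand in for DeepProfiler-style fingerprints; the function name, shapes, and labels are assumptions for illustration, not the paper's evaluation code.

```python
# Toy top-k MOA retrieval -- random arrays stand in for real morphology
# embeddings; names and shapes are illustrative assumptions.
import numpy as np

def topk_moa_accuracy(queries, query_moas, refs, ref_moas, k=5):
    """Fraction of queries whose k nearest references include the query's MOA."""
    q = queries / np.linalg.norm(queries, axis=1, keepdims=True)  # unit rows
    r = refs / np.linalg.norm(refs, axis=1, keepdims=True)
    sims = q @ r.T                              # cosine similarity matrix
    topk = np.argsort(-sims, axis=1)[:, :k]     # k most similar references
    hits = [query_moas[i] in {ref_moas[j] for j in row}
            for i, row in enumerate(topk)]
    return float(np.mean(hits))

rng = np.random.default_rng(0)
refs = rng.normal(size=(100, 32))                      # reference library
ref_moas = list(rng.integers(0, 10, size=100))         # hypothetical MOA labels
queries = refs[:20] + 0.1 * rng.normal(size=(20, 32))  # noisy look-alikes
print(topk_moa_accuracy(queries, ref_moas[:20], refs, ref_moas, k=5))
```

In the paper's setting, the queries would be embeddings of generated images and the references embeddings of real ones; the same ranking extends to mean average precision by scoring the full ordered list rather than just the top k.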
The paper also lists some current limitations that hint at future improvements. Inference with diffusion remains relatively slow; the authors suggest plugging in newer samplers to speed generation. Time and concentration, two factors biologists care about, aren't explicitly encoded due to data constraints; the architecture could take them as additional conditions when matched datasets become available. And because MorphDiff depends on perturbed gene expression as input, it can't conjure morphology for perturbations that lack transcriptome measurements; a natural extension is to chain it with models that predict gene expression for unseen drugs (the paper cites GEARS as an example). Finally, generalization inevitably weakens as you stray far from the training distribution; larger, better-matched multimodal datasets will help, as will conditioning on more modalities such as structures, text descriptions, or chromatin accessibility.

What does this mean in practice? Imagine a screening team with a large L1000 library but a smaller imaging budget. MorphDiff becomes a phenotypic copilot: generate predicted morphologies for new compounds, cluster them by similarity to known mechanisms, and prioritize which to image for confirmation. Because the model also surfaces interpretable feature shifts, researchers can peek under the hood. Did ER texture and mitochondrial intensity move the way we'd expect for an EGFR inhibitor? Did two structurally unrelated molecules land in the same phenotypic neighborhood? Those are the kinds of hypotheses that accelerate mechanism hunting and repurposing.

The bigger picture is that generative AI has reached a fidelity level where in-silico microscopy can stand in for first-pass experiments. We've already seen text-to-image models explode in consumer domains; here, a transcriptome-to-morphology model shows that the same diffusion machinery can do scientifically useful work, capturing subtle multi-channel phenotypes and preserving the relationships that make those images more than eye candy. It won't replace the microscope. But if it reduces the number of plates you have to run to find what matters, that's time and money you can spend validating the hits that count.
by Kathy Pretz on (#70RGP)
by Dina Genkina on (#70QFE)
by Perri Thaler on (#70QFF)
by Willie D. Jones on (#70PBR)
by Evan Ackerman on (#70PBS)
by ADIPEC on (#70NK8)
This is a sponsored article brought to you by ADIPEC.

Returning to Abu Dhabi between 3 and 6 November, ADIPEC 2025, the world's largest energy event, aims to show how AI is turning ideas into real-world impact across the energy value chain and redrawing the global opportunity map. At the same time, it addresses how the world can deliver more energy: by adding secure supply, mobilizing investment, deploying intelligent solutions, and building resilient systems.

AI as energy's double-edged sword

Across heavy industry and utilities, AI is cutting operating costs, lifting productivity, and improving energy efficiency, while turning data into real-time decisions that prevent failures and optimize output. Clean-energy and enabling-technology investment is set to reach US$2.2 trillion this year, out of US$3.3 trillion going into the energy system, highlighting a decisive swing toward grids, renewables, storage, low-emissions fuels, efficiency, and electrification.

At the same time, AI's own growth is reshaping infrastructure planning, with electricity use from data centers expected to more than double by 2030. The dual challenge is to keep energy reliable and affordable while meeting AI's surging compute appetite.

A global energy convergence

Taking place in Abu Dhabi from 3-6 November 2025, ADIPEC will host 205,000+ visitors and 2,250+ exhibiting companies from the full spectrum of the global energy ecosystem to showcase the latest breakthroughs shaping the future of energy. Held under the theme "Energy. Intelligence. Impact.", the event is under the patronage of H.H. Sheikh Mohamed Bin Zayed Al Nahyan, President of the United Arab Emirates, and hosted by ADNOC.

With a conference program featuring 1,800+ speakers across 380 sessions and its most expansive exhibition ever, ADIPEC 2025 examines how scaling intelligent solutions like AI and building resilience can transform the energy sector to achieve inclusive global progress.

Engineering the future

Two flagship programs anchor the engineering agenda at ADIPEC's Technical Conferences: the SPE-organized Technical Conference and the Downstream Technical Conference.

Technical Conference attendees can expect upwards of 1,100 technical experts across more than 200 sessions focused on field-proven solutions, operational excellence, and AI-powered optimization. From cutting-edge innovations reshaping the hydrogen and nuclear sectors to AI-driven digital technologies embedded across operations, the conference showcases practical applications and operational successes across the upstream, midstream, and downstream sectors.

Technical pioneers demonstrate solutions that transform operations, enhance grid reliability, and enable seamless coordination between energy and digital infrastructure through smart integration technologies. In 2025, submissions hit a record 7,086, with about 20% centered on AI and digital technologies, and contributions arriving from 93 countries.

Running in parallel to the engineering deep dive, the ADIPEC Strategic Conference convenes ministers, CEOs, investors, and policymakers across 10 strategic programs to tackle geopolitics, investment, AI, and energy security with practical, long-term strategies. Over four days, a high-level delegation of 16,500+ participants will join a future-focused dialogue that links policy, capital, and technology decisions.

Core program areas include Global Strategy, Decarbonization, Finance and Investment, Natural Gas and LNG, Digitalization and AI, Emerging Economies, and Hydrogen, with additional themes spanning policy and regulation, downstream and chemicals, diversity and leadership, and maritime and logistics. The result is a system-level view that complements the Technical Conference by translating boardroom priorities into roadmaps that operators can execute.

Why AI matters now
by Joanna Goodrich on (#70N4E)
by Margo Anderson on (#70MYC)
by Rahul Pandey on (#70MYD)
by Liz Dennett on (#70MYE)
by Margo Anderson on (#70KZ9)
by Kate Park on (#70KWC)
by Kathy Pretz on (#70K87)
by Kate Park on (#70K1K)
by IEEE on (#5QC98)
by Julia Tilton on (#70JAH)
by Bjorn Sjodin on (#70J07)
by Prachi Jain on (#70H41)
by Evan Ackerman on (#70GR6)
by Willie D. Jones on (#70G0D)
by Yu-Tzu Chiu on (#70FN7)
by Harry Goldstein on (#70FN8)
by Rahul Pandey on (#70FN9)
by Gwendolyn Rak on (#70FNA)
by Glenn Zorpette on (#70FHN)
by The Mobility House on (#70FF4)
by Perri Thaler on (#70EMY)
by Andrew Moseman on (#70EMZ)
by Matthew S. Smith on (#70EN0)
by Christopher Irick on (#70EN1)
by Joanna Goodrich on (#70E50)
by Willie D. Jones on (#70DXD)
by Margo Anderson on (#70DP8)
by Evan Ackerman on (#70DP9)
by Steven Searcy on (#70D3C)
by Julia Tilton on (#70D3D)
by Quanscient on (#70CTS)
by Stephen Cass on (#70CR6)
by Stephen Cass on (#70C3B)
by Julianne Pepitone on (#70B67)
by Evan Ackerman on (#70B43)
by Liquid Instruments on (#70A2G)
by Evan Ackerman on (#70A2H)