Feed ieee-spectrum-recent-content IEEE Spectrum


Link https://spectrum.ieee.org/
Feed http://feeds.feedburner.com/IeeeSpectrum
Updated 2025-11-06 14:30
Discover’s Data Manager Helps Foil Credit Card Fraudsters
Inside Hyundai’s Massive Metaplant
Inside the Massive Effort to Sequence All of Europe’s Lepidoptera
This Professor’s Open-Source Robots Make STEM More Inclusive
A Challenge to Roboticists: My Humanoid Olympics
Volcanologists Turn to a High-Tech Suitcase to Study Eruptions
Novel Geothermal System to Come Online in Germany
Chips Need to Chill Out
Why I Admire Walt Downing’s Volunteerism
Video Friday: Happy Robot Halloween!
In 1953, the Ford X-100 Concept Car Had It All
Special Report: The Hot, Hot Future of Chips
A Hassle-Free Battery Charger
‘Liquid Jets’ Could Be Key to Studying Cancer Cells
AI Model Growth Outpaces Hardware Improvements
New Thermal Battery Supplies Clean Heat for Oil Extraction
From Bottleneck to Breakthrough: AI in Chip Verification
Advancing Magnetized Target Fusion by Solving an Inverse Problem with COMSOL Multiphysics
Scientists Need a Positive Vision for AI
How to Land a Job in Quantum Computing
Teens Explore Aerospace and AI at TryEngineering Summer Camp
Your AI Agent Is Now a Target for Email Phishing
The 7 Phases of the Internet
Tips for Success From Crowd Supply’s Helen Leigh
Go Go Gadgets!
Video Friday: Unitree’s Human-Size Humanoid Robot
User-Centered Design Shapes Assistive Tech for Cerebral Palsy
Jill Gostin Is 2026 IEEE President-Elect
4 Weird Things You Can Turn into a Supercapacitor
Progress in Your Career by Managing Up
Agentic AI’s Hidden Data Trail—and How to Shrink It
Microcredentials Boost Employment in High-Tech Sectors
Inside the Best Weather-Forecasting AI in the World
Data Centers Look to Old Airplane Engines for Power
Diamond Blankets Will Keep Future Chips Cool
Why Mesh Networks Break When Big Crowds Gather
Video Friday: Multimodal Humanoid Walks, Flies, Drives
IEEE Memorial Fund Honors Magnet Tech Pioneer Swarn Kalsi
Electrifying Everything Will Require Multiphysics Modeling
You Can Cool Chips With Lasers?!?!
Faster, Smaller AI Model Found for Image Geolocation
How Roomba Got Its Vacuum
Teaching AI to Predict What Cells Will Look Like Before Running Any Experiments
This is a sponsored article brought to you by MBZUAI.

If you've ever tried to guess how a cell will change shape after a drug or a gene edit, you know it's part science, part art, and mostly expensive trial-and-error. Imaging thousands of conditions is slow; exploring millions is impossible.

A new paper in Nature Communications proposes a different route: simulate those cellular "after" images directly from molecular readouts, so you can preview the morphology before you pick up a pipette. The team calls their model MorphDiff, and it's a diffusion model guided by the transcriptome, the pattern of genes turned up or down after a perturbation.

At a high level, the idea flips a familiar workflow. High-throughput imaging is a proven way to discover a compound's mechanism or spot bioactivity, but profiling every candidate drug or CRISPR target isn't feasible. MorphDiff learns from cases where both gene expression and cell morphology are known, then uses only the L1000 gene expression profile as a condition to generate realistic post-perturbation images, either from scratch or by transforming a control image into its perturbed counterpart. The claim: competitive fidelity on held-out (unseen) perturbations across large drug and genetic datasets, plus gains on mechanism-of-action (MOA) retrieval that rival real images.

This research, led by MBZUAI researchers, starts from a biological observation: gene expression ultimately drives the proteins and pathways that shape what a cell looks like under the microscope. The mapping isn't one-to-one, but there's enough shared signal for learning. Conditioning on the transcriptome offers a practical bonus, too: there's simply far more publicly accessible L1000 data than paired morphology, making it easier to cover a wide swath of perturbation space. In other words, when a new compound arrives, you're likely to find its gene signature, which MorphDiff can then leverage.

Under the hood, MorphDiff blends two pieces.
First, a Morphology Variational Autoencoder (MVAE) compresses five-channel microscope images into a compact latent space and learns to reconstruct them with high perceptual fidelity. Second, a latent diffusion model learns to denoise samples in that latent space, steering each denoising step with the L1000 vector via attention.

[Image credit: Wang et al., Nature Communications (2025), CC BY 4.0]

Diffusion is a good fit here: it's intrinsically robust to noise, and the latent-space variant is efficient enough to train while preserving image detail. The team implements both gene-to-image (G2I) generation (start from noise, condition on the transcriptome) and image-to-image (I2I) transformation (push a control image toward its perturbed state using the same transcriptomic condition). The latter requires no retraining thanks to an SDEdit-style procedure, which is handy when you want to explain changes relative to a control.

It's one thing to generate photogenic pictures; it's another to generate biologically faithful ones. The paper leans into both. On the generative side, MorphDiff is benchmarked against GAN and diffusion baselines using standard metrics like FID, Inception Score, coverage, density, and the CLIP-based CMMD. Across JUMP (genetic) and CDRP/LINCS (drug) test splits, MorphDiff's two modes typically land first and second, with significance tests run across multiple random seeds or independent control plates. The result is consistent: better fidelity and diversity, especially on out-of-distribution (OOD) perturbations, where the practical value lives.

More interesting for biologists, the authors step beyond image aesthetics to morphology features.
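To make the architecture concrete, here is a toy NumPy sketch of transcriptome-conditioned latent diffusion: a latent is denoised step by step while cross-attention lets it read from condition tokens derived from a gene expression vector. This is an illustration under assumed shapes and a stand-in denoiser, not the paper's implementation (MorphDiff uses a trained U-Net and MVAE decoder).

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(z_tokens, cond_tokens, Wq, Wk, Wv):
    # Latent tokens (queries) attend to transcriptome condition tokens (keys/values).
    Q = z_tokens @ Wq                     # (n_latent, d)
    K = cond_tokens @ Wk                  # (n_cond, d)
    V = cond_tokens @ Wv                  # (n_cond, d)
    A = softmax(Q @ K.T / np.sqrt(Q.shape[-1]))
    return A @ V                          # (n_latent, d)

def toy_denoiser(z, cond_tokens, params):
    # Stand-in for a trained U-Net's noise prediction: mixes the latent
    # with what it reads from the condition via cross-attention.
    ctx = cross_attention(z, cond_tokens, *params)
    return 0.5 * z + 0.5 * ctx

def ddpm_sample(cond_tokens, params, n_latent=16, d=8, T=50, rng=None):
    # Gene-to-image (G2I) mode: start from pure noise, denoise under the
    # transcriptomic condition, return a latent for the decoder.
    if rng is None:
        rng = np.random.default_rng(0)
    betas = np.linspace(1e-4, 0.02, T)    # toy noise schedule
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)
    z = rng.standard_normal((n_latent, d))
    for t in reversed(range(T)):
        eps = toy_denoiser(z, cond_tokens, params)
        z = (z - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps) / np.sqrt(alphas[t])
        if t > 0:                         # add noise except at the final step
            z += np.sqrt(betas[t]) * rng.standard_normal(z.shape)
    return z
```

In the real model, the condition tokens would be a learned projection of the 978-gene L1000 profile, and the returned latent would be decoded by the MVAE into a five-channel image.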
They extract hundreds of CellProfiler features (textures, intensities, granularity, cross-channel correlations) and ask whether the generated distributions match the ground truth. In side-by-side comparisons, MorphDiff's feature clouds line up with real data more closely than baselines like IMPA. Statistical tests show that over 70 percent of generated feature distributions are indistinguishable from real ones, and feature-wise scatter plots show the model correctly captures differences from control on the most perturbed features. Crucially, the model also preserves the correlation structure between gene expression and morphology features, with higher agreement with ground truth than prior methods, evidence that it's modeling more than surface style.

[Image credit: Wang et al., Nature Communications (2025), CC BY 4.0]

The drug results scale that story up to thousands of treatments. Using DeepProfiler embeddings as a compact morphology fingerprint, the team demonstrates that MorphDiff's generated profiles are discriminative: classifiers trained on real embeddings also separate generated ones by perturbation, and pairwise distances between drug effects are preserved.

[Image credit: Wang et al., Nature Communications (2025), CC BY 4.0]

That matters for the downstream task everyone cares about: MOA retrieval. Given a query profile, can you find reference drugs with the same mechanism? MorphDiff's generated morphologies not only beat prior image-generation baselines but also outperform retrieval using gene expression alone, and they approach the accuracy you get using real images. In top-k retrieval experiments, the average improvement over the strongest baseline is 16.9 percent, and 8.0 percent over transcriptome-only retrieval, with robustness shown across several values of k and metrics like mean average precision and folds-of-enrichment.
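The "indistinguishable distributions" claim can be sketched with a standard two-sample test: for each feature column, compare real and generated values and count how many pass. A minimal NumPy version using the two-sample Kolmogorov–Smirnov statistic (the paper's exact test procedure may differ; the critical-value formula here is the usual asymptotic one):

```python
import numpy as np

def ks_statistic(a, b):
    # Two-sample Kolmogorov-Smirnov statistic: the maximum gap
    # between the two empirical CDFs.
    a, b = np.sort(a), np.sort(b)
    grid = np.concatenate([a, b])
    cdf_a = np.searchsorted(a, grid, side="right") / len(a)
    cdf_b = np.searchsorted(b, grid, side="right") / len(b)
    return np.abs(cdf_a - cdf_b).max()

def fraction_matching(real, gen, alpha=0.05):
    # real, gen: (n_cells, n_features) CellProfiler-style feature tables.
    # Returns the share of features whose generated distribution is
    # statistically indistinguishable from the real one at level alpha.
    n, m = real.shape[0], gen.shape[0]
    # Asymptotic KS critical value: sqrt(ln(2/alpha)/2) * sqrt((n+m)/(n*m))
    crit = np.sqrt(-0.5 * np.log(alpha / 2)) * np.sqrt((n + m) / (n * m))
    stats = np.array([ks_statistic(real[:, j], gen[:, j])
                      for j in range(real.shape[1])])
    return (stats < crit).mean()
```

Run on matched feature tables, a value above 0.7 would correspond to the paper's "over 70 percent of feature distributions indistinguishable from real" finding.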
That's a strong signal that simulated morphology carries information complementary to chemical structure and transcriptomics, enough to help find look-alike mechanisms even when the molecules themselves look nothing alike.

The paper also lists some current limitations that hint at future improvements. Inference with diffusion remains relatively slow; the authors suggest plugging in newer samplers to speed generation. Time and concentration (two factors biologists care about) aren't explicitly encoded due to data constraints; the architecture could take them as additional conditions when matched datasets become available. And because MorphDiff depends on perturbed gene expression as input, it can't conjure morphology for perturbations that lack transcriptome measurements; a natural extension is to chain it with models that predict gene expression for unseen drugs (the paper cites GEARS as an example). Finally, generalization inevitably weakens as you stray far from the training distribution; larger, better-matched multimodal datasets will help, as will conditioning on more modalities such as structures, text descriptions, or chromatin accessibility.

What does this mean in practice? Imagine a screening team with a large L1000 library but a smaller imaging budget. MorphDiff becomes a phenotypic copilot: generate predicted morphologies for new compounds, cluster them by similarity to known mechanisms, and prioritize which to image for confirmation. Because the model also surfaces interpretable feature shifts, researchers can peek under the hood. Did ER texture and mitochondrial intensity move the way we'd expect for an EGFR inhibitor? Did two structurally unrelated molecules land in the same phenotypic neighborhood?
Those are the kinds of hypotheses that accelerate mechanism hunting and repurposing.

The bigger picture is that generative AI has finally reached a fidelity level where in-silico microscopy can stand in for first-pass experiments. We've already seen text-to-image models explode in consumer domains; here, a transcriptome-to-morphology model shows that the same diffusion machinery can do scientifically useful work, capturing subtle, multi-channel phenotypes and preserving the relationships that make those images more than eye candy. It won't replace the microscope. But if it reduces the number of plates you have to run to find what matters, that's time and money you can spend validating the hits that count.
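The top-k MOA retrieval task described above is easy to state in code: embed each profile, find the k nearest references by cosine similarity, and score a hit when any of them shares the query's mechanism. A minimal sketch, with the embedding source (real images, generated images, or transcriptomes) left abstract and all names invented for illustration:

```python
import numpy as np

def l2_normalize(x):
    # Unit-normalize rows so a dot product equals cosine similarity.
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def topk_moa_accuracy(query_emb, query_moa, ref_emb, ref_moa, k=5):
    """Fraction of queries whose k nearest references (by cosine
    similarity) include at least one drug with the same MOA label.

    query_emb: (n_query, d) morphology embeddings of query profiles
    ref_emb:   (n_ref, d)   embeddings of the annotated reference library
    query_moa, ref_moa:     integer or string MOA labels
    """
    sims = l2_normalize(query_emb) @ l2_normalize(ref_emb).T  # (n_query, n_ref)
    topk = np.argsort(-sims, axis=1)[:, :k]                   # indices of k nearest
    hits = [(ref_moa[idx] == moa).any() for idx, moa in zip(topk, query_moa)]
    return float(np.mean(hits))
```

Swapping in generated-morphology embeddings for `query_emb` and comparing against real-image or transcriptome-only embeddings is the shape of the comparison behind the reported 16.9 percent and 8.0 percent improvements.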
Nokia Bell Labs Breaks Ground for Its New N.J. Headquarters
Next-Gen AI Needs Liquid Cooling
In a First, Artificial Neurons Talk Directly to Living Cells
Solid-State Transformer Design Unlocks Faster EV Charging
Video Friday: Non-Humanoid Hands for Humanoid Robots
Intelligence Meets Energy: ADIPEC 2025 and the AI Revolution in the Energy Sector
This is a sponsored article brought to you by ADIPEC.

Returning to Abu Dhabi from 3 to 6 November, ADIPEC 2025, the world's largest energy event, aims to show how AI is turning ideas into real-world impact across the energy value chain and redrawing the global opportunity map. At the same time, it addresses how the world can deliver more energy: by adding secure supply, mobilizing investment, deploying intelligent solutions, and building resilient systems.

AI as energy's double-edged sword

Across heavy industry and utilities, AI is cutting operating costs, lifting productivity, and improving energy efficiency, while turning data into real-time decisions that prevent failures and optimize output. Clean-energy and enabling-technology investment is set to reach US$2.2 trillion this year, out of US$3.3 trillion going into the energy system, highlighting a decisive swing toward grids, renewables, storage, low-emissions fuels, efficiency, and electrification.

At the same time, AI's own growth is reshaping infrastructure planning, with electricity use from data centers expected to more than double by 2030. The dual challenge is to keep energy reliable and affordable while meeting AI's surging compute appetite.

A global energy convergence

Taking place in Abu Dhabi from 3 to 6 November 2025, ADIPEC will host 205,000+ visitors and 2,250+ exhibiting companies from the full spectrum of the global energy ecosystem, showcasing the latest breakthroughs shaping the future of energy. Under the theme "Energy. Intelligence. Impact.", the event is held under the patronage of H.H.
Sheikh Mohamed Bin Zayed Al Nahyan, President of the United Arab Emirates, and hosted by ADNOC.

With a conference program featuring 1,800+ speakers across 380 sessions and its most expansive exhibition ever, ADIPEC 2025 examines how scaling intelligent solutions like AI and building resilience can transform the energy sector to achieve inclusive global progress.

Engineering the future

Two flagship programs anchor the engineering agenda at ADIPEC's Technical Conferences: the SPE-organized Technical Conference and the Downstream Technical Conference.

Technical Conference attendees can expect upwards of 1,100 technical experts across more than 200 sessions focused on field-proven solutions, operational excellence, and AI-powered optimization. From cutting-edge innovations reshaping the hydrogen and nuclear sectors to AI-driven digital technologies embedded across operations, the conference showcases practical applications and operational successes across the upstream, midstream, and downstream sectors.

Technical pioneers will demonstrate solutions that transform operations, enhance grid reliability, and enable seamless coordination between energy and digital infrastructure through smart integration technologies. In 2025, submissions hit a record 7,086, with about 20 percent centered on AI and digital technologies and contributions arriving from 93 countries.

Running in parallel to the engineering deep dive, the ADIPEC Strategic Conference convenes ministers, CEOs, investors, and policymakers across 10 strategic programs to tackle geopolitics, investment, AI, and energy security with practical, long-term strategies.
Over four days, a high-level delegation of 16,500+ participants will join a future-focused dialogue that links policy, capital, and technology decisions.

Core program areas include Global Strategy, Decarbonization, Finance and Investment, Natural Gas and LNG, Digitalization and AI, Emerging Economies, and Hydrogen, with additional themes spanning policy and regulation, downstream and chemicals, diversity and leadership, and maritime and logistics. The result is a system-level view that complements the Technical Conference by translating boardroom priorities into roadmaps that operators can execute.

Why AI matters now