3 Indian-origin scientists win top US award for AI, supercomputing
05 May 2026
Three Indian-origin researchers have been awarded the prestigious 2025 Outstanding Postdoctoral Performance Awards by Argonne National Laboratory.
The award recognizes early-career scientists who are advancing scientific knowledge and contributing to national missions in energy and security.
The recipients of the 2025 Outstanding Postdoctoral Performance Awards are Kiran Kumar Yalamanchi, FNU Shilpika, and Krishna Teja Chitty-Venkata. Their work spans artificial intelligence (AI), high-performance computing, and sustainability.
Yalamanchi's work in computational science
Research focus
Yalamanchi's research focuses on computational science, where he integrates traditional physics-based modeling with machine learning. His work concentrates in particular on fluid dynamics and energy applications.
One of his major contributions is the development of multimodal foundation models that can analyze different types of data to predict behavior and design new materials.
He also uses AI for "inverse molecular design," identifying and creating fuel molecules with improved efficiency and sustainability.
Shilpika's contributions to digital twin development
AI transparency
Shilpika, who works at Argonne's Leadership Computing Facility, is focused on making complex algorithms more transparent.
She has developed a "digital twin" of Aurora, one of the world's most powerful exascale supercomputers.
This virtual replica mimics the real system and lets engineers and scientists monitor performance, predict failures, and optimize operations in real time.
Her work is crucial for maintaining efficiency and trust as supercomputers become increasingly complex.
Chitty-Venkata's focus on AI efficiency
AI optimization
Chitty-Venkata's research focuses on making AI systems more efficient. He has worked on techniques such as pruning and quantization to streamline neural networks without significantly compromising performance.
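Quantization, for instance, replaces a network's 32-bit floating-point weights with low-precision integers, shrinking memory use and speeding up inference at the cost of a small rounding error. A minimal sketch of symmetric int8 quantization, using an illustrative random weight matrix rather than any specific model:

```python
import numpy as np

# Illustrative stand-in for one layer's weights (not a real model).
rng = np.random.default_rng(0)
weights = rng.normal(scale=0.1, size=(4, 4)).astype(np.float32)

def quantize_int8(w):
    """Symmetric per-tensor quantization: map floats to int8 via one scale factor."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 representation."""
    return q.astype(np.float32) * scale

q, scale = quantize_int8(weights)
restored = dequantize(q, scale)

# int8 storage is 4x smaller than float32; the round-trip error is
# bounded by half the scale factor.
print(np.abs(weights - restored).max() <= scale / 2)
```

Real deployments layer further refinements on top of this idea (per-channel scales, calibration data, quantization-aware training), but the core trade of precision for efficiency is the same.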
He has also developed open-source tools like LLM-Inference-Bench, which measures how efficiently AI models run on high-performance systems.
These efforts are crucial for scaling AI in a cost-effective and environmentally sustainable manner.