Research

Pushing the frontiers of AI for equitable innovation through autonomous research groups.


Research Areas

Knowledge Representation & Reasoning
Computer Vision
Natural Language Processing
Federated & Privacy-Preserving Learning
Multimodal AI
Speech & Audio Processing
Learning Algorithms & Optimization
Scientific Progress

Expand human knowledge through high-integrity research that pushes the frontiers of AI.

Sectoral Impact

Address critical real-world challenges where AI can create valuable impact.

Global Equity

Ensure the benefits of AI reach everyone and that its progress is shaped by all.

Our Impact

0+

publications in top-tier venues

$0M+

research funding across 9 international grants

0%

international collaboration rate

0

open-source repositories impacting the AI community

Featured Projects

AI-Powered Task-Shifting for High-Quality Fetal Ultrasound Service in Community Healthcare Settings
Computer Vision | Healthcare Task-Shifting | Low-Resource AI | AI in Maternal & Reproductive Health

AI-Enhanced Coronary Artery Disease Diagnostics from X-Ray Angiography
Medical Image Analysis | Computer Vision | Clinical Decision Support | Low-Resource AI

Systems Genomics Modeling of Multi-drug Resistance in Mycobacterium tuberculosis
Computational Genomics | Infectious Disease Genomics | Drug Resistance Modeling

AI-Enhanced Flood Recovery Initiative
Computer Vision | Remote Sensing & Satellite Imagery

AI-Powered Surgical Planning for Knee Osteotomy
Medical Image Analysis | Computer Vision | Clinical Decision Support | Precision Medicine

AI-Assisted Diarrheal Parasite Detection with Smartphone Microscopy
Medical Image Analysis | Computer Vision | Infectious Disease Genomics | Low-Resource AI | Public Health

Featured Publications

2023
FUTURE-AI: International consensus guideline for trustworthy and deployable artificial intelligence in healthcare
Karim Lekadir, Bishesh Khanal, Martijn Starmans
2025
Transforming healthcare through just, equitable and quality driven artificial intelligence solutions in South Asia
Sushmita Adhikari, Iftikhar Ahmed, Deepak Bajracharya, Bishesh Khanal, Chandrasegarar Solomon, Kapila Jayaratne, Khondaker Abdullah Al Mamum, Muhammad Shamim Hayder Talukder, Sunila Shakya, Suresh Manandhar, Zahid Ali Memon, Moinul Haque Chowdhury, Ihtesham ul Islam, Noor Sabah Rakhshani & M. Imran Khan
2025
Assistive Artificial Intelligence in Epilepsy and Its Impact on Epilepsy Care in Low- and Middle-Income Countries
Nabin Koirala, Shishir Raj Adhikari, Mukesh Adhikari, Taruna Yadav, Abdul Rauf Anwar, Dumitru Ciolac, Bibhusan Shrestha, Ishan Adhikari, Bishesh Khanal, Muthuraman Muthuraman
2025
Multimodal Federated Learning With Missing Modalities through Feature Imputation Network
Pranav Poudel, Aavash Chhetri, Prashnna Gyawali, Georgios Leontidis, Binod Bhattarai
2024
NLPineers@NLU of Devanagari Script Languages 2025: Hate speech detection using ensembling of BERT-based models
Anmol Guragain, Nadika Poudel, Rajesh Piryani, Bishesh Khanal

News & Updates

Dr. Bipendra Basnyat on AI and the Future of Farming in Nepal
November 1, 2025

Dr. Bipendra Basnyat, Adjunct Research Scientist leading NAAMII’s Agri AI (A²) Innovation Lab, recently shared his insights on AI-driven agriculture in a podcast hosted by Sushant Pradhan. With over two decades of experience in AI and machine learning, Dr. Basnyat discussed the challenges and opportunities of integrating advanced technologies into Nepali farming, sustainable agriculture, and climate-smart practices.

NAAMII’s A² Innovation Lab combines cutting-edge technology with traditional farming knowledge to develop resilient, scalable, and sustainable agricultural systems. Its work spans climate-smart agriculture, regenerative practices, and permaculture, using AI, IoT, satellite imagery, and computer vision to optimize resource management, improve crop resilience, and preserve local knowledge.

Dr. Basnyat’s conversation also addressed the misconceptions around AI, data security, and the role of technology in empowering farmers. He highlighted practical applications of AI in Nepali agriculture, from precision farming to intelligent systems for real-world use, demonstrating how innovation can bridge the technology gap and strengthen local farming communities.

The A² Innovation Lab continues to develop tools and insights that support farmers, researchers, and communities, while advancing AI for conscious living and sustainable agriculture.

NAAMII at MICCAI 2025
September 2, 2025

NAAMII made a strong showing at this year’s MICCAI conference in Daejeon, South Korea, with seven accepted papers from TOGAI (Transforming Global Health with AI) and MMLL (B. Bhattarai Multimodal Learning Lab). Dr. Bishesh Khanal (TOGAI) and Dr. Binod Bhattarai (MMLL) represented the institute at one of the most competitive venues for medical imaging and AI research, along with Dr. Sharib Ali (University of Leeds) and Dr. François Rameau (SUNY Korea), contributing through multiple accepted papers, spotlight presentations, and active participation in conference panels, workshops, and outreach activities.

Main Conference & Spotlight Papers

NERO: Explainable Out-of-Distribution Detection with Neuron-Level Relevance in Gastrointestinal Imaging (Oral)
Anju Chhetri, Jari Korhonen, Prashnna Gyawali, Binod Bhattarai

Adaptive Frame Selection for Gestational Age Estimation from Blind Sweep Fetal Ultrasound Videos (Oral)
Tanya Akumu, Marawan Elbatel, Victor M. Campello, Richard Osuala, Carlos Martin-Isla, Ignacio Valenzuela, Xiaomeng Li, Bishesh Khanal, Karim Lekadir

Hallucination-Aware Multimodal Benchmark for Gastrointestinal Image Analysis with Large Vision-Language Models (Spotlight)
Bidur Khanal, Sandesh Pokhrel, Sanjay Bhandari, Ramesh Rana, Nikesh Shrestha, Ram Bahadur Gurung, Cristian Linte, Angus Watson, Yash Raj Shrestha, Binod Bhattarai

Workshop Papers

MMLL contributed multiple papers at the DEMI (Data Engineering in Medical Imaging) workshop:

Estimating 2D Keypoints of Surgical Tools Using Vision-Language Models with Low-Rank Adaptation
Krit Duangprom, Tryphon Lambrou, Binod Bhattarai

Effect of Data Augmentation on Conformal Prediction for Diabetic Retinopathy
Rizwan Ahmad, Anamika Amirekaandar, Joel Pallo, Carl Laxson, Binod Bhattarai, Prashnna Gyawali

Addressing Bias in Vision-Language Models for Glaucoma Detection without Protected Attribute Supervision
Ahsan Habib Akash, Garge Murnagh, Anamika Amirekaandar, Joel Pallo, Carl Laxson, Binod Bhattarai

Surgical Vision World Model
Saurabh Koju, Saurav Bastola, Prashnna Shrestha, Sandesh Pokhrel, Rida Ruq Poudel, Binod Bhattarai

Challenge Participation

A team from MMLL also participated in the SAGES Critical View of Safety (CVS) Challenge, one of only three Lighthouse Challenges at MICCAI 2025 (out of nearly 50 challenges hosted). The CVS Challenge advances AI for surgical safety by having participants analyze laparoscopic cholecystectomy videos to detect the Critical View of Safety, a key step in preventing bile duct injuries. The team, led by Pratik Shrestha, earned 2nd place in Hepatocystic Anatomy Segmentation (Sub-Challenge C) and 3rd place in CVS Classification (Sub-Challenge A), receiving prizes of $1,500 and $1,000 respectively.

Leadership & Engagement

Although most team members were unable to attend in person, Dr. Bishesh Khanal contributed to the international medical AI community through several roles. As a panelist in the "Challenges and Opportunities of Medical AI in Asia" session, Dr. Khanal shared his insights on the specific barriers emerging regions face in developing robust AI and discussed collaborative pathways for inclusive medical AI research across Asia. He also engaged in MICCAI’s outreach initiatives, enabling such regions to participate meaningfully in global medical AI research, and served on the MICCAI Young Scientist Publication Impact Award selection committee, helping recognize outstanding early-career contributions to the field.
We’re proud of all our researchers for their dedication and technical excellence in advancing the global medical AI dialogue, and congratulations to everyone who made these achievements possible!

NAAMII–BPEF AI Screening Bootcamp 2025 Concludes
September 1, 2025

The NAAMII–BPEF AI-Assisted Disease Screening Bootcamp 2025 concluded on September 19, wrapping up a five-week collaborative program between NAAMII and the B.P. Eye Foundation (BPEF). The program equipped 30 selected participants with hands-on experience in developing AI-based tools for clinical disease screening through lectures, paper discussions, and a week-long project phase.

Lectures were delivered by Mahesh Shakya, Bishram Acharya, and Angelina Ghimire, covering clinical problem formulation, deep learning workflows, and AI model evaluation in healthcare. Two virtual paper reading sessions further familiarized participants with current AI research and model design strategies.

During HackWeek (September 6–11), six interdisciplinary teams, each guided by a NAAMII research assistant mentor, worked on real-world clinical AI topics, including oral cancer screening, glaucoma detection, and diabetic retinopathy grading.

The final presentation, held at the B.P. Eye Foundation, was judged by a panel of clinical and AI experts, with Prof. Dr. Badri Prasad Badhu delivering closing remarks. The bootcamp strengthened participants’ technical and clinical understanding and reinforced NAAMII’s ongoing collaboration with BPEF in advancing AI-driven disease screening and digital health innovation in Nepal. This initiative supports NAAMII’s vision of training the next generation of clinicians and enhancing their expertise in AI research.

Four MMLL Papers Accepted at MICCAI and MIUA Conferences
August 1, 2025

The B. Bhattarai Multimodal Learning Lab (MMLL) specializes in advancing AI techniques that integrate heterogeneous data sources, including vision, text, and speech, to enable computers to understand, interpret, and reason across different modalities. MMLL has added four more papers to NAAMII’s growing portfolio of accepted research, with publications at MICCAI 2025 and MIUA 2025.

MICCAI is among the most competitive conferences in the field, with an acceptance rate hovering around 30%. This year, two of MMLL’s papers were accepted at the conference, with one of them ranked in the top 9% of submissions based on peer review scores. At MIUA, the UK’s premier venue for medical image analysis, one of MMLL’s two papers was nominated for the Best Paper Award, placing it among the top few contributions at the conference.

Presentation Dates

MIUA 2025: 15–17 July, University of Leeds (UK)
MICCAI 2025: 23–27 September, Daejeon Convention Center (South Korea)

Paper 1: NERO: Explainable Out-of-Distribution Detection with Neuron-level Relevance
Anju Chhetri, Jari Korhonen, Prashnna Gyawali, Binod Bhattarai
MICCAI 2025
See full paper: arXiv

Deep learning models in medical imaging can fail silently when faced with unfamiliar or out-of-distribution (OOD) inputs, a critical concern in clinical settings. This research introduces NERO, a novel method for OOD detection that focuses on neuron-level relevance patterns rather than high-level features or logits. By clustering relevance maps for known classes and measuring how far a new sample deviates from these clusters, NERO not only improves detection accuracy but also offers explainable outputs. Tested on gastrointestinal datasets (Kvasir, GastroVision), NERO consistently outperformed existing methods across model architectures.

Paper 2: NCDD: Nearest Centroid Distance Deficit for Out-of-Distribution Detection in Gastrointestinal Vision
Sandesh Pokhrel, Sanjay Bhandari, Sharib Ali, Tryphon Lambrou, Anh Nguyen, Yash Raj Shrestha, Angus Watson, Danail Stoyanov, Prashnna Gyawali, Binod Bhattarai
MIUA 2025, Best Paper Award nominee
See full paper: arXiv

Reliable deep learning in medical imaging requires the ability to flag unfamiliar or anomalous inputs. This challenge is particularly acute in gastrointestinal imaging, where in-distribution and out-of-distribution (OOD) examples often share similar visual features. NCDD frames anomaly detection as an OOD problem and proposes a simple yet effective solution: compute how far a new sample’s feature representation deviates from its nearest class centroid. In-distribution samples cluster close to class centroids, while OOD samples tend to lie farther away (a minimal sketch of this idea appears at the end of this post). Evaluated on the Kvasir2 and GastroVision datasets across different architectures, NCDD consistently outperformed state-of-the-art methods, demonstrating a more reliable way to flag anomalies in medical images.

Paper 3: Multimodal Federated Learning With Missing Modalities through Feature Imputation Network
Pranav Poudel, Aavash Chhetri, Prashnna Gyawali, Georgios Leontidis, Binod Bhattarai
See full paper: arXiv

Federated learning enables multi-institutional collaboration without sharing raw data, but in healthcare settings missing data modalities (like uncollected scans or tests) are a common challenge. This paper introduces a lightweight feature imputation network to reconstruct missing modality data at the feature level instead of synthesizing raw inputs (see the illustrative sketch at the end of this post). Tested across three major chest X-ray datasets (MIMIC-CXR, NIH Open-I, CheXpert), in both uniform and varied data conditions, this method improved performance over standard baselines. The approach is efficient, preserves privacy, and supports real-world clinical AI deployment even when data is incomplete.

Paper 4: Hallucination-Aware Multimodal Benchmark for Gastrointestinal Image Analysis with Large Vision-Language Models
Bidur Khanal, Sandesh Pokhrel, Sanjay Bhandari, Ramesh Rana, Nikesh Shrestha, Ram Bahadur Gurung, Cristian Linte, Angus Watson, Yash Raj Shrestha, Binod Bhattarai
MICCAI 2025
See full paper: arXiv

Vision-Language Models (VLMs), designed to interpret medical images and generate clinical text, can sometimes produce descriptions that do not match the visual content, known as hallucinations. To address this in gastrointestinal (GI) image analysis, the researchers created Gut-VLM, a dataset built in two stages: initially generating reports using ChatGPT for Kvasir-v2 images (which may contain hallucinations), followed by expert review to correct and tag inaccuracies. Rather than solely fine-tuning VLMs to generate descriptive reports, they propose hallucination-aware fine-tuning, training models to detect and correct hallucinations. This approach outperformed traditional report-generation fine-tuning, and the work establishes a new benchmark for evaluating VLM fidelity in GI image analysis.

These acceptances reflect NAAMII’s continued focus on practical clinical challenges in medical AI: from out-of-distribution detection to hallucination-aware vision-language models and federated learning under real-world constraints. The work spans foundational methods and applied problems, with a shared aim of improving reliability and safety in healthcare AI systems.

We’re proud of all the researchers and teams behind this work, for their rigor, creativity, and sustained effort. Your commitment continues to set the tone for what research at NAAMII stands for. Congratulations to everyone involved!
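The centroid-distance idea behind NCDD (Paper 2), which NERO (Paper 1) echoes at the level of relevance maps, can be illustrated in a few lines. The following is only a minimal sketch of the intuition described above, not the papers' exact scoring: the features are synthetic stand-ins for backbone embeddings, and the 95th-percentile threshold is our assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for backbone features: three in-distribution classes in 2-D.
train_feats = np.concatenate([rng.normal(loc=c, scale=0.3, size=(50, 2))
                              for c in ([0, 0], [4, 0], [0, 4])])
train_labels = np.repeat([0, 1, 2], 50)

def class_centroids(feats, labels):
    # Mean feature vector of each in-distribution class.
    return np.stack([feats[labels == c].mean(axis=0) for c in np.unique(labels)])

def nearest_centroid_distance(x, centroids):
    # Distance from one sample's features to its closest class centroid;
    # larger values suggest the sample is out-of-distribution.
    return np.linalg.norm(centroids - x, axis=1).min()

centroids = class_centroids(train_feats, train_labels)

# Flag samples whose distance exceeds a threshold estimated from
# in-distribution data (95th percentile chosen here for illustration).
threshold = np.percentile(
    [nearest_centroid_distance(f, centroids) for f in train_feats], 95)

print(nearest_centroid_distance(np.array([0.1, 0.2]), centroids) > threshold)    # False: near a centroid
print(nearest_centroid_distance(np.array([10.0, 10.0]), centroids) > threshold)  # True: far from all centroids
```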
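For Paper 3, the sketch below shows what feature-level imputation of a missing modality can look like in principle. The architecture, feature dimensions, and fusion step are all illustrative assumptions; the paper's actual imputation network may differ.

```python
import torch
import torch.nn as nn

class FeatureImputer(nn.Module):
    """Small MLP that predicts the feature vector of a missing modality
    from the features of an available one (illustrative sizes only)."""
    def __init__(self, in_dim=512, out_dim=512, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, out_dim),
        )

    def forward(self, available_feats):
        return self.net(available_feats)

# A client missing, say, the report-text modality imputes its features
# from the image features before multimodal fusion; raw data never
# leaves the client, in keeping with the federated setting.
imputer = FeatureImputer()
image_feats = torch.randn(8, 512)                     # available modality
text_feats = imputer(image_feats)                     # stand-in for the missing one
fused = torch.cat([image_feats, text_feats], dim=1)   # input to the task head
```

Imputing at the feature level keeps the network small compared with generating raw images or text, which is what makes this practical for bandwidth- and compute-constrained federated clients.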

Three TOGAI Papers Accepted at MICCAI and ICFP
August 1, 2025

TOGAI (Transforming Global Health with AI) specializes in AI for intelligent and affordable health technologies in low-resource settings, tackling critical gaps in diagnostics, specialist access, and information equity. TOGAI has achieved acceptances at international conferences, with two publications at MICCAI 2025 (including the MIRASOL satellite workshop) and one at ICFP 2025.

MICCAI focuses on medical image computing and computer-assisted intervention. TOGAI’s paper has been accepted for oral presentation. This work comes from PhD student Tanya Akumu at the University of Barcelona, mentored by Dr. Bishesh Khanal at TOGAI. ICFP focuses on family planning and reproductive health research and practice. Out of over 5,000 abstracts submitted from more than 125 countries, TOGAI’s paper was selected for the Poster Session category.

Conference Details

MICCAI 2025: 23–27 September 2025, Daejeon Convention Center, South Korea
ICFP 2025: 3–6 November 2025, Bogotá, Colombia

Paper 1: From Development to Deployment of AI-assisted Telehealth and Screening for vision- and hearing-threatening diseases in resource-constrained settings: Field Observations, Challenges and Way Forward
Mahesh Shakya, Bijay Adhikari, Nirsara Shrestha, Bipin Koirala, Arun Adhikari, Prasanta Poudyal, Luna Mathema, Sarbagya Buddhacharya, Bijay Khatri, Bishesh Khanal
MICCAI 2025: MIRASOL (Medical Image Computing in Resource-Constrained Settings & Knowledge Interchange Workshop)

Vision- and hearing-threatening diseases cause preventable disability in settings with few specialists and limited screening infrastructure. AI-assisted screening and telehealth can expand early detection, but deployment is challenging and few field experiences exist. This work shares lessons from developing scalable AI-assisted screening programs. Iterative co-design, interdisciplinary collaboration, prototyping, shadow deployments, and continuous feedback are critical for reducing usability hurdles. Similarly, public AI models and datasets are valuable despite domain shift, and automated image quality checks are essential for capturing gradable images in high-volume camps. By documenting these challenges, the work fills a gap in actionable field knowledge for real-world AI-assisted telehealth and mass screening in resource-constrained settings.

Paper 2: Adaptive Frame Selection for Gestational Age Estimation from Blind Sweep Fetal Ultrasound Videos
Tanya Akumu, Marawan Elbatel, Victor M. Campello, Richard Osuala, Carlos Martin-Isla, Ignacio Valenzuela, Xiaomeng Li, Bishesh Khanal, Karim Lekadir
MICCAI 2025
See GitHub Code

Blind sweep ultrasound is promising for prenatal care in low-resource settings, but current AI methods for gestational age estimation face key challenges: manual segmentation, inefficient frame processing, and suboptimal sampling with small datasets. This work introduces SelectGA, a framework that adaptively selects the most informative and least redundant frames from ultrasound videos (a generic sketch of this selection idea follows at the end of this post). Tested on data from diverse ultrasound devices, SelectGA reduced mean absolute error in gestational age prediction by 27%. The study shows that adaptive frame selection can make AI-powered ultrasound more accurate and computationally efficient, laying the foundation for sustainable prenatal care in resource-constrained healthcare systems.

Paper 3: Can Chatbots Bridge SRH Information Gaps? A Community-Level Evaluation of ChatGPT in Nepal
Medha Sharma*, Supriya Khadka, Shilpa Lamichhane, Udit Chandra Aryal, Bishnu Hari Bhatta, Bijayan Bhattarai, Santosh, Bishesh Khanal
*Corresponding author: Medha Sharma
ICFP 2025
See full paper: arXiv

This study evaluates ChatGPT-3.5’s performance in providing sexual and reproductive health (SRH) information to diverse community users across Nepal. Analyzing over 13,000 chatbot responses from students, community members, and Female Community Health Volunteers, experts assessed accuracy, usability, safety, and potential bias. Results showed that while 62% of responses were accurate, only 35% met full quality criteria, with gaps especially on anatomy and contraception topics. No significant demographic bias was found. The study highlights ChatGPT’s potential and limitations in low-resource, multilingual settings and offers key insights for developing safer, culturally relevant AI chatbots to improve equitable SRH access.

We congratulate the TOGAI team for their work, which continues NAAMII’s commitment to leveraging AI for public health outcomes.
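This post does not spell out SelectGA's actual selection criterion, so the sketch below only illustrates the general idea of picking informative yet non-redundant frames, using a generic greedy maximal-marginal-relevance trade-off. The frame features, scores, and the `lam` weight are assumptions for illustration, not SelectGA itself.

```python
import numpy as np

def select_frames(frame_feats, scores, k, lam=0.5):
    """Greedily pick k frames that score high on informativeness while
    staying dissimilar to frames already selected. A generic
    maximal-marginal-relevance sketch, not SelectGA itself."""
    # Cosine similarity between every pair of frames.
    normed = frame_feats / np.linalg.norm(frame_feats, axis=1, keepdims=True)
    sim = normed @ normed.T

    selected = [int(np.argmax(scores))]               # start from the top frame
    while len(selected) < k:
        redundancy = sim[:, selected].max(axis=1)     # closeness to picks so far
        gain = lam * scores - (1 - lam) * redundancy  # informativeness vs. novelty
        gain[selected] = -np.inf                      # never pick a frame twice
        selected.append(int(np.argmax(gain)))
    return selected

# Toy usage: 100 frames with 64-D features and per-frame quality scores.
rng = np.random.default_rng(0)
feats = rng.normal(size=(100, 64))
scores = rng.uniform(size=100)
print(select_frames(feats, scores, k=8))
```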
