Health Innovation

New AI tool reads medical images with minimal data

This could make diagnostic tools faster and more affordable.

A new AI tool could simplify and reduce the cost of training medical imaging software, even with limited patient scans.

This tool enhances medical image segmentation, where each pixel in an image is labeled to show what it represents, like cancerous or healthy tissue. Normally, this requires a highly trained expert, but deep learning can automate it. However, deep learning needs lots of labeled images, which are time-consuming and expensive to create, especially for rare conditions.
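To make the labeling idea concrete, here is a minimal sketch (not the researchers' code) of what a segmentation mask looks like and how a model's prediction is typically scored against an expert's labels, using the standard Dice overlap coefficient:

```python
import numpy as np

# Toy 4x4 "scan": an expert labels each pixel (1 = lesion, 0 = healthy tissue).
ground_truth = np.array([
    [0, 0, 1, 1],
    [0, 1, 1, 1],
    [0, 1, 1, 0],
    [0, 0, 0, 0],
])

# A model's predicted mask for the same scan.
prediction = np.array([
    [0, 0, 1, 1],
    [0, 1, 1, 0],
    [0, 1, 1, 0],
    [0, 0, 0, 0],
])

def dice_score(pred, truth):
    """Dice coefficient: 2*|overlap| / (|pred| + |truth|); 1.0 is a perfect match."""
    overlap = np.logical_and(pred == 1, truth == 1).sum()
    return 2 * overlap / (pred.sum() + truth.sum())

print(round(dice_score(prediction, ground_truth), 3))  # prints 0.923
```

Deep learning automates producing masks like `prediction`; the bottleneck the article describes is obtaining enough expert-made masks like `ground_truth` to train on.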

Developed by UC San Diego researchers, including Ph.D. student Li Zhang and professor Pengtao Xie, this AI tool learns from just a few expert-labeled images, needing up to 20 times less data than usual. That efficiency could make diagnostic tools faster and more affordable to build, especially for hospitals with limited resources.

Published in Nature Communications, the tool was tested on tasks such as identifying skin lesions, breast cancer, and foot ulcers across several types of medical images. It improved performance by 10-20% over other methods while using far less data.

For example, a dermatologist could label just 40 images instead of thousands, and the AI could still accurately spot suspicious lesions in real time, speeding up diagnoses.

The tool works by creating synthetic images from labeled masks, which highlight healthy or diseased areas. These artificial images, combined with real ones, train the system. A feedback loop refines the synthetic images to improve accuracy.

In the future, the team aims to make the tool smarter and include clinician feedback to ensure it meets real-world medical needs.

Press release – University of California – San Diego