Can Machines Really Tell Us If We’re Sick?

A lump that bleeds or develops a crust may indicate skin cancer. Credit: Wikimedia Commons

This week US scientists announced they have developed an algorithm, or a computerised tool, to identify skin cancers through analysis of photographs.

Rather than relying on human eyes, the new method scans a photo of a patch of skin to look for common and dangerous forms of skin cancer. The authors report their approach performs on par with board-certified dermatologists in distinguishing two forms of cancer, keratinocyte carcinoma and malignant melanoma, from benign skin lesions.

The skin cancer diagnostic tool is based on a powerful type of machine learning that extracts information from images. The critical factor in achieving the accuracy and reliability required for a medical diagnostic tool is the large volume of training data the authors used: 129,450 skin images, each carrying a label indicating whether it contains a cancerous region. The machine is trained on this data to make the distinction automatically.

Part of what distinguishes this approach is that it can analyse images taken with a simple hand-held camera, such as the ones on most phones. This means a GP, or even a patient, could take a photo of a patch of skin that presents concerns and receive an indication as to whether it contains a cancerous region.

But translating this research result into a clinical product that can be used for practical diagnosis will require significant further development, documentation, and testing.

Like humans, machines can learn through experience

At its core, machine learning is a very simple idea. Instead of telling a computer how to solve a problem, you instead give it a set of examples from which to learn how to solve the problem itself.

As an example, the task of distinguishing images of cats from those of dogs seems relatively simple, to the point where a toddler can do it. However, no human can write down a set of instructions for a computer to perform it accurately. Both sets of images contain furry animals in various poses, but no obvious distinction between the two sets is suitable to form the basis of a computer program to partition them.

Machine learning solves this problem by avoiding the need for a human to articulate a decision rule separating the two classes. Instead, under the machine learning approach, we simply provide labelled examples of both classes, and the system learns to make the distinction on its own.

Many problems in the interpretation of medical data can be cast in terms that machine learning can understand. The problem of identifying cancerous skin lesions is very similar to that of separating pictures of cats and dogs. A set of examples of each class is provided, suitably labelled, and the machine learning system learns to distinguish between them.
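The learn-from-labelled-examples idea can be shown with a deliberately tiny sketch. This is not the paper's method: real systems learn from photographs, whereas here each "image" is reduced to an invented two-number feature vector, and the "learning" is a simple nearest-centroid rule fitted to labelled examples of two classes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Labelled training examples: class 0 clusters near (0, 0), class 1 near (3, 3).
X_train = np.vstack([rng.normal(0.0, 0.5, (50, 2)),
                     rng.normal(3.0, 0.5, (50, 2))])
y_train = np.array([0] * 50 + [1] * 50)

# "Learning" here is just summarising each labelled class by its mean vector;
# no human ever writes down a rule separating the two classes.
centroids = np.array([X_train[y_train == c].mean(axis=0) for c in (0, 1)])

def classify(x):
    """Assign x to the class whose learned centroid is nearest."""
    distances = np.linalg.norm(centroids - x, axis=1)
    return int(np.argmin(distances))

print(classify(np.array([0.2, -0.1])))  # a point near class 0's cluster
print(classify(np.array([2.8, 3.1])))   # a point near class 1's cluster
```

The point of the sketch is the workflow, not the model: provide labelled examples, fit a rule automatically, then apply that rule to new, unlabelled inputs.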

Machine learning works a bit like your brain

In this recently published case, the machine learning system is a neural network, a form of machine learning that is particularly well suited to processing images. A neural network is loosely based on the architecture of the brain, in that it is made up of a large, hierarchical collection of small, simple processing units.
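The "hierarchical collection of small, simple processing units" can be made concrete with a minimal sketch, assuming invented toy sizes rather than anything from the published system. Each unit just weights its inputs, sums them, and applies a simple nonlinearity; stacking layers of such units gives the hierarchy.

```python
import numpy as np

rng = np.random.default_rng(1)

def relu(x):
    """A simple nonlinearity: pass positive values, zero out the rest."""
    return np.maximum(0.0, x)

# Illustrative sizes only: a 4-pixel "image" -> 3 hidden units -> 2 class scores.
W1, b1 = rng.normal(size=(3, 4)), np.zeros(3)
W2, b2 = rng.normal(size=(2, 3)), np.zeros(2)

def forward(pixels):
    hidden = relu(W1 @ pixels + b1)   # first layer: simple units react to raw input
    scores = W2 @ hidden + b2         # second layer: combines the units below it
    return scores                     # one score per class

print(forward(np.array([0.1, 0.9, 0.4, 0.2])))
```

Training adjusts the weights (here `W1`, `b1`, `W2`, `b2`) so the scores match the labels in the training data; networks used on real images apply the same idea with many more layers and vastly more weights.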

Neural networks rose to prominence in the 1980s but faded as their performance failed to meet expectations. The revolution in machine learning, and the resurgence of neural networks, is due to rapid and recent advances in the collection and storage of large volumes of data, and in the computing power required to process them.

Advances in computer graphics technology, driven largely by computer games, have given rise to hardware capable of processing thousands of images a second. Achieving reasonable results can still take millions of images and weeks of processing, as more than a billion factors may need to be fine-tuned, but computer graphics technology makes this achievable.

Machines are better than humans for some types of decisions

For medical and other decisions, humans have the edge when high-level analysis, or background knowledge, is required. Answering general questions about the content of an image (for example, “Is the tall girl wearing a red shirt?”, or “What kind of party is this?”) requires background knowledge about the types of objects humans are interested in.

In situations where the only information required to make the decision is in the signal itself, machine learning wins by a small margin. The first prominent example of this was street sign recognition in images. Street signs are designed specifically to attract the attention of the human visual system, and yet the machine learning approach outperforms humans in both accuracy and reliability. This result has since been repeated for many types of signal, from speech to medical records, and now to images of skin lesions.

The value of the machine learning approach is not only that it is more accurate than humans, though. It is also cheaper, and more consistent in its diagnoses.

These factors combined will allow the deployment of machine-learning-based medical devices in GPs' offices, and in country and military hospitals. These systems will provide near-instant access to information that would previously have required a scan and a trip to a specialist, allowing the doctor to react immediately rather than months down the track. This will greatly improve patient outcomes and reduce medical costs.

The above article originally appeared in theconversation.com and was written by a Professor of Computer Science, University of Adelaide.