Investor’s Guide: How to Recognize Medical Startups with Highest Potential


Even if you don’t keep up with all the developments in the global health care industry, you probably realize it’s on the verge of major advances. If you don’t, I’ve got news for you: within the next 10 years, how we fight diseases and who helps us will change beyond recognition.

Medical image recognition and analysis is one of the most promising areas. Soon, artificial intelligence will be there to decide whether to show your x-ray or MRI scan to the diagnostician for further consideration, or to confirm that you have no health problems, sparing the doctor’s time. The next generation of neural networks will pre-analyze medical images, preparing a summary and marking important points to speed up diagnosis. They will also provide recommendations regarding the frequency of health examinations and will estimate individual risks of developing certain diseases, including oncological conditions, based on medical records.

Technological advances in medicine, and artificial intelligence in particular, open incredible opportunities for investors. McKinsey estimated the market potential at 5–10 billion US dollars. However, distinguishing promising startups from fake or weak projects can be a real challenge.

When we began seeking partners for our own project, we spoke with many different teams, only to find out that they often have very little idea of what they are doing. We wanted to research the market and discover technologies at the commercial stage, but it turned out to be harder than we expected. That is why, after dozens of talks and meetings, we put together a guide to evaluating medical artificial intelligence projects.

First of all, find out more about the team

It is important to know who heads the project. Ask about their professional background — Which medical institutions have they worked at? What have they achieved there? Have they published scientific papers in national or international journals? Answers to these questions will show you whether they have relevant experience and sufficient understanding of modern medical science.

Also, try to find out what prior experience with neural networks the team have, and whether they have a practical vision for their current project.

The second set of questions concerns training datasets

You should understand what kind of data they plan to use to train neural networks, and where it will come from.

For example, if they say their AI will analyze medical records to find new treatment options, try to clarify the following:

  • Which languages can the neural network process?
  • Have they consulted medical specialists to discuss the model?
  • What is the expected accuracy with anamnesis (medical records)?

Here is an example. IBM Watson, the world’s most famous neural network, has attracted 6 billion US dollars in investment, but can only process English. And, as far as I know, no accuracy rates have been published for their semantic analysis.

If it is a medical image recognition project you are looking at, you can go into more detail — What technical requirements apply to medical images? What kind of data have they used for training? Have the results been validated, and against what sources?

Have they performed quality assurance and validation? Can you feed your own data to see the results?

The third thing to ask is the current project stage

If it’s no more than just an idea, drop it and move on. There are already a dozen projects in the market in later stages. A project at the idea stage is rarely worth the risk.

If the project you are considering has at least some results to show, go on and take a closer look.

The fourth step is to check the legal status of training data

Make sure that the data for the project has been acquired legally, and the team are authorized to use it for their purpose. It’s a decisive factor for any AI project.

Investors must verify the legal status of medical datasets. Otherwise, even the most promising project will have no future.

The fifth criterion is the amount of data used for training

  • If it’s less than 10,000 instances, then the team have barely even started.
  • If it’s between 10,000 and 100,000, it might be worth continuing, but the team are yet to encounter major challenges.
  • If it’s between 100,000 and 1 million, the project should seem interesting. But this raises another question — Why do they still lack funding? In this case, you should pay special attention when performing due diligence.
  • If it’s above 1 million, you’ve come across a serious project. We would be glad to hear from such teams to discuss partnership opportunities.

If the project you are considering has satisfied the above five criteria, you should go ahead and meet in person to discuss two more matters. Take note of their reactions to your questions, and try to evaluate their grasp of the subject.

Ask them about the accuracy of their neural network model

A proper answer should include a set of indicators: overall accuracy (which is less useful), false positive rate, and false negative rate. The accuracy talk may take 10–15 minutes, but there’s only one thing you are looking for. You should be able to confirm that the team have a good understanding of these rates and their practical effect in real-life medicine.

The most meaningful indicator is the false negative rate. It’s the percentage of cases when AI tells patients with health problems that they are fine and don’t need medical attention. This rate must be as low as possible, and the acceptable theoretical accuracy should stand between 98% and 99.99%.
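
To make these indicators concrete, here is a minimal sketch of how they are typically computed from a model’s confusion matrix. The counts below are purely hypothetical and are not drawn from any real project.

```python
# Minimal sketch: overall accuracy, false positive rate and false negative rate
# derived from confusion-matrix counts. All numbers here are made up.

def evaluate(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Return the three indicators discussed above.

    tp: sick patients correctly flagged    fp: healthy patients wrongly flagged
    tn: healthy patients correctly cleared fn: sick patients wrongly cleared
    """
    return {
        # Share of all cases classified correctly (the least informative figure)
        "overall_accuracy": (tp + tn) / (tp + fp + tn + fn),
        # Share of healthy patients wrongly flagged as having the condition
        "false_positive_rate": fp / (fp + tn),
        # Share of sick patients wrongly told they are fine (the critical figure)
        "false_negative_rate": fn / (fn + tp),
    }

# Hypothetical screening results for 10,000 patients:
print(evaluate(tp=950, fp=40, tn=8960, fn=50))
# overall_accuracy = 0.991, false_positive_rate ≈ 0.0044, false_negative_rate = 0.05
```

Note how a headline accuracy above 99% can coexist with a 5% false negative rate, which is exactly why the overall figure alone tells you very little.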

Today, the false negative rate for oncologic diseases in the US stands at around 25%. This means that one out of every four patients does not get diagnosed with the condition they actually have — by human doctors.

The approach to computer vision solutions is different, and much stricter requirements apply. Projects with inadequate accuracy, below 90%, are simply out of the question.

Which diseases, diagnoses and symptoms can the neural network recognize, and how many of them does it identify unmistakably? How many conditions can actually be diagnosed using a certain type of medical imaging? How many of those can a leading specialist identify (e.g. doctors in a capital city)? What about an average specialist (e.g. provincial doctors)? Once you hear the replies, be sure to check them with a trusted medical professional, one whose opinion is enough to support your investment decision-making.

Let’s ask our hypothetical team some more questions, a little harder this time. They might not necessarily have the answers, but you should be able to draw some conclusions from the manner in which they react. If they are willing to speak on the subject, you can be more confident about the project.

Which markets are they planning to target? (There’s a catch — many teams might mention the US market, which enables you to press on, going into more detail):

  • How easy is it to interpret the output of the neural network? Are the results readable without prior training?
  • What language options are available for the output? Russian, English?
  • Does the output format comply with the US standards for patient data (interoperability requirements)?
  • Replying to this question, they should refer to the ISA and NIEM guidelines.
  • Does the output use a common, US-standardized vocabulary?

This brief guide makes up only about a third of our guidelines for evaluating medical neural network startups. If you are lucky enough to have discovered a project that satisfactorily passes all of the above, you should proceed with your investment. But don’t stop there — come speak with us to discover more opportunities for the project.

Pavel Roytberg, Co-Founder of Doctor Smart
