New Developments in How Data Science is Helping in the Battle Against Covid-19

Authored by Ayesha Rajan, Research Analyst at Altheia Predictive Health

 

Introduction

As we move into the winter months, many states are hitting new and unprecedented records of Covid-19 cases. A winter surge has been expected since lockdowns began in March, but that does not make it any less concerning to healthcare professionals and researchers. In an effort to prevent deaths and long-term side effects of Covid-19, researchers are constantly looking for new ways to apply artificial intelligence and machine learning. In today's article we will take a look at some of these emerging applications of data science in the battle against Covid-19. 

Discussion 

In a previous article, we detailed the ways in which machine learning can be applied to medical imaging. Because machine learning models have a great capacity for recognizing images and the anomalies within them, applying data science to imaging has the potential to help identify Covid-19 cases in x-rays. Researchers at Northwestern Medicine Bluhm Cardiovascular Institute have been working with a machine learning algorithm trained on more than 17,000 chest x-rays and found that it could “detect COVID-19 in x-ray images about ten times faster and one to six percent more accurately than specialized thoracic radiologists.” 
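To make the idea concrete, here is a minimal sketch of the kind of supervised image classifier such research builds on: a simple logistic regression trained on synthetic pixel data. All of the data, features, and parameters below are invented for illustration; a production system would use deep convolutional networks trained on real radiographs.

```python
import math
import random

random.seed(0)

# Synthetic stand-in for flattened chest x-ray pixel intensities.
# "Positive" images are brighter on average than "negative" ones.
def make_image(label, n_pixels=64):
    base = 0.7 if label == 1 else 0.3
    return [min(1.0, max(0.0, random.gauss(base, 0.1))) for _ in range(n_pixels)]

train = [(make_image(y), y) for y in [0, 1] * 200]

# Minimal logistic-regression classifier trained with gradient descent.
w, b, lr = [0.0] * 64, 0.0, 0.1
for _ in range(20):
    for x, y in train:
        z = sum(wi * xi for wi, xi in zip(w, x)) + b
        z = max(-30.0, min(30.0, z))        # clamp to avoid exp overflow
        p = 1.0 / (1.0 + math.exp(-z))      # sigmoid
        g = p - y                           # gradient of the log loss
        w = [wi - lr * g * xi for wi, xi in zip(w, x)]
        b -= lr * g

def predict(x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Held-out check: the model should separate the two synthetic classes.
test = [(make_image(y), y) for y in [0, 1] * 50]
accuracy = sum(predict(x) == y for x, y in test) / len(test)
print(round(accuracy, 2))
```

Real radiology models replace the hand-rolled classifier with deep networks, but the training loop follows the same shape: forward pass, loss gradient, weight update.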

Though the algorithm is still in the research phase, it has great potential to become a significant tool. It is also open source, which means that other researchers can contribute to and build on the existing algorithm.

Another application of data science in the battle against Covid-19 is in the development of a vaccine. The pharmaceutical giant Pfizer recently received emergency authorization in the U.K. to distribute its vaccine and, with a reported efficacy rate of roughly 90%, the United States is likely not far behind. However, researchers at MIT’s Computer Science and Artificial Intelligence Lab have found results in their data that “suggest that the vaccines may not have the same impact among all patient populations” and may leave minority groups behind, specifically those of Black and Asian descent. This is incredibly important because it affects how minority groups will experience a “post-Covid” world: they may have to continue to take precautions until a fully effective vaccine has been developed. As artificial intelligence algorithms continue to study these vaccines, they may find that these groups are more likely to carry a gene or gene sequence that prevents the vaccine from being effective; if so, researchers can begin to create targeted gene therapies that allow the vaccine to perform its functions effectively. 

Conclusion 

As we continue to try to prevent the spread of Covid-19 across the United States, machine learning and artificial intelligence play an important role in improving our understanding of the virus and its effects on our bodies and society. 

 

Applying Data Science to Nutrition

Authored by Ayesha Rajan, Research Analyst at Altheia Predictive Health

 

Introduction 

In all of our articles regarding the application of data science to various diseases, we always include tips for prevention. One tip we always include is to make smart decisions when it comes to nutrition and the foods you ingest. This means avoiding processed foods when possible and opting for choices that provide your body with necessary and beneficial nutrients. The benefits of a healthy diet are well documented, and in this week’s article we will take a look at how data science can improve our understanding of our dietary choices, as well as improve our health. 

Discussion 

When it comes to looking at how nutrition affects our health, we are usually looking at the field of nutrigenomics, which is essentially the study of the biological processes that take place after the ingestion of a certain food or combination of foods. To conduct tests in this space, a researcher will usually take bodily measurements such as height, weight, health conditions, drug intake and dosage, blood pressure, glucose levels and more. Then, similar to how diabetes or hypertension studies are performed, the collected data sets are processed by a range of data science tools, such as “cluster detection, memory-based reasoning, genetic algorithms, link analysis, decision trees, and neural networks.”  
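As a toy illustration of the “cluster detection” tool mentioned above, the sketch below groups hypothetical patient measurements with a hand-rolled k-means. All values are invented for illustration, not drawn from any real study.

```python
import random

random.seed(1)

# Hypothetical patient measurements: (fasting glucose mg/dL, BMI).
# Two loose groups: one metabolically healthy, one at-risk.
patients = (
    [(random.gauss(90, 5), random.gauss(23, 1.5)) for _ in range(30)]
    + [(random.gauss(140, 8), random.gauss(31, 2)) for _ in range(30)]
)

def kmeans(points, iters=20):
    # Two clusters; the first and last points serve as starting centers.
    centers = [points[0], points[-1]]
    for _ in range(iters):
        clusters = [[], []]
        for p in points:
            # Assign each point to its nearest center (squared distance).
            d = [(p[0] - c[0]) ** 2 + (p[1] - c[1]) ** 2 for c in centers]
            clusters[d.index(min(d))].append(p)
        # Move each center to the mean of its assigned points.
        centers = [
            (sum(p[0] for p in cl) / len(cl), sum(p[1] for p in cl) / len(cl))
            if cl else centers[i]
            for i, cl in enumerate(clusters)
        ]
    return centers, clusters

centers, clusters = kmeans(patients)
print(sorted(round(c[0]) for c in centers))  # glucose means of the two groups
```

A real nutrigenomics study would cluster many more variables at once, but the mechanics of grouping similar patients are the same.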

One of the biggest challenges in interpreting the results of these studies is that no two bodies are exactly the same, so there is a lot of variation in how certain foods affect us. For this reason, large-scale studies must be the norm in this field, especially when it comes to understanding how certain foods affect those with a specific condition.  

One company leading the way in making nutrition a data-science-driven field of study is Nutrino. Nutrino leverages AI and machine learning to understand how measurable nutrition decisions affect user inputs such as “allergies, physical activity, sleep, mood, glucose, and insulin level.” It also takes in information about users’ preexisting conditions, both to analyze more accurately how dietary decisions affect those users and to help them manage their chronic conditions. 

Conclusion

The insights gained from applying data science to nutrition can be extremely impactful for those with chronic conditions by identifying which foods best support their health goals. This work can also help the general population by clarifying how our nutrition decisions impact our lives and can help prevent disease. As the health and functional foods industry continues to grow faster than ever before, there is certainly market demand for further research in this space. 

 

 

Using Data Science to Increase the Usability of Prosthetics

Authored by Ayesha Rajan, Research Analyst at Altheia Predictive Health

Introduction 

Before prosthetics merged with modern technology, most existed for the purpose of aesthetics and balance but were not functional. New developments in computer engineering and data science, however, have dramatically changed what prosthetics can do. Now, prosthetic limbs can help those who need them improve their walking patterns or even hold items. With around one million new amputees globally each year, and a well-documented lack of disability accommodation worldwide, there is a pressing need to continue developing improvements for prosthetics. 

Discussion 

As we have mentioned in previous articles, artificial intelligence and machine learning have demonstrated immense capabilities in image recognition. The code behind the logic that helps these tools distinguish between objects is vital to the usability of prosthetic hands and arms. At Newcastle University, further research is under way to improve this process, but the technology is already proven: prosthetic hands and arms can be fitted with technology that distinguishes between objects so the prosthetic can hold or handle them better. For example, one would hold a teacup differently than a brick, so knowing not only how to tell them apart but also how to handle them differently can make a huge difference to those who need support in that area. 
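The teacup-versus-brick idea can be sketched as a simple pipeline from object recognition to grip selection. The classifier below is a toy rule-based stand-in for the trained vision models used in real prosthetics; every name, threshold, and grip label is hypothetical.

```python
# Hypothetical mapping from a recognized object class to a grip strategy.
GRIPS = {
    "teacup": "pinch",   # light, precise grip on the handle
    "brick": "power",    # full-hand wrap with high force
    "pen": "tripod",     # three-finger writing grip
}

def classify_object(width_cm, weight_g, has_handle):
    # Toy stand-in for an image-recognition model's predicted class.
    if has_handle and weight_g < 400:
        return "teacup"
    if weight_g > 1000:
        return "brick"
    return "pen"

def choose_grip(width_cm, weight_g, has_handle):
    # Recognize the object, then look up how the hand should hold it.
    label = classify_object(width_cm, weight_g, has_handle)
    return label, GRIPS[label]

print(choose_grip(8, 250, True))     # a teacup calls for a gentle pinch
print(choose_grip(20, 2500, False))  # a brick calls for a power grip
```

The design point is the separation of concerns: the vision model only names the object, and a lookup from class to grip strategy keeps the control logic simple to extend.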

For those with lower-body amputations, the biggest struggle is often walking itself. Without the ability to walk, many amputees face new health issues: lack of exercise can lead to complications like diabetes and heart disease, which in turn decrease quality of life. It is therefore highly important for amputees to be able to walk, but walking with an irregular gait or unevenly distributed weight can cause muscle and nerve problems that need to be avoided. Companies like ReWalk are releasing smart prosthetics that analyze walking patterns and adjust the prosthetic accordingly to improve the impact and smoothness of the user's stride. 
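One common way to quantify an uneven gait is a symmetry index over left and right stance times; the sketch below shows how a smart prosthetic might flag sustained asymmetry for adjustment. The threshold and timings are illustrative, not clinical values.

```python
# Symmetry index from left/right stance times: 0.0 means a perfectly
# symmetric gait; larger values mean more asymmetry.
def symmetry_index(left_stance_s, right_stance_s):
    return abs(left_stance_s - right_stance_s) / ((left_stance_s + right_stance_s) / 2)

def needs_adjustment(strides, threshold=0.10):
    # Average the index over recent strides and flag sustained asymmetry.
    avg = sum(symmetry_index(l, r) for l, r in strides) / len(strides)
    return avg > threshold

# (left, right) stance times in seconds for a few recent strides.
even = [(0.62, 0.63), (0.61, 0.62), (0.63, 0.62)]
uneven = [(0.55, 0.72), (0.54, 0.70), (0.56, 0.73)]
print(needs_adjustment(even), needs_adjustment(uneven))  # False True
```

A real device would fold in many more signals (force, joint angle, cadence), but a windowed asymmetry score like this is the kind of quantity a controller can act on.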

Conclusion 

The fact that this technology exists is amazing in its potential to positively impact millions of people. The next step, however, is to make it as widely available as possible, which means clearing the clinical and regulatory steps required for coverage by insurance providers. As we look at the legislation and processes involved in approving new medical technology, it is important to remember that technology is moving faster than ever, and generally much faster than government processes. Ensuring that technology like this reaches consumers as soon as possible is vital to improving the quality of life of those who need these tools. 

 

 

Data Science in Drug Discovery

Authored by Ayesha Rajan, Research Analyst at Altheia Predictive Health

Introduction

Creating new drugs is often a painstaking process. First, the molecular structure of a disease has to be determined, along with how it develops and operates; then researchers must find a target gene or protein that heavily influences the disease in order to begin creating targeted gene or protein therapy before drug development begins. Next comes research on absorption, interactions, effects, effectiveness and dosing, followed by clinical trials, which consist of several phases. Finally, FDA review and approval must come through before a new drug can go to market. For many, the most fraught stage is the clinical trials and the decision of when to enter them, and this is where data science can be a huge asset in drug discovery. 

Discussion

On average, “it costs up to $2.6 billion and takes 12 years to bring a drug to market.” This figure is upsetting to drug developers and patients alike; some patients may not survive the wait, while others’ lives would be significantly improved if development were sped up. Ultimately, the clinical trial phase could use some modernization. Artificial intelligence and machine learning techniques have already proven capable of mimicking processes of the human body, and this can be leveraged to analyze how a drug interacts with a generally healthy person’s body as well as with the body of someone who has certain conditions, such as diabetes or hypertension. By the time such a drug reaches clinical trials, it has already been tested against the human body in some ways, which significantly lowers the risk for trial participants. Another significant development is that AI can imitate a patient's aging process, which means drugs can be released in small batches to those who absolutely need them while long-term trials are still ongoing. Mark Ramsey, Chief Data Officer at GSK, says he hopes this type of mapping can expedite the process from over a decade to less than two years. Additionally, analysts at McKinsey have estimated that merging AI with drug discovery could “create a value of up to $100 billion.” 
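A rough flavor of such in-silico testing can be given with a Monte Carlo sketch: simulate outcomes for virtual patients with and without a comorbidity and compare the average drug effect. Every number here is an invented illustration, not a clinical estimate.

```python
import random

random.seed(42)

# Toy in-silico trial: a comorbidity (e.g. diabetes) is assumed to
# dampen the average drug effect. Parameters are purely illustrative.
def simulate_patient(has_comorbidity):
    baseline = random.gauss(0.0, 1.0)             # untreated outcome score
    mean_effect = 1.2 if has_comorbidity else 2.0  # assumed drug effect
    return baseline + random.gauss(mean_effect, 0.5)

def simulated_cohort_mean(n, has_comorbidity):
    return sum(simulate_patient(has_comorbidity) for _ in range(n)) / n

healthy = simulated_cohort_mean(5000, False)
comorbid = simulated_cohort_mean(5000, True)
print(round(healthy, 1), round(comorbid, 1))
```

Real in-silico models replace these Gaussian draws with mechanistic simulations of absorption and metabolism, but the payoff is the same: cohort-level comparisons before any human is dosed.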

Conclusion

This type of technology can be highly impactful for patients suffering from diseases that could be cured or at least have symptoms eased by drugs still in production. Furthermore, this technology can also be leveraged in the fight against Covid-19 by utilizing test cases for treatment plans and vaccines. 

 

 

What Role Can Analytics Play in Imaging?

Authored by Ayesha Rajan, Research Analyst at Altheia Predictive Health

Introduction 

X-rays are a key part of many treatment plans because of the valuable information they provide. However, they do come with a small increased risk of certain cancers. While this is not worrisome for most people undergoing diagnostic imaging, it can be a concern for patients who are monitoring symptoms and need more frequent imaging. This creates demand for more efficient image-reading techniques. Artificial intelligence methods have been very successful in image recognition and have become an important tool in improving x-ray readings. Today we will look at the methodologies used in this process, as well as one of the groups leading the way in these endeavors. 

Discussion

How: Applying AI and machine learning to imaging requires intensive model training. Engineers create highly specific parameters within their algorithms that tell models how to identify pixel-level or 3-dimensional characteristics of abnormalities, such as tumors. The algorithms generally focus on finding flagged biomarkers, with methodologies commonly built on support vector machines and random forests. Learning architectures can also be supported by convolutional neural networks that map images and focus on extracting key features. All of these methods increase the quality and sensitivity of image readings to the point that an algorithm can be more accurate than the human eye, i.e. a radiologist. 
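To illustrate the random-forest idea in miniature, the sketch below takes a majority vote over an ensemble of randomized decision stumps trained on synthetic "biomarker" scores. It is a toy stand-in for the models used on real radiographs; the data and features are invented.

```python
import random

random.seed(7)

# Synthetic "biomarker" scores: positive cases are elevated on the
# first two features; the third is pure noise.
def make_case(label):
    shift = 1.5 if label else 0.0
    return [random.gauss(shift, 1.0), random.gauss(shift, 1.0),
            random.gauss(0.0, 1.0)]

train = [(make_case(y), y) for y in [0, 1] * 100]

def fit_stump(data):
    # A decision stump on one random feature: try every training value
    # as a threshold and keep the threshold/polarity that fits best.
    f = random.randrange(3)
    best = None
    for x, _ in data:
        t = x[f]
        acc = sum((xi[f] > t) == bool(y) for xi, y in data) / len(data)
        for polarity, a in ((1, acc), (-1, 1.0 - acc)):
            if best is None or a > best[0]:
                best = (a, f, t, polarity)
    _, f, t, polarity = best
    return lambda x: int(x[f] > t) if polarity == 1 else int(x[f] <= t)

# A small "forest": each stump sees a random subsample of the data.
forest = [fit_stump(random.sample(train, 60)) for _ in range(25)]

def predict(x):
    votes = sum(stump(x) for stump in forest)
    return int(votes * 2 > len(forest))  # majority vote

test = [(make_case(y), y) for y in [0, 1] * 50]
accuracy = sum(predict(x) == y for x, y in test) / len(test)
print(round(accuracy, 2))
```

Each weak learner is only slightly better than chance on its own; the sensitivity gain the paragraph describes comes from aggregating many of them.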

Who: CheXNeXt is an algorithm created by researchers at Stanford University who sought to increase the accuracy of diagnoses of chest conditions by applying artificial intelligence and machine learning techniques to the imaging process. CheXNeXt was trained on a dataset consisting of “112,120 frontal-view chest radiographs of 30,805 unique patients,” with labels produced by an automatic extraction method applied to radiology reports. The training process “consists of 2 consecutive stages to account for the partially incorrect labels in the ChestX-ray14 dataset. First, an ensemble of networks is trained on the training set to predict the probability that each of the 14 pathologies is present in the image. The predictions of this ensemble are used to relabel the training and tuning sets. A new ensemble of networks are finally trained on this relabeled training set. Without any additional supervision, CheXNeXt produces heat maps that identify locations in the chest radiograph that contribute most to the network’s classification using class activation mappings (CAMs).” With this approach, CheXNeXt is able to diagnose 14 chest-related diseases with greater accuracy than a radiologist. 
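The two-stage relabeling scheme quoted above can be sketched on toy one-dimensional data: train an ensemble on noisy labels, replace those labels with the ensemble's majority vote, then train a final model on the cleaned labels. Nearest-centroid classifiers stand in for the deep networks, and all numbers are illustrative.

```python
import random

random.seed(3)

def make_data(n, noise=0.25):
    # Each record: (feature, observed noisy label, true label).
    data = []
    for _ in range(n):
        y = random.randint(0, 1)
        x = random.gauss(2.0 * y, 0.6)                       # class 0 near 0, class 1 near 2
        observed = y if random.random() > noise else 1 - y   # label noise
        data.append((x, observed, y))
    return data

def fit_centroids(pairs):
    # Nearest-centroid classifier: predict the class whose mean is closer.
    m0 = sum(x for x, y in pairs if y == 0) / sum(1 for _, y in pairs if y == 0)
    m1 = sum(x for x, y in pairs if y == 1) / sum(1 for _, y in pairs if y == 1)
    return lambda x: int(abs(x - m1) < abs(x - m0))

data = make_data(400)

# Stage 1: an ensemble trained on bootstrap samples of the noisy labels.
ensemble = [
    fit_centroids([(x, obs) for x, obs, _ in random.choices(data, k=len(data))])
    for _ in range(15)
]

# Relabel: the ensemble's majority vote replaces the noisy labels.
relabeled = [(x, int(sum(m(x) for m in ensemble) > 7)) for x, _, _ in data]

# Stage 2: a final model trained on the relabeled set.
final = fit_centroids(relabeled)

true_acc = sum(final(x) == y for x, _, y in data) / len(data)
noisy_agreement = sum(obs == y for _, obs, y in data) / len(data)
print(round(true_acc, 2), round(noisy_agreement, 2))
```

The point of the second stage is visible in the printout: the final model agrees with the true labels more often than the original noisy labels do.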

Conclusion

Artificial intelligence techniques have not yet made their way into the mainstream. However, initial research and testing suggest that AI and machine learning can have an important impact on the diagnosis of many conditions picked up by x-rays. CheXNeXt is one of a handful of research efforts leading the way on this initiative and, hopefully, as time goes on we will see this technology applied to x-rays in search of conditions such as bone cancer, digestive tract issues, osteoporosis and arthritis. It is also a hopeful step toward reducing the need for repeated x-rays by making diagnoses more efficient, with artificial intelligence supporting a radiologist.