Applying Data Science to Nutrition

Authored by Ayesha Rajan, Research Analyst at Altheia Predictive Health


Introduction 

In all of our articles on applying data science to various diseases, we include tips for prevention, and one tip that always appears is to make smart decisions about nutrition and the foods you ingest. This means avoiding processed foods when possible and opting for choices that provide your body with necessary and beneficial nutrients. The benefits of a healthy diet are well documented, and in this week’s article we will look at how data science can improve our understanding of our dietary choices, as well as improve our health.

Discussion 

When we look at how nutrition affects our health, we are usually looking at the field of nutrigenomics, which is essentially the study of the biological processes that take place after the ingestion of a certain food or combination of foods. To conduct tests in this space, a researcher will usually collect measurements and records such as height, weight, health conditions, drug intake and dosage, blood pressure, glucose levels and more. Then, much as in diabetes or hypertension studies, the collected data sets are processed by a range of data science tools, such as “cluster detection, memory-based reasoning, genetic algorithms, link analysis, decision trees, and neural networks.”
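To make the first of those tools concrete, here is a minimal sketch of cluster detection applied to study data like the measurements listed above. The file and column names are hypothetical placeholders, not data from any real study.

```python
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

# Hypothetical nutrigenomics study data: one row per participant.
df = pd.read_csv("study_participants.csv")
features = df[["height_cm", "weight_kg", "systolic_bp", "fasting_glucose"]]

# Standardize so no single measurement dominates the distance metric.
X = StandardScaler().fit_transform(features)

# Group participants into clusters that may respond similarly to a diet.
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0)
df["cluster"] = kmeans.fit_predict(X)

# Compare an outcome of interest across the discovered groups.
print(df.groupby("cluster")["fasting_glucose"].mean())
```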

One of the biggest challenges in interpreting the results of these studies is that no two bodies are exactly the same, so there is a lot of variation in how certain foods affect us. For this reason, large-scale studies must be the norm in this field, especially when it comes to understanding how certain foods affect those with a specific condition.

One company leading the way in making nutrition a data-driven field of study is Nutrino. Nutrino leverages AI and machine learning to understand how nutrition decisions affect measurable user data such as “allergies, physical activity, sleep, mood, glucose, and insulin level.” It also takes in information about preexisting conditions so that it can analyze how dietary decisions affect those users more accurately and help them manage their chronic conditions.
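As an illustration of the kind of modeling such a platform might use (a hedged sketch, not Nutrino’s actual system), a regression model can relate meal choices and personal context to a measurable outcome such as post-meal glucose. All file and column names below are hypothetical.

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

logs = pd.read_csv("food_logs.csv")  # hypothetical per-meal user log
X = logs[["carbs_g", "protein_g", "fat_g", "sleep_hours", "activity_min"]]
y = logs["glucose_2h_mg_dl"]  # glucose measured two hours after the meal

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)
print("R^2 on held-out meals:", model.score(X_test, y_test))
```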

Conclusion

The information discovered through the application of data science to nutrition can be extremely impactful for those with chronic conditions by identifying which foods best support their health goals. It can also be very helpful to the general population and those without chronic conditions, simply by helping us better understand how our nutrition decisions affect our lives and can help prevent disease. As the health and functional foods industry continues to grow faster than ever before, there is certainly market demand for further research in this space.


Using Data Science to Increase the Usability of Prosthetics

Authored by Ayesha Rajan, Research Analyst at Altheia Predictive Health

Introduction 

Before the field merged with technology, most prosthetics existed for the purposes of aesthetics and balance but were not functional. However, new developments in computer engineering and data science have dramatically changed the abilities of prosthetics. Now, prosthetic limbs can help those who need them improve their walking patterns or even hold items. We currently see around one million new amputees each year, and that, combined with a proven lack of disability accommodation globally, shows that there is an important need to keep improving prosthetics.

Discussion 

As we have mentioned in previous articles, artificial intelligence and machine learning have demonstrated immense capabilities in image recognition. The code behind the logic that helps these tools distinguish between objects is vital to the usability of prosthetic hands and arms. At Newcastle University, further research is being performed on how to improve this process, but the technology is already proven: prosthetic hands and arms can be equipped with technology that distinguishes between objects in order to help the prosthetic hold or handle them better. For example, one would hold a teacup differently than a brick, so knowing not only how to tell them apart but how to handle them differently can make a huge difference to those who need support in that area.
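A simplified sketch of that classify-then-grip logic appears below; it is an illustration, not Newcastle University’s actual system. It uses an off-the-shelf image classifier, and the mapping from recognized objects to grip patterns is a hypothetical placeholder.

```python
import torch
from torchvision import models
from torchvision.models import MobileNet_V2_Weights

# Pretrained classifier standing in for the prosthetic's vision model.
weights = MobileNet_V2_Weights.DEFAULT
model = models.mobilenet_v2(weights=weights).eval()
preprocess = weights.transforms()
labels = weights.meta["categories"]

# Hypothetical mapping from recognized object to grip pattern.
GRIPS = {"teapot": "pinch grip (handle)", "cup": "pinch grip (handle)"}

def choose_grip(image):
    """image: a PIL.Image from the prosthetic's camera."""
    with torch.no_grad():
        logits = model(preprocess(image).unsqueeze(0))
    label = labels[logits.argmax().item()]
    return label, GRIPS.get(label, "power grasp (default)")
```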

For those with lower-body amputations, the biggest struggle is often walking itself. Without the ability to walk, many amputees face several new health issues: lack of exercise can lead to complications like diabetes and heart disease, which in turn decrease quality of life. It is therefore highly important for amputees to be able to walk, but walking with a struggling gait or unevenly distributed weight can lead to muscle and nerve issues that need to be avoided. Companies like ReWalk are coming out with smart prosthetics that analyze walking patterns and adjust accordingly to improve the impact and smoothness of the user’s stride.
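One gait metric such a system might track is step-time symmetry. The sketch below computes a simple symmetry index from stride timings; the idea that a controller would adjust stiffness or damping from this score is an assumption for illustration.

```python
import numpy as np

def symmetry_index(left_step_times, right_step_times):
    """Percent asymmetry between mean left and right step durations;
    0 means a perfectly even gait, larger values mean more unevenness."""
    left, right = np.mean(left_step_times), np.mean(right_step_times)
    return 100.0 * abs(left - right) / (0.5 * (left + right))

# Step durations in seconds from hypothetical prosthetic sensors.
print(symmetry_index([0.61, 0.60, 0.63], [0.71, 0.69, 0.70]))  # ~13%
```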

Conclusion 

The fact that this technology exists is amazing, given its potential to positively impact millions of people. However, the next step is to make it as widely available as possible, which means it needs to clear clinical and regulatory steps so that it can be approved for use and covered by insurance providers. As we look at the legislation and processes involved in approving new medical technology, it is important to remember that technology is moving faster than ever, and generally much faster than government processes. Ensuring that technology like this becomes available to consumers as soon as possible is vital to improving the quality of life of those who need these tools.


Data Science in Drug Discovery

Authored by Ayesha Rajan, Research Analyst at Altheia Predictive Health

Introduction

Creating new drugs is often a painstaking process. First, the molecular structure of a disease has to be determined, along with how it develops and operates; then researchers have to find a target gene or protein that heavily influences the disease’s actions in order to begin creating targeted gene or protein therapy before drug development begins. Next comes research on absorption, interactions, effects, effectiveness and dosage before moving into clinical trials, which consist of several phases. Finally, FDA review and approval must come through before a new drug can go to market. The stage that troubles many people is deciding when to move into clinical trials, and this is where data science can be a huge asset in drug discovery.

Discussion

On average, “it costs up to $2.6 billion and takes 12 years to bring a drug to market.” This number is upsetting to drug developers and patients alike, some of whom may not survive the wait for a drug or whose lives would be significantly improved if development were sped up. Ultimately, the clinical trial phase could use some modernization. Artificial intelligence and machine learning techniques have already proven immensely capable of mimicking the processes of the human body, which can be leveraged to analyze and understand how a drug interacts with a generally healthy person’s body, as well as with the body of someone who has certain conditions, such as diabetes or hypertension. This means that by the time a drug goes to clinical trials, it has already been tested against the human body in some ways, significantly lowering the risk for trial participants. Another significant development is that AI can also imitate a patient’s aging process, which means that drugs can be released in small batches to those who absolutely need them while long-term trials are still ongoing. Mark Ramsey, Chief Data Officer at GSK, says that he hopes this type of mapping can expedite the process from over a decade to less than two years. Additionally, analysts at McKinsey have estimated that merging AI with drug discovery could “create a value of up to $100 billion.”
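One simplified example of how machine learning enters this pipeline (a sketch under assumed data, not GSK’s or anyone else’s actual workflow) is a model trained on compounds with known assay results that then ranks untested candidates before any trial. The descriptor and file names are hypothetical.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

compounds = pd.read_csv("screened_compounds.csv")  # hypothetical assay data
X = compounds[["mol_weight", "logp", "h_bond_donors", "polar_surface_area"]]
y = compounds["active"]  # 1 if the compound hit the target in the assay

model = RandomForestClassifier(n_estimators=200, random_state=0)
print("Cross-validated accuracy:", cross_val_score(model, X, y, cv=5).mean())

# Rank untested candidates by predicted probability of activity.
model.fit(X, y)
candidates = pd.read_csv("untested_candidates.csv")
candidates["activity_score"] = model.predict_proba(candidates[X.columns])[:, 1]
```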

Conclusion

This type of technology can be highly impactful for patients suffering from diseases that could be cured, or at least have their symptoms eased, by drugs still in development. Furthermore, it can also be leveraged in the fight against Covid-19 by utilizing test cases for treatment plans and vaccines.


What Role Can Analytics Play in Imaging?

Authored by Ayesha Rajan, Research Analyst at Altheia Predictive Health

Introduction 

X-rays are a key part of many treatment plans because of the valuable information they provide. However, they do come with a small increased risk of certain cancers. While this is not worrisome for most people having diagnostic imaging performed, it can be a concern for patients who are monitoring symptoms and need more frequent imaging. This creates demand for more efficient image-reading techniques. Artificial intelligence methods have been very successful in image recognition and have become an important and useful tool in improving x-ray readings. Today we will look at the methodologies used in this process, as well as one of the groups leading the way in these endeavors.

Discussion

How: Applying AI and machine learning to imaging requires intensive training of models. Engineers have to create incredibly specific parameters within their algorithms that tell models how to identify pixel-level or 3-dimensional characteristics of abnormalities, such as tumors. The algorithms are generally focused on finding flagged biomarkers, and the methodology is commonly supported by support vector machines and random forests. Learning architectures can also be built on convolutional neural networks that map images and focus on extracting key features. All of these methods increase the quality and sensitivity of image readings, such that readings can be more accurate when processed through an algorithm than when read by the human eye, i.e. a radiologist.
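The sketch below shows the shape of such a convolutional architecture, sized down for illustration; real diagnostic models are far deeper and are trained on large sets of labeled clinical images.

```python
import torch
import torch.nn as nn

class AbnormalityDetector(nn.Module):
    """Toy CNN: convolutional layers extract image features, a small
    head turns them into one probability of an abnormality."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 1)
        )

    def forward(self, x):  # x: (batch, 1, height, width) grayscale scans
        return torch.sigmoid(self.head(self.features(x)))

scan = torch.randn(1, 1, 128, 128)   # stand-in for one x-ray image
print(AbnormalityDetector()(scan))   # untrained, so the output is arbitrary
```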

Who: CheXNeXt is an algorithm created by researchers out of Stanford University who sought to increase the accuracy of diagnoses of chest conditions by applying artificial intelligence and machine learning techniques to the imaging process. CheXNeXt is trained on a dataset consisting of “112,120 frontal-view chest radiographs of 30,805 unique patients [and] using… automatic extraction method on radiology reports.” The training process “consists of 2 consecutive stages to account for the partially incorrect labels in the ChestX-ray14 dataset. First, an ensemble of networks is trained on the training set to predict the probability that each of the 14 pathologies is present in the image. The predictions of this ensemble are used to relabel the training and tuning sets. A new ensemble of networks are finally trained on this relabeled training set. Without any additional supervision, CheXNeXt produces heat maps that identify locations in the chest radiograph that contribute most to the network’s classification using class activation mappings (CAMs).” With this training, CheXNeXt is able to diagnose 14 chest-related diseases with more accuracy than a radiologist.
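The class activation mapping step the authors describe can be sketched in a few lines: for a network that ends in global average pooling, the heat map for one pathology is the last convolutional layer’s feature maps weighted by that pathology’s classifier weights. The shapes below are illustrative, not CheXNeXt’s actual dimensions.

```python
import torch

def class_activation_map(feature_maps, class_weights):
    """feature_maps: (C, H, W) activations from the last conv layer;
    class_weights: (C,) final-layer weights for one pathology."""
    cam = torch.einsum("c,chw->hw", class_weights, feature_maps)
    cam -= cam.min()
    return cam / (cam.max() + 1e-8)  # normalize to [0, 1] for display

# Illustrative tensors standing in for a trained network's outputs.
heat = class_activation_map(torch.randn(32, 16, 16), torch.randn(32))
print(heat.shape)  # (16, 16); upsampled onto the radiograph in practice
```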

Conclusion

Artificial intelligence techniques have not yet made their way into mainstream imaging. However, initial research and testing suggest that applying AI and machine learning can have an important impact on diagnosing many conditions picked up by x-rays. CheXNeXt is one of a few projects leading the way on this initiative and, hopefully, as time goes on we will see this technology applied to x-rays in search of conditions such as bone cancer, digestive tract issues, osteoporosis and arthritis. Additionally, this is a hopeful step toward reducing the need for repeated x-rays by making diagnoses more efficient, with artificial intelligence supporting a radiologist.


Analyzing Alzheimer’s Disease

Authored by Ayesha Rajan, Research Analyst at Altheia Predictive Health


Introduction 

Alzheimer’s disease is a progressive disease in which brain cells and the connections between them die, resulting in the loss of memories and mental function. It affects 5 million Americans, with projections of 14 million affected by 2050, and while there are medications to help treat symptoms, there is no cure. When battling a disease without a cure, the best thing to do is to focus on prevention, and applying data science to that process can improve our understanding of how mitigating factors can benefit us individually. It can also help scientists create targeted gene therapies for those affected by Alzheimer’s.

Discussion 

Currently, much of the research being done on Alzheimer’s disease revolves around genetics, because of the genetic expressions and patterns that seem to be consistent in many of those who have the disease. To apply analytics to this knowledge, researchers at the Icahn School of Medicine at Mount Sinai and Emory University formed a joint venture with other research institutions to perform deep data analysis on mined DNA, RNA, protein, and clinical data. With this data they hoped to identify regulators and predictors of the disease in order to create targeted gene therapy treatments. What they found was that, though there are correlations between genetic expressions and Alzheimer’s, they were not strong enough to identify a singular cause. However, they did find that a protein called VGF “plays an important role in protecting the brain against the onset and progression of Alzheimer’s disease.” Once they identified this connection, scientists could create a testing environment in which they “ramp[ed] up levels of the gene or protein in mice” and saw that those mice had a significantly lower risk of developing Alzheimer’s or saw the progression of their disease slow.

Another gene therapy study, out of Stanford University, made a similar connection with the ApoE4 gene variant, which is present in more than half of Alzheimer’s patients. Again, they did not find it to be a direct cause of the disease, but it is prevalent enough that increased expression of the gene increases the risk of Alzheimer’s. Studies like these are extremely important because understanding our genetic risk factors can help us understand how dedicated we should be to mitigating factors.
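A hedged sketch of the kind of correlation analysis described above (not the studies’ actual pipelines) is to test how well expression levels of candidate genes such as VGF separate cases from controls. The file and column names are hypothetical.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

data = pd.read_csv("brain_expression.csv")  # one row per study subject
X = data[["VGF_expression", "APOE_expression"]]
y = data["alzheimers_diagnosis"]            # 1 = diagnosed, 0 = control

model = LogisticRegression(max_iter=1000)
auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
print("Cross-validated AUC:", auc)  # how well expression predicts status
```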

Conclusion 

Alzheimer’s is a difficult disease to live with and, though genetic risk factors are unavoidable, living a healthy lifestyle can help prevent Alzheimer’s by lowering blood pressure and cholesterol, as well as lowering the risk of developing diabetes, all of which have been connected to Alzheimer’s. Incorporating healthy foods and exercise into your lifestyle can support these goals. Additionally, staying mentally active by continuing to learn and keeping up social connections has been shown to decrease the risk of Alzheimer’s as well. Finally, avoiding head trauma by taking precautions such as wearing a seatbelt, wearing a helmet and avoiding falls is another important step. Though we discuss these prevention steps in terms of Alzheimer’s, they are all important for mitigating many other conditions.