2010 Abstracts

Optimising and Assessing the Effects of Deep Brain Stimulation on Tremors in Parkinson’s Disease using Magnetoencephalography.

Adam Baker, St John's College.

Deep brain stimulation (DBS) often produces dramatic improvements in the symptoms of a number of neurological disorders such as Parkinson's disease. However, the underlying mechanisms by which stimulation affects brain activity remain poorly understood. Magnetoencephalography (MEG), which measures the magnetic fields produced by neural activity in the brain, is ideally suited to elucidating these mechanisms. By using MEG in conjunction with measurements of tremor from electromyography (EMG), it is hoped that brain oscillations associated with a patient's tremor can be identified, allowing more efficacious placement of the stimulating electrodes. MEG analysis of patients treated with DBS is confounded by artefacts in the data resulting from the introduction of the device into the head. Since accurate source reconstruction is crucial for identifying tremor-related regions of the brain, in this work we show that it is possible to accurately reconstruct sources of activity in the brain using beamforming, even in the presence of highly correlated artefacts.
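
As a point of reference, the scalar LCMV beamformer at the heart of this kind of source reconstruction can be written in a few lines. Below is a minimal numpy sketch, with random stand-ins for the sensor data and lead field rather than the study's actual pipeline.

    # A minimal sketch (not the study's code) of scalar LCMV beamformer
    # weights; the sensor data and lead field are random placeholders.
    import numpy as np

    rng = np.random.default_rng(0)
    n_sensors = 64

    # Data covariance estimated from the sensor time series.
    data = rng.standard_normal((n_sensors, 10000))
    C = np.cov(data)

    # Lead field: sensor response to a unit source at one candidate location.
    l = rng.standard_normal(n_sensors)

    # LCMV weights w = C^{-1} l / (l^T C^{-1} l): unit gain at the target
    # location while minimising variance contributed from everywhere else.
    Cinv_l = np.linalg.solve(C, l)
    w = Cinv_l / (l @ Cinv_l)

    source_estimate = w @ data  # reconstructed source time course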

An Ontology-Driven Decision Support Prototype for Breast Cancer Chemotherapy Planning

M. Berkan Sesen, New College

The need for an intelligent and reliable decision support tool is very real, especially in a multi-disciplinary clinical environment such as cancer treatment. Breast cancer is the most common indication for chemotherapy among female cancer patients, and breast cancer chemotherapy planning therefore stands out as a suitable application domain for decision support with a remarkably high utilitarian score. This report focuses on one possible subset of the domain, pulling together state-of-the-art decision support technology to address this need. The decision support prototype developed with this goal is intended to recommend a personalised, evidence-based chemotherapy treatment plan based on clinical data. This is achieved by an integrated platform comprising an OWL ontology, to model and reason with the domain-specific concepts, and a PROforma guideline, to manage the clinical workflow and provide argumentation-based decision support. This work is novel in being the first OWL-driven guideline prototype in PROforma.
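
To make the ontology side of such a platform concrete, here is a hedged Python sketch using the owlready2 package; the ontology IRI and the ChemotherapyRegimen class are hypothetical placeholders, not the prototype's actual model or API.

    # Hypothetical ontology query; not the actual OWL/PROforma prototype.
    from owlready2 import get_ontology

    # Load a (placeholder) breast cancer chemotherapy ontology.
    onto = get_ontology("http://example.org/chemo.owl").load()

    # List everything asserted to be a chemotherapy regimen, e.g. to supply
    # candidate plans to a guideline engine's argumentation step.
    for regimen in onto.search(is_a=onto.ChemotherapyRegimen):
        print(regimen.name)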

The Use of Ultrasound to Measure Healing in Human Achilles Tendon

Phillip G. M. Brown, Wolfson College

This report describes the potential uses of ultrasound for measuring healing in Achilles tendons. Several methods are first discussed, including speed-of-sound estimation, speed-of-sound imaging, reflectivity measures, the Doppler effect, and elastography imaging. Most of these techniques are either in their infancy or technically very difficult to achieve in a clinically relevant way.

Ultrasound elastography imaging is investigated further, and the sources of error and decorrelation that can affect tendon measurements of displacement and strain are discussed. These include acquisition errors, frame selection, downsampling errors, quantisation errors, out-of-plane movement, lateral motion, material compression and rotation, inappropriate search range, and artefacts from strain estimation methods. Some of these are tested with a simple displacement estimation algorithm. While many do decrease the effectiveness of the displacement estimation, the estimates remain effective under any single error source.
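
To illustrate the kind of estimator being tested, the following is a minimal 1-D block-matching sketch based on normalised cross-correlation; the window and search-range values are illustrative and the signals are synthetic rather than real RF data.

    import numpy as np

    def estimate_displacement(pre, post, win=64, search=16):
        """Integer sample shifts of windows in `post` relative to `pre`."""
        shifts = []
        for start in range(0, len(pre) - win - search, win):
            ref = pre[start:start + win]
            best_d, best_ncc = 0, -np.inf
            for d in range(-search, search + 1):
                if start + d < 0:
                    continue
                seg = post[start + d:start + d + win]
                # Normalised cross-correlation of the reference window with
                # the shifted candidate window.
                ncc = np.dot(ref - ref.mean(), seg - seg.mean()) / (
                    win * ref.std() * seg.std() + 1e-12)
                if ncc > best_ncc:
                    best_ncc, best_d = ncc, d
            shifts.append(best_d)
        return np.array(shifts)

    # Synthetic check: "post" is "pre" shifted by 3 samples, so every
    # window should report a displacement of about 3.
    rng = np.random.default_rng(1)
    pre = rng.standard_normal(2048)
    post = np.roll(pre, 3)
    print(estimate_displacement(pre, post)[:5])

Strain would then follow as the axial gradient of these displacement estimates, which is where several of the artefact sources listed above enter.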

Finally, an experiment is conducted in which bovine tendon is stretched in a load cell at forces below 50 N, simulating the low strains used in clinical measurements of the Achilles tendon under tendon-axial loading (articulation of the foot). Strain was measured on the load cell and with ultrasound elastography imaging. While similar levels of strain were recorded (<1%), the algorithm used was found to be highly sensitive to input factors and could easily generate much higher strain estimates.

Mortality Prediction in the ICU

Alistair Johnson, Wolfson College

This report introduces a new methodology for generating severity scores in intensive care units, using particle swarm optimization (the PSO score). It first briefly summarizes prior art in intensive care unit mortality prediction and severity scoring, specifically the mortality probability model (MPM), the acute physiology score (APS), the simplified acute physiology score (SAPS) and the acute physiology and chronic health evaluation (APACHE) system, and then reviews the statistical tests used to assess the proposed models. The PSO score was developed using a particle swarm optimization routine with particle bouncing, which optimized multiple scores for each physiological variable over a set of continuous ranges across the respective variable space. The algorithm was trained on a clinical database of 40,000 patients provided by Cerner Corporation. The data were split into a training set and a test set, comprising 80% and 20% respectively of all data up to and including 2005, and an external validation set comprising data from 2006 and 2007. Missing data were replaced using means from the training set. The PSO score showed higher discrimination than APS III on the Cerner validation data, and higher discrimination than SAPS III on the validation set with which the latter was originally published (a direct comparison was unavailable owing to missing data). The AUROCs were 0.8637 (PSO), 0.8583 (APS III) and 0.848 (SAPS III). The PSO score was also shown to cope well with large amounts of missing data, its AUROC decreasing only from 0.8864 to 0.8703 with 11 physiological variables missing. Possible areas of future work are discussed, including refinements to the presented algorithm, other predictable outcomes, approaches to missing data, and personalized prediction modelling.
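
For concreteness, a generic particle swarm optimizer with reflective "bouncing" at the bounds is sketched below on a toy objective; this illustrates the optimization routine named above, not the actual PSO-score implementation or its scoring objective.

    import numpy as np

    def pso(objective, lo, hi, n_particles=30, n_iter=200,
            w=0.7, c1=1.5, c2=1.5):
        rng = np.random.default_rng(0)
        x = rng.uniform(lo, hi, (n_particles, len(lo)))
        v = np.zeros_like(x)
        pbest, pbest_f = x.copy(), np.array([objective(p) for p in x])
        gbest = pbest[pbest_f.argmin()].copy()
        for _ in range(n_iter):
            r1, r2 = rng.random((2,) + x.shape)
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
            x = x + v
            # "Bouncing": reflect particles that cross a bound back into the
            # feasible box and reverse that velocity component (no clipping).
            low = x < lo
            x, v = np.where(low, 2 * lo - x, x), np.where(low, -v, v)
            high = x > hi
            x, v = np.where(high, 2 * hi - x, x), np.where(high, -v, v)
            f = np.array([objective(p) for p in x])
            better = f < pbest_f
            pbest[better], pbest_f[better] = x[better], f[better]
            gbest = pbest[pbest_f.argmin()].copy()
        return gbest, pbest_f.min()

    # Toy usage: minimize a quadratic over the box [-5, 5]^3.
    best, best_f = pso(lambda p: float(np.sum((p - 1.0) ** 2)),
                       np.full(3, -5.0), np.full(3, 5.0))
    print(best, best_f)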

Monitoring Stroke Using pH

Yee Kai Tee, Wolfson College

Stroke is one of the leading causes of death worldwide, and is reported to be the third leading cause of death in the UK and US. The majority of strokes are ischaemic, usually caused by a blocked blood vessel. Each year billions of pounds are spent on the direct and indirect costs of stroke. On average, someone dies of a stroke every few minutes, and the majority of those who survive suffer some form of permanent disability.

Recently, amide proton transfer (APT) imaging has been found to aid decision-making in stroke treatment, because the technique allows, for the first time, the effect of stroke to be quantified through the intracellular pH values it yields. Although this breakthrough has so far been demonstrated only in animal models, the potential of applying APT imaging to human stroke management has generated significant attention.

In this study, the sampling schedule of APT imaging is analysed. Normally, the sampling points of APT imaging are distributed uniformly across the investigated frequencies. With this uniform strategy, some of the collected data are not very informative about the parameters of interest. Such a schedule is also poor at separating the effects on the output spectrum of the amide proton chemical exchange rate and of the amide-to-water proton concentration ratio. Since intracellular pH is inferred from the chemical exchange rate, it is very important to determine this parameter accurately.

The proposed sampling schedule places the sampling points at the highest magnitudes of the sensitivity functions. In simulations, the proposed method separated the effects of the chemical exchange rate and the proton concentration ratio better than the uniform sampling schedule. The developed schedule was also found to reduce the required scanning time, by sampling only at a limited number of highly sensitive points, while maintaining accuracy in estimating the parameters of interest.
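
As an illustration of the idea, the sketch below numerically computes the sensitivity of a toy two-parameter spectrum model to each parameter and keeps only the frequency offsets where the sensitivity magnitude peaks; the Lorentzian-style dip is a stand-in, not the actual APT signal equations.

    import numpy as np

    def model(freq, k, ratio):
        # Toy z-spectrum-like dip whose depth and width depend on an
        # exchange-rate-like parameter k and a concentration ratio.
        return 1.0 - ratio * k / (k + (freq - 3.5) ** 2)

    freq = np.linspace(-5, 10, 301)     # candidate saturation offsets (ppm)
    theta = {"k": 2.0, "ratio": 0.1}    # nominal parameter values

    samples = set()
    for name in theta:
        # Central finite-difference sensitivity d(model)/d(parameter).
        up, down = dict(theta), dict(theta)
        up[name] *= 1.01
        down[name] *= 0.99
        sens = (model(freq, **up) - model(freq, **down)) / (0.02 * theta[name])
        # Keep the few offsets where |sensitivity| is largest.
        samples.update(freq[np.argsort(np.abs(sens))[-4:]])

    print(sorted(samples))   # proposed non-uniform sampling offsets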

In a nutshell, the proposed optimal sampling schedule has several advantages over the currently widespread uniform sampling schedule, and could replace the latter in APT imaging to give more accurate parameter estimation, which would directly improve the accuracy of the calculated intracellular tissue pH.

Investigating the role of APOE-ε4, a risk gene for Alzheimer’s disease, in functional brain networks using magnetoencephalography.

Henry Luckhoo, Trinity College

Alzheimer’s disease (AD) is growing into one of the greatest challenges facing modern healthcare services. Unfortunately, the disease is poorly understood and the few treatments available have moderate benefits at best. Two areas of active research into Alzheimer’s disease are the increased risk associated with the APOE-ε4 gene and the overlap between sites of AD pathology and the default-mode network (DMN).

The long-term aim of this study is to discover whether any changes in the DMN due to APOE-ε4 can be detected using magnetoencephalography (MEG). MEG is a non-invasive neuroimaging tool that measures the magnetic fields generated by neuronal activity with an array of sensors placed close to the head. A map of neural activity can be reconstructed from these measurements.

In order to detect differences in the DMN with MEG, a robust method for detecting correlations in signal power across the brain must be developed. In this study beamformers are used to reconstruct neural activity and the Hilbert transform is used to estimate instantaneous signal power.
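
A minimal sketch of that power-estimation step, assuming Python with scipy and a synthetic source time course, is given below; the beta band used for the filter is an illustrative choice, not necessarily the band used in the study.

    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    fs = 250.0                            # sampling rate (Hz)
    t = np.arange(0, 10, 1 / fs)
    # Synthetic "reconstructed source": a 20 Hz oscillation with a slowly
    # modulated amplitude.
    source = np.sin(2 * np.pi * 20 * t) * (1 + 0.5 * np.sin(2 * np.pi * 0.3 * t))

    # Band-pass in the beta band (13-30 Hz).
    b, a = butter(4, [13 / (fs / 2), 30 / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, source)

    # Hilbert envelope: the magnitude of the analytic signal gives the
    # instantaneous amplitude, whose square estimates instantaneous power.
    envelope = np.abs(hilbert(filtered))

Correlating such envelopes between pairs of source locations is then the connectivity measure whose confounds are examined next.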

Unfortunately, artificial correlations are introduced by the MEG scanning process and the beamformer reconstruction. These artefacts confound genuine correlation measurements and so must be accounted for. A series of simulations is carried out to assess the spatial variation of the artificial correlations, the beamformer’s sensitivity to noise, the frequency dependence of the Hilbert transform, and the relationship between the Hilbert envelope correlation and the beamformer weights correlation.

The sensorimotor network (SMN) is selected as the best functional brain network on which to develop an analysis method. Steps towards imaging the SMN are taken, including a frequency analysis of real resting-state MEG data to determine which frequency bands should be used to image it.

Design of Early Warning Systems in Sepsis

Louis Mayaud, St Hilda’s College

Sepsis can be defined as a maladaptive host response to an infection. The detection of a danger signal by the innate immune system triggers a complex cascade of events, eventually leading to septic shock: hypotension despite adequate fluid resuscitation, associated with a mortality rate of 50%. In addition, sepsis and its complications account for about a third of Intensive Care Unit (ICU) admissions, an estimated 750,000 patients per annum in the US. Sepsis is therefore the second largest killer in the ICU after coronary-related conditions. Recently, Early Goal Directed Therapy (EGDT), proposed by Rivers et al. [20], has been shown to be associated with better outcomes, stressing the need for Early Warning Systems (EWS) for infection. This work is a preliminary exploration of the data and techniques that might be used to tackle this problem.

Chapter 1 introduces the definitions of sepsis and develops the rationale for EWS. Chapter 2 then briefly presents the pathophysiology of sepsis, as well as the blood pressure regulation mechanisms that are challenged during its onset, in order to better understand the signals to be processed. A short literature review, in Chapter 3, gives a glimpse of what has been done so far and what performance and applications could reasonably be expected within the scope of a summer project. Chapter 4 describes the constitution of the human and animal datasets that will be used in future work. Last but not least, Chapter 5 presents early results of a statistical analysis performed on data from 489 ICU patients at the very moment of their first septic shock.

The Importance of the Endothelial Glycocalyx in Vascular Disease and Implant Design: A Molecular Dynamics Approach

Maria Pikoula, Pembroke College

The aim of this research is to investigate the contribution and mechanisms of action of the vascular endothelium, and in particular the endothelial glycocalyx layer, in conditions such as atherosclerosis and aneurysm formation, as well as in the endovascular implants used to treat these conditions. Current and near-future work focuses on the construction and molecular dynamics simulation of a representative model of the glycocalyx structure and its surrounding environment at the atomistic level. In the present work, appropriate simulation software was selected and a preliminary hydrated glycocalyx model was constructed. Several test runs were made, mainly to gain familiarity with the simulation tools. This report outlines the steps taken to accomplish this task.

Semi-automated Segmentation of Fetal Ultrasound Images

Thomas Rackham, Wadham College

Poor fetal nutrition has been shown to cause a number of problems in later life, such as cardiovascular disease, type 2 diabetes and hypertension. Therefore, it is important that malnutrition in the fetal stages can be identified and reversed during pregnancy. Fetal fat measurements can be used to approximate fetal body composition and hence give an insight into fetal nutrition and development.

While manual segmentation is an accurate method of calculating tissue volumes, it is time-consuming and user-dependent, and delineating a soft tissue boundary, such as that between fat and lean tissue, is a challenge in ultrasound images. This paper outlines an extension of the established semi-automated Live Wire segmentation method to tackle this problem. The use of local phase and ball-scale measurements to enhance boundaries and characterise speckle is proposed and tested, in conjunction with Live Wire and with each other. Initial investigations show that these enhancements are indeed relevant to the problem, improving the distinction between certain boundaries in ultrasound images. However, further work is necessary to quantify this benefit and to form a functional, multi-faceted system.
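
At its core, Live Wire is a minimum-cost path search over a pixel graph whose traversal costs are low along boundaries. The sketch below shows this principle with Dijkstra's algorithm on a toy cost image; the cost function is a placeholder for the gradient, local-phase and ball-scale terms discussed above.

    import heapq
    import numpy as np

    def live_wire(cost, start, end):
        """Minimum-cost 4-connected path from start to end over a cost image."""
        h, w = cost.shape
        dist = np.full((h, w), np.inf)
        prev = {}
        dist[start] = 0.0
        heap = [(0.0, start)]
        while heap:
            d, (r, c) = heapq.heappop(heap)
            if (r, c) == end:
                break
            if d > dist[r, c]:
                continue
            for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                if 0 <= nr < h and 0 <= nc < w and d + cost[nr, nc] < dist[nr, nc]:
                    dist[nr, nc] = d + cost[nr, nc]
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(heap, (dist[nr, nc], (nr, nc)))
        path, node = [], end
        while node != start:
            path.append(node)
            node = prev[node]
        return path[::-1]

    # Toy cost image: cheap to traverse where the image is bright (a stand-in
    # for boundary strength).
    img = np.zeros((10, 10))
    img[5] = 1.0
    cost = 1.0 / (1.0 + img)
    print(live_wire(cost, (5, 0), (5, 9)))   # hugs the bright row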

Detection of Abnormal Respiratory Sounds and Sleep Apnoea using Audio Recordings

Aoife Roebuck, Kellogg College

Obstructive Sleep Apnoea (OSA) is a common disorder that is under-diagnosed. Its side effects are many and varied, including short-term effects such as daytime sleepiness and long-term effects such as hypertension and cardiac disease. At present the ‘gold standard’ diagnostic tool for OSA is the polysomnogram (PSG), which is carried out overnight in a hospital using multiple sensors and detects apnoeas, hypopnoeas and arousals. A PSG is expensive to set up and run, and some patients experience different sleep patterns under the artificial conditions of the sleep laboratory. A number of alternative devices, essentially mini-PSG devices, can be used in the home; however, these rely on the patient placing the sensors correctly, with resulting potential for inaccuracy. Accuracy can be improved by introducing more parameters, but this makes a device more cumbersome as well as more expensive. In the past few years, sleep specialists have begun to use systems that monitor alternative parameters such as audio and video. Audio is an under-used signal that provides the clinician with a great deal of information about respiratory activity during sleep, and may therefore be a useful tool for determining whether a patient has sleep apnoea.

The aim of this project is to distinguish between ‘simple’ snoring and postapnoeic snoring using audio recordings. Data were collected from 14 patients in the home and experts labelled events as snoring, apnoea and noise. Data were divided equally into training and testing sets. Linear predictive coding (LPC) and cepstral coefficients were extracted from the training data during each event. Linear discriminant analysis was used to define a boundary between the three classes and the resulting boundary was used to classify data in the test set. LPC proved superior to cepstral analysis: using a 4 s window gave a sensitivity of 0.92 and a positive predictivity of 0.81 for the training data, and a sensitivity of 0.79 and a positive predictivity of 0.72 for the test data.
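
The feature-extraction and classification pipeline can be sketched as follows, assuming Python with librosa and scikit-learn; the audio is random placeholder data, and the frame length and LPC order are illustrative rather than those used in the study.

    import numpy as np
    import librosa
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    def lpc_features(frame, order=12):
        """LPC coefficients (excluding the leading 1) for one audio frame."""
        return librosa.lpc(frame, order=order)[1:]

    rng = np.random.default_rng(0)
    fs = 8000
    # Placeholder "events": random 4 s windows with fake labels
    # (0 = snore, 1 = apnoea, 2 = noise).
    X = np.array([lpc_features(rng.standard_normal(4 * fs)) for _ in range(60)])
    y = rng.integers(0, 3, 60)

    clf = LinearDiscriminantAnalysis().fit(X[:30], y[:30])   # train half
    print(clf.score(X[30:], y[30:]))                         # test half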

A personalised computational model of surgical intervention in hydrocephalus.

John Vardakis, Keble College

Hydrocephalus as a neurological disorder is investigated through the artificial distension of the cerebral ventricles. A background account of the disease is given, along with a concise appreciation of the concept behind multicompartmental poroelastic theory and its primary theoretical development, which encapsulates the relationship between the pressure of a fluid permeating a solid matrix and the displacement of that matrix. The special case of single-network poroelastic theory is treated in more detail, identifying the model assumptions and the reasoning behind the boundary conditions. The final discretised system is displayed, along with a brief build-up of the steps taken to create a patient-specific geometry for the Aqueduct of Sylvius.
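
For reference, the single-network case referred to above is conventionally written as the Biot poroelasticity system. The following is a minimal statement in standard textbook notation (the notation is chosen here, not taken from the report):

    % Momentum balance for the fluid-saturated solid skeleton
    \nabla \cdot \left( 2\mu\,\varepsilon(\mathbf{u})
        + \lambda\,(\nabla \cdot \mathbf{u})\,\mathbf{I} \right)
        - \alpha \nabla p = \mathbf{0}

    % Mass conservation (storage) equation for the permeating fluid
    \frac{1}{M}\frac{\partial p}{\partial t}
        + \alpha \frac{\partial}{\partial t}(\nabla \cdot \mathbf{u})
        - \nabla \cdot \left( \frac{\kappa}{\mu_f} \nabla p \right) = q

Here \mathbf{u} is the skeleton displacement, p the pore pressure, \varepsilon(\mathbf{u}) the strain tensor, \mu and \lambda the Lamé parameters, \alpha the Biot coefficient, M the Biot modulus, \kappa the permeability and \mu_f the fluid viscosity.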

The results centre on increasing velocity magnitudes and wall shear stress with increasing stenosis in the steady-state cases. These are discussed in depth, with some connections made to work on aneurysms, along with visual evidence of a concentrated pressure gradient across the aqueduct. The application of a shunt is discussed, along with a preliminary investigation of using FORTRAN subroutines on the output of the shunt to mimic a valve.