06 AUGUST 2019
Thai researchers develop new blood test for prostate cancer
A research team at Mahidol University in Thailand has developed a blood test that can detect clinically significant prostate cancer in high-risk patients.
According to study lead scientist Dr Sebastian Bhakdi, the test is capable of isolating and visualising tumour-associated circulating endothelial cells (tCEC) using a small, 10ml blood sample.
Bhakdi said: “Tumour-associated circulating endothelial cells are highly promising biomarkers for the detection of early-stage cancers because they are thought to derive directly from a tumour’s own blood vessels.
“Unfortunately, however, they are extremely rare and almost indiscernible from normal blood cells, which is why they have been considered undetectable in routine laboratories until now.”
Dr Bhakdi and his team worked with private partners to develop a series of new technologies operating at sub-zero temperatures, allowing the researchers to isolate tCEC from whole blood and visualise them under a microscope.
Bhakdi added that the tCEC-based screening assay is capable of routinely detecting very rare cells in standard blood samples.
The study also suggests that the test enables differentiation between men with and without clinically significant prostate cancer.
The tCEC test was not designed as a stand-alone assay, but as an add-on to prostate-specific antigen (PSA) screening.
Dr Bhakdi noted that the test was evaluated as part of a prospectively blinded screening study conducted from 2016 to 2019, involving some 170 subjects.
The results indicate that tCEC testing could avoid more than 70% of biopsies triggered by PSA readings in the so-called “diagnostic grey zone.”
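Since the tCEC test is positioned as an add-on to PSA screening, the triage logic can be sketched as follows. This is a minimal illustration, not the study's actual protocol: the grey-zone bounds (4–10 ng/ml) are a commonly cited PSA range rather than figures from the Mahidol study, and the decision rule is an assumption.

```python
# Illustrative triage logic for using a tCEC result as an add-on to PSA
# screening. Thresholds and rule are hypothetical, for illustration only.

def biopsy_recommended(psa_ng_ml: float, tcec_positive: bool) -> bool:
    """Return True if a biopsy would be recommended under this sketch."""
    if psa_ng_ml < 4.0:
        return False          # PSA below the grey zone: no biopsy
    if psa_ng_ml > 10.0:
        return True           # PSA above the grey zone: biopsy regardless
    return tcec_positive      # grey zone: let the tCEC result decide

print(biopsy_recommended(6.5, False))  # grey-zone PSA, negative tCEC -> False
```

Under a rule like this, a negative tCEC result spares a grey-zone patient from biopsy, which is how the test could reduce the number of PSA-triggered procedures.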
A validation study covering more than 1,000 prostate cancer patients is currently being conducted at Mahidol University’s Siriraj Hospital in Thailand.
05 AUGUST 2019
Hillrom to acquire Breathe Technologies for $130m
Medical technology company Hillrom has signed a definitive agreement to buy respiratory care solutions developer Breathe Technologies for $130m in an all-cash transaction.
Founded in 2005, Breathe Technologies has annual revenues of $10m. It developed a wearable, non-invasive ventilation technology intended to improve patient mobility.
The company’s Life2000 Ventilation System is a mechanical ventilation device designed for various conditions such as chronic obstructive pulmonary disease (COPD), interstitial lung disease and restrictive thoracic disorder.
Designed for homecare and critical-care settings, the system can be separated from the stationary Life2000 Compressor to allow use outside the home through an alternate pressure source.
The device holds US Food and Drug Administration (FDA) 510(k) clearance.
Hillrom expects the acquisition to enhance its Respiratory Care portfolio, which includes The Vest, VitalCough, The MetaNeb System and Monarch Airway Clearance System.
Hillrom president and CEO John Groetelaars said: “The acquisition of a highly differentiated, wearable non-invasive ventilation technology provides an exciting growth platform that utilises our direct Respiratory Care commercial channel and business model.
“This transaction represents another example of our plan to strategically deploy capital with a disciplined approach toward higher-growth and higher-margin businesses by expanding into the disruptive new category of non-invasive ventilation.”
The transaction is subject to customary closing conditions and set to close in the fiscal fourth quarter of this year.
Besides respiratory care devices, Hillrom develops a variety of solutions for patient diagnosis, treatment and monitoring applications.
Last month, the company launched a handheld retinal camera called Welch Allyn RetinaVue 700 Imager for the diagnosis of diabetic retinopathy during routine primary care office visits.
The retinal camera features an image-quality assessment algorithm and can capture images in pupils as small as 2.5mm, noted Hillrom.
05 AUGUST 2019
Mobile app tests impact of mental health on cognitive performance
Researchers at the University of New South Wales (UNSW) and University College London (UCL) have developed an app which allows users to track how their mood and emotions impact their cognitive performance.
The Emotional Brain Study app has now been released on Apple’s App Store and the Google Play Store.
App users are asked a series of basic questions about their current mood, what they are doing and whether or not they are alone. They are then given five different memory and attention-based tasks to complete, while being presented with imagery designed to elicit either an emotional or neutral response.
The app also includes optional questionnaires to find out whether a person’s current mental wellbeing is related to how easy or difficult it is for them to play the emotional brain games.
The results of users’ performance are anonymously recorded by UNSW and UCL’s Institute of Cognitive Neuroscience. The data will be collated into a large-scale dataset to see whether there are any common patterns of behaviour.
UCL postdoctoral fellow Susann Schwizer said: “If we are able to show these patterns in a large-scale dataset, we can potentially use these types of tasks to detect early signs of low mood in a non-stigmatising and fun way, especially when thinking about young people.”
The development follows a laboratory study carried out between UNSW and UCL, which found that performance in a series of memory and attention-based tasks influenced by emotional stimuli can reveal an individual’s capacity for psychological resilience.
Schwizer said: “In the lab, performance on these types of tasks differentiates between individuals who are psychologically healthy and those with a wide range of mental health problems including disorders such as depression, anxiety and schizophrenia.
“What we’re really interested in is to confirm that what we’ve observed in the lab will also be replicated in the world at large as people play the games in this app.”
02 AUGUST 2019
Blood test detects breast cancer relapse months before symptoms emerge
A new liquid biopsy has been shown to detect the return and spread of breast cancer months before symptoms emerge or hospital scanners can find the disease.
The test, developed by researchers at the Institute of Cancer Research (ICR) and the Royal Marsden NHS Foundation Trust, was found to detect levels of cancer DNA circulating in the blood an average of 10.7 months before clinical diagnosis in patients who had relapsed during the study’s three-year follow-up period.
The test works across all types of breast cancer and can detect early signs of the disease metastasising anywhere in the body other than the brain.
ICR professor of molecular oncology Nicholas Turner said: “These new blood tests can work out which patients are at risk of relapse much more accurately than we have done before, identifying the earliest signs of relapse almost a year before the patient will clinically relapse.”
Turner and his colleagues studied 101 women with breast cancer across five hospitals. Each was given a personalised blood test tailored to the makeup of her tumour, enabling the levels of cancer DNA in her bloodstream to be monitored.
By analysing cancer DNA from tumour samples collected before treatment, the researchers were able to identify mutations that could distinguish this DNA from all other DNA in the blood and track its levels over time. In the 101 patients, 165 different trackable mutations were found, with 78 patients having one mutation and 23 having multiple mutations. Blood samples were collected from the participants every three months during their first year after treatment, and then every six months for up to five years after treatment.
To assess the test’s ability to detect recurrence at a molecular level in different breast cancer variations, the researchers combined the data from this study with a previous proof-of-principle experiment with 43 participants.
At a three-year follow-up, 29 of the 144 total patients had experienced a relapse in their condition. The test was able to detect cancer DNA in the blood of 23 of these patients prior to their relapse, with signs of recurrence spotted an average of 10.7 months before their clinical diagnosis.
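The headline detection rate implied by these figures is simple to reproduce. Only the counts stated above (29 relapses among 144 patients, 23 detected beforehand) come from the study; the sensitivity percentage is derived arithmetic.

```python
# Reproducing the detection rate from the figures reported above:
# 29 of 144 patients relapsed within the three-year follow-up, and
# the test flagged cancer DNA in 23 of them before clinical diagnosis.

total_patients = 144
relapsed = 29
detected_before_relapse = 23

sensitivity = detected_before_relapse / relapsed
print(f"Detection rate among relapsed patients: {sensitivity:.1%}")  # 79.3%
```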
Turner said: “We hope that by identifying relapse much earlier we will be able to treat it much more effectively than we can do now, perhaps even prevent some people from relapsing. But we will now need clinical trials to assess whether we can use these blood tests to improve patient outcome. We have launched the first of these studies already, and hope to launch large studies in the future.”
Trials are now underway to assess new treatments alongside this test in triple negative breast cancer.
01 AUGUST 2019
Human torso simulator opens door for back brace innovations
Engineers at Lancaster University have created a simulator that mimics the mechanics of the human torso, which could lead to new designs for medical back supports.
The simulator is designed to allow researchers to test different back brace designs and configurations without the logistical and ethical issues of testing them on human subjects. The simulator can be configured to present with different spinal conditions and deformities, such as scoliosis.
The male torso-shaped device is built around a 3D-printed spine and rib cage, created using modified CAD models derived from computed tomography (CT) scans of a human spine. This structure is packed with materials that resemble and behave like human tissue.
Lancaster University researcher David Cheneler said: “Back braces have been used as both medical and retail products for decades, however existing designs can often be found to be heavy, overly rigid, indiscreet and uncomfortable.
“Our simulator enables new back braces to be developed that are optimised to constrain particular motions while allowing for other movements. It could also help with the design of braces and supports with targeted restriction of movement, which would be beneficial for some conditions and help to reduce the risk of muscle loss.”
Clinicians can use the torso to collect data on the reduction of flexion, extension, lateral bending and torsion each new back brace design provides for different conditions.
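The restriction data the simulator yields can be summarised as a percentage reduction in range of motion per movement type. The sketch below shows one plausible way to do this; the motion names match those listed above, but the degree values are hypothetical, not measurements from the Lancaster study.

```python
# A minimal sketch of summarising brace-restriction data from the torso
# simulator. All range-of-motion values are invented for illustration.

def restriction_pct(unbraced_deg: float, braced_deg: float) -> float:
    """Percentage reduction in range of motion provided by a brace."""
    return 100.0 * (unbraced_deg - braced_deg) / unbraced_deg

motions = {                 # (unbraced, braced) range of motion in degrees
    "flexion":      (60.0, 25.0),
    "extension":    (25.0, 15.0),
    "lateral_bend": (30.0, 20.0),
    "torsion":      (35.0, 30.0),
}

for name, (free, braced) in motions.items():
    print(f"{name}: {restriction_pct(free, braced):.0f}% restricted")
```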
The research team acknowledged that the device would not be able to bypass human testing for new back braces entirely, but would allow them to reach human testing at a far more advanced stage of design. This could potentially reduce the pain and discomfort experienced by testers, as they would be testing more sophisticated devices.
Lancaster University graduate Jon Harvey, who worked on the project, said: “This is an excellent example of how engineering research can have wide-reaching impact, not only in industry, but also in the quality of life of a population.”
01 AUGUST 2019
DeepMind develops AI tool to predict kidney disease in advance
Google division DeepMind has partnered with the US Department of Veterans Affairs (VA) to develop an artificial intelligence (AI) tool that could predict acute kidney injury (AKI) up to 48 hours in advance.
AKI, which occurs when the kidneys suddenly stop working properly, is associated with rapid patient deterioration. According to DeepMind, up to 30% of cases could be addressed with earlier intervention.
The company applied AI technology using a dataset that comprised de-identified electronic health records of 703,782 adult patients from 172 inpatient and 1,062 outpatient VA sites.
The resulting model was able to forecast AKI up to 48 hours before onset, earlier than existing techniques allow.
DeepMind noted that the tool correctly predicted nine out of ten patients whose condition deteriorated to the dialysis stage.
According to the partners’ publication in the Nature journal, the AI model predicted 55.8% of all inpatient AKI episodes and 90.2% of all AKI cases that needed subsequent dialysis.
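The two published figures are recall (sensitivity) values and can be expressed as simple ratios. Note that the raw episode counts below are invented so the ratios match the reported percentages; only the 55.8% and 90.2% figures come from the Nature paper.

```python
# Expressing the reported headline metrics as recall ratios.
# Episode counts are hypothetical, chosen to match the published figures.

def recall(true_positives: int, total_positives: int) -> float:
    """Fraction of actual positive cases the model correctly predicted."""
    return true_positives / total_positives

all_aki_episodes   = 1000   # hypothetical count of inpatient AKI episodes
predicted_episodes = 558
dialysis_cases     = 500    # hypothetical count of dialysis-requiring cases
predicted_dialysis = 451

print(f"All AKI episodes predicted: {recall(predicted_episodes, all_aki_episodes):.1%}")  # 55.8%
print(f"Dialysis-requiring cases:   {recall(predicted_dialysis, dialysis_cases):.1%}")    # 90.2%
```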
These predictions are expected to enable earlier preventative treatment and help avoid the need for more invasive procedures such as dialysis.
The model has also been designed for future application to other conditions and causes of deterioration, such as sepsis, the company added.
DeepMind also unveiled results from research conducted by University College London into its mobile medical assistant for clinicians, Streams.
The Royal Free London NHS Foundation Trust has been using the app since early 2017. Streams uses the UK’s current national AKI algorithm to alert clinicians to patient deterioration.
The app also helps in the review of medical information at the bedside and facilitates instant communication between clinical teams. At the NHS Trust, Streams has been reported to save up to two hours per day for clinicians.
The latest research showed that the app could help review urgent cases within 15 minutes or less, with fewer AKI cases being missed.
While the app does not currently use AI, the company intends to integrate it with predictive models.
In November last year, Google announced plans to absorb DeepMind’s health technology unit, including the Streams app team, into its newly formed health subsidiary, Google Health.
31 JULY 2019
Researchers develop soft wearable for health monitoring
A research team at Georgia Institute of Technology in the US has created a soft wearable for wireless, long-term monitoring of health in adults, children and babies.
The monitor, which is made of stretchable electronics that are linked to gold electrodes through printed connectors, is said to be conformable with the skin.
It is designed to capture data on electrocardiogram (ECG), heart rate, respiratory rate and motion activity. The device is capable of transmitting this data to a smartphone or tablet that is as much as 15m away, noted the researchers.
The team claims that the new monitor is designed to avoid skin injury or allergic reactions associated with traditional adhesive sensors with conductive gels.
Georgia Institute of Technology biomedical engineering assistant professor Woon-Hong Yeo said: “This health monitor has a key advantage for young children who are always moving, since the soft conformal device can accommodate that activity with a gentle integration onto the skin.
“This is designed to meet the electronic health monitoring needs of people whose sensitive skin may be harmed by conventional monitors.”
As the device can conform to the skin, it can avoid signal issues and capture accurate data even when the user is walking, running or climbing stairs. In addition, the health monitor could facilitate at-home use.
The wearable is waterproof and can be worn for several days. Its electronic components can be recycled, according to the researchers.
Yeo added: “The devices are completely dry and do not require a gel to pick up signals from the skin. There is nothing between the skin and the ultrathin sensor, so it is comfortable to wear.
“We use deep learning to monitor the signals while comparing them to data from a larger group of patients. If an abnormality is detected, it can be reported wirelessly through a smartphone or other connected device.”
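The abnormality-reporting step Yeo describes can be illustrated with a much simpler stand-in. The researchers use deep learning; the sketch below substitutes a basic z-score check against reference-group statistics purely to show the idea of comparing a wearer's signal with data from a larger group, and all values are invented.

```python
# An illustrative stand-in for the anomaly-reporting step described
# above. A z-score check replaces the actual deep-learning model;
# all heart-rate values below are hypothetical.

from statistics import mean, stdev

def is_abnormal(value: float, population: list[float],
                z_threshold: float = 3.0) -> bool:
    """Flag a reading that deviates strongly from the reference group."""
    mu, sigma = mean(population), stdev(population)
    return abs(value - mu) / sigma > z_threshold

reference_heart_rates = [62, 68, 71, 65, 74, 70, 66, 69, 72, 67]
print(is_abnormal(150, reference_heart_rates))  # True: report via smartphone
print(is_abnormal(70, reference_heart_rates))   # False: within normal range
```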
While the researchers are currently working on paediatric applications of the device, they believe that it would allow monitoring in adults as well.
They created two models of the monitor – one with medical tape for short-term use in a care facility such as a hospital, and another with a soft elastomer medical film for wound care.
The team is planning to decrease the monitor’s size and add new features for the measurement of other parameters such as temperature, blood oxygen and blood pressure.
31 JULY 2019
Researchers convert brain speech signals into written text
Patients with paralysis-related speech loss could benefit from a new technology developed by University of California San Francisco (UCSF) researchers that turns brain signals for speech into written sentences.
Operating in real time, this technology is the first to extract intention to say specific words from brain activity rapidly enough to keep pace with natural conversation.
The software is currently able to recognise only a series of sentences it has been trained to detect, but the research team believes this breakthrough could act as a stepping stone towards a more powerful speech prosthetic system in the future.
UCSF neurosurgery professor Eddie Chang, who led the study, said: “Currently, patients with speech loss due to paralysis are limited to spelling words out very slowly using residual eye movements or muscle twitches to control a computer interface. But in many cases, information needed to produce fluent speech is still there in their brains. We just need the technology to allow them to express it.”
The Facebook-funded research was made possible due to three volunteers from the UCSF Epilepsy Center, who were already scheduled to have neurosurgery for their condition.
The patients, all of whom retained normal speech, had a small patch of recording electrodes placed on the surface of their brains ahead of their surgeries to track the origins of their seizures. Known as electrocorticography (ECoG), this technique provides much richer and more detailed data about brain activity than non-invasive technologies such as electroencephalography (EEG) or functional magnetic resonance imaging (fMRI) scans.
The patients’ brain activity and speech were recorded while they were asked a set of nine questions, to which they responded from a list of 24 pre-determined responses. The research team then fed the data from the electrodes and audio recordings into a machine learning algorithm capable of pairing specific speech sounds with the corresponding brain activity.
The algorithm was able to identify the questions patients heard with 76% accuracy, and the responses they gave with 61% accuracy.
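The reported accuracies are straightforward classification metrics: the decoder's predicted labels compared against the questions actually heard or the responses actually given. The sketch below shows that evaluation step with toy labels; the words are invented stand-ins, not the UCSF study's question or answer sets.

```python
# A minimal sketch of the accuracy evaluation reported above: decoder
# predictions compared against true labels. Labels are toy data.

def accuracy(predicted: list[str], actual: list[str]) -> float:
    """Fraction of trials where the decoded label matches the true one."""
    correct = sum(p == a for p, a in zip(predicted, actual))
    return correct / len(actual)

# toy example: decoding which pre-set answer a patient gave on each trial
actual    = ["cold", "fine", "bright", "cold", "fine"]
predicted = ["cold", "fine", "cold",   "cold", "bright"]

print(f"Response decoding accuracy: {accuracy(predicted, actual):.0%}")  # 60%
```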
Chang said: “Most previous approaches have focused on decoding speech alone, but here we show the value of decoding both sides of a conversation – both the questions someone hears and what they say in response.
“This reinforces our intuition that speech is not something that occurs in a vacuum and that any attempt to decode what patients with speech impairments are trying to say will be improved by taking into account the full context in which they are trying to communicate.”
The researchers are now seeking to improve the software so it is able to translate more varied speech signals. They are also looking for a way to make the technology accessible to patients with non-paralysis-related speech loss whose brains do not send speech signals to their vocal system.