This article was originally written by Rob Bennett and published on Pharmiweb.
In recent years, the relationship between healthcare and major tech companies has been strengthening. It develops further every day and is becoming ever more ingrained in our daily lives.
We now have watches that track our vitals, apps that track our sleep, and social media companies are even experimenting with how they can track our mental health through the posts we make. If connected health feels big now, it’s going to be absolutely colossal in a few years’ time, as the likes of Apple and Google make this market a key priority. The amount of money to be made is staggering, perhaps best displayed by Google’s recent acquisition of Fitbit for $2.1bn.
Over the last few months, I’ve found myself increasingly enthralled with my smart watch (I won’t name the brand out of fairness). Its list of features would have been confined to science fiction twenty years ago - like cardiograms. My watch can take cardiograms?!
There are plenty of benefits that tech like this will bring to the healthcare space, and one of the major benefits will be the gamification of positive action. For example, the watch knows when I’m washing my hands and gamifies the experience - giving me a 20-second countdown to make sure I’m doing it properly. The implications of this in the context of COVID alone are staggering, as tech like this could play a vital role in fighting the spread of similar viruses in the future. There are obviously many more use cases too, in areas such as physical rehab and medication administration in hospitals.
However, the most significant contribution these devices will make, by far, is to preventative care, through access to new types of data. When you wear a smart watch, two key things are happening - the device is measuring you, and it is also building a profile of you. As measurable changes happen in your body, algorithms shouldn’t just be able to identify things that are wrong with you now, but also things that are going to happen to you in the future. The technology already exists to then target people directly with interventions. For example, an algorithm could take the combination of the oxygen levels in your blood, your BMI, your cardio readings, your bone density, and your usual level of physical activity, understand that you’re at high risk of diabetes, and offer you an automated digital intervention.
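To make the idea concrete, here is a deliberately toy sketch of such a pipeline. Everything in it - the metric names, thresholds, weights, and intervention message - is invented for illustration and has no clinical meaning; a real system would use a trained model validated on clinical data, not hand-picked rules.

```python
# Hypothetical sketch: combine wearable-style readings into a crude risk
# score and trigger a digital intervention. All thresholds and weights
# are invented for this example and are NOT clinically meaningful.

def risk_score(spo2, bmi, resting_hr, weekly_active_minutes):
    """Return a rough 0-1 score from smart-watch-style metrics."""
    score = 0.0
    if spo2 < 95:                     # low blood-oxygen saturation (%)
        score += 0.2
    if bmi >= 30:                     # BMI in the obese range
        score += 0.4
    if resting_hr > 80:               # elevated resting heart rate (bpm)
        score += 0.2
    if weekly_active_minutes < 150:   # below common activity guidance
        score += 0.2
    return score

def digital_intervention(score, threshold=0.6):
    """Sketch of an automated intervention trigger."""
    if score >= threshold:
        return "Flagged: suggest a lifestyle programme and a GP check-up"
    return "No action needed"
```

In practice the interesting part is exactly what this sketch glosses over: the model learns which combinations of signals matter, rather than relying on fixed cut-offs.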
This could relieve significant stress on healthcare systems and save lives. We’ve recently had briefs from major companies along these lines - it’s becoming a priority.
This becomes even more interesting when you take vast amounts of that data, anonymise it, and look at it from a macro perspective - which tech companies very much can. In India, Google recently introduced an algorithm designed to identify eye disease by scanning the retina and looking at medical history. By chance, they found it was also able to identify heart disease risk - to the shock of everyone involved. There’s no limit to the healthcare discoveries that could be made as the amount of data available increases exponentially. At the start of the COVID-19 pandemic, a cough and fever alone were thought to be the key symptoms, with loss of taste and smell only later being recognised as a key indicator because of what the data revealed.
When Public Health England says that a third of sufferers are asymptomatic, this doesn’t necessarily mean they have no symptoms - it may mean we just don’t know what to look for. For the sake of argument, it could be something otherwise trivial, like an itchy toe. The point is this: the UK government has been slow, but overall it has made good use of data - and data analysts - to make informed decisions. Data used in this way, which tech companies can provide, can be groundbreaking in the field of medicine.
There are of course important issues that must be considered relating to data privacy and the misuse and mistrust of personal information - especially so with medical records. Now that people are wiser, and because health is so deeply personal, any of the processes I’ve outlined above must be robustly monitored to make sure that people are respected and that their data isn’t monetised. It’s not unrealistic to imagine the equivalent of a car insurance ‘black box’ that lives on (or in) your body and lowers your premiums if the real-time data looks favourable - which is not a good thing.
There are already insurers setting premiums based on quantified activity levels, but operating like this also allows unwanted factors, such as ethnic bias, to creep in. A crucial step is ensuring that, wherever possible, data is anonymised - and this should be non-negotiable, particularly when dealing with governments.
Ultimately, we need to be wary of how our data will be used, but we should not feel overly panicked by its growing use in our healthcare services. Because when technology and healthcare overlap, wonderful things can happen.