To survey recent advances in automatic detection of acute stroke symptoms and alerting of Emergency Medical Services (EMS) by mobile health technologies.
Materials and methods
Delayed activation of EMS for stroke symptoms by patients and witnesses deprives patients of rapid access to brain-saving therapies and occurs due to public unawareness of stroke features, cognitive and motor deficits produced by the stroke itself, and stroke onset during sleep. A promising emerging approach to overcoming the inherent biologic constraints on patient capacity to self-detect and respond to stroke symptoms is continuous monitoring by mobile health technologies with wireless sensors and artificial intelligence recognition systems. This review surveys 11 sensing technologies (accelerometers, gyroscopes, magnetometers, pressure sensors, touch screen and keyboard input detectors, artificial vision, and artificial hearing) and 9 consumer device form factors in which they are increasingly implemented: smartphones, smart watches and fitness bands, smart speakers/voice assistants, home health robots, smart clothing, smart beds, closed circuit television, smart rings, and desktop/laptop/tablet computers.
The increase in computing power, wearable sensors, and mobile connectivity has ushered in an array of mobile health technologies that can transform stroke detection and EMS activation. By continuously monitoring a diverse range of biometric parameters, commercially available devices provide the technologic capability to detect cardinal language, motor, gait, and sensory signs of stroke onset. Intensified translational research to convert the promise of these technologies into validated, accurate real-world deployments is an important next priority for stroke investigation.
The benefits of reperfusion therapy for acute ischemic stroke are extremely time dependent. With every 4 minutes of delay in start of endovascular thrombectomy, 1 of every 100 patients has a worse final disability outcome.
As a result, detecting stroke symptoms and activating the emergency medical services system at the earliest moment is critical to patient outcome. If an acute stroke is detected early and the patient presents to the Emergency Department within the first 60 minutes of onset, outcomes are optimal.
Second, deficits produced by the stroke often render patients unable to recognize the need for assistance or to request it. Patient capacity to recognize that a stroke is occurring is especially impaired by right hemisphere strokes that produce anosognosia (unawareness of deficits) and by frontal system strokes that produce confusional states and impaired executive decision-making. Patient capacity to request assistance is especially impaired when strokes produce aphasia and severe dysarthria, rendering patients unable to communicate verbally, and by strokes that produce major motor deficits, rendering them unable to manually use phones or to ambulate to find others who could provide assistance. Third, approximately 1 in 5 ischemic strokes has onset in sleep. Since stroke is not associated with pain that awakens the individual, patients often remain asleep for many minutes to hours after onset, noting symptoms for the first time only on awakening.
One highly promising emerging approach to overcoming the inherent biologic constraints on patient capacity to self-detect and respond to stroke symptoms is external monitoring. Hospitals that serve Las Vegas casinos have among the fastest onset-to-arrival times for stroke patients in the United States because the casino floors are continuously monitored by overhead video cameras and security personnel are instructed to activate the EMS system as soon as a patron is observed to stop playing and be in distress.
Until recently, such ubiquitously monitored environments were uncommon and required extensive observing personnel. However, recent advances in wireless sensor technology and artificial intelligence are rapidly bringing ubiquitous health monitoring into everyday life.
In the last few years, mobile and wearable consumer health technologies have achieved broad availability in the population and remarkably robust artificial intelligence performance. In cerebrovascular disease, considerations of this technologic paradigm shift have generally focused on improved stroke prevention (by detection of atrial fibrillation, sleep apnea, and other risk factors) and improved paramedic evaluation of stroke (by use of specialized “stroke helmets” that use ultrasound, electrical impedance, microwave and other energies to noninvasively probe brain physiologic states). The ability of these mobile health technologies to dramatically improve the first detection of stroke onset has been less thoroughly considered, often from narrow perspectives of one sensory modality or one device form factor.
This review surveys leading technologies to detect early onset of acute stroke symptoms for patients and bystanders, and the device form factors, currently available or likely soon to be available, that provide one or several of these technologies. All of these detection methods are predominantly non-invasive and already implemented in commercially available products in everyday use in homes and offices.
The 11 sensing technologies reviewed include: 6 modalities mechanically delineating body and gait movement (accelerometers, gyroscopes, magnetometers, pressure sensors, and touch screen and keyboard input detectors), 1 modality visually delineating bodily movement (artificial vision), and 1 modality assessing human sound productions, especially spoken language (artificial hearing). The technologies reviewed were selected from among the 13 or more categories and 38 or more variations in touchscreen and related technologies that have been developed.
Triaxial accelerometers measure linear acceleration in the X, Y, and Z axes, providing information on linear displacements. Triaxial gyroscopes measure rotational movements about the three axes. Triaxial magnetometers (electronic compasses) measure orientation to magnetic north. Together these multiple movement sensing technologies provide full 9 degrees-of-freedom motion sensing. Compared with laboratory gold standard inertial motion units, consumer wearable MEMS hardware motion detection systems perform comparably, well within the mean error limit established by the American Medical Association for reliable evaluation of movement impairments in clinical evaluations.
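As an illustrative sketch of how such multi-sensor streams are combined in practice, a complementary filter is a common, simple way to fuse gyroscope rates (responsive but drifting) with accelerometer tilt (noisy but drift-free) into a stable orientation estimate. The function names, units, and blending constant below are hypothetical, not taken from any cited study:

```python
import math

def tilt_from_accel(ax, ay, az):
    """Estimate pitch angle (radians) from the gravity direction seen
    by a triaxial accelerometer (units of g)."""
    return math.atan2(ax, math.sqrt(ay * ay + az * az))

def complementary_filter(samples, dt, alpha=0.98):
    """Fuse gyroscope pitch rate and accelerometer tilt into a
    drift-corrected pitch estimate.

    samples: iterable of (ax, ay, az, gyro_pitch_rate) tuples,
             accelerations in g, rate in rad/s; dt in seconds.
    Returns the per-sample pitch history in radians.
    """
    pitch = 0.0
    history = []
    for ax, ay, az, rate in samples:
        # Integrate the gyro rate, then blend in the accelerometer tilt
        # to bleed off integration drift.
        pitch = alpha * (pitch + rate * dt) + (1 - alpha) * tilt_from_accel(ax, ay, az)
        history.append(pitch)
    return history
```

A wrist device lying flat (gravity along Z, no rotation) holds a pitch near zero, while a sustained 90-degree tilt converges toward π/2 as the accelerometer term dominates.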
These MEMS motion sensors are able to detect a wide range of arm, trunk, and leg/gait motion perturbations that occur in acute stroke.
Stroke symptom detection applications
Limb motor deficits are the most common presenting symptom of acute ischemic stroke, present in 83-90% of consecutive patients. Stroke-related upper extremity movement alterations detectable by MEMS motion-sensing include reduced range of motion, slowed motion, ataxic movements, and complete plegia.
Sophisticated motion analysis of MEMS motion-sensor readings enables analysis of gait quality, including onset of abnormal successive step times and step lengths. Forms of stroke-related alteration in walking that MEMS motion sensors can detect include reduced stride length, speed, and frequency; asymmetric stride, swing, and stance times; reduced foot clearance; and widened gait stance.
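One common way such gait asymmetry is quantified from sensor-detected step events is a symmetry index comparing the two legs' mean step times. The sketch below is a hypothetical illustration of the idea, not an implementation from a cited study:

```python
def step_time_asymmetry(step_times):
    """Compute a gait asymmetry index from successive step durations (seconds).

    Alternating steps belong to the left and right legs; a healthy gait has
    near-equal mean step times, giving an index near 0. Hemiparetic gait
    after stroke typically inflates the index.
    """
    left = step_times[0::2]   # every other step: one leg
    right = step_times[1::2]  # remaining steps: the other leg
    mean_l = sum(left) / len(left)
    mean_r = sum(right) / len(right)
    # Absolute difference normalized by the overall mean step time.
    return abs(mean_l - mean_r) / ((mean_l + mean_r) / 2)
```

A perfectly symmetric walk scores 0; a walk alternating 0.5 s and 0.7 s steps scores about 0.33.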
Falls and inability to return to a standing posture are an additional important manifestation of acute stroke, particularly in major strokes, and can prevent patients from themselves alerting others to symptom onset. Falls during acute or combined acute and rehabilitation hospitalizations occur in 2-13% of stroke admissions.
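A minimal sketch of how wearable accelerometers detect such falls is the classic two-phase signature: a brief free-fall interval (total acceleration well below 1 g) followed shortly by a high-g impact spike. The thresholds and window here are illustrative assumptions, not clinically validated values:

```python
def detect_fall(accel_magnitudes, free_fall_g=0.4, impact_g=2.5, window=10):
    """Flag a fall when a free-fall phase is followed within `window`
    samples by an impact spike.

    accel_magnitudes: total acceleration magnitude per sample, in g.
    """
    for i, m in enumerate(accel_magnitudes):
        if m < free_fall_g:
            # Look ahead for the impact that ends the fall.
            for m2 in accel_magnitudes[i + 1 : i + 1 + window]:
                if m2 > impact_g:
                    return True
    return False
```

Real deployments add post-impact inactivity checks to separate falls from, say, sitting down hard or dropping the device.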
Technologies assessing fine and gross motor interactions with device surfaces
Sensor types: touchscreen contact position sensors, touchscreen contact force/pressure sensors, mechanical and optical keyboard press sensors, whole body robot skin
A touchscreen is an electronic visual display capable of detecting and locating a touch over its display area, pinpointing the position, timing, and duration of all screen contacts, by fingers, other body parts, or styluses. Structurally, touchscreens are 2-dimensional sensing devices made of 2 sheets of material separated by spacers. Touchscreens detect surface contact using one or more of several sensing technologies, among which the five most common types are: 5-Wire Resistive, Surface Capacitive, Projected Capacitive, Surface Acoustic Wave, and Infrared. With advanced implementations, touchscreens can localize touchpoints with a spatial resolution of <0.2 mm.
A small voltage is continuously applied to a device layer, resulting in a uniform electrostatic field whose perturbations can then be detected by capacitive or piezoelectric sensors, often placed at the 4 corners of the device surface. When a conductor, such as a human finger, touches the surface, a dynamic capacitance change is induced, with its magnitude, duration, and spatial location monitored by the capacitive sensors. Capacitive coupling technologies on consumer devices can detect microscopic changes in the distance between the surface and capacitive device layers, enabling measurement of the motor pressure and force generated by the user.
In addition to touchscreens, keyboards are an input modality that can detect normal and abnormal motor interactions with the device surface. Mechanical keyboards transduce user physical movement to electronic input with each keystroke, including capturing information on input pace in addition to semantic content. Optical keyboards use light-emitters and light-sensors to determine which key is being actuated, typically incorporating both horizontal and vertical light beams for accurate detection. Optical keyboards provide finer input timing information than mechanical keyboards.
Though not currently commercially available, whole-surface robot skin is an additional sensor technology being developed that could enhance automated detection of patient fine and gross motor deficits. Covering the whole surface of a robot with tiny sensors that measure local pressure and transmit the data through a network gives robots an artificial skin, improving robotic sensory discrimination, action, and safety. In addition to improving robotic interactions with inanimate objects, whole-surface robot skin would improve robotic detection of abnormal touching by an interacting human. Wafer bonding technology integrating capacitive force sensors with silicon diaphragms has been shown to enable the needed packaging of force sensors in a small volume.
Dystextia and dystypia – deficiencies in texting and in keyboard typing – are now a common symptom of stroke, reflecting the shift of human graphomotor communication from handwriting to touchscreen and electronic keyboard presses. The ability to communicate and create through texting and typing is a multifaceted and uniquely human behavior that relies on a distributed, dominant-hemisphere neuroanatomical network overlapping closely, though not completely, with that subserving handwriting. As a result, stroke lesions that produce dysgraphia typically produce dystextia and dystypia as well.
Accordingly, abnormal texting, typing, and handwriting will arise from acute stroke lesions causing aphasia, apraxia, and hemispatial neglect, as well as elementary motor, ataxic, and sensory deficits.
Dystextia and dystypia are also likely present in approximately one-quarter of patients with stroke. Uncommonly, isolated dystypia with normal handwriting output can occur, in part due to selective impairment of visuospatial memory for key positions.
In reported cases to date, dystextia and dystypia have been detected by patients themselves or by their text and email recipients. However, existing text/type analysis artificial intelligence systems, developed for predictive texting/typing to accelerate user input, should with modification also be able to identify abnormal text/type input dynamics that reflect stroke impairments.
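One simple way such input-dynamics monitoring could work is to compare a fresh typing sample's inter-key intervals against the user's own healthy baseline. The sketch below is a hypothetical illustration of this idea; a large score would merely flag the sample for follow-up, not diagnose anything:

```python
import statistics

def keystroke_anomaly_score(baseline_intervals, new_intervals):
    """Z-score of the mean inter-key interval of a new typing sample
    against a user's baseline intervals (all in seconds).

    Abruptly slowed or erratic typing inflates the score.
    """
    mu = statistics.mean(baseline_intervals)
    sigma = statistics.stdev(baseline_intervals)
    return abs(statistics.mean(new_intervals) - mu) / sigma
```

A production system would also need to model diurnal variation, device changes, and content (password entry vs. free text) before alerting on anyone.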
Technologies assessing somatosensory function through interactions with device surfaces
Sensor types: haptic sensors
Haptic touchscreen technologies create a user experience of touching different surface types by applying feedback forces, vibrations, or motions to the user when touch occurs. Depending on how much force is applied to the device surface, the haptic system can respond with corresponding feedback evoking the sensation of clicks, vibrations, and raised or lowered surface contours. The three most common haptic actuator types are Eccentric Rotating Mass (ERM), Linear Resonant Actuator (LRA), and Piezo Haptics. Electromagnetic linear actuators are capable of reaching peak output in just one cycle and producing vibrations lasting 10 milliseconds. Unlike typical motors, the linear actuator does not rotate but oscillates back and forth. Multiple actuators are mechanically connected to the back of the input surface, distributed along it at separate contact locations, to provide localized haptic feedback to the user.
Small robotic devices have been developed that are capable of assessing a variety of patient somatosensory functions by generating standardized sensory stimuli directed to different body parts. The Robotic Sensory Trainer device both assesses and rehabilitates tactile functions. In assessment, the device applies well-controlled amplitudes of angular displacement, vibration, and surface pressure to the user's fingers, while recording position and user perception via a touch screen.
Somatosensory deficits are common in acute stroke. Most sensory deficits occur as accompaniments to major motor, language, visual, and other domain impairments; in such patients, these other deficit domains will generally be more readily automatically detected by monitoring devices. But pure sensory stroke accounts for 5.4% of all acute ischemic strokes and is inapparent to other means of detection, making detection by surface-interacting smartphones, keyboards, and home health robots desirable.
Technologies assessing motor functions through computer vision
Sensor types: computer vision
Machine, neural net, and deep learning techniques have led to dramatic advances in computer vision functionality over the past 20 years. Methods for acquiring, processing, and analyzing images of scenes and individuals, and for extracting high-dimensional data from the real world, have proliferated. Sub-domains include object recognition, face recognition, scene reconstruction, event detection, video tracking, 3D pose estimation, learning, indexing, motion estimation, and 3D scene modeling. Visual recognition systems have been incorporated into smartphones (including face recognition for phone unlocking), smartwatches, smart cars, smart robots, and closed-circuit television systems. Surveillance of the public using CCTV is common in many areas around the world and permits detection of stroke symptoms arising in ordinary life over a wide geographic area.
Stroke symptom detection applications
Facial weakness – Unilateral face weakness is a common motor symptom in acute stroke. In the original study of the NIH Stroke Scale, among 15 scale items, facial palsy was the second most common finding on admission, seen in 76% of patients.
As a result, facial weakness is one of the 3 cardinal symptoms in the FAST (Face – Arm – Speech – Time) mnemonic used to educate the public regarding signs that a stroke may be occurring. Several studies have shown that computer vision facial landmark systems can accurately detect the presence of peripheral facial nerve palsy.
The feature extraction and asymmetry indexing approach should work as well, or nearly as well, for central as for peripheral facial paresis.
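The core of such asymmetry indexing can be sketched simply: reflect one side's facial landmarks across the facial midline and measure how far they land from their counterparts. The function and landmark pairing below are a hypothetical illustration, not the pipeline of any cited study:

```python
def facial_asymmetry_index(left_points, right_points, midline_x):
    """Mean mirror-distance between paired left/right facial landmarks.

    left_points/right_points: paired (x, y) landmarks (e.g., mouth
    corners, eyebrow points) from a face-landmark detector. Each right
    landmark is reflected across the vertical facial midline and compared
    with its left counterpart; a symmetric face yields an index near 0,
    while unilateral facial droop inflates it.
    """
    total = 0.0
    for (lx, ly), (rx, ry) in zip(left_points, right_points):
        mx = 2 * midline_x - rx  # reflect right landmark across midline
        total += ((lx - mx) ** 2 + (ly - ry) ** 2) ** 0.5
    return total / len(left_points)
```

For example, a drooped right mouth corner displaced 1 unit below its mirrored position contributes 1 unit to the index.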
Gaze deviation – Gaze deviation is a moderately common stroke symptom and, when present, often signals the presence of a large vessel occlusion. In the original NIH Stroke Scale validation study, gaze deviation was present in 37% of patients.
Using a conventional smartphone with a high-resolution rear camera, a smartphone computer vision app showed high accuracy in identifying the presence of strabismus, phorias, and other ocular eccentricities.
Video-based fall detection systems are generally segmented into 3 stages: video acquisition, video analysis, and notification communication. The basic operating principle is to record the patient in the ambient environment; the system's AI should then recognize a fall immediately and notify an ambulance or the patient's family. In Japan, studies have shown that mobile robots can visually detect patient falls and report them to observers, using the early Microsoft Kinect computer vision system.
Technologies assessing voice and language through audio sensors
Sensor types: audio
Smart speakers are voice-command speaker devices with integrated artificial intelligence virtual assistants that enable interactive actions and hands-free activation with the help of trigger words. They typically audit the sound environment continuously when connected to power. Most modern general-purpose speech recognition systems use hidden Markov models or deep learning computational linguistics to perform both acoustic modeling and language modeling.
Accuracy of speech recognition in consumer systems has steadily increased through the past decade. In 2019, in response to user queries, the top performing systems attempted answers to 80-85% of utterances and provided full and correct answers to 72-82%.
In tandem with the increasing sophistication of modeling, analysis, and decoding of normal speech, voice recognition systems are necessarily also developing increasing capacity to identify abnormal acoustic (phonemic), linguistic (semantic), and paralinguistic (prosodic) outputs.
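Two of the simplest paralinguistic markers such systems could extract from recognizer output are speech rate and pause fraction, both of which can shift markedly with aphasia or dysarthria. The sketch below is a hypothetical illustration using assumed per-word timestamps of the kind speech recognizers commonly emit:

```python
def speech_metrics(word_times, total_duration):
    """Compute speech rate and silence fraction from recognizer output.

    word_times: list of (start, end) times in seconds for each
                recognized word.
    total_duration: length of the analyzed audio segment in seconds.
    Returns (words per minute, fraction of time spent silent).
    """
    spoken = sum(end - start for start, end in word_times)
    wpm = len(word_times) / total_duration * 60
    pause_fraction = 1 - spoken / total_duration
    return wpm, pause_fraction
```

A real detector would track each user's own baseline, since normal speech rates vary widely across individuals.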
Stroke symptom detection applications
Aphasia and dysarthria – Disordered articulation (dysarthria) and disordered language output (aphasia) are common stroke symptoms. In the NIH Stroke Scale design and validation study, dysarthria was present in 59% of patients and language abnormality in 43%.
In a population-based study of Emergency Department presentation, abnormal speech or language was present in 57% of mixed ischemic stroke, intracerebral hemorrhage, subarachnoid hemorrhage, and TIA patients.
Because of this frequency, abnormal speech is another of the 3 major symptoms in the FAST (Face – Arm – Speech – Time) mnemonic for lay recognition of stroke. A clinical case example of stroke detection by a voice assistant from our practice is shown in Fig. 1.
Depression/apathy – Depression, apathy, and abulia may be leading stroke symptoms with relatively circumscribed frontal system infarcts or hemorrhages. In a study of 100 conversations, automated speech recognition software showed 80% sensitivity and 83% positive predictive value for detecting depression-related utterances.
Devices in multiple form factors incorporate one or more of the above-reviewed sensing technologies and offer the capability to automatically detect the onset of stroke symptoms in patients wearing them, using them, or passing through their field of view. The distribution of sensor technologies across different device types is shown in Table 1. This section briefly reviews select aspects of each form factor and its current availability in the population.
Table 1. Wireless sensor types and detectable stroke signs.
Smartphones are integrated mobile phones and multi-purpose mobile computing devices that support wireless communications protocols (e.g., cellular broadband, Wi-Fi, Bluetooth, or satellite navigation) and may be equipped with various sensors, including audio, video, accelerometers, gyroscopes, barometers, magnetometers, and proximity sensors.
Smartphones are currently the most widely distributed smart device technology. At the start of 2020, smartphones were being carried by over 3 billion people, representing around 40% of the world's population.
Studies demonstrating that smartphones can detect deficits that occur in stroke include demonstration of smartphone recognition of abnormal gait quality, including onset of abnormal successive step times and step lengths.
Of related interest, in addition to potentially being able to detect stroke early, smartphones can also cause symptoms that mimic acute stroke. Unilateral transient blindness may occur as a result of looking at blue-light-emitting screens for a prolonged period in the dark while lying down with one eye inadvertently covered, causing asymmetric dark adaptation. Symptoms invariably resolve with return of normal lighting, but patients can present as mimics of ischemic amaurosis fugax.
Smart watches and fitness bands are wearable computers in a watch or band form appropriate for wrist placement. Modern smartwatches include touchscreen and audio interfaces for user input, and associated apps provide for long-term biomonitoring. Global wrist-worn wearable unit shipments are projected to increase from 67 million units in 2019 to 105 million units in 2023. Smartwatches became the first form factor used for stroke symptom detection in large-scale implementation with the release of the Apple Series 4 smartwatch in 2018, which included automated fall detection. Among users who enable the functionality, the watch automatically calls 911 emergency medical services when it detects a user fall followed by non-response. Currently this feature is automatically turned on for users 55 and older.
Smartwatches are also being used widely for stroke prevention, with electrocardiographic detection of the stroke risk factor of atrial fibrillation and sleep and oxygen saturation detection of sleep apnea.
Adoption of smartwatch use by individuals at risk for stroke for prevention practice will have a synergistic effect in increasing smartwatch use for detection of stroke symptoms among patients most at risk for stroke onset.
Smart speakers/voice assistant
Smart speakers are speakers with an integrated virtual assistant able to recognize and respond to voice commands. They are rapidly increasing in consumer market penetration. In the United States in 2019, 157 million speakers were in the homes of 60 million Americans; 24% of individuals over age 18 owned at least one smart speaker.
Another report, based on a telephone survey of just over 1,000 U.S. adults, found that approximately 17% of the entire U.S. population now owns smart speakers in their homes.
While these devices' initial deployments focused upon performing tasks such as playing users' music playlists, answering basic questions, setting schedules or alarms, and making calls, their ability to analyze user articulation, semantics, and prosody provides a robust basis for identifying stroke symptoms through audio sensors, as reviewed above. A case example from our practice of smart speaker detection of stroke is shown in Fig. 1.
Home health robots
Home health robots are autonomous mobile robotic assistants designed to serve patients, the elderly, and the disabled in the home setting. They are capable of communicating with the user, recognizing various symptoms, reminding patients when to take their medicine, recognizing patient faces and facial expressions, producing facial expressions, and contacting caretakers and emergency care dispatchers. Their deployment is anticipated to increase as the gap between the number of available caregivers and the world's aging population continues to widen. Carebot development has advanced particularly rapidly in Japan, where a national shortage of 1 million caregivers is projected by 2025.
Their hardware platforms are well suited to incorporation of stroke detection technologies, spanning visual, audio, and somatosensory modalities.
Smart clothing comprises garments that can monitor and measure the physical condition of the wearer through the use of advanced textiles, sensors, and attached hardware. The initial market deployments have been in industrial settings and in sports and fitness. In the United States, the smart clothing market is projected to grow from $1.6 billion in 2019 to $5.3 billion by 2024.
Smart shirts, jackets, and vests can continuously monitor biometric data including heart rate, breathing rate, temperature, oxygen saturation, muscle activity, and sleep quality. Though currently focused upon optimizing athletic performance and fitness, this array of biometric monitoring has substantial potential to improve stroke detection, including fall detection, gait impairment, and upper extremity paresis. For individuals unwilling to wear smartwatches, smart textiles are likely to provide a more acceptable mode of continuous bodily surveillance. A distinctive capability of smart clothing (smart pajamas) will be detecting the onset of hemiparesis in sleep, a frequent feature in the 20% of ischemic stroke patients with sleep onset of stroke.
Other monitoring form factors
A variety of additional form factors are being developed that integrate sensor technology and machine intelligence providing capabilities for detecting the onset of stroke symptoms. These include: 1) Smart Beds: beds with sensors placed under the mattress, or the mattress itself is a sensor, capable of monitoring heart rate, sleep quality, sleep apnea, and abnormal movements of onset in sleep; 2) Closed Circuit Television (CCTV) systems: continuous video surveillance systems monitoring public and private spaces capable of fall detection and detection of onset of abnormal gait or upper extremity movement; 3) Smart Rings: smart ring technology has been introduced to the global market as a simpler alternative to smart watches. Currently, smart rings can track blood flow, sleep, body temperature, and arm, hand, and finger movement; and 4) Desktop/laptop/tablet computers: in developed countries, the share of households with a personal computer exceeds 80 percent; like smartphones, these provide the capability to detect stroke symptoms through video, audio, and keyboard input analysis.
Challenges to translation and research directions
This survey has demonstrated that there is tremendous potential for mobile health technologies to improve almost every domain of stroke symptom detection. However, several important challenges lie ahead in translating the new sensor advances into reliable detection devices employed at scale in the population.
Addressing these hurdles will require clinical translational research studies in coming years that encompass a broad range of targets. These include:
Extending use-cases of current detection devices to the stroke setting. While the ability of motion-sensing micro-electromechanical systems to usefully detect stroke-related falls is now well-demonstrated, the extrapolation of other technologies to stroke needs proof-of-concept demonstration. For example, computer vision capability to identify strabismus and phorias needs to be extended to stroke-related gaze deviation, and computer vision detection of peripheral facial weakness needs to be extended to stroke-related central facial weakness.
Attaining sufficient specificity and sensitivity for useful deployment. Given the variability in normal human activity, there is a substantial potential for devices to mistake non-stroke events for stroke events. For example, it is likely that alcohol intoxication is a more common cause of dysarthria and language alteration than stroke, and voice detection algorithms need to be highly reliable in making this distinction to avoid too frequent false alarms and notifications.
Determining the best strategies to link automated stroke symptom detection to automated notification for help. As with existing fall detection technology, users could set devices to contact family members, friends, doctors, human-monitored security companies, and/or EMS dispatch operators. The best linkage approach will likely vary with the type of symptom being detected, the response resources available in a region, and individual user preference.
Mitigating data overload. A formidable challenge to successful implementation of mobile technologies for stroke detection is assuring usability and legibility of the data for providers and patients. The enormous amounts of data potentially analyzable in real time require appropriate down-sampling and clinical framing to achieve actionable clinical relevance.
Information security and privacy. In order for mobile sensor technologies to achieve wide uptake, patients must be assured of the privacy and security of their data. Detailed regulatory oversight is needed. To facilitate rather than slow innovation, the U.S. Food and Drug Administration and the Federal Communications Commission have adopted a risk-based approach to regulatory oversight, focusing upon aspects of device performance that could jeopardize patient safety if not functioning as intended, but have not addressed privacy and ownership of health data. Health information privacy laws exist in many countries but need to be refined to encompass the massive datasets that mobile sensor technologies acquire in normal living settings.
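The specificity concern raised above can be made concrete with a Bayes' rule calculation: when true stroke events are rare among the moments a device monitors, even excellent per-event accuracy yields mostly false alarms. The numbers below are illustrative assumptions, not measured device performance:

```python
def positive_predictive_value(sensitivity, specificity, prevalence):
    """Probability that a raised alarm reflects a true stroke event,
    given per-event sensitivity/specificity and the prevalence of true
    events among monitored events (Bayes' rule)."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)
```

With assumed 95% sensitivity, 95% specificity, and a 1-in-1,000 event prevalence, fewer than 2% of alarms would be true strokes, which is why near-perfect specificity is the binding requirement for population-scale deployment.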
The increase in computing power, wearable sensors, and mobile connectivity has ushered in an array of mobile health technologies that can transform stroke detection and EMS activation. By continuously monitoring a diverse range of biometric parameters, commercially available devices provide the technologic capability to detect cardinal language, motor, and sensory signs of stroke onset. Intensified translational research to convert the promise of these technologies into validated, accurate real-world deployments is an important next priority for stroke investigation.
Declaration of Competing Interest
BB: No competing interests
JLS: No competing interests.