Archive for the ‘Clinical Effectiveness’ Category

A patent foramen ovale is a defect in the wall between the two sides of the heart that allows blood and its contents to pass from one side to the other, bypassing the lungs. (Courtesy Cleveland Clinic)
One controversial cause of some strokes is a small hole between the two sides of the heart known as a patent foramen ovale. Although rarely symptomatic, this hole allows blood clots that occur in otherwise healthy individuals to bypass the lungs and lodge in critical arteries that serve the brain. In individuals without this defect, these small blood clots would normally lodge in the small vessels of the lungs and typically never lead to a disease state (see the note at the end for a more complete explanation). Because of the unresolved structural defect, conventional thinking considers these individuals to be at an increased risk for future strokes that persists even after typical stroke prevention strategies are applied.
Kurt Amplatz is an interventional radiologist who has spent his career developing catheter-based devices that repair patent foramen ovale and other cardiac defects by closing the hole with a permanent metal disc. The procedure is similar to the cardiac catheterization used for patients with coronary artery disease (e.g., heart attacks).

An Amplatzer occluder deployed on the end of a catheter. (Courtesy St. Jude Medical)
Two recent papers(1,2) in the New England Journal of Medicine report findings from long-term studies designed to demonstrate the expected benefit of using these “Amplatzer” devices versus traditional medical therapies (e.g., aspirin, Coumadin®, Plavix®). The Journal effectively uses these two studies to demonstrate just how fine a line of improvement may be found with use of the devices and ultimately concludes that true believers and skeptics alike will likely not be swayed from their opinions by the limited findings of either study.
What I found most interesting about this recent revival of the debate over catheter-based occlusion devices is the near-zero discussion of their cost. The Amplatzer occluder used in these studies was not a one-time quick fix for these patients but a supplemental therapy often used in conjunction with traditional medical therapies. Although exact pricing is not available, the device alone adds an additional $3,000-$5,000 to the cost of the patient’s care, and insurance is usually billed an additional $10,000-$25,000 for the procedure.
With the current evidence, these devices add costs and procedural risks to a patient’s care without demonstrating definitive benefit. Addressing the escalating cost problem in U.S. healthcare starts with regulatory authorities scrutinizing care scenarios such as this one to determine whether we are getting value for money in procedural medicine.
Note on blood clots: Venous thromboembolisms are a major cause of morbidity and mortality. However, the specific sequence of events that produces small clots that the human body can easily degrade versus those that cause life-threatening events is poorly understood. Although any blood clot seen in the healthcare setting is typically treated as if it were potentially life-threatening, the general thinking is that small blood clots as in the example given above are somewhat routine in the older population and can resolve spontaneously if not symptomatic.
This post begins with a short case vignette to help frame the discussion that follows.
An 85-year-old man is brought to the hospital with a fever and found to have pneumonia. The man had been diagnosed about six months earlier with an incurable form of leukemia, a cancer of the bone marrow. Because of his age and medical history, his pneumonia quickly leads to respiratory failure, and he is admitted to an ICU for mechanical ventilation and other critical life-sustaining measures. Over the next few weeks, the patient’s overall condition fails to improve, and he suffers a number of further physiologic insults. He becomes anemic and is transfused two units of blood. He then develops a bladder infection that progresses to sepsis and a cardiac arrest, from which he is successfully resuscitated. After all of these events, with the patient stable but in poor condition, the medical team faces a difficult situation: the patient is too sick to be transferred to a long-term acute care facility but has little chance of significant improvement. Given these findings and the poor prognosis, the medical team asks the family to reconsider the patient’s status as a “full code,” which requires all life-sustaining measures in the event of further cardiac or respiratory failure. The family feels strongly that “ALL” medical efforts should be afforded to the patient and refuses the medical team’s request. The primary team calls an ethics consultation.
It is estimated that less aggressive management of patients’ care in the last six months of life could save $700 billion in U.S. healthcare spending, one-third of total annual spending on healthcare. Currently, the irreconcilable differences around end-of-life decision-making outlined above are handled by hospitals through “ethics consultations.” An ethics consultation typically triggers a review of a patient’s case by a member of the hospital’s ethics committee who has been trained to use the four principles of medical ethics to reframe a particular ethical problem and provide a recommendation for moving forward. In the case above, for example, an ethics committee member would highlight the balance that must be struck between the patient’s (and family’s) autonomy to make their own medical decisions and the continued use of limited healthcare resources on a patient who will never be independently functional and who is undergoing a series of interventions that likely only prolong his inevitable death from cancer. The intent of such an exercise is that reframing the discussion in a more analytical manner may help the family or the medical team better understand the other side and highlight a potential resolution.
However, the major limitation of the ethics consultation is that its recommendations are not legally binding, and truly irreconcilable differences can therefore rarely be resolved by the process. When some form of cooperative agreement cannot be reached, the ethics committee has no power to enforce a recommendation on either party. Published reviews of the history of end-of-life care and medical ethics have highlighted the limited legal protection for both hospitals and patients as a major hindrance to the available options for resolving these differences of opinion. Without strict legal guidance, hospitals have been forced to use a quasi-legal “fair process” system when faced with patients and decision-making surrogates who insist on continuing care that has become medically futile. Fair process for withdrawal of care involves an onerous series of consultative steps: the medical team must explore alternative care options, including cross-specialty consults, an ethics consultation, an attempted transfer to another hospital service, and an attempted transfer to another hospital, before being able to cease futile medical interventions. These safeguards are necessary, but only because the government has failed to legislate a more streamlined process for ceasing futile care efforts.
While it is probably unsurprising that legislators have been wary of picking up such a hot-button political issue – particularly after the pillorying of “death panels” during recent healthcare debates – my greatest disappointment lies in the lack of professional associations’ attempts to fill the void. Although recommendations from professional groups like the Society of Critical Care Medicine or the American College of Physicians may not carry legal weight, such guidelines are regularly used in medical malpractice cases to demonstrate whether a physician’s actions met the “standard of care.” Guidelines from professional organizations would provide the kind of backstop physicians need to more comfortably navigate ethical dilemmas like the one presented above. To date, the only national physicians’ organization that has provided public recommendations for withdrawal of care is the American Society of Nephrology, with a particular focus on withdrawal of dialysis for kidney failure. In the present political climate, further efforts to manage end-of-life care will need more physician and patient groups to publicly take a stand on what qualifies as appropriate care. Without such courageous steps, end-of-life care will become an ever more emotionally destructive and financially untenable part of our healthcare system.
Disclosure: I have no conflicts of interests with regard to end-of-life care and medically futile interventions.
Editor’s Note: A change in the publication schedule has been made for the month of November. This post below replaces a previous November post on pharmaceutical markets in developing countries that will be re-published in the future.
Disclaimer: Every pregnancy is unique. The below discussion focuses on population-wide assessments of effectiveness and should not be used to make a decision about one’s personal healthcare. Any questions about the issues raised below with regard to the reader’s own pregnancy should be discussed with their obstetrician.
When a pregnant woman arrives at a hospital in labor, she will typically be whisked away to a Labor and Delivery ward. For those familiar with birth, the timing of what follows is highly unpredictable. If she is in true labor, she will be taken to the room where she will deliver and allowed to progress through the often hours-long wait before delivery. After an initial assessment, protocols at most institutions have the ward nurses place a continuous fetal heart rate and uterine contraction monitor. The information gathered by the probes is usually transmitted electronically to a central server for display on any number of desktop monitors and overhead displays scattered throughout the ward. The principle behind this monitoring is that the medical team can readily assess the basic vital signs of labor for any patient from any location in the ward. For example, fetal heart rate decelerations are one recognized sign of fetal distress, and many institutions train staff to respond immediately when such a feature is observed on monitoring. When an “ominous” or “nonreassuring” fetal heart rate pattern appears and further observation does not improve the assessment of fetal health, an emergent Caesarean section frequently ensues.

As comforting as continuous fetal monitoring may appear, the devices’ tendency to prematurely warn of fetal distress may actually worsen outcomes for mother and baby. (Photo Courtesy: Medgadget)
Conventional wisdom would suggest that the current widespread distribution of such telemetry monitoring for active labor has saved the lives of countless women and their progeny. Until the twentieth century, pregnancy was one of the most dangerous periods of a woman’s life, and much of this was the result of difficulties during active labor without the ability to safely remove the near-term pregnancy. Even today, Caesarean section – which has seen only minor technical modifications since – remains the standard of care for achieving optimal outcomes when persistent fetal distress is detected during active labor. Logically, one would then think that real-time vitals of the fetus would be the ideal technological assistance for fetal health surveillance during this period of pregnancy.
Although such reasoning appears sound and is routinely used by purveyors of continuous heart rate monitoring, the scientific evidence to date does not support the use of such devices. Although proponents of continuous fetal monitoring have cited a large recent study that suggests otherwise, the best scientific evidence suggests that continuous fetal heart rate monitoring results in the same mortality outcomes for mother and fetus as the traditional form of fetal health monitoring, “intermittent auscultation” (a nurse rounding on the mother every 15 minutes and listening for the fetal heart rate with a stethoscope-like device). Worse, not only are mortality outcomes unchanged by an expensive piece of hospital IT infrastructure, but studies have also shown that the number of Caesarean sections increases with the use of continuous fetal heart rate monitoring. This constellation of findings suggests that such monitoring does not improve health outcomes and has the unfortunate disadvantage of “sounding the alarm” too early. In other words, continuous monitoring appears to be picking up transient fetal distress that may be more accurately described as “fetal discomfort.” If such cases were left alone, the distress would often resolve on its own without any lasting effect on health outcomes for mother or baby.
To summarize, the evidence amassed to date suggests that continuous fetal monitoring does not improve newborn mortality and increases the number of C-sections performed when it is used. Why then is it the standard of care across much of the U.S.? I’ve put exactly this question to a number of my mentors and colleagues, and the answer is invariably the same. Evidence aside, no jury in the United States is going to be kind to the obstetrician who goes against the tide of medical practice to adhere to what “stuffy” academics suggest is a more effective treatment strategy. That a minority of institutions in the U.S. still allow practitioners to use intermittent auscultation is a testament to the American College of Obstetrics and Gynecology’s measured acceptance of both forms of fetal monitoring.
What troubles me most about the state of fetal monitoring today is the potentially massive quality improvement opportunity it represents and the limited work being done to explore the issue further. Labor is one of the most common conditions requiring hospitalization, and the question of how it is monitored affects every single one of those admissions. Definitive research is needed to settle this debate. Because such research may well prove the futility of a profitable medical technology, professional advocacy is likely the only way such work will get the funding necessary for a large randomized trial. Without it, we will continue a practice that is contributing to what may be thousands of unnecessary surgeries a year and a considerable cost burden to our healthcare system.
Disclosure: I have no financial or professional interests related to continuous fetal heart rate monitoring or other matters discussed above.
One of the recent cost-control measures that Medicare has been experimenting with is a planned penalty for hospital systems with high readmissions. For example, if the reimbursement data a hospital files with Medicare shows a higher 30-day readmission rate for patients it previously treated, also called “bouncebacks,” a percentage deduction will be made from all future Medicare payments to that hospital. The basis of this new rule stems from a belief that hospitals with high readmission rates are the result of inadequate care continuity practices and not the result of skewed populations being served. For this post, I will leave aside the many criticisms (e.g., for indigent care hospitals, for population outliers) of the new policy and focus on the innovation trends for helping individual hospitals lower their readmission rates.

Leaving so soon? Most quality experts believe readmissions could be reduced if high-risk patients remained as inpatients longer. (Courtesy Hospital & Health Networks)
The research group that I currently work with, based in Emory University’s Department of Surgery and Georgia State University’s Andrew Young School of Policy Studies, views excessive readmissions as the first signs of correctable errors in the discharge process. These errors can be broadly grouped as systems-based and decision-related. Systems-based errors occur when a patient is not adequately prepared for discharge because of an internal system failure; for example, the discharge process at a hospital may fail to properly instruct a patient on the use of home oxygen. Decision-related errors occur when a lack of information or external pressure leads to a patient being discharged too early.
Systems-based discharge errors are currently being addressed through traditional quality improvement mechanisms now being applied in the healthcare setting. However, decision-related discharge errors represent an under-explored opportunity for hospitals to reduce their readmission rates. The general thinking is that if physicians can have a more accurate sense of the likelihood of readmission, patients can be discharged at a more appropriate time while not wasting resources by simply holding on to every patient for a longer time period.
Although approaches have varied, the common wisdom for addressing decision-related discharge errors has been to take advantage of the latest advances in bioinformatics (i.e., healthcare IT) and apply them in real time to patient discharge decisions. Currently, the most developed commercial solution is Microsoft’s Amalga healthcare information management platform (3M has a similar IT product oriented more toward quality improvement offices). The basic principle of these systems is algorithm-based analysis of existing patient data to develop and refine predictive tools for use by a physician at the time of a future patient’s discharge. For example, as the system collects data on patients who had gallbladder surgery, it will become increasingly better at predicting which future gallbladder patients are most likely to be readmitted. With such information in hand, a surgeon could flag certain patients as high-risk for readmission and manage their discharge more conservatively.
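As a concrete illustration of the kind of predictive tool described above, here is a minimal sketch of a logistic readmission-risk model. Everything in it – the features, the coefficients, and the risk threshold – is hypothetical and not drawn from Amalga or any real product; a production system would fit its parameters to the hospital’s own patient data.

```python
import math

def readmission_risk(age, length_of_stay_days, prior_admissions, lives_alone):
    """Toy logistic model mapping patient features to a 0-1 readmission probability."""
    # Hypothetical coefficients; a real system would estimate these from data.
    z = (-4.0
         + 0.03 * age
         + 0.10 * length_of_stay_days
         + 0.50 * prior_admissions
         + 0.40 * (1 if lives_alone else 0))
    return 1.0 / (1.0 + math.exp(-z))

def flag_high_risk(patients, threshold=0.3):
    """Return the patients whose predicted risk exceeds the discharge threshold."""
    return [p for p in patients if readmission_risk(**p) > threshold]

patients = [
    {"age": 82, "length_of_stay_days": 9, "prior_admissions": 3, "lives_alone": True},
    {"age": 35, "length_of_stay_days": 2, "prior_admissions": 0, "lives_alone": False},
]
high_risk = flag_high_risk(patients)
```

A surgeon reviewing the first patient (elderly, long stay, multiple prior admissions) would see a high predicted risk and might manage the discharge more conservatively, while the second patient would be cleared for a routine discharge.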
It is important to note that product offerings like Amalga have not been readily adopted by the mainstream healthcare information management community. Critics note that Microsoft has been struggling to establish itself in healthcare IT due to its late entry and lack of a comprehensive product line. Recent moves by Microsoft signal that the company recognizes these vulnerabilities. A 50/50 joint venture called “Caradigm” between Microsoft (an IT and platform leader) and GE Healthcare (an electronic health record industry veteran) aims to capture many of Microsoft’s latest clinical informatics innovations and package them into existing health system platforms.
Currently, these uses of predictive data analysis are in their infancy. To use a term from business innovation theory, we’re in an “era of ferment.” What I find even more interesting than the technical hurdles firms are currently struggling with is the foreseeable problem on the horizon of how we pair technical expertise (healthcare providers) with these predictive tools. This man-machine interface is easy to dismiss, but I believe that successfully addressing it will be the determinant of a successful dominant design.
Disclosure: I currently receive a graduate research stipend from the National Institutes of Health (1RC4AG039071) for work related to surgical patient readmissions and discharge decision-making.
I recently came across FoetoH, a fetal heart rate monitoring device developed at the University of Oxford. Unlike other forms of fetal health monitors, FoetoH is designed to be used by laypeople and in real time. Rather than presenting health information in complex jargon or graphs, the device provides a stoplight-style assessment (green, yellow, red) of a developing fetus’ current health. The breakthrough was an exercise belt-like device (think exercise heart rate monitors) that a mother-to-be wears continuously and that communicates with a handheld unit (or iPhone app) for data storage and interpretation.
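To make the stoplight idea concrete, here is a hypothetical sketch of how such an assessment could be computed from a single heart rate reading. FoetoH’s actual algorithm is not public; the only grounded input here is the commonly cited normal fetal heart rate range of roughly 110-160 beats per minute, and the yellow-band cutoffs are invented for illustration.

```python
def stoplight(fetal_hr_bpm):
    """Map a fetal heart rate reading (beats per minute) to a green/yellow/red assessment."""
    if 110 <= fetal_hr_bpm <= 160:
        return "green"   # within the widely cited normal range
    if 100 <= fetal_hr_bpm < 110 or 160 < fetal_hr_bpm <= 170:
        return "yellow"  # borderline (hypothetical band): re-check soon
    return "red"         # marked bradycardia or tachycardia: seek care
```

A real device would of course smooth many readings over time and account for accelerations and decelerations rather than classifying a single number.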
The idea of FoetoH is attractive because it synthesizes the latest technology trends (i.e., mobile-based health applications, user-oriented design) with advanced health monitoring devices. FoetoH’s marketing materials are excellent and describe the device as a potential breakthrough to help address the more than 2 million stillbirths that occur each year around the world. The basis of this claim is that mothers who know their pregnancy is in trouble (indicated by a “yellow” or “red” light on the device) could receive emergent medical care to improve fetal outcomes.
Unfortunately, such a simplified product and health solution obscure some major logical flaws in the existing argument. For FoetoH to contribute to a reduction in worldwide stillbirths, the device needs to prove itself to be more than just effective at measuring fetal heart rates. FoetoH’s founders need to demonstrate that identifying changes in fetal heart rates is an effective way of identifying AND preventing stillbirths. Why do I raise this issue? The limited data available on stillbirths demonstrate that the majority are due to genetic and environmental insults that go well beyond impaired cardiovascular support of the fetus. Many stillbirths are due to unknown genetic causes, infectious disease, or severe malnourishment, and fetal distress (erratic fetal heart rates) is an end-stage sign of imminent stillbirth. In these cases, last-minute emergency care would have virtually no chance of preventing “fetal demise” (the technical term for stillbirth). It is also unclear that FoetoH’s real-time monitoring is any more effective than current guidelines for antenatal care, which include regular physician visits and routine ultrasound scans at pregnancy milestones.
Moreover, it is unlikely that most mothers at risk for stillbirth would be able to gain access to the FoetoH device. Its currently reported manufacturing cost is approximately $80; a public sector price would likely be at least twice that, and a market price higher still. Given that the vast majority of stillbirths occur in impoverished women from developing countries, the target population who could potentially benefit from such a device would be unlikely to afford it. Even if such devices were provided free of charge to high-risk mothers, the high cost to health systems would likely outweigh the limited benefit of the device raised in the prior paragraph.
These issues are not lost on healthcare device makers familiar with the product. At Oxford’s recent TATA Idea Idol business plan pitch competition – where FoetoH was a finalist – judge Will Chadwick of TATA Interactive Systems noted that the only realistic market for FoetoH were overly concerned mothers from the industrialized world who were willing to pay for a device that provided peace of mind rather than a clear-cut medical benefit over existing practices.
In fairness to FoetoH, its TATA Idea Idol team went on to win this year’s competition despite Chadwick’s misgivings (so someone clearly thinks FoetoH has something going for it). In the end, the science and potential commercial market for the device were convincing enough to beat out a number of strong competitors. FoetoH is a useful reminder for clinicians: sound science and commercial availability alone do not make good medicine. Healthcare providers must always maintain a critical eye and question new healthcare goods and services to ensure that they are consistent with the individual provider’s aims and means of care.
For those of you familiar with my background and specific research interests in global health and quality improvement, it should be no surprise that I routinely get asked how I connect two such disparate fields of medicine. Just last week, a fellow of my college at Oxford asked for further clarification. I still find myself falling into the trap of assuming the overlap between the two is obvious to others, since that overlap is where most of my academic work currently focuses. This particular fellow was a healthcare economist with an interest in financial crises and their impact on global health, so I incorrectly assumed that he would “get it.” This disconnect is surprisingly common, and I believe it reinforces the relative lack of interest, even within the global health establishment, in the issues I find most engaging.
Before one can understand my area of overlap between these two fields, it is important that I re-frame what these terms mean. To understand how I use the terms and my particular interests within each, most readers would benefit from briefly reviewing prior posts on the two (global health, quality improvement). At the macro level, I find both fields fascinating, and they are ultimately the reason I am pursuing a career in academic medicine (i.e., not private practice). For me, these two fields contain healthcare’s greatest current obstacles. Globally, billions of people cannot access even basic healthcare services aligned with 21st-century standards of clinical care. In the industrialized world, most countries are unable to provide healthcare services in a manner that maximizes capabilities given constrained financial resources.
Albeit in different contexts, the kinds of systematic and institutional inefficiencies that ultimately impair the delivery of quality healthcare in modern health systems like the U.S. are also evident in the delivery of healthcare in even the most resource-limited environments. Global health is quickly reaching a point where the technical capabilities needed to address the world’s healthcare problems (e.g., effective antibiotics, vaccine development and production technologies, low-cost anesthesia equipment) are available, but global health interventions often lack the operational competence needed to achieve their goals.
My belief, and the focus of much of my research, is that the quality improvement frameworks being developed for modern healthcare systems can also improve healthcare delivery in less developed settings. Global surgery is an ideal area in which to adopt these analytic models because of the process-driven nature of surgical resource procurement (e.g., anesthesia and surgical equipment, pharmaceuticals for infection prevention and anesthesia) and direct patient care.
A man visits a surgeon about a lipoma (usually a benign glob of fat) on his arm that has been aesthetically bothering him for a number of years. The surgeon assesses the lipoma and rules out any other concerning health problems. The man is booked for surgery three weeks later. The patient arrives on time to an outpatient surgery center for removal of the lipoma. He checks in at the front desk and is guided back to the preoperative assessment area. There, the surgeon and anesthesiologist consent the patient for the procedure, start an IV, and ask the patient and his friend to say their goodbyes. Shortly thereafter, the patient is brought to the OR and sedated for the procedure. The case goes off without a hitch, and two hours later the patient is comfortably recovering from the effects of the sedative before being taken home by his friend. His wound heals well, the tissue removed during surgery is found to be benign, and he is formally released from the surgeon’s care at a follow-up appointment two weeks later.
This vignette represents a stylized “perfect” surgical encounter. All the processes necessary to move from diagnosis of surgical disease to remediation of the condition proceed apace without incident. What happens in real life? In more than a few cases, one of the many steps above has a seemingly minor logistical hiccup that ultimately causes far too many resources to be devoted to this individual patient’s care. Sometimes these issues pass without the patient even being aware of a problem. For example, it is not uncommon for a patient to be brought to an operating room and sedated before the surgical team is available to start the case. Other times, the patient’s entire memory of the encounter is shaped by the issue. It is not unheard of for a patient to arrive for surgery and find that, because of paperwork errors, he or she was not scheduled for surgery. In the best of cases, the patient has to wait a few hours before being squeezed in. Unfortunately, sometimes the patient is asked to return at a later date for the procedure, which of course requires finding a new window in a busy schedule that can accommodate such a visit. Whether or not the patient is aware of the problem, these logistical errors greatly increase the resources and time needed to meet the healthcare needs of a population.
In the U.S., a seminal report by the Institute of Medicine (the profession’s pantheon) in 2001 radically accelerated the industry’s adoption of product quality and operations management practices from other industries. A practical example of this was many of the “cost containment” issues that were discussed — but largely not included — in the 2010 U.S. healthcare reform legislation. For example, the original legislation included powerful politically independent panels of experts (a la the 9/11 Commission) that would provide evidence-based – rather than expert-opinion – recommendations for the highly politicized but necessary changes to align Medicare reimbursement policies with cost-effective care.
Within surgery communities, there seems to be a rather large divide between institutions and programs that “get it” and are willing to spend the administrative resources to address these operational inefficiencies, and those that fail to recognize the magnitude of these problems or see no solution to them.
The part of surgery’s newfound interest in “quality improvement” that is of greatest importance to me is exploring the incremental changes we can make to healthcare processes to prevent the small process problems noted above that ultimately weigh heavily on the costs of healthcare. I specifically am looking at using process mapping and decision analysis tools to question existing healthcare operations through a patient-centered perspective.
What we are trying to do in quality improvement is identify systems (e.g., patient management systems), products (e.g., mobile software), and services (e.g., nursing hotlines) that help us provide healthcare that is more comprehensive but also cheaper than existing practices. The burden that currently weighs on those of us working in this field is developing and testing these new methods of providing care in a way that convincingly demonstrates results we can act on.
Why an amazing product can’t find a market

The Gold Standard: GE Vivid 7 (Courtesy GE Healthcare)
Medical ultrasound is considered one of the most effective imaging modalities for cheap, reliable diagnosis of many disease states from heart disease to infected abscesses. Since the technology’s inception, device manufacturers have been steadily miniaturizing components, so that what used to be a desk-sized machine (the GE Vivid 7 in the picture to the left is a modern version of these behemoths) has now been succeeded by a laptop bolted to a small rolling cart with a number of ultrasound probes hanging off the side.
In late 2009, GE Healthcare released its VScan (see picture on right) handheld ultrasound device to the American market. The VScan device was most notable for its size — only slightly bulkier than a classic clam-shell mobile phone — and a price that was 80% less than traditional machines. Although VScan’s introduction was accompanied by a number of competing devices from other major industry competitors (e.g., Siemens’ ACUSON P10, Signostics’ Signos) as well as enterprising start-ups, the VScan has generally been considered to be the superior product in the pocket portable product segment because it functions near-identically to larger laptop-based portables but in a smaller form-factor. (Still not quite following? See a short introductory video by GE here.)

GE VScan (Courtesy GE Healthcare)
The VScan’s entry should have been heralded by doctors in a variety of clinical settings. Here was a device a clinician could carry in a pocket and, with relatively little additional training, use to detect traumatic injuries, major vessel disease, heart failure, and other maladies, all without the need for a follow-up appointment (in the outpatient setting) or bulky equipment often confined to a poorly accessible imaging suite. Unfortunately, the VScan and its competitors have seen little adoption in the years since their introduction.
The lack of interest in the VScan illustrates how the American healthcare system’s reimbursement structure directly shapes how patient care is provided. An example of these skewed incentives can be seen in one of ultrasound’s largest markets: outpatient cardiac ultrasound. Without a VScan, a cardiologist orders a comprehensive ultrasound study that typically requires a second visit and an advanced cart-based ultrasound machine. Each study earns the cardiologist professional and technical fees exceeding $1,500. In contrast, current reimbursement policies (typically set by Medicare and then voluntarily adopted by private insurers) do not cover ultrasound procedures performed with handheld devices, so physicians are unable to charge for the service. The $7,000 cost of each handheld ultrasound machine will therefore never be recouped through additional procedural charges.
The only financial return from the VScan lies in potential efficiency gains: being able to rule out serious underlying disease quickly and thereby see more patients in the same amount of time. While such a hidden benefit may be difficult to convey to a private practitioner or a standalone healthcare system, integrated healthcare systems (e.g., the U.S. Veterans Affairs hospital system, the National Health Service) stand to benefit more from reducing the number of unnecessary comprehensive ultrasound exams. It should be no surprise, then, that integrated healthcare systems in Europe have been among the few markets where handheld ultrasound devices are regularly used and clinically studied.
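To make the efficiency argument concrete, here is a minimal back-of-the-envelope sketch of the break-even math. Other than the ~$7,000 device cost and the absence of direct reimbursement, every input here (minutes saved per scan, the marginal value of clinic time) is a hypothetical assumption I've chosen for illustration, not a figure from any study:

```python
# Back-of-the-envelope break-even estimate for a handheld ultrasound
# purchase when the device itself generates no billable charges.
# All inputs except device_cost are hypothetical assumptions.

def breakeven_uses(device_cost: float,
                   minutes_saved_per_use: float,
                   value_per_clinic_minute: float) -> float:
    """Number of bedside scans needed before the time they free up
    offsets the purchase price of the device."""
    value_per_use = minutes_saved_per_use * value_per_clinic_minute
    return device_cost / value_per_use

# Assumed: each bedside scan avoids ~10 minutes of downstream workup,
# and clinic time is worth ~$4/minute in marginal value to an
# integrated system that would otherwise schedule a full study.
uses = breakeven_uses(device_cost=7000,
                      minutes_saved_per_use=10,
                      value_per_clinic_minute=4)
print(f"Break-even after ~{uses:.0f} uses")  # 7000 / (10 * 4) = 175
```

The point of the sketch is that the return is entirely a function of recaptured time, which is why it is visible to an integrated system's budget but nearly invisible to a fee-for-service practice.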
Personally, I’m not convinced by proponents who claim such devices will soon replace the conventional stethoscope. The price and relative ease of use of the latter are so superior that any real competition between the two is still a decade or more away; the claim is analogous to saying the typewriter would make the pencil obsolete. What is far more likely is growing demand for the VScan and its competitors as the American healthcare system wakes up to cost-effective care. A greater focus on comparative cost-effectiveness will lead to reimbursement policies that reward physicians for at least limited use of handheld devices in place of comprehensive ultrasound studies. As GE’s marketing efforts in the ultrasound industry demonstrate, the real success of such a product can only be evaluated as a component of a larger imaging ecosystem. For GE, the real value of the device is that it lets the company offer an end of the spectrum poorly matched by any of its competitors.
Conflict of Interest Disclosure: I have never received funding from, and have no financial stake in, GE Healthcare or its subsidiaries. In 2010, GE Healthcare loaned a LOGIQ i portable ultrasound machine to a humanitarian surgical trip I led to Hinche, Haiti.