
What Can Technology Reviews Contribute to Participatory Medicine?


Summary: Technology can be a great aid to participatory medicine when it works well, but when it fails, it results in great frustration and can adversely affect outcomes. So, how do patients and professionals know which tech tools to try and which ones to leave on the shelf? At CNET, we spend a lot of time demystifying gadgets for consumers. Some of the best practices for reviews of consumer electronics apply to the review of products for participatory medicine.

Keywords: Participatory medicine, technology, products, reviews, testing.
Citation: Merritt T. What can technology reviews contribute to participatory medicine? J Participat Med. 2009(Oct);1(1):e8.
Published: October 21, 2009.
Competing Interests: The author has declared that no competing interests exist.

Technology is awful. If you don't agree, you probably haven't spent much time working with it. Any new consumer technology I can think of overpromises what it can do and makes it difficult (if not impossible) to figure out how to use—then often crashes just when you need it most. But, when tech tools work well, they're magic.

Even in the face of these frustrations, we're able to accomplish so much more with technology tools than we can without them. For example, this article was much easier to revise with a word processor than it would have been if I used a typewriter or pen and paper. Exchanging emails with a professional colleague is much more efficient than having to make a personal visit by car, train, or airplane.

The potential value of technological tools for medicine seems obvious, especially for patients. EPdetect uses an accelerometer to identify when its carrier is having a seizure, then sends a text message to the primary caregiver. An iPhone application, available for $150, allows people with ALS to write text and have it easily converted to speech (see New York Times, September 14, 2009: Insurers Fight Speech-Impairment Remedy).[1] It's much cheaper than a $10,000 dedicated machine. These are just two examples of how tech products can help patients participate more effectively in their own care.

Technology can be a great aid to participatory medicine when it works well, but when it fails, it results in great frustration and can adversely affect outcomes. So, how do patients and professionals know which tech tools to try and which ones to leave on the shelf? Just like the consumer searching for the best HDTV, some expert help is indicated.

At CNET, we spend a lot of time demystifying gadgets for consumers and trying to include our audience in the review process. Participatory medicine's approach seems similar: health professionals partnering with patients and their families to demystify medicine and improve care. Both parties contribute their unique expertise and perspective to improve the outcome.

New tools and services are certainly needed to optimize participatory medicine. Patients and doctors need better ways to communicate faster and more often. The same can be said for the distribution of knowledge and the ability to monitor health. Until recently, the physician almost solely controlled knowledge, as well as diagnostic and monitoring tools. Rarely would a patient have exposure to reliable new information or these tools. But this is rapidly changing, largely due to technology, especially the Internet.[2] Participation will lead to many more new tools; new tools will lead to more participation, and so on.

Providers and patients alike need to know what works, what they can trust, and how much they can trust it. This applies to gadgets, software, and even to health classes in a local gym.

A medical journal such as the Journal of Participatory Medicine should tackle this issue head on. Reviews of products and services aren't just an interesting add-on, but are central to understanding and advancing participatory medicine, as well as to holding vendors and manufacturers accountable for the products they distribute.

The techniques that follow have worked well for CNET when we review consumer products; perhaps our experience might provide some guidance for your tech reviewers. There is a need to develop objective review criteria. This will be challenging in a field as broad as participatory medicine, but we believe that the subjective opinions of experts can also be very helpful. In participatory medicine the experts are not just physicians but multidisciplinary providers, patients, and perhaps even payers, policymakers, and caregivers. In addition, soliciting and highlighting user feedback can substantially enhance your approach. This strategy seems ideally suited to participatory medicine.

At CNET, technology reviews start by obtaining the product, either directly from the manufacturer or by buying it on the open market. From there, the review process can be divided into a qualitative and a quantitative component.

From the qualitative perspective, it is important to use the product in a "real world" situation, if possible. While this is easy to do for an MP3 player, it's harder for a new piece of diagnostic software. The essential point is to determine how the product works, while noting any problems with its function. Was the interface easy to figure out? Did the program perform as expected? What was the overall quality of the experience? Was the audio good? Was the picture sharp? Was the data reporting process easy?

But a reviewer can't rely solely on subjective assessment. To inform the review, we need some numbers. We take all products into the lab to test their speed, battery life (or effect on it), and other benchmarks. Far from undercutting our qualitative analysis, these benchmarks usually explain it and certainly enhance it. Tests may reveal, for instance, that a device's interface lags because its processor is slow.

The Web has made the review process iterative, which can be very helpful. Often, we'll post a product review before all lab testing is completed in order to give readers some guidance based on what we've found—with the caveat that more test results will follow. When these later results arrive, we incorporate them into the review and adjust the rating accordingly.

We also create a video version of the review for every product. This may seem frivolous or unnecessary, but it allows us to demonstrate different dimensions of the product being reviewed. It might be one thing to explain how the beveled edge on a new phone is shiny, and quite another to turn it in the light to show the reader. Features and limitations of a user interface are much easier to show by demonstration than to describe in words. This would seem to be even more important in participatory medicine.

One limitation of our reviews is that we can't tell the reader how the device performs after long-term use. Readers want a rating as soon as they hear about a product, often before it arrives in stores, which means we can only describe how it performed over the course of about a week. If it turns out to break under repeated use after a year, we wouldn't have that information for our readers.

To account for this, Rafe Needleman and I have begun a segment on our help and how-to podcast, The Real Deal, called "Road Test". We report on the gadgets we've been living with over time and invite listeners to write or call in with their experiences so we can report our own long-term findings as well as gather those of other real world users.

For instance, I purchased the Verizon MiFi, a small device that receives Verizon's cellular data signal and rebroadcasts it as a WiFi network usable by any WiFi device. For the first week I owned it, I raved about how much better the Internet reception was on the MiFi than on my phone. In fact, I shot a video demonstrating how to replace a smart phone's Internet connection with the MiFi, using the phone's WiFi receiver.

Now that I've had my MiFi for a month, I've noticed a few hitches. For instance, it occasionally loses the WiFi connection with some devices—it unpredictably stops broadcasting. I've also discovered a few dead zones where the service doesn't work. Although these factors haven't dramatically altered my evaluation, and I still consider it an excellent device, it has tempered my enthusiasm a little.

And that's one of the issues with product reviews. It's difficult to be definitive, because you can't know everything about a device. But your audience wants you to be definitive. What's the best product in a given category of devices? Which one should I buy? These questions and others need definitive answers, just as treatment decisions must be made in medicine.

A useful balance can be found. It involves the three parts we mentioned above: qualitative, quantitative, and real world evaluation. A good review gives the reader a definitive answer with the understanding that the answer is based on the available data. For example, CNET will pick an "Editors' Choice" laptop. But we make it clear that we chose it based on our benchmarks and what we know about the product. We temper our subjective evaluations with objective data. And we reserve the right to adjust our evaluation as real world use reveals new dimensions about the device. We also enlist our audience's help with this.

Making product recommendations should, ideally, be customized to the needs of the person considering using the product. The best smart phone for a teen who communicates with friends and family by text is not the same phone that's best for a visually impaired senior who wants to call a family member often.

That's why a good review doesn't merely provide a list of specs and a rating; it also explains what a product does, and how it does it. A good review anticipates problems that might occur and alerts the reader to them. A good review gives its audience all the tools and facts needed to decide if the product will work for their particular needs.

The final dimension of a good review is the audience. One of the problems with the MiFi I mentioned earlier was accurately diagnosed and fixed by a listener to the podcast. After implementing her solution, I passed it along on the show. Without the option for users to participate, this couldn't have happened.

This is the primary way we bring in the element of real world use that can't be replicated during a week in the lab. The amazing power of the Web is the ability to bring consumers into the conversation. All our reviews contain a user rating based on the cumulative scores given by hundreds of readers.

Granted, a manufacturer could try to game these scores, but the number of people participating helps mitigate this scenario, and the expert review itself is right there for reference. It's another data point for the individual to use and take into account when making a purchase decision—one we rarely had before the Internet.

I can't stress enough the importance of listening to and relying on the audience to inform what you do. In our videos, our audio podcasts, and our written articles, we strive to incorporate not just the other side of the story, but also many different sides; perspectives we could have never discovered on our own. Participatory medicine obviously values the contributions of many, but there are even more ways to bring in the crowds through this and other publications, so that the information collected and distributed is better than it otherwise would be.

When your audience consists of faceless, voiceless numbers, you miss a lot. A sewing enthusiast in our audience once schooled us on the copyright implications of sewing patterns and shed a whole new light on the debate over music piracy. A psychology professor once helped explain to us and our audience how the emotional component of decision-making played into the strategy of a new video game console's points and achievements system. These are just two examples of perspectives that individual writers could not bring forth on their own.

If you want to help people make good choices, you need to take expert, informed, subjective experience, combine it with solid, valuable, quantitative information, and supplement it with real world experience and perspectives. It's a mixture that may be new, but it's one that is vastly improving the useful information available to people in all fields.


  1. Vance, A. Insurers Fight Speech-Impairment Remedy. New York Times, September 14, 2009. Accessed October 2, 2009.

  2. Baker L, Singer S, Wagner T, Bundorf MK. Use of the Internet for health care and effects on health care use. Presented at Academy of Health Meeting. Nashville, TN: 2003;20: abstract no. 828.

Open Questions

  1. What are some of the ways that participatory medicine can serve to improve the availability and quality of technological tools that are available for patient self care?
  2. How can patients and providers work in cooperative alliances to improve the assessment and review of health technology?
  3. What resources can be developed to provide richer information on technology that might be valuable to patients?
  4. What is the best way to keep medical information and product reviews up to date and accessible?
Copyright: © 2009 Tom Merritt. Published here under license by The Journal of Participatory Medicine. Copyright for this article is retained by the author(s), with first publication rights granted to the Journal of Participatory Medicine. All journal content, except where otherwise noted, is licensed under a Creative Commons Attribution 3.0 License. By virtue of their appearance in this open-access journal, articles are free to use, with proper attribution, in educational and other non-commercial settings.

