
Reviewing Health Tools: A Community Matter


Summary: High-quality product reviews will be an important part of this new journal, with its focus on supporting and encouraging people to participate in their own health care. But how should we go about evaluating the various interactive applications and devices that bill themselves as "health tools"? Health design advocate Amy Tenderich explores the definition of a "health tool," and lays out parameters for a new kind of participatory product review process.

Keywords: Participatory medicine, medical devices, self-care, health tools, Health 2.0, patient networks, health social media, health design, ePatients, product reviews, patient advocacy.
Citation: Tenderich A. Reviewing health tools: a community matter. J Participat Med. 2009(Oct);1(1):e9.
Published: October 21, 2009.
Competing Interests: The author is working with startup company Keas Inc. to introduce a new set of online health tools beginning in November 2009.

The Journal of Participatory Medicine (JoPM) is about people taking a more active role in managing their own health conditions. That's not to suggest supporting the concept of unaided self-medication; we still need doctors. But a new era of technology-based tools is enabling patients to care and advocate for themselves, and to research their own medical conditions in ways never before imaginable.

For example, self-testing and monitoring devices have revolutionized the daily existence of people living with diabetes. Programs for online health education and record-keeping are teaching people about self-care, and giving them new confidence to take a proactive role in their own treatment. Every day, networks for interacting with others online are getting better at affording patients the opportunity to collectively communicate with medical providers and with the drug industry that caters to them.

Without these evolving communication-enabling tools for individual patients, "participatory medicine" would be quite a vacant term indeed. But, in all fairness, not every tool is a useful one. Thus, publishing high-quality product reviews will be an important part of this new journal, with its focus on supporting and encouraging people to participate in their own health care.

Scores of people—health professionals and patients alike—are struggling to identify which of these "tools" really matter. Which ideas change lives? Which ones have staying power? Which ones require too much manual data entry? Which ones are simply faulty?

As many of my colleagues know, I'm a huge proponent of this great wave of new health applications and personal medical gadgets. I've been fortunate to review many, and have campaigned for greater innovation in diabetes tools through the 2009 DiabetesMine™ Design Challenge, on my blog, Diabetes Mine. Once upon a time, I personally reviewed technology products for corporations; now, living with chronic illness, I find I have a much grittier perspective on the promises of new products. How then should we go about evaluating various interactive applications and devices that bill themselves as "health tools"?

What Makes a Health Tool?

This is a health journal, not an engineering publication. Therefore, I believe the focus should consistently be on whether—and how—the tool in question can improve a person's health or change that person's life for the better. This should be our basis for evaluation.

The next question we should ask is whether the so-called "tool" is something really useful in a person's daily life, or does it simply make the user's life more complicated by demanding an undue burden of time and attention? In other words, was it designed with the user (patient) in mind, or was it assembled by technology experts and other “specialists” with a more abstract interpretation of what they believe patients want or need?

At recent conferences, I've seen demonstrations of a number of PC-based "decision support programs" and other "solutions" that require a huge amount of data entry but don't offer much value coming out the other end. For example, one online "personal health management" program enables users to track their health across eight different "tabs," namely, information (pharmacy and insurance details), food, exercise, body measurements, chemistry (blood and urine tests), mood ratings, medical history, and a graphs page that "makes it easy to... look for trends and view your progress." Clearly, a user would have to spend many long hours painstakingly inputting data here, while the only output from this "tool" is pie charts and other fancy representations of the data that have been tracked.

Likewise, another example: why would a person with diabetes spend hours entering glucose values and food choices if the "tool" fails to do anything actionable with this information?

In a more distressing example, a recent list of "5 online health tools" promoted a website on which visitors are encouraged to "simply get a blood test from your doctor, go to the website, and enter the data from the blood test results sheet." The user can then obtain "a personalized printout that details the nutrients you're getting and those you need." And what can users do with this information? The answer: "You can talk to your doctor about the results and the best ways to improve your vitamin and mineral intake." Meanwhile, the website very prominently sells a range of nutritional supplement products, from ammonium chloride to sunflower oil. Clearly, this is not an actual tool but a marketing scheme. The example is banal, but it certainly illustrates the point.

Whether it's a weight loss aid aimed at improving eating habits, or a beeper that reminds elderly patients to take their blood pressure meds, the best "health tools" are interactive, mobile, and provide the user some tangible value. Without these attributes, it’s a non-tool. And the need remains to expose the opportunistic imposters who build a website, populate it with some basic health information, use it for some entrepreneurial purpose, and try to pass it off as a "tool."

Getting Real

Two well-known major "sins" of technology reviews are (1) reviewing the product in a lab environment (synthetic) rather than a "real-world" setting, and (2) overlooking the fact that the products themselves were built without the end-user in mind.

In fact, reviews of computing equipment are notorious for focusing on benchmarks, and leaving out the practical question of how the tool would perform in the real world.

The lack of standardized, widely accepted performance benchmarks for online health tools will make objective measuring difficult. Therefore, our reviews will, of necessity, be subjective evaluations by the people doing the testing.

But this is quite appropriate, given that "participatory" medicine focuses on the user experience—subjectivity is the point! One might argue that objectivity does not exist in social media: the good information is that which is best liked by the most people, and therefore bubbles to the top.

Reviews as Conversations

Thus, we should look at a review not as the definitive word on the usefulness of any given product, but rather as the start of a conversation. The reviewer inevitably voices an opinion, and others are invited to chime in.

The widely venerated 1999 publication The Cluetrain Manifesto, by Rick Levine and colleagues, brought to light the concept that "markets are conversations," which is to say that conversation is the essential element needed to reach and serve clients and other stakeholders. According to the authors, the participants in a market (community or industry) "communicate in language that is natural, open, honest, direct, funny, and often shocking. Whether explaining or complaining, joking or serious, the human voice is unmistakably genuine. It can't be faked."[1]

When we publish a review in the JoPM, our goal will be to initiate a community review process in which many others will have an opportunity to offer comment, feedback, and dissenting opinion. Moreover, similar to many health blogs, when rich discussion occurs, our reviewer can later summarize that discussion in a future article. So the reviewer becomes the facilitator of a conversation.

Reviewer Guidelines

Naturally, some guidelines and rules of engagement will be necessary to make the process a success.

First, the reviewer should be mindful of the concepts introduced above: namely, that we are not looking for quantitative "lab" reviews conducted in a sterile environment that overemphasizes the underlying technology. Rather, we seek to reiterate the question, "Does this product recognize the true needs and real life context of the people intended to use it?"

As for the nuts and bolts, I personally would like to see reviews that offer a mix of three elements:

  1. Storytelling: One or more "mini case studies" of what occurred when an actual patient used this tool.
  2. Compare and contrast: "How useful is this tool compared to similar programs or devices?"
  3. Real-world ROI (return on investment): A meaty discussion of what the user gets out of using the tool; ie, "If I download this cell phone application for $24 a year, how likely is it to improve my daily routine, or am I more likely to disable the annoying text reminders within a week?"

Drilling down, here are some key criteria for the reviewer to explore:

  • Does the design and form factor earn high marks? Is the tool embarrassing to use in public? Is the screen display readable in daylight or in the dark? Do the form factor and color options make sense for this tool?
  • Is it a "complete product"? Or is it just a handy single feature that's being positioned as a "tool"? Consider the example of a website that features photos of various foods and their nutrition content. This serves as a reference guide, to be sure, but couldn't be described as a complete "tool" for behavior change or to achieve weight loss goals.
    • Occasionally there are some products in which a single standout feature is so innovative and so useful that the products earn the right to be incomplete, yet still labeled tools. When this occurs, the reviewer should shout it from the treetops. An example would be the 2007 launch of the OmniPod™, a tubeless insulin pump system that provides unprecedented comfort and flexibility for diabetics—revolutionary even though the system's initial data tracking software left much to be desired.
  • What about the cost factor? How does cost stack up against usability? From another standpoint, I would argue that every reviewer should consider how the company behind the tool is working to make it accessible to those who can't afford to buy it out-of-pocket. This may be as simple as asking the vendor outright: "What are you doing to help the poor and uninsured gain access to this tool?"

Participant Guidelines

For participants in the resulting conversation, some basic ground rules are in order:

  • Employees or executives of the company that markets the tool should identify themselves as such, and, in that capacity, should comment only to correct factual misconceptions.
  • Every comment should be respectful, on-subject, and never a direct personal attack on the review author or other participants.

In summary, a "health tool" should be interactive, and should make a significant positive impact on the user's daily life and health. The writer reviewing it will have accomplished something worthwhile if a conversation begins and the crowd-sourcing process starts. Here at JoPM, we should aim to create participatory reviews for participatory medicine. It wouldn't make sense any other way.


  1. Levine R, Locke C, Searls D, Weinberger D. The Cluetrain Manifesto, 10th Anniversary Edition. New York: Perseus; 2009.

Open Questions

  1. Should the JoPM attempt to create an official definition of a "health tool" that will help to exclude any commercially-driven imposters from our review process?
  2. Should we set guidelines for fair & balanced selection of patient product testers? How shall these individuals be identified?
  3. Should the journal allow vendors to directly request a review of their product, or restrict the selection to the discretion of our editorial team?
Copyright: © 2009 Amy Tenderich. Published here under license by The Journal of Participatory Medicine. Copyright for this article is retained by the author(s), with first publication rights granted to the Journal of Participatory Medicine. All journal content, except where otherwise noted, is licensed under a Creative Commons Attribution 3.0 License. By virtue of their appearance in this open-access journal, articles are free to use, with proper attribution, in educational and other non-commercial settings.


Starting in September 2017, JoPM is published by JMIR Publications.