Category: e-health

Journal Nature changes course on allowing comments

Published December 22nd, 2006

In September the journal Nature started allowing online comments. On the surface it sounded like a promising idea: getting papers out sooner and allowing a wider range of commentary. But, as I asked in “Nature opens the peer review door a crack. Will anyone step through?”, it didn’t get off to a promising start:

I don’t see a single comment on the 10 pages that are listed on the Nature site.

Now the program has been withdrawn due to lack of interest. Generating mass use of a tool like this is not as easy as it may look, even for a prestigious journal. One of the problems is that Nature was too restrictive about who could post and how the comments would be moderated.

In announcing the discontinuation, Nature’s editors said they found that the majority of scientist-authors were unwilling either to post their papers or to criticize peers’ work publicly by posting comments on Nature’s Web site.

Of the 1,369 short-listed papers submitted during the four-month trial, the authors of only 71 were willing to post their work online, and those papers drew 92 technical comments in all, Nature said.

The Public Library of Science’s PLoS ONE is starting to ask for questions and comments as articles are posted. PLoS is much more attuned to user participation and their experiment is more likely to succeed, based on a quick look at their guidelines.

We’ll see.

Enquiring minds want to know

Published December 4th, 2006

Found this disturbing piece in the New York Times:

BILL CLINTON’S identity was hidden behind a false name when he went to New York-Presbyterian Hospital two years ago for heart surgery, but that didn’t stop computer hackers, including people working at the hospital, from trying to get a peek at the electronic records of his medical charts.
The same hospital thwarted 1,500 unauthorized attempts by its own employees to look at the patient records of a famous local athlete, said J. David Liss, a vice president at NewYork-Presbyterian.

The usual approach has been to grant access to the types of personnel who need to see the records and to log every access. But logging means nothing without consequences for improper access. What did Columbia do to discipline those who improperly tried to access celebrity charts?

It may be necessary to have a person monitoring the process in real time and denying access in some situations. That is how it worked in the era of paper charts for a patient not in the hospital. For a patient in the hospital, the chart sat in a rack, and a parade of people coming to peek would have been stopped.
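The audit idea above can be sketched in a few lines: logging access is only useful if someone reviews the log and acts on violations. This is just an illustration of the idea, not any hospital's actual system; the record structure and all names here are invented.

```python
# Sketch: flag log entries where someone without authorization viewed a
# watched (e.g., high-profile) patient's chart. All names are hypothetical.
from dataclasses import dataclass

@dataclass
class AccessEvent:
    employee_id: str
    patient_id: str
    authorized: bool  # e.g., employee is on the patient's care team

def flag_violations(events, watched_patients):
    """Return events where an unauthorized employee viewed a watched chart."""
    return [e for e in events
            if e.patient_id in watched_patients and not e.authorized]

events = [
    AccessEvent("emp1", "vip-001", False),  # improper peek at a VIP chart
    AccessEvent("emp2", "vip-001", True),   # legitimate care-team access
    AccessEvent("emp3", "pt-123", False),   # not a watched patient
]
violations = flag_violations(events, {"vip-001"})
# Each flagged event would then feed a disciplinary process, or in a
# real-time variant, trigger denial of access before the chart is shown.
```

The point of the sketch is the last step: the flagged list has to go somewhere with teeth, otherwise the log is just a record of misbehavior nobody acts on.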

Someone else who doesn’t drink Kool Aid

Published November 15th, 2006

I’m sick of reading all the glowing articles (for example here, and here) about using Google for diagnosis. Fact is, it’s not a great idea, as I’ve written (Stupid pet tricks, aka Google diagnosing). So I was glad to see a letter to Modern Healthcare by Joseph Britto, MD, CEO of Isabel Healthcare (a decision support company) entitled “Google inadequate for diagnoses.”

While the idea of using Google as an ersatz clinical decision support system is clever, a 58% accuracy rate is unacceptable — in either a human clinician or a software program. Google, of course, was not designed for this purpose…

Older-generation diagnosis decision software systems have much higher accuracy rates. In addition, the latest generation of diagnosis reminder systems…consistently suggest the proper diagnosis 90% of the time. These new programs use advanced natural-language processing algorithms — a newer, more powerful search technique — to scan a specific database of medical journals and texts. This produces more accurate, higher-quality search results.

Is Medsphere betraying the open source community?

Published November 12th, 2006

Fred Trotter of GPL Medicine is unhappy about the behavior of Medsphere.

Medsphere is arguably the most famous VistA vendor. However, some in the VistA community have wondered why Medsphere, which touts itself as an open source company, has not released their improved code back to the community. I and other VistA community members have been concerned that Medsphere might have made a proprietary product around VistA. I have been publicly commenting about this for quite some time.

Apparently, there’s a nasty legal dispute under way between Medsphere and its founders. Trotter explains why he’s on the founders’ side. Go have a look at what he has to say.

Stupid pet tricks, aka Google diagnosing

Published November 10th, 2006

From the BBC:

A team of Australian doctors googled the symptoms of 26 cases for a study in the New England Journal of Medicine.

In 15 cases, the web search came up with the right diagnosis, the paper published on the British Medical Journal website reports.

In each of the 26 cases studied, researchers based at the Princess Alexandra Hospital in Brisbane selected three to five search terms from each case and did a Google search without knowing the correct diagnoses.

They then recorded the three diagnoses that were ranked most prominently and selected the one which seemed most relevant to the signs.

The doctors then compared the results with the correct diagnoses as published in the journal.

Google searches found the correct diagnosis in just over half of the cases.

The crucial paragraph from the BMJ paper seems to be:

We then did a Google search for each case while blind to the correct diagnosis (that is, before reading the differential diagnosis and conclusion of each case record). We selected and recorded the three most prominent diagnoses that seemed to fit the symptoms and signs.

They looked through 30-50 results to find three that seemed reasonable. If a real decision support tool like SimulConsult doesn’t rank the correct diagnosis within the top 10, that is considered evidence of failure, not success. When there is such a failure, typically some important fact is missing from the database, and the omission is then corrected. It is not clear whether Google has a similar ability to learn from the wisdom of its community of users.

Searches are less likely to be successful in complex diseases with non-specific symptoms … or common diseases with rare presentations …

These are the situations where decision support software is most useful.
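For concreteness, the scoring procedure the study describes can be sketched as follows. The case data here is an invented placeholder arranged to reproduce the reported 15-of-26 result; the diagnosis strings are not real.

```python
# Sketch of the study's scoring: a case counts as a success if the correct
# diagnosis is among the (up to) three candidates the searchers recorded.

def score(cases):
    """cases: list of (candidate_diagnoses, correct_diagnosis) pairs."""
    hits = sum(1 for candidates, correct in cases if correct in candidates)
    return hits, len(cases)

# Placeholder data: 15 successes and 11 misses, as reported in the study.
cases = [(["dx-correct", "dx-b", "dx-c"], "dx-correct")] * 15 \
      + [(["dx-a", "dx-b", "dx-c"], "dx-miss")] * 11

hits, total = score(cases)
accuracy = hits / total  # 15/26, the "just over half" figure
```

Note how forgiving this metric is: a case counts as a hit if the right answer merely appears among three hand-picked candidates pulled from 30-50 results, which is a far weaker standard than a tool ranking the correct diagnosis at the top.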

Google is incredibly powerful: if you are too lazy to think, you can type any thoughtless thing in and get something at least somewhat relevant out. That doesn’t mean Google should be used for clinical decision support.