February 11, 2006
About half of the drugs that enter late-stage trials never make it through, says Brock Reeve, chief operating officer and managing director of Life Sciences Insights, a pharmaceutical industry advisory firm. Among the tools helping pharmaceutical companies is predictive-modeling software that simulates the probability of achieving a response level at a specific dose of medicine. When drug companies test new drugs in trials, groups of patients are given different doses of the medicines. Many of those doses might end up not being effective at all, while others might prove to be toxic. "We've got to understand better what drugs do when we study them in animals and man," says Pieter Muntendam, president of BG Medicine, a biotechnology firm that provides services, such as computational analysis of molecular activity and drug compounds, to help drug companies predict the best chemical compounds for their medications or forecast the types of patients most likely to benefit from those drugs.
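To make the idea concrete, here is a minimal sketch of the kind of dose-response modeling the article describes: a sigmoid (Hill) curve that predicts the probability of a clinical response at a given dose. The curve form is a standard pharmacological model, but the parameter values here are illustrative assumptions, not data from any real drug.

```python
def response_probability(dose, ed50=50.0, hill=2.0):
    """Predicted probability of a clinical response at a given dose,
    using a Hill (sigmoid Emax) dose-response curve.

    ed50: the dose producing a 50% response rate.
    hill: steepness of the curve.
    Both parameters are illustrative assumptions, not real drug data.
    """
    return dose ** hill / (ed50 ** hill + dose ** hill)

# Compare candidate doses before committing trial arms to them.
for dose in (10, 25, 50, 100, 200):
    p = response_probability(dose)
    print(f"dose {dose:>3} mg -> predicted response rate {p:.2f}")
```

A model like this lets planners see, before enrolling patients, that the lowest doses are predicted to do little while the highest add only marginal benefit, which is the sort of early triage Muntendam describes.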
"We can open up that black box" to help drug companies decide earlier what doses, compounds, and types of patients would be most effective in trials based on "mathematical algorithms and lots of computation horsepower," Muntendam says. The company is using a 32-node, 64-CPU cluster running Red Hat Linux and Sun's Grid Engine, with custom-written and third-party statistical algorithms and tools. It also has another 10 servers running Windows, Linux, and Solaris, as well as around 10 Oracle databases. An early decision to stop testing a specific drug dose not likely to make it to market can in the long term reduce the total cost of a trial by millions of dollars, Muntendam says.
"This also minimizes patient exposure to drug doses that aren't likely to make it to market," says Jerald Schindler, president of Cytel Pharmaceutical, a company that works with drug companies in the development of new products.
Technologies such as sophisticated analytic, pattern-recognition, and decision-support software that help scrutinize data from millions of sources could become some of the most important medical instruments. They could let doctors provide patients with care that's best suited for their particular needs, genetic makeup, medical backgrounds, and other very individualized factors.
Without a national IT infrastructure, it's not yet possible to have widespread use of evidence-based best practices that could generate noticeable improvements in treatment. "The key tipping point will be in getting the national health IT infrastructure in place," IBM's Davis says.
There also are security, ethical, and privacy issues to be resolved. Widespread use of E-health records makes them subject to electronic threats such as worms, viruses, and hackers, problems that don't affect paper-based records. Another big concern revolves around the ethical use of this information, such as whether insurance companies would become more selective in covering individuals who are known to have a genetic predisposition for developing certain diseases if they had access to such information.
Some large employers that advocate the use of E-health records say they've already begun grappling with those kinds of ethical questions. For instance, IBM, which last fall began providing 150,000 employees and their families access to E-health record systems, has pledged not to consider health data when making hiring decisions.
That's one among the many difficult issues that business, medical, and tech professionals will have to wrestle with as the use of IT becomes more widespread in health care and as technology begins to tell us more about ourselves than we ever knew before. The prospect of better treatments, longer lives, and lower health-care costs is appealing, and the road to those goals is clear. But it's taking longer than expected to get there.
Continue to the sidebar: A Peek Inside The Brain