
Fraud on Rise: Faking It in Biomedical Research

Times Medical Writer

Reports of irregularities in biomedical research--ranging from outright fakery and plagiarism to poor record-keeping--have increased markedly in recent years, shaking leading research institutions and triggering a sharp debate on the extent of the problem and the need for corrective measures.

The National Institutes of Health, a government agency that funds a third of all biomedical research in the United States, now receives two or three serious allegations of misconduct a month; as recently as seven years ago such reports were almost unheard of, according to Mary L. Miers, the NIH misconduct policy officer.

These charges are about equally divided among those that are baseless, those that prove to be true and those that are true to some degree but turn out to be less serious--such as when a researcher has not kept records of his experiments but did in fact conduct them, Miers said.

Shortcuts, Sloppiness

“The most frequent finding is not psychopathology or outright dishonesty, but accumulated cutting of corners and sloppiness,” said Dr. William Raub, the institute’s deputy director, at a symposium on fraud in science held last year at the University of California, Davis, Medical Center. “(Researchers) will admit incompetence but say, ‘I am not a crook.’”

The most recent serious incident surfaced last November, when Harvard researchers retracted their just-announced discovery of a new molecule that stimulated the human immune system and was seen as crucial to understanding the body’s ability to ward off disease. The molecule, they admitted, did not exist.

Such incidents can lead not only to wasted time and money but also to diminished respect for the quality of research in general, and perhaps even to harm to patients whose treatments are based on faulty information.

In 1986, after four years of planning, the NIH implemented new procedures for dealing with scientific misconduct. These include guidelines for conducting investigations, requirements that institutions receiving federal funds more closely monitor their own scientists, and safeguards for the rights of those accused.

“This is really a step forward,” said Patricia K. Woolf, a sociologist affiliated with Princeton University who monitors fraud in science. “It had been extremely uncomfortable to indict a colleague accused of misconduct. Now there is an orderly way of doing it.”

But others say even tighter federal controls may be necessary, including more stringent record-keeping requirements and spot audits of researchers.

Nobody is really sure how often scientific misconduct occurs, because few studies have been performed to find out. The National Academy of Sciences’ Institute of Medicine is expected to launch a major study of the issue soon, but details are not available.

While wholesale fraud is believed to be rare, subtle abuses appear to be more frequent and probably far more damaging because they are less likely to be detected. Scientists have coined colorful expressions to describe some of these abuses, such as “honorary authorship” for the practice of padding the list of authors on a research paper and “salami science” for the practice of reporting results in several papers when one article would suffice.

The allegations also have focused attention on the often extreme pressures on scientists who seek prestige, large research grants and academic promotions.

The problem “begins with fierce competition in college, excessive emphasis on grades and the rise of students who become 22-hour-a-day study machines,” according to Dr. Robert G. Petersdorf of the University of California, San Diego, School of Medicine. “Medical science today is too competitive, too big, too entrepreneurial and too much bent on winning.”

Not an Excuse

But such pressures do not excuse misdeeds, he said, writing in the Annals of Internal Medicine last June: “Those who have chosen science for a career have, in a sense, taken an oath to discover and disseminate the truth, much as physicians have sworn the Oath of Hippocrates.”

The traditional view holds that science is ultimately self-correcting because researchers eventually discover the honest mistakes--or frauds--committed by others.

In the normal process of scientific research, there are many steps along the way at which mistakes or misdeeds can be picked up. These include the review of research grant proposals and the evaluation of scientific papers by independent experts. Also, when researchers undergo evaluation for promotion or tenure, their work is closely reviewed.

Furthermore, researchers usually present their findings at scientific meetings, where they can be questioned by colleagues. Finally, significant discoveries often prompt confirmation efforts by other researchers; when the findings cannot be confirmed, doubts arise.

But instances of apparent deceit going undetected for many years abound, as outlined by journalists William Broad and Nicholas Wade in their 1982 book, “Betrayers of the Truth,” and by Alexander Kohn of the Tel Aviv Medical School in his recently published book, “False Prophets: Fraud and Error in Science and Medicine.”

Published the Best

One scientist who apparently only published his best results was Robert A. Millikan, the American physicist who won a Nobel Prize in 1923 for measuring the electric charge on an electron as well as for pioneering work in photoelectricity.

In the early 1900s, Millikan was engaged in a sharp debate with Felix Ehrenhaft of the University of Vienna: Millikan believed that all electrons had a single charge, while Ehrenhaft believed that sub-electrons carrying fractional charges existed. Most physicists of the day eventually became convinced that Millikan was correct.

In the 1970s, Harvard historian Gerald Holton took another look at the controversy. By examining the original notebooks on which Millikan based a key 1913 paper, Holton discovered that Millikan had reported only 58 measurements of electron charges out of a total of 140--despite his assurances in the text of the paper that the data represented all experiments conducted during 60 consecutive days.

Ehrenhaft, meanwhile, had published all his readings--even those that did not support his theory.
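
To see concretely why selective reporting matters, here is a minimal, purely illustrative Python sketch (the numbers are invented, not Millikan’s actual data): keeping only the readings that cluster near the expected value makes a measurement look far more precise than the full record warrants.

# Purely illustrative: hypothetical numbers, not Millikan's measurements.
# Reporting only the values that agree best with the expected result makes
# the scatter (and hence the apparent precision) look better than it is.
import random
import statistics

random.seed(0)
TRUE_CHARGE = 1.602e-19  # accepted charge of the electron, in coulombs

# Simulate 140 noisy measurements of the charge.
measurements = [random.gauss(TRUE_CHARGE, 0.05e-19) for _ in range(140)]

# "Selective reporting": keep only the 58 values closest to the sample mean.
center = statistics.mean(measurements)
best_58 = sorted(measurements, key=lambda x: abs(x - center))[:58]

for label, data in (("all 140", measurements), ("best 58", best_58)):
    print(f"{label}: mean = {statistics.mean(data):.4e} C, "
          f"std dev = {statistics.stdev(data):.1e} C")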

Enter the Quark

In recent years physicists--using more accurate equipment than was available to either Millikan or Ehrenhaft--have accumulated evidence suggesting that Ehrenhaft may have been correct after all. Most scientists now believe that protons, neutrons and other subatomic particles are made from smaller units called quarks, which carry fractional electric charges.

Other celebrated scientific discoveries have turned out to be out-and-out hoaxes. One example is the Piltdown skull, unearthed in an English gravel pit in 1908 by Charles Dawson, a lawyer who collected fossils as a hobby.

In 1912, Dawson and Arthur Smith Woodward, the keeper of the department of geology at the British Museum, announced the discovery at a meeting of the Geological Society of London. Many took the skull as proof of a missing link between apes and humans--and as evidence that the first human was British as well. But some zoologists suspected a fraud from the start.

Skulduggery Unearthed

The debate was not settled until the 1950s, when researchers used new dating techniques to show that the skull consisted of an ape jaw with filed-down molars and part of a human skull that had been stained to appear old.

Despite the increasing sophistication of research methods and the procedures for reviewing scientific work, instances of fraud continue to crop up.

In 1974, William T. Summerlin achieved international notoriety at the Sloan-Kettering Institute for Cancer Research in New York City by coloring white skin grafts black with a felt pen to fake the results of skin transplant experiments in mice.

Like the Summerlin case, many of the more recent instances of fraud have involved young researchers in large laboratories at prestigious institutions, where the pressures to publish and fear of failure may have been extreme.

Typically, directors of such laboratories may employ dozens of junior scientists and technicians and be responsible for millions of dollars in research grants each year. As a result, they often spend much of their time lecturing, administering and raising funds, leaving less time to supervise ongoing research. The young researchers in these laboratories also may be highly ambitious to get ahead and establish laboratories of their own.

Such pressures appear to have been a factor in the case of Claudio Milanese, an Italian researcher at Harvard’s Dana-Farber Cancer Institute. His “discovery” of a new molecule that stimulated the immune system was reported in Science magazine in March, 1986.

Missing Link

News stories hailed the discovery of “interleukin-4A” as a crucial missing link in understanding the body’s ability to fight disease. The molecule appeared to belong to the same group of substances as interleukin-2, which is being used experimentally to treat patients with cancer.

After the paper was published and Milanese returned to Italy, his co-authors, including his supervisor, Ellis L. Reinherz, continued to study the new molecule. But they were unable to reproduce his results.

Milanese eventually conceded to Science magazine that he had manipulated the data. That paper--as well as a related paper in the Journal of Experimental Medicine and several other manuscripts awaiting publication--was retracted in November.

An investigation of the case by Harvard officials to determine the extent of the data-fudging is continuing.

Treating the Retarded

Another instance of alleged “serious scientific misconduct” involves Stephen E. Breuning, 34, a prominent psychologist who studied the treatment of mentally retarded children with behavior-control drugs, according to an investigative panel of the National Institute of Mental Health.

Since the mid-1970s, his studies at the University of Pittsburgh and elsewhere have suggested that stimulant drugs, such as amphetamines, are effective in controlling aggressive and self-destructive behavior in some mentally retarded children. His findings have influenced some physicians to prescribe these stimulants.

The National Institute of Mental Health, which funded some of Breuning’s research, began investigating Breuning in December, 1983, after a co-worker raised questions about the validity of his work.

After a three-year probe, the institute’s review panel recently concluded that Breuning actually had carried out very little of the work he described. Breuning, who now works at the Polk Center in Polk, Pa., has denied the charges.

One of the most analyzed cases of scientific misconduct in recent years involved Dr. John Darsee, an ambitious young heart researcher.

Between 1978 and 1981, Darsee, while at the Emory University School of Medicine in Atlanta and at Harvard Medical School, fabricated data that formed the basis for more than 100 scientific publications on heart disease, according to investigations by university committees and the National Institutes of Health.

His fraud was discovered after co-workers observed him forging data on heart attack experiments in dogs by mislabeling tracings of their heart rhythms. Subsequently, the validity of most of his research has been called into question. Darsee initially denied falsifying any data but later conceded that he had and retracted many of his publications, according to the National Institutes of Health.

The Darsee case became especially controversial because it raised questions about the culpability of 47 other researchers at Harvard and Emory who were listed as his co-authors, including Dr. Eugene Braunwald, Darsee’s lab chief at Harvard and one of America’s most prestigious cardiologists.

Braunwald’s alleged failure to adequately supervise Darsee was cited by a National Institutes of Health panel as one reason the falsifications were not detected sooner, a charge the researcher has denied.

The role of these other researchers has also been scrutinized by Walter W. Stewart and Ned Feder of the National Institutes of Health, two nerve cell researchers who have taken an interest in research malpractice. Indeed, Stewart and Feder’s study of the performance of these other researchers has acquired a notoriety in scientific circles equal to that of the Darsee case itself.

Numerous Errors

After the scandal broke, Feder and Stewart reviewed Darsee’s research papers and found an abundance of errors and misleading statements--an average of 12 per paper--which they said the co-authors should have caught. These were in addition to the fabricated data.

In one family tree, for example, a 17-year-old was listed as the father of children aged 8, 7, 5 and 4. In other papers, data tables contradicted one another. And according to their study, when data was reused from other studies, the practice was often not explicitly acknowledged.

There also were many “honorary authors”--people whose names were listed on the papers even though they apparently had not made any substantial contribution.

In total, three-quarters of the researchers were authors of papers that lapsed from “generally accepted standards,” Feder and Stewart concluded.

Publication Delayed

The publication of their study, completed in 1983, was delayed for more than three years, in part because lawyers for some of the researchers threatened to sue for libel.

The study finally appeared in a much-edited and toned-down version in the British journal Nature in January, 1986, along with a rebuttal by Braunwald and a commentary by Nature’s editor, John Maddox. Braunwald disputed most of the allegations against Darsee’s co-authors, saying that at most any defects were “wholly trivial.” Maddox pointed out the study’s shortcomings but concluded that “shocking though it may seem . . . errors of all the kinds listed are far from being rare.”

There is strong disagreement within the scientific establishment about whether the federal government, journal editors and leading researchers have done enough to safeguard scientific integrity.

Under current National Institutes of Health regulations, institutions receiving federal funds have 120 days to complete their own investigations from the time they first notify the institute of suspected misdeeds. The institute often conducts its own investigation as well. One criticism--acknowledged by federal officials--is that it sometimes takes the institute several years to complete its probes.

If misconduct is confirmed, the institute has a variety of sanctions at its disposal, ranging from letters of reprimand and revocation of federal research funds to requirements for closer scrutiny of ongoing studies.

Federal Agencies

In rare instances, in which criminal or civil charges are being considered, cases may be referred to other federal agencies. In the Darsee affair, for example, the scientist was barred from receiving federal research funds for a 10-year period and Harvard had to return $120,000 to the institute.

The institute now maintains a confidential nationwide list of scientists and institutions under investigation or restrictions because of past misconduct. In a related development, the Index Medicus, the world’s largest bibliography of medical literature, has begun keeping track of retractions of scientific papers. Since 1984, there have been 36.

Some feel that the new institute regulations and the peer review system--the pre-publication evaluation of scientific articles by independent experts--are sufficient. “There is no evidence that the small number of cases that have surfaced required a fundamental change in procedures that have produced so much good science,” insisted Daniel E. Koshland Jr., editor of Science, which published the fraudulent Harvard manuscript.

Unwanted Intrusions

Intrusive attempts to detect fakery, such as visits to laboratories by outside auditors to review records, may do more harm than good, according to Dr. Arnold S. Relman, editor of the New England Journal of Medicine. “Any attempt to police the system would probably create such an atmosphere of suspicion and distrust that the free spirit of scientific inquiry would be crushed,” he wrote in a 1983 editorial about the Darsee case.

But others say radical surgery to change some scientific practices is necessary. Peer review, they point out, is based on the presumption of honesty in research and has therefore offered little protection against fraud.

One view is that researchers should be required to retain their raw data and laboratory notebooks for several years after they publish their results and to make them available to other scientists.

Dr. Edward J. Huth, editor of the Annals of Internal Medicine, believes that practices like honorary authorships and duplicate publications can be curtailed by requiring all authors to affirm by signature their responsibility for the final version of a paper.

Dr. John C. Bailar, president of the Council of Biology Editors, says that authors also should be required to fully disclose potentially deceptive practices, such as selective reporting of results or failure to point out that results appearing to be significant may in fact be due to chance.
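
A short, purely illustrative Python sketch (hypothetical groups, not drawn from any study discussed here) shows what Bailar means: if twenty comparisons are run between groups that are in fact identical, roughly one will appear “significant” at the conventional 5% level simply by chance.

# Purely illustrative: both "groups" below are drawn from the same
# distribution, so every apparent treatment effect is noise. With a 5%
# threshold and 20 comparisons, about one will cross the line by chance.
import random

random.seed(1)

def permutation_p_value(a, b, n_perm=2000):
    """Two-sided permutation test on the difference of group means."""
    observed = abs(sum(a) / len(a) - sum(b) / len(b))
    pooled = a + b
    extreme = 0
    for _ in range(n_perm):
        random.shuffle(pooled)
        pa, pb = pooled[:len(a)], pooled[len(a):]
        if abs(sum(pa) / len(pa) - sum(pb) / len(pb)) >= observed:
            extreme += 1
    return extreme / n_perm

false_positives = 0
for _ in range(20):  # twenty hypothetical drug-vs-placebo comparisons
    drug = [random.gauss(0, 1) for _ in range(30)]
    placebo = [random.gauss(0, 1) for _ in range(30)]
    if permutation_p_value(drug, placebo) < 0.05:
        false_positives += 1

print(f"{false_positives} of 20 null comparisons look 'significant' at p < 0.05")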

Spot Checks Proposed

Others have proposed spot audits of investigators who receive federal funds, a procedure the U.S. Food and Drug Administration already uses to ascertain the accuracy of tests of new drugs.

About 12% of the FDA’s routine audits each year uncover “serious deficiencies,” and about seven researchers a year are disciplined or disqualified from further drug studies, according to a 1985 article in the New England Journal of Medicine by Dr. Martin F. Shapiro of the UCLA Medical Center and Robert P. Charrow of the University of Cincinnati College of Law.

“The audits act as a great deterrent for those who might be sloppy or fraudulent,” said Daniel L. Michels, director of the FDA’s Office of Compliance. “We are not finding horrendous numbers of studies we have to discard.”

One exception involved Dr. Wilbert S. Aronow, who was chief of cardiology at the Veterans Administration Hospital in Long Beach.

Aronow had a national reputation for testing new drugs on patients with congestive heart failure and was a member of the FDA’s advisory committee on cardiology.

But during a 1982 FDA audit, apparently just when his actions were about to be discovered, Aronow admitted that he had invented data to show that a drug decreased heart pain symptoms and that he had altered reports of chest X-rays in another study. He disqualified himself from new drug trials and resigned from the Veterans Administration.
