Posted on | April 29, 2016 | 1 Comment
Would you be willing to subject yourself to a 3.3% payroll tax (and your employer to a 6.7% payroll tax) to gain access to reliable, simple, universal health coverage – one that provides choice in and out of network, covers all citizens, and has drawn the active opposition of health insurers? That’s the $38 billion question facing Colorado voters in the near future.
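The combined levy amounts to a flat 10% payroll tax. A minimal sketch of the arithmetic (the salary figure is illustrative, not from the proposal):

```python
# Illustrative split of the proposed Colorado payroll tax:
# employees would pay 3.3%, employers 6.7%, for a combined 10%.
EMPLOYEE_RATE = 0.033
EMPLOYER_RATE = 0.067

def premium_tax(annual_salary):
    """Return (employee share, employer share, total) for a given salary."""
    employee = annual_salary * EMPLOYEE_RATE
    employer = annual_salary * EMPLOYER_RATE
    return employee, employer, employee + employer

# For a hypothetical $50,000 salary:
emp, er, total = premium_tax(50_000)
print(f"employee: ${emp:,.0f}, employer: ${er:,.0f}, total: ${total:,.0f}")
# employee: $1,650, employer: $3,350, total: $5,000
```

Scaled across Colorado’s total payroll, that 10% is what supporters project would raise the roughly $38 billion at stake.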
Colorado supporters arrived at this point through a rational process that began with Obamacare’s expansion of the state’s Medicaid rolls and the navigation of bronze, silver, gold and platinum options. But as the system attempted to move toward low-cost/high-quality goal posts – while contractually continuing to include insurer middle-men’s draw-downs and an embargo on drug price negotiation – all roads, supporters said, led toward universal care and universal management as the simpler, better, and (for almost everyone) cheaper approach in the long run.
A similar process was proposed this week in a JAMA article titled, “Toward an Integrated Federal Health System”. In it, the authors review the facts, including:
- The federal government spends $1.3 trillion a year (40% of all health care spending) on health care.
- This budget funds coverage through the Department of Health and Human Services, the Department of Defense, the Department of Veterans Affairs, and the Department of Homeland Security.
- The payments flow through a bewildering array of mechanisms and middle men – private insurers, private health professionals and organizations, and direct services to covered patients.
- The complexity is mind-boggling. For example, 42 programs exist across 6 federal agencies for ambulatory transportation of the elderly and disabled, each with their own rules.
- Drug payment levels vary widely. The DOD, for example, pays 67% more for generic drugs than the VA.
- Double payments are not uncommon. Approximately one million citizens are now simultaneously tapping into both VA and Medicare Advantage payments, with losses estimated at $3 billion a year.
- Brick-and-mortar duplication and inpatient underutilization are legendary.
- The DOD and VA electronic medical records are moving toward integration this year – which should allow interoperability and mobile/virtual service options to proceed.
- HHS expects that 50% of Medicare payments will have been converted from fee-for-service to “value-based models” (bundled payments with quality performance incentives) by 2018. Seven states will apply this approach to their Medicare Advantage plans beginning in 2017.
If all of this sounds complicated, multiply it by 50 states, and you begin to understand why supporters in a state like Colorado are seriously considering pulling the plug and going with a simpler universal solution.
Are we approaching some consensus? Will integration morph eventually into universal and cut the middle men out of the deal? Where are the consensus points emerging? Here are four areas:
1. Eliminate “fee-for-service” payment methodologies.
2. Move toward universal coverage, eliminating low benefit scam plans.
3. Integrate patient-focused EMRs regionally, then nationally.
4. Favor non-profit solutions and integrated delivery systems whose mission incorporates individual, population and community health.
Posted on | April 23, 2016 | No Comments
The medical journals these days are replete with analyses of the latest health reform measures, and their negative impact on the physician psyche. It would be easy to simply connect the dots, and say that physician discontent is the result of ill-advised organizational changes. But, in reality, this problem has plagued the profession for some time, and is existential in nature.
Over the past two decades, our health care system in the United States has been actively transforming. Health is rapidly becoming synonymous with reaching full human potential. Health care provision is increasingly being redefined as a right carrying with it responsibilities for individuals, families and community. Provision of care is now a collaborative effort with individual providers giving ground to health care teams, and consumers joining hands with providers in strategic health planning and mutual decision making.
The role of ‘professionalism’ in the training of physicians and in the delivery of care has been heralded by major scientific bodies including the AAMC, the Institute of Medicine, the ACGME and the ABMS. Their listings of desirable attributes in health care professionals are helpful. But absent the context of rapid environmental change, the modeling of new approaches to care emerging from both the consumer and provider sides, and the integration of the latest social science concepts that impact human planning, development and potential, physicians will predictably underperform in the modern world, fully realizing neither the professionalism they desire nor their leadership potential.
As a Petersdorf Scholar-in-Residence at the Association of American Medical Colleges (AAMC) in 2002, Dr. Thomas S. Inui opened his mind and heart to try to understand whether and how professionalism could be taught to medical students and residents. His thoughts on the topic, published under the title “A Flag In The Wind: Educating For Professionalism In Medicine”, are highly relevant to today’s medical educators and our nation’s health professional community.
After listing the profession’s ideal values and character qualities, he states:
“While we in medicine might see these as our lists of the desirable attributes of professionalism in the physician, as the father of an Eagle Scout I know that Boy Scout leaders use a very similar list to describe the important qualities of scouts: ‘A Scout is trustworthy, loyal, helpful, friendly, courteous, kind, obedient, cheerful, thrifty, brave, clean, reverent (respecting everyone’s beliefs).’ I make this observation not to descend into parody, but to make a point. These various descriptions are so similar because when we examine the field of medicine as a profession, a field of work in which the workers must be implicitly trustworthy, we end by realizing and asserting that they must pursue their work as a virtuous activity, a moral undertaking.”
Later in the report, he shares: “The processes of formation include experience and reflection, service, growth in knowledge of self and of the field, and constant attention to the inner life as well as the life of action. ‘Who am I becoming as I move towards this life of service?’ is a critical question in formation, as disciplinary acculturation and expertise increases. Acknowledging that the educational process in medicine changes – in some substantive sense – who we are as well as how we relate to others, may be the key to understanding why we need to be mindful, articulate, and reflective about the process.”
“Who am I becoming?” is the right question. But equally important (perhaps more so) is “Why am I becoming that?” In the same year that Dr. Inui was doing his AAMC fellowship, John Iglehart, founding editor of Health Affairs, interviewed Steven Schroeder, who had announced his coming retirement as CEO of the Robert Wood Johnson Foundation. Schroeder said, “If physicians and nurses, who are central to the operation of the system, however care is financed, are dissatisfied and feel undervalued, I grieve for that system because that is a system in trouble.” Here we see a shift away from “I” to “it”. It is the “system”, not an individual or even an individual’s teachers, that is “in trouble”. “Bad people or bad design?”, Deming, the father of re-engineering systems, might ask.
But increasingly, I believe that the systems that are evolving are largely a reflection of the current values of physicians and the organizations that represent physicians. Eli Ginzberg predicted this outcome thirty years ago in his classic article, “The Monetarization of Medical Care”. The recent man-made opioid epidemic – made possible in part by the AMA Federation’s liberal approval policies for “specialty organizations” in “pain management”, as well as the rapid-fire prescribing of OxyContin by the nation’s doctors and dentists – is proof positive that we have wandered far afield of our original mission, into a positioning so deeply conflicted by our own and others’ business interests that our identities as physicians and self-regulating ethical professionals have become fundamentally compromised. To my mind, the “Medical-Industrial Complex” now largely owns the soul of medicine. And for physicians to regain possession of their professional values, and quiet our inner voices of discontent, we will be required to do some serious soul searching, and exhibit a bit of backbone as well.
Posted on | March 30, 2016 | 4 Comments
Few on the planet remain unfamiliar with an infectious disease threat that was invisible to most a year or so ago – the Zika virus. Its association with microcephaly, and its original concentrated appearance in Brazil (home of the 2016 Summer Olympics), have created the image-driven news barrage that publicized the threat. All of the above has created a sense of urgency among scientists to discover and unleash a technologic solution.
The blood-sucking carrier of Zika is well known – Aedes aegypti. The mosquito spreads not only Zika, but also yellow fever, dengue fever and chikungunya, a miserable infection that attacks the joints. In short, there is little sympathy for the mosquito. But it is its association with birth defects that makes it a unique and pressing public health emergency, since at least half of the planet’s pregnancies are unintended, and exposure to Zika early in pregnancy carries a high risk of bearing a severely disabled child.
There are quite a few scientists out there who are experts when it comes to Aedes aegypti, and not surprisingly, there has been a range of views on how to halt its scourge. But nearly all lead back to the genetic structure of this mosquito, and to altering it in a manner that sterilizes the mosquito, limits its growth, or restricts its reach as a vector.
We tend to think of our scientific community as integrated and unified, especially when confronted with an urgent challenge of this magnitude. One envisions an emergency meeting in Bethesda, a defined action plan with timelines, and plenty of funding. But the truth is, and has been since 1980, that America’s scientists are an independent, entrepreneurial, competitive, and patent-conscious lot that can be difficult to herd. As Rockefeller University mosquito expert Leslie B. Vosshall put it to a New York Times reporter, when commenting on defining the bug’s genetic code, “For a long time, I think we all thought the map was somebody else’s job.”
Now there is an Aedes Genome Working Group, but it wasn’t pulled together in Washington. It began with a Twitter post from Vosshall that read, “The Aedes aegypti mosquito is infecting millions with #Zika and #Dengue, but we still haven’t put all the pieces of its genome together”.
The subsequent professional chatter led to a coalescence of experts who eventually managed to scrape together a bare minimum of funds to start the process. They weren’t starting from scratch – but almost. There was a genetic mapping of the mosquito back in 2007 – but it is so fragmented and compromised as to be relatively useless. To be clear, this is not an easy task. The Aedes has only 3 chromosomes in its nucleus, but they contain an estimated 1.3 billion “letters” in their DNA sequence.
The good news is that DNA sequencing technology has come a long way since 2007. Still, it’s a challenge, which is why the work group is pursuing three different approaches in parallel, not certain which will unlock the code fastest and most accurately. What all members agree on is that genetic mapping is key to addressing the challenge.
Finally, there is the issue of what to do with the map once you get it. Do you attempt to genetically engineer future sterility into the breed? Could you direct the mosquito to avoid biting humans, and engineer a preference for other animal species? What happens if the engineered gene jumps species, and escapes human control? Quickly then, science technology morphs into science policy.
There was a time when scientific progress was highly centralized nationwide, when any discovery partially funded by a federal grant became the intellectual property of the U.S. government. But this approach discouraged profit-seeking organizations from developing real-life applications for the discoveries.
In fact, by 1978, 28,000 scientific patents sat dormant on shelves in the U.S. Patent Office in Washington. On December 12, 1980, all that changed when outgoing President Jimmy Carter signed the Bayh-Dole Act, giving academicians and their institutions (and subsequent corporate investors) control over applied discovery profits.
The response was dramatic. While universities were granted 380 patents in 1980, that number soared to 3,088 by 2009. According to one estimate, the resultant impact on the nation’s Gross Domestic Product (GDP) reached $47 billion in 1996, and soared to $187 billion a decade later. Since 1980, 2,200 new companies have appeared and generated more than 1,000 new products. As important, the new technologies spawned entirely new industries in the United States, including biotechnology.
Twenty years later, The Economist commented: “Possibly the most inspired piece of legislation to be enacted in America over the past half-century was the Bayh-Dole Act of 1980….More than anything, this single policy measure helped to reverse America’s precipitous slide into industrial irrelevance.” But that very same publication rescinded its glowing portrayal just three years later, in an article titled “Bayhing for blood or Doling out cash?”
As that article states, “Many scientists, economists and lawyers believe the act distorts the mission of universities, diverting them from the pursuit of basic knowledge, which is freely disseminated, to a focused search for results that have practical and industrial purposes…it makes American academic institutions behave more like businesses than neutral arbiters of truth… Researchers (and particularly their minders in university patent-licensing offices) are increasingly reluctant to share materials and knowledge with others unless such sharing is accompanied by legal agreements about ‘reach-through’ royalties on potential findings and the right to restrict publication of results.”
And so the fate of women of childbearing age, at risk from Zika, relies on the good will, brilliance and drive of individual entrepreneurial scientists who somehow manage to discover each other…sometimes, as in this case, on Twitter. As our planet becomes smaller, and our problems larger and more complex, such a free-wheeling approach may be fatally flawed.
Posted on | March 29, 2016 | 1 Comment
J.Craig Venter Institute team
A decade or so ago, I had the opportunity to moderate an educational forum that featured Craig Venter. Venter was relatively fresh off the competitive race to define the human genome, a scientific battle that ended in a truce with the current NIH director, Francis Collins. After shaking hands, the two headed in opposite directions. Collins remained in government service, and Venter formed The Institute for Genomic Research, which later became the J. Craig Venter Institute in La Jolla, California.
From that base, he consulted with a wide range of corporate entities focused on synthetic biology – that is, taking genetically modified microbes and pushing out a range of products from petroleum to pharmaceuticals. His efforts were financially underwritten by the likes of British Petroleum and Exxon, to the tune of hundreds of millions of dollars. But designer organisms have met multiple obstacles, not the least of which has been the plunging price of oil.
But at his core, Venter is as much an explorer as he is an entrepreneur. When I asked him, on behalf of the audience, “What percentage of the knowledge do we currently possess to take optimum care of human health?”, his response without delay was “Less than 1%.” He is committed to exploring the 99%.
What is his secret? Nobel Prize winner Sir Richard Roberts claims he’s a “managerial genius” with a history of holding together large, highly specialized and integrated teams for decades in pursuit of elusive endpoints. Twenty years ago, he and his co-workers began to wonder how many genes are actually necessary to create a living organism – one, as Venter says, that can “live, eat, and self-replicate.” Humans have 20,000 or so genes made of 3 billion individual building blocks. But it has been known for some time that many of these wait on the sideline and are not an active part of the game of life. To get down to the simple essentials, you can either successively “edit out” genes of an existing organism and see what happens, or create a brand new form of life, building it block by block.
Venter, with his commitment to synthetic biology, chose a middle road. As a model, he began with the 901-gene M. mycoides, a microbe that lives in sheep. Through a series of experiments over a number of years, his team was able to identify 428 of these genes as non-essential for the life of the microbe. They then created a brand new organism with just 473 genes and roughly a half million building blocks, and booted it up. The lifeform, dubbed JCVI-syn3.0 or Syn3.0 for short, not only came to life but ate, grew and replicated in the specialized environment Venter’s lab had created.
Still, the team, in its publication, was quick to dampen expectations. First, the new creature needs the specialized environment to survive. Second, of the necessary 428 genes, the scientists have no idea what 149 of those genes (roughly 1/3) do, and why they are essential for life. Obviously, part of the goal is to explore those unknown functions, and in the process better understand the workings and evolutionary history of living cells.
But in addition, as it turns out, size matters when it comes to synthetic biology. And bigger is not better. As with the Model T, it pays to have a simple chassis with few moving parts, high reliability, and efficient productivity. Most scientists this week agreed that “minimal-DNA microbes” have a bright future that will likely include, with the help of selectively added genes, production of a wide range of products. For Venter and his colleagues, their “designer organism”, like other micro-tools embraced in Cambridge, Mass, or Silicon Valley, will eventually be a ubiquitous presence on biologic product lines everywhere.
Posted on | March 28, 2016 | 1 Comment
Two years ago, the Swedish Hockey League made medical history – but not in a good way. 288 players from the 12 teams contesting the title in the 2012-2013 season agreed to participate in a head trauma medical study. An unfortunate 35 sustained concussions, and of these, 28 completed the required blood testing at 1, 12, 36, and 144 hours after injury. In each case, their blood was tested for specialized protein biomarkers. Two of those markers, total tau and S-100 calcium-binding protein B, consistently rose and were confirmed to be associated with acute axonal and astroglial injury. In addition, the level of the rise correlated positively with the extent of the injury and the subsequent recovery of function.
The study was a breakthrough. It proved that traumatic injury of brain cells could be detected by a simple blood test, because injured cells released stress proteins that crossed the blood-brain barrier and entered the general circulation.
Medical science, especially diagnostic medical science, never exists in a vacuum. In the public light of recent documentaries, investigative reports and feature films, the tragic outcomes of NFL players with a history of traumatic injuries have been on full display. And parents of young children and high schoolers in competitive sports of all types have wasted no time raising the question, “Are our children safe, or at least safe enough to take on the risk of participating on a sports team?”
Emergency medicine and sports medicine specialists weighed in with data. About 250,000 children suffer traumatic brain injuries a year, the majority associated with sports. The level of injury is highly variable, and competitive sports associations, working with sports medicine specialists, have agreed on protocols for on-field evaluation and acute response to injuries. But most also agree that the evaluative measures (a basic neurologic examination and questions to assess balance, level of consciousness and orientation) are a very blunt tool.
But what if there were a blood test, administered on the field, that could detect significant blunt-trauma damage to the brain? Four months ago, an NIH-funded emergency medicine researcher from Florida reported the results of just such a test. Dr. Linda Papa and her team described the results in 152 children who had suffered sports-related head trauma and had received both CT scans and a blood test for glial fibrillary acidic protein (GFAP). Glial cells surround brain neurons, and the protein is released with injury and finds its way into the general circulation.
The blood test was found to correlate with CT findings 94% of the time. In addition, levels of GFAP rose with severity of injury. The researchers left no doubt where they are heading with this. Papa said, “The idea is to get a point-of-care test that could be used on the field, to help the coaches, the trainers and the athletic directors, make a decision then and there about whether the child should go back to play.”
But she may be under-estimating the full impact of this test. Because it is quite likely that the question most of America’s increasingly risk-averse parents will ask is not “whether their child should go back to play”, but rather whether the child should have played in the first place. And it’s also likely that elevations will be a common finding not only in football, but also in a wide range of other sports including soccer, basketball, track and field, gymnastics and many more.
Posted on | March 23, 2016 | 1 Comment
OK. Let’s just all admit it. Al Gore was right – even if the truth was inconvenient at the time. For most of the civilized world, the present day truth is not only inconvenient, but also incontrovertible, inconceivable (in its potential destructive effects), and increasingly (almost) inevitable.
Most of the emphasis has been on the human creation and release of carbon into the atmosphere. Policy makers have focused on rules and incentives to encourage humans to act, well, more human. That hasn’t worked. Nor has public education, threats, or even perceptible increases in weather-related disasters. Some have sought refuge in possible technologic advances – say, new alternative, less-polluting energy sources. But the economic and political clout of fossil fuel producers, and the real-time energy needs of emerging economies around the globe, have for the time being counter-balanced these progressive efforts.
But wait. Don’t give up on human ingenuity just yet. Maybe we’ve been looking for sanity in all the wrong places. Rather than focus on “carbon in” (which we need to continue to pursue with energy and determination), perhaps we should spend equal time on “carbon out”.
We’ve spent so much energy in associating the word carbon with a mountain of dirty coal that we have almost lost sight of the fact that this atom is the fundamental building block of all life, and of the planet we inhabit. It is anything but solid and static. It’s a highly mobile cycle – moving in and moving out constantly. It anchors the geosphere (soil), the hydrosphere (water), the atmosphere (air), and the biosphere (living organisms).
For the first time, NASA has launched a satellite whose purpose is to visualize the carbon cycle in action worldwide. What is already clear is that carbon release is highest, as you would expect, in the northern hemisphere. But it moves fast and far with the trade winds. It accumulates in highest concentration during the winter months, when plants lie dormant and photosynthesis, which transfers atmospheric carbon into plants and soil, slows.
Carbon sinks come in many shapes and sizes – oceans, forests, wetlands, undisturbed soils, grasslands and more. But the most important of these are the oceans. Their capacity to bind carbon is 50 times greater than that of the atmosphere. They alone absorb 48% of all atmospheric carbon – but that number is rapidly declining.
The movement and absorption of carbon in ocean water is affected by water temperature, currents, and the activity of biologic species through photosynthesis and respiration. In general, the lower the ocean temperature, the more carbon the water can hold. Northern oceans are colder and possess downward-moving currents, where the deeper water is colder than the surface water. The net effect for carbon is a “solubility pump” or mixer, which delivers carbon to deep storage areas in the ocean, where it lies relatively dormant. In contrast, warmer water favors upward currents and the release of stored carbon into the atmosphere.
But there is another piece to the puzzle, and it’s called the “biological pump”. When carbon is absorbed into the ocean, very little of it remains in the form of carbon dioxide. Instead it takes on a variety of dissolved inorganic forms captured in living plankton or calcites, which move slowly but steadily downward. The carbon remains trapped in these living creatures until they die and decay, releasing carbon dioxide in the process.
The average pH of today’s oceans is about 8.1, down roughly 0.1 units over the past 100 years – a decline that, because pH is a logarithmic scale, represents about a 30% increase in acidity. Maintaining the ocean’s slightly alkaline pH is essential for optimum marine life. But there is more to it than that. Acidification destroys the shells and outer protective layers of plankton, coral, crabs, clams, and many other species. Lose them, and you disable the “biological pump”.
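Because pH is a logarithmic measure of hydrogen-ion concentration, a seemingly small pH shift masks a large change in acidity. A minimal sketch of that arithmetic (the pH values used are the commonly cited figures for the modern ocean):

```python
import math

def acidity_increase(ph_before, ph_after):
    """Percent increase in hydrogen-ion concentration when pH falls
    from ph_before to ph_after, given [H+] = 10**(-pH)."""
    return (10 ** (ph_before - ph_after) - 1) * 100

# A century-long decline of ~0.1 pH units (roughly 8.2 down to 8.1):
print(f"{acidity_increase(8.2, 8.1):.0f}% more acidic")  # ≈ 26%
```

A full 30% increase corresponds to log10(1.3) ≈ 0.11 pH units – which is why a drop that looks tiny on the pH scale is so consequential for shell-forming organisms.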
All of the above is why some scientists are advocating incentives that focus on new technologies to enhance carbon capture. Some may focus on improving the viability and functioning of ocean solubility and biologic pumps. Others might reward farming approaches that conserve undisturbed soil and wetlands, or expand forest cover. And others might encourage new technologies that attract and capture carbon, and bury it deep in our earth or oceans, or beyond Earth’s atmospheric borders.
Obviously we need to focus on both over-release of atmospheric carbon and under-removal of the substance. But when it comes to short term and urgently needed actions, we should use our resources wisely, and invest where scientists believe there is the greatest potential for immediate success.
Posted on | March 17, 2016 | 1 Comment