Wednesday, October 27, 2010

Brainwashed into pleading guilty: rights denied to one are denied to all.

This has been all over the news, but it's enough of an outrage (and gross violation of human rights) that it seemed the perfect opening salvo for this born-again cynic.

Omar Khadr was captured in 2002 as a fighter for al-Qaeda after killing an American soldier with a hand grenade at the age of 15.  He has spent the majority of the last eight years imprisoned at Guantanamo Bay--over a third of his life--and, having pleaded guilty, is facing at least another eight years behind bars.

As a fighter for a "terrorist" organization, the United States government does not consider Khadr to be a prisoner of war.  According to international law, prisoners of war are entitled to combatant immunity and may not be prosecuted by their captors for legitimate acts of war (e.g. killing an enemy soldier in combat).  It is only Khadr's extra-legal status that allows the United States government to prosecute him.

The question of Khadr's status hinges on whether captured al-Qaeda fighters are to be considered (in the eyes of international law) "lawful" or "unlawful" combatants.  On the one hand, terrorist organizations like al-Qaeda are large multi-national, non-sovereign, criminal (therefore, unlawful) organizations; on the other hand, the organizational structure and tactics of certain pockets within these organizations mirror those of "lawful" guerrilla movements--entitling captured fighters to POW status per article 4 of the third Geneva Convention.

Obviously, the senior leadership of al-Qaeda is responsible for atrocities that lie far outside the legal definition of "acts of war".  Indeed, in addition to international terrorism, there is evidence that al-Qaeda is engaged in international drug smuggling, money laundering, and weapons dealing.  However, the persons involved in these activities are in violation of international law and, as such, may legally be prosecuted (for these activities) upon capture regardless of POW status.

Additionally, it is highly unlikely that the average al-Qaeda fighter on the ground in Afghanistan is even aware of, or (if they are aware) even peripherally involved in, the global crime syndicate they claim to represent.  In the most reductive sense, these fighters represent a militant organization that, at least in Afghanistan, finds itself in direct conflict with the military of a sovereign nation--a military opposing them under the auspices of war.  It is, therefore, not unreasonable to consider fighters captured in such conflict prisoners of war entitled to all the rights guaranteed by that status.

As for Omar Khadr, it is unlikely that a fifteen-year-old boy would be fully cognizant of the potential ramifications of joining al-Qaeda.  Furthermore, given his age and stated family history, it is likely that he was coerced into joining al-Qaeda and was not affiliated with that organization entirely of his own free will.  When he was captured, he had been wounded in a gunfight between US forces and al-Qaeda militants; when Khadr threw the grenade that killed a US soldier, he was under fire.  Killing the soldier was thus not an act of cold-blooded murder, but an act of self-defense against an armed attacker--an attacker acting as a combatant in the execution of a declared war.

Considering the length and conditions of Khadr's captivity, his guilty plea is nothing more than a surrender.  He is a victim of circumstance who poses no real threat to the people of the United States, yet the people of the United States have ruined his life.  Regardless of his statements in court, he was branded with the identity of a terrorist during his most impressionable years--any ideological statements he makes now are the product not of al-Qaeda, but of the ordeal he has endured over the past eight years.  His life now amounts to nothing more than that of a political pawn whose rights were disregarded in the name of "freedom".

Tuesday, October 26, 2010

The Cynic Returns!

After a three-month hiatus (necessitated by school, the MCAT, med school apps, etc.), I have decided to return to the world of blogging!

Having received constructive feedback about my previous entries (thanks to all of you who, at my incessant urging, read them), I have decided to institute a few changes in the way that this blog is kept:

  • Rather than long-winded, technically worded entries, posts will be shorter and more numerous.
  • There will be fewer entries babbling about science, and more about politics and current events.
  • As always, comments will never be moderated, so long as they do not contain obscenities.
That said, let's get cynical!

Monday, August 2, 2010

Pressure Washington to Sign the Global Cluster Bomb Convention!

First developed by the Germans for use in World War II as an airborne anti-infantry weapon, cluster bombs of various designs have become a standard part of the modern military arsenal.

The basic idea behind cluster bomb design is simple. Rather than comprising a single large explosive intended to be dropped on a specific target, they are designed to disperse several smaller explosives over a larger, less concentrated, area.

Cluster-bombing strategies have proved useful not only against infantry, but also for knocking out an enemy's essential infrastructure (e.g. pockmarking runways with hundreds of small craters to render them unusable), and laying minefields.

The danger posed to civilian bystanders by any weapon designed to distribute a non-specific swathe of destruction over a large area is obvious. Indeed, since their introduction, the use of cluster bombs has accounted for a disproportionately large number of civilian fatalities when compared to other, more "targeted", munitions. Even after a particular war has ended, unexploded "bomblets" and aerially dispersed minefields have been known to claim scores of innocent lives.

Yesterday, an historic international treaty banning the use of cluster bombs, the "Convention on Cluster Munitions", officially went into effect. As of this writing, 108 countries have signed the convention, of which 38 have ratified it. While, with the exception of the UK, none of the countries that have ratified it would be considered "major" global military powers, its signatories collectively represent about half of the world's population.

Unfortunately, of the 14 countries known to have used cluster bombs in recent history, only eight have signed the convention and, of these, only two have ratified it. The six countries that have both used cluster bombs and abstained from signing the treaty are Eritrea, Ethiopia, Georgia, Israel, Russia, and the United States. Among all of the "free" and "democratic" nations of the first world, only Israel and the United States have not signed the convention.

The failure of both Israel and the United States to sign this convention is both a humanitarian and a foreign-relations travesty. While, given its sometimes tenuous position on the world political stage, Israel's failure to sign is perhaps understandable if not forgivable, the failure of the United States to sign the convention is virtually an admission of guilt to the general disregard for civilian lives alleged by so many of the United States' enemies.

The era of American hegemony fueled with blood must come to an end if the United States is to maintain its position of global political and economic influence. From the Korean War to Iraq, hundreds of thousands of civilians, and over a hundred thousand American soldiers, have died to protect American political interests abroad. This state of endless war has only fanned the flames of dissent sown throughout the world by the United States' enemies. What better way to douse those flames than demonstrating to the world a newfound commitment to preventing unnecessary deaths?

In order for the United States to ratify a treaty, it must be approved by a two-thirds super-majority of the Senate and signed by the president. Accordingly, it now falls to the people to pressure their senators and President Obama to add the United States to the list of countries agreeing to refrain from using such weapons.
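As a quick sanity check on the two-thirds threshold mentioned above (assuming the current 100-seat Senate, a figure not stated in the post itself):

```python
# Arithmetic behind the treaty-ratification threshold: with a 100-seat
# Senate (assumption: current seat count), a two-thirds super-majority
# requires at least 67 senators voting in favor.
import math

SENATE_SEATS = 100
votes_needed = math.ceil(SENATE_SEATS * 2 / 3)
print(votes_needed)  # -> 67
```

In other words, even a 66-vote majority would fall one vote short of ratification.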

Thus far, there has been little publicized public effort to do this. In the next day or so, I will create a petition, draft letters to the president and my state's senators, and post them here for sufficiently motivated readers to copy and send themselves.

Sunday, July 25, 2010

This Made Me Laugh...

Age of Autism is sponsoring a contest to "win" lunch with "Doctor" Andrew Wakefield...

While I have no money to bid in an auction, I think that a coalition of legitimate scientists should band together and try to win it--if only to spend the awkward lunch hour grilling him on medical ethics and sound science practice.

Wednesday, July 21, 2010

Vaccination: A Love Story

(Dear reader, I apologize in advance for the rather cliché topic of this post...I'm just fed up with all of my comments getting moderated off of Age of Autism.)

Perhaps the most pernicious front of the culture war is the anti-vaccination movement. Driven by charismatic celebrities such as Bill Maher and Jenny McCarthy, an army of soccer moms, and a handful of ethically dubious fringe scientists, the "anti-vaxers" represent one of the most immediate extant threats to general public health in the developed world. From its modern beginnings in the early 1990s to the present day, the movement has accrued a cult following of otherwise well-meaning people that has done nothing but hinder the progress of science and cause unnecessary deaths. Contributing to the movement are both a general lack of understanding of vaccine science among the public and clinicians alike, and a not-wholly-irrational public distrust of the pharmaceutical industry. Addressing this issue will require effective and persuasive communication of vaccine science to both the general public and practicing clinicians.

As a safe and effective preventive countermeasure to infectious disease, vaccination is far from a modern concept. Primitive vaccination protocols can be found in ancient Ayurvedic texts from India, and the procedure was widely practiced in the Muslim world before it was introduced to Western medicine by Lady Mary Wortley Montagu and (more famously) Edward Jenner in the 18th century. While the technology of vaccine production and delivery has changed over time, the basic concept remains: a specific immune response can be "primed" in a person to render immunity to a pathogen prior to exposure. In order to properly explain the mechanism at play here, it is first necessary to briefly explain how immunity works.

In the most reductive sense, the human immune system differentiates between things that are supposed to be in the body (e.g. one's own serum proteins and cells) and things that are not (e.g. viral coat proteins, bacteria, and even cancerous cells). To accomplish this task, and to eliminate those items that fall into the latter category, human immunity functions through two broad mechanisms. Innate immunity is just that--innate: upon contact with any foreign biotic material, it functions to quickly and efficiently eliminate it. Sometimes, however, this non-specific first line of defense is breached and, in the case of infectious disease, the body needs to break out the heavy artillery.

The components of adaptive immunity represent the body's version of guided missiles and surgical strike commandos. Upon exposure to an antigen, macrophages (the "storm troopers" of innate immunity) present the antigen to adaptive immune cells and trigger proliferation of cells specific to that antigen. These adaptive immune cells fall into two categories, B-cells and T-cells. B-cells produce antibodies, proteins capable of recognizing a specific antigen and facilitating its destruction; once the body starts producing antibodies against a specific antigen, it will continue to do so--granting immunity--throughout life. T-cells are a bit more diverse, and fall into several subcategories, but their general role is to recognize and kill infected and pathogenic cells. Following exposure to a specific antigen, "memory T-cells" specific to that antigen persist throughout life and can proliferate quickly to mount a specific response in the event of re-exposure. Vaccines induce adaptive immunity to a particular pathogen so that, upon exposure, the body already has a specific response.

Modern vaccines fall into five broad categories: attenuated, killed, subunit, toxoid, and conjugate.

Killed and attenuated vaccines are technically the simplest to produce, and the earliest vaccines generally all fall under these two categories. These vaccines work by introducing killed (noninfectious) or weakened (attenuated) versions of the pathogen into the body, thereby simulating infection and inducing an adaptive immune response. While certain attenuated vaccines may cause infection in immunocompromised individuals, and many of these vaccines must be refrigerated to be kept "fresh", they have become among the safest and most reliable countermeasures to infectious disease in use today.

Subunit, toxoid, and conjugate vaccines work by introducing biomolecules specific to a given pathogen rather than the pathogen itself. Vaccines in this category can be used to develop an adaptive immune response against bacterial lipopolysaccharides and even small organic molecules. Because, with the exception of some toxoid vaccines, recombinant DNA technology is required to produce them, vaccines in these three categories did not enter wide-spread clinical use until the latter half of the 20th century. Because they contain no potentially infectious particles, these vaccines pose no risk of causing infection. Additionally, some of them can even be stored "dry", greatly increasing their shelf-life. These vaccines are highly safe, with the clinical trials for a few of them producing no statistically significant life-threatening effects.

Re-enter the anti-vaxers. The most common ailment blamed on vaccination is autism. Much of the momentum for this belief was provided by a now-retracted and thoroughly discredited paper published in The Lancet in 1998. No peer-reviewed publication has ever presented even a hypothetical pathogenic mechanism for such a causal link; in fact, much of the anti-vaccination fervor has focused on pseudoscientific vagaries such as "toxic load" and "vaccine load."

"Toxic load" as defined by the anti-vaxers refers to the idea that trace chemicals or preservatives present in the vaccine induce autism. Several studies have thoroughly discounted this concept. The only potentially scientifically valid tenet underlying this idea is several public health studies that have shown a (barely) statistically significant correlation between environmental mercury content (defined variously as soil concentration, drinking water concentration and atmospheric concentration) and local prevalence of autism. Unfortunately, a biologically insignificant amount of mercury given as a bolus (as in a vaccine) and environmental exposure throughout development are still two very different things, and treating autism as heavy metal poisoning can have lethal consequences.

"Vaccine load" is the idea that the normal vaccine schedule gives too many vaccines at once, thereby overwhelming a child's immune system. This concept disregards the fact that the human body is constantly under attack from pathogens and has evolved to survive that attack--the body is more than capable of mounting responses to thousands of antigens at once. Not only is the normal vaccine schedule safe, adhering to it is necessary for the maintenance of "herd immunity" and general public health.

Vaccination is one of the oldest medical procedures still in use today, and continues to be one of the most effective tools available against infectious disease. Outlining the flaws in the anti-vaccination movement would produce a blog entry far longer than I am willing to write, but there will certainly be more to follow on this topic.

Tuesday, July 20, 2010

The Arizona Bill: A Good Place to Start the Conversation

After sitting through months of hype on CNN and Faux news, I finally made time to read through Arizona's infamous S.B. 1070. Having expected to see a work of tea-stained Beck-isms and back-door racism, I was actually quite surprised to encounter a perfectly constitutional, surprisingly impotent, politically neutral law.

From an enforcement standpoint, the law really does not change all that much. Actually, from its wording, it seems plausible that the law was explicitly written to incite controversy and discussion--not racial profiling.

Legally, S.B. 1070 primarily accomplishes three things:

1. It sets a state-wide standard of enforcement of already-existing federal law.

2. It authorizes law-enforcement officers to question the immigration status of persons suspected to be in violation of other local, state, and federal laws (but, outside of suspected human smuggling, immigration status alone explicitly does not constitute probable cause under this law).

3. It adds the potential for an additional $500 fine to be imposed on persons found to be in the country illegally.

The caveat here that prevents racial profiling is that, in order for a person's immigration status to be questioned in the first place, they have to be legitimately stopped by a law-enforcement officer for some other reason. An illegal immigrant walking down the street, not causing any trouble, cannot--and would not--be stopped. Even passengers in a car subjected to a traffic stop cannot be questioned, so long as the driver is cooperative and able to produce his license, registration, and proof of insurance, because, unless the passengers provide it, the officer would not have probable cause.

Most foreigners carry a passport with them at all times; most adult legal residents have driver's licenses, as do most adult citizens. If, as a citizen, I am stopped by a police officer for any reason, even on the street (as I recently was when a friend of mine was caught urinating on the sidewalk), the first thing they ask to see is my driver's license. Failure to produce it or any other form of identification would be suspicious and, were I driving, would earn me a fine. A person who is in the United States legally and who is stopped legitimately should be able to produce some form of identification--S.B. 1070 simply allows for the arrest of suspicious persons found violating the law so that they may be identified. So long as they behave themselves, even illegal immigrants have nothing to worry about.

The purpose of S.B. 1070 was not to stop the flow of immigrants across our borders--with the exception of a very small minority, this entire country was built by immigrants--the purpose of this bill was to catalyze a discussion about the changes that need to be made to existing American immigration policy. For that effect, the bill has worked beautifully; the American people have put immigration reform back on the front burner. Now is the time, let's work together to figure out how we can best continue to welcome the "huddled masses" of the "tempest tossed" into the land of the free.

Wednesday, July 7, 2010

My histones made me do it!

Unraveling the biochemistry of behavior is one of the most difficult ongoing scientific ventures. The brain develops nearly continuously from shortly after implantation to early adulthood. Much of the molecular biology of normal brain development remains elusive; even less is known about how genetic and environmental cues might interact to cause psychopathology later in life.

A news focus appears in this month's Science, discussing the work that has been done to date on the role of epigenetics in human behavior and the promise of future research in that field. (link to abstract)

For those of you unfamiliar with molecular biology, epigenetics refers to stable changes in gene expression--and thus in phenotype--that occur without any change to the underlying DNA sequence. (methinks I'll add a glossary page)

Individual chromosomes are LARGE molecules--stretched end to end, the 46 chromosomes in a single human cell would be roughly two meters long. Obviously, this is an inconvenient way for cells to store genetic data--cells get around this problem by compactly packaging DNA around proteins called histones.
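The "roughly two meters" figure is easy to verify with back-of-the-envelope arithmetic. The numbers below are standard textbook estimates, not values from the post itself:

```python
# Back-of-the-envelope check on the "roughly two meters" figure.
# Assumptions: ~3.2 billion base pairs per haploid human genome, two
# genome copies per somatic cell, ~0.34 nm of helix per base pair.
BASE_PAIRS_HAPLOID = 3.2e9   # approximate haploid genome size
COPIES_PER_CELL = 2          # diploid cell carries two genome copies
RISE_PER_BP_M = 0.34e-9      # meters of B-form DNA per base pair

total_length_m = BASE_PAIRS_HAPLOID * COPIES_PER_CELL * RISE_PER_BP_M
print(f"~{total_length_m:.2f} m of DNA per cell")  # -> ~2.18 m of DNA per cell
```

All of that has to fit inside a nucleus a few micrometers across, which is exactly why histone packaging is necessary.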

With the exception of certain types of blood cells, every cell in the human body contains a complete copy of the host genome. As cells differentiate throughout development and adult life, each cell type actively uses only a minority of the genes in its genome. Mechanically, genes that are tightly packaged around histones cannot be actively expressed--histone modification therefore allows the "unpacking" and "repacking" of genes according to developmental and physiological need.

The important thing to remember regarding the role of epigenetics in development is that development is not a static program. Owing to environmental factors, even identical twins can manifest markedly different physical phenotypes (e.g. I'm an inch taller than my own identical twin). Based on environmental and physiological cues, the expression level of any given gene in a particular tissue can vary widely from individual to individual. This variability in development is known as phenotypic plasticity.

Inappropriate epigenetic regulation of gene expression has been implicated in several human diseases ranging from cancer to metabolic disorders to autism. As the age of monogenic disease gene hunting draws to a close, scientists are increasingly coming to understand the role of differential gene regulation in human disease.

Regarding the role of epigenetics in human behavior and psychopathology, a more complete review was published in Frontiers in Neuroendocrinology last year.

Much of the work that has been done to date in behavioral epigenetics has focused on the role of epigenetic factors in creating brain sex differences and favoring sex-specific behaviors in rodent animal models.

The potential link between autism and epigenetics is tantalizing. Gregory et al. (2009) found a significantly higher rate of DNA methylation (a silencing mark) within the oxytocin receptor (OXTR) promoter region in samples taken from the temporal lobe of autistic patients versus age- and sex-matched controls.

Polymorphisms within the regulatory region of the OXTR gene have been previously correlated with autism. Additionally, exogenous administration of oxytocin to autistic patients has been shown to temporarily attenuate many of its outward behavioral signs.

These findings, coupled with the recent correlation of copy number variation in various genes with autism, imply that autism could conceivably be a disorder of transcriptional regulation rather than the direct result of one or more "broken" genes.

Friday, June 11, 2010

The Real White Coat Syndrome: Why We Stigmatize the Sick and the Real Case for Single-Payer Healthcare

The two-party system of American politics relies on a nexus of certain polarizing “core issues,” with party platforms clearly defining the “black and white” nature of left and right politics. A particular candidate’s party affiliation predictably reflects that candidate’s public views on issues ranging from abortion, to free speech, to immigration, to gun policy and foreign policy. Obviously, there are outspoken “black sheep” on both sides, but these Nancy Pelosis and Ron Pauls are a mostly impotent minority. The vast majority of candidates from both parties must walk a fine line between observing their prescribed platform and alienating moderate voters. Unfortunately, it is this very pandering that breeds legislative gridlock and blocks real progress and reform. Within the last decade, the epitome of this gridlock has been the push for comprehensive healthcare reform.

By the 2008 presidential election cycle, the economic need for healthcare reform made it one of the most important issues driving debate outside the voting booth. As unnecessary foreign wars and questionable lending practices drained the American economy, candidates were forced to account for why, dollar for dollar, the American healthcare system was (as it remains) among the least fiscally efficient in the developed world—and what exactly they planned on doing about it.

At that time, as now, the system had been very obviously very broken for decades. In 2007 alone, Americans spent 2.3 trillion dollars on healthcare(1)—put in context, this is more than double what has been spent to date on military operations in Iraq and Afghanistan combined since 2001(2). In spite of this, quality healthcare remained a precious commodity with scores of millions of Americans lacking access to even basic primary care—and scores of millions more bankrupted by their need for it(3). The healthcare system had grown into a tangled web of bureaucratic overhead; effectively manufacturing a very artificial shortage of access to quality care.

President Obama’s signing of the Patient Protection and Affordable Care Act on March 23, 2010 and the Healthcare and Education Reconciliation Act of 2010 on March 30, 2010, was intended to represent the first real step towards addressing this artificial healthcare shortage in America—a remarkable step, but by no means a victory for the American people. While the net effect of both bills does ultimately increase healthcare access to productive citizens and promote the general public health, it does not establish healthcare access as a fundamental right and does little to address the expanding healthcare bureaucracy.

Universal healthcare access is commonly attacked in the United States as a “socialist” ideal, but it is in fact a fundamental right granted to the people of almost every other developed nation in the world(3)—even in the United States, prison inmates and retirees are entitled to healthcare. Access to quality healthcare is perhaps the only American right routinely granted to convicted murderers and denied to school-aged children. Bearing witness to this, urban streets are flooded with the untreated mentally ill(4) as children in rural Washington are lucky to stay current on their vaccinations(5). Were these disaffected populations ever to get the vote out, the end to legislative gridlock would certainly come quickly and decisively.

The majority of voting Americans are not blind to the disaffected peoples around them—the average American volunteers a remarkable 250 hours per year to community service(6)—but when it comes time to cast a ballot, personal economic incentive invariably outweighs empathy. When dealing with a system as complex as American healthcare, personal economic incentive can be confusing, if not impossible, for the average citizen to objectively determine. Adding to the confusion, political fast-talk on the topic has led to the birth of an even more complex network of political smoke and mirrors. Put simply, Republicans say that healthcare should ultimately be paid for by the recipient in a manner commensurate with the level of care received, and Democrats say that the level of care received should be independent of one’s ability to pay. The Bush deficit aside, both positions are consistent with the established economic stance of their respective parties.

Further cluttering the debate, outspoken and charismatic politicians are guilty of distorting facts and sleight-of-hand wordplay. Sarah Palin’s talk of “death panels” and the Beckism “Obamacare”(7) are born, not out of objective argument—but out of a need to suppress the potential political advances of the rival party. Images reminiscent of cold-war McCarthyism are used by conservatives to quash any budding public support for expanding the role of the federal government in healthcare. Before their final texts had even been decided upon, the radical “tea party” movement had already dismissed both healthcare bills as part of a larger “socialist agenda”(7).

Such foot-dragging put Democratic legislators in a rough spot—with most constituents unaware of the bills’ actual wording, voting for a bill touted by conservatives as “providing Viagra for convicted sex offenders”(7) comes at a huge political risk. Coming from the “moral majority,” such rhetoric is neither surprising nor unexpected; unfortunately, the human cost of accommodating it far exceeds the value of any perceived moral high ground. Using moral objections to oppose the expansion of healthcare rights is far from a new concept—societies throughout history have attached social and legal stigma to patients with various conditions.

The most common underlying causes of chronic illness and death in the United States are obesity, an increasingly sedentary lifestyle, and smoking(8). With modern technology, most Americans live free from the infectious diseases that ravaged the populations of their parents and grandparents. Those infectious diseases that do persist tend to have routes of exposure associated with illicit or stigmatized behavior; HIV is most commonly spread through needle sharing and unprotected sex, as is hepatitis C, and both are far more common among traditionally “disaffected” populations than among the American population as a whole(9). Thanks to sensational media coverage, public fear of these diseases far outweighs the actual threat and perpetuates the dangerous tradition of ostracizing the sick. Public association of an infectious disease with a particular demographic is dangerous on two counts: first, it can lead to the arbitrary restriction of the rights of certain people (e.g. gay men are not allowed to give blood); second, it gives “low-risk” members of the public a false sense of security against a still-real biological threat.

Ironically, the deadliest and most visible stigmatized condition is not infectious at all; addiction is, perhaps, the single most common and least understood chronic condition in the world. Whether directed at alcohol, food, sex, or controlled substances, addiction imposes a higher social and economic cost on American society than all infectious diseases combined(10). Neurobiologically speaking, all addictive processes seem to share a common molecular etiology; genetics is known to play a significant role, but the true disease mechanism likely stems from a complex combination of genetic predisposition and environmental factors(11). In spite of its known biological basis, addicts are treated like criminals. Public resources available for the treatment of addiction are sparse at best—the majority of addicts who fall out of the system ultimately find themselves on the streets or in prison. The cost of prosecuting the “war on drugs” alone was an astronomical 40 billion dollars in 2000(12)—with the vast majority of drug shipments evading capture to be readily distributed to addicts from every background(12).

In addition to addiction, untreated mental illness is endemic among the American homeless population(4). For many of these people, their individual diagnoses would be manageable with the proper level of treatment and support—properly managed, many could go on to live otherwise productive lives. The resources exist to treat many (if not most) of those who would otherwise fall out of the system; it is only access to these resources that is limited. A universal healthcare program that extends access to mental health and addiction treatment services could turn these people into productive, tax-paying citizens. Spending taxpayer dollars on such programs would therefore be far from a wasteful handout; it would be an investment made by taxpayers into producing more taxpayers—many of whom would, within years, pay in taxes for their own treatment.

The paradigm of government-run healthcare as a vehicle through which a society can invest in itself is not new; it is one of the more profound economic arguments favoring such a system. However, when healthcare is viewed as a consumer commodity, this argument makes little personal economic sense. Simply put, American voters do not want to pay for services that they themselves are not consuming. This represents a potentially strong argument against a single-payer, government-run system, at least until the true cost of indigent care under the current system is considered.

Emergency rooms are required by law to provide care to every patient who presents, regardless of that patient’s ability to pay. At major public teaching hospitals, this requirement extends to everything from lengthy ICU stays to the management of chronic illnesses. In the average American emergency department, only one patient in every three pays their bill; the two out of three patients who were treated “for free” have their tab picked up in the form of higher costs assessed to the patient who does pay(13). These savvy “healthcare consumers” are blissfully unaware that the majority of their healthcare costs already go toward covering indigent care. Furthermore, most indigent patients requiring long-term residential care are covered by Medicare and already reflected in the tax rates paid by the general population(14). A universal single-payer system is nothing but a more honest way of doing business; moreover, because such a system could be funded from income tax dollars, it would ensure that everybody with the means to contribute something to the system actually does.

The administrative streamlining allowed by such a system, which would standardize reimbursement protocols and short-circuit the bureaucracy of the current arrangement, would mean fewer middlemen between patient and service. A service that costs a dollar in the clinic requires a dollar to reach the clinic. Under the current system, that dollar usually passes through at least three sets of sticky hands between the patient and the provider; a single-payer system run by the (obviously not-for-profit) federal government literally requires fewer hands and maximizes the value of every dollar.

Medicare is a fine example of this; as the model for a potential American single-payer system, Medicare is perhaps the single most efficiently run insurance provider in the country. Dollar for dollar, Medicare is able to cover the costs of more care than any private insurer. It is also worth noting that Medicare funds a significant portion of post-graduate medical training in the United States, thereby covering an even greater share of distributed healthcare costs than most private insurers(15). What is even more remarkable is that Medicare was molded around the existing tangle that is privatized healthcare; an integrated Medicare-like system providing care for every citizen has the potential to be administered even more efficiently.

Making Medicare eligibility universal has been suggested before. First introduced in 2003, HR 676 aims to do just that: provide universal healthcare by expanding the already existing American single-payer system(16). The bill has been resubmitted every year, most recently in 2009, but has never reached the House floor for debate or a vote. Perhaps more disturbingly, at the congressional hearings that eventually gave rise to the bills the president signed in March, not a single lobbyist supporting single-payer healthcare was invited to speak. Perhaps because of vehement opposition from the far right, Americans will still have to wait for truly universal single-payer care.

The bills that did pass are still woefully inadequate and politicized. Imposing harsher regulations on insurers and providing subsidies to the poor only adds another layer of complexity to the American healthcare landscape. It is entirely likely that insurers will raise rates across the board to compensate for the additional strain placed on the system. Even after all of the reforms outlined in the bills take effect, there will still be uninsured Americans and there will still be a shortage of access to quality primary care. However much the public conversation over healthcare reform has quieted since the passage of these bills, it would be to the detriment of the greater good to assume that the debate is over.

Works Cited
1. WHO. “World Health Statistics 2009.” (2009)
2. (accessed 7/6/2010)
3. Anderson GF, Reinhardt UE, Hussey PS, and Petrosyan V. “It’s the prices, stupid: why the United States is so different from other countries.” Health Affairs (2003) 22:1:89-106.
4. Fazel S, Khosla V, Doll H, Geddes J. “The prevalence of mental disorders among the homeless in western countries: systematic review and meta-regression analysis.” PLoS Medicine (2008) 5:12:1670-1680.
5. (accessed 7/6/2010)
6. Corporation for National and Community Service. “Volunteering in America 2010 Issue Brief.” (2010)
7. (accessed, regrettably, 7/6/2010)
8. (accessed 7/6/2010)
9. (accessed 7/6/2010)
10. (accessed 7/6/2010)
11. Feltenstein MW, See RE. “The neurocircuitry of addiction: an overview.” British Journal of Pharmacology (2008) 154:261-274
12. (accessed 7/6/2010)
13. American College of Emergency Physicians. “Release: the uninsured: access to medical care” (2010)
14. (accessed 7/6/2010)
15. (accessed 7/6/2010)
16. Conyers. “HR 676.” 111th Congress 1st session (2009).

Wednesday, June 9, 2010


Even though I don't have any real posts yet... this is the "permanent" thread for open discussion.

So discuss!!!!

Saturday, June 5, 2010

A Blog is Born!

After many months of contemplation and countless thoughts to the effect of “I should do that,” I have undertaken to inaugurate a blog dedicated to the critical examination of politics, science, and culture. Realizing that the internet is already saturated with such blogs, many of them written by writers far more talented and informed than myself, this blog aims to take a novel approach.

Beneath the smoke-screen of polarizing rhetoric, between right and left, and independent of the peculiarities imposed by tradition, there lies the fundamental reality of the human condition. That is to say that beyond subjective opinion, which is often dictated by ignorance and idealism, exists objective fact. Therefore, it is the intent of this blog to present thoroughly cited, accessibly worded, objective and scholarly interpretations of political issues, social commentary, and the ongoing evolution of science. In doing so, it is my hope that this blog's readership will engage in enriching scholarly discussion and, by means of the understanding gained through participation in such discussion, become more responsible members of human society.

The aforementioned tripartite subject matter is essential to this end. Human society as a whole is dictated by the constraints placed upon it by the interplay between objective empirical reality (science), custom and historical precedent (culture), and group dynamics as dictated by interpersonal competition for finite resources (politics). Obviously these three forces are fluid, often overlapping, and in no way mutually exclusive; but for the whole of human history they have dictated both human law and one another.

Recognizing that my own subjective reality will affect both the content and the register of this blog, it is necessary for me to declare my life circumstances and, in doing so, reveal my own biases.

I am a student studying cell and molecular biology at a large research university in the midwestern United States. I identify as a white middle class homosexual male who votes blue and declares no religion. More personally, I am an addict (the kind that goes to meetings) who has never enjoyed membership in a "nuclear" family; both of these experiences contributed significantly to my world-view and ultimately compelled me to, like Diogenes, test the great hypothesis that is social conformity.

True to the mission of this blog, I invite dissenting opinions to be voiced and conflicting evidence to be presented. Comments are welcome on all posts, and no comment will be deleted unless it is blatantly obscene or disrespectful. If possible, evidence presented in support of any subjective position should be properly cited; failure to do so will not result in moderation, but it will significantly weaken any argument.

Finally, there will always be a thread dedicated to open discussion. I will do my best to tailor the content of this blog to fit the topics that are discussed. That said, let the blogging begin!