Saturday, May 30, 2015

How much does it cost to have a baby?

When my wife delivered our second child in 2008, the hospital sent our health insurance company a bill for $8569. The insurance company then wrote off $4117 of that bill, paid $4352, and asked us for a copayment of $100. When we found out last year that we were expecting again, we noted that my wife's new insurance plan requires us to pay 20% coinsurance for all non-preventive care. That would have amounted to several hundred dollars on our 2008 bill, and knowing the rapid rate of health care inflation, we thought it would be good to find out how much it would cost this time around. So we went back to the same hospital, where we expect our third child to be born in a few weeks, and asked if they could give us an estimate of the charges. It seemed like a reasonable enough request, especially since the pre-admission consent form we signed specifically said that patients had a right to know what the hospital charged for its services.

We're just looking for a ballpark number for our flexible spending account, we said: the charge for an uneventful labor, vaginal delivery, and single overnight stay. We understand that unexpected things can happen in childbirth, and we won't hold you to it.

The hospital representative we spoke with clearly wanted to be helpful. She called the billing office, the labor and delivery floor, every place in the hospital she could think of that might have that information. But in the end, no one could give us an answer to a seemingly simple question: how much does it cost to have a baby at your hospital?

And the truth is, even if they had, we would have had no way of knowing how much our insurance company would have actually paid. Hospitals routinely inflate their listed charges, knowing full well that insurers will want to negotiate deep discounts. The only people who actually pay the listed hospital charges - analogous to the sticker price on a new car - are uninsured patients who aren't poor enough to qualify for free or discounted care.

The whole idea of "consumer directed health care" is that patients who anticipate medical expenses in advance can shop around to get the best prices. We had nearly nine months to get ready for having a baby, and that should have been plenty of time. But consumer directed health care doesn't work when no one can tell you the price. A federal report issued last October confirmed what most doctors have known all along: most medical practices and hospitals either can't, or won't, provide estimates about the costs of commonly provided services such as diabetes screenings and knee replacements. Several years ago, health economist and Princeton professor Uwe Reinhardt called the pricing of hospital services in the U.S. "chaos behind a veil of secrecy," and things haven't gotten any better since the passage of health reform.

In the end, my wife and I were forced to make an educated guess about how much money to put away for her labor and delivery. We're both family doctors, by the way, and between the two of us have personally delivered hundreds of babies. And if we can't figure out how much it costs to have a baby, good luck to all of the other women who will be giving birth in the U.S. this year.

Decision making: one of multiple category options

In the previous post, Decision making: learning from one's mistakes, I provided evidence that selective attention to items retrieved into working memory is a major factor in making good decisions. This has generally unrecognized educational significance. Rarely is instructional material packaged with any consideration of how it can be optimized to reduce the cognitive load on working memory. New research from a cognitive neuroscience group in the U.K. demonstrates the particular importance this has for learning how to correctly categorize new material. They show that learning is more effective when the instruction is optimized ("idealized" in their terminology).

Decisions often require categorizing novel stimuli, such as normal/abnormal, friend/foe, helpful/harmful, right/wrong or even assignment to one of multiple category options. Teaching students how to make correct category assignments is typically based on showing them examples for each category. Categorization issues routinely arise when learning is tested. For example, the common multiple-choice testing in schools requires that a decision be made on each potential answer as right or wrong.

In reviewing the literature on optimizing training, these investigators found reports that one approach that works is to present training examples in a specific order. For example, in teaching students how to classify by category, people perform better when a number of examples from one category are presented together, followed by a number of contrasting examples from the other category. Another ordering manipulation: material is learned better when simple, unambiguous cases from either category are presented early in training, while the harder, more confusing cases are presented afterwards. Such training strengthens the contrast between the two categories.

The British group has focused on the role of working memory in learning. Their idea is that ambiguity during learning is a problem. In real-world situations that require correct category identification, naturally occurring ambiguities make correct decisions difficult. Think of these ambiguities as cognitive "noise" that interferes with the training that is recalled into working memory. This noise clutters encoding during learning, clutters the later thinking process, and impairs the rigorous reasoning that may be needed to make a correct distinction. In the real world of youngsters in school, other major sources of cognitive noise are the task-irrelevant stimuli that come from the multi-tasking habits so common in today's students.

The theory is that when performing a learned task, the student recalls what has been taught into working memory. Working memory has very limited capacity, so any "noise" associated with the initial learning may be incompletely encoded and the remembered noise may also complicate the thinking required to perform correctly. Thus, simplifying learning material should reduce remembered ambiguities, lower the working memory load, and enable better reasoning and test performance.


One example of optimizing learning is the study by Hornsby and Love (2014), who applied the concept to training people with no prior medical training to decide whether a given mammogram was normal or cancerous. They hypothesized that learning would be more efficient if students were trained on mammograms that were easily identified as normal or cancerous, excluding examples where the distinction was not so obvious. The underlying premise is that decision-making involves recalling past remembered examples into working memory and accumulating the evidence for the appropriate category. If the remembered items are noisy (i.e., ambiguous), the noise also accumulates and makes the decision more difficult. Thus, learners will have more difficulty if they are trained on examples across the whole range of possibilities, from clearly evident to obscure, than if they are trained only on examples that clearly belong to one category or the other.
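To make that premise concrete, here is a minimal sketch of the general idea as I have described it. This is purely an illustration, not the authors' model: the two clusters, the noise levels, and the similarity rule are all made-up values chosen only to show how noisier training exemplars degrade categorization.

```python
# Toy illustration (not the authors' model): classify a test item by summing
# evidence from remembered training exemplars; noisier ("ambiguous") exemplars
# make errors more likely.
import random

def classify(test_value, exemplars):
    """Exemplars are (value, label) pairs; label +1 = cancerous, -1 = normal.
    Each remembered exemplar contributes evidence weighted by its similarity."""
    evidence = 0.0
    for value, label in exemplars:
        similarity = 1.0 / (1.0 + abs(test_value - value))
        evidence += label * similarity
    return +1 if evidence > 0 else -1

def make_exemplars(n, noise):
    """Normal items cluster near 0, cancerous items near 2; noise blurs the clusters."""
    out = []
    for _ in range(n):
        out.append((random.gauss(0, noise), -1))   # normal training example
        out.append((random.gauss(2, noise), +1))   # cancerous training example
    return out

random.seed(0)
for noise in (0.3, 1.5):   # "idealized" versus "ambiguous" training sets
    exemplars = make_exemplars(50, noise)
    tests = ([(random.gauss(0, 0.5), -1) for _ in range(500)]
             + [(random.gauss(2, 0.5), +1) for _ in range(500)])
    correct = sum(classify(value, exemplars) == label for value, label in tests)
    print(f"training noise {noise}: {correct / len(tests):.0%} correct")
```

With the cleaner ("idealized") training set, accuracy on the same test items is noticeably higher, which is the intuition behind training only on unambiguous examples.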

Initially, a group of learners was trained on a full-range mixture of mammograms so that the images could be classified by diagnostic difficulty as easy, medium, or hard. On each trial, three mammograms were shown: the left image was normal, the right was cancerous, and the middle was the test item requiring a diagnosis of whether it was normal or cancerous.

In the actual experiment, one student group was trained to classify a representative set of easy, medium, and hard images, while the other group was trained only on easy samples. During training trials, learners looked at the three mammograms, stated their diagnosis for the middle image, and were then given feedback as to whether they were right or wrong. After completing all 324 training trials, participants completed 18 test trials, which consisted of three previously unseen easy, medium and hard items from each category displayed in a random order. Test trials followed the same procedure as training trials.

When both groups were tested on samples across the full range of difficulty, the optimized group was better able to distinguish normal from cancerous mammograms among both the easy and medium images. Note that the optimized group had never been trained on medium images. However, no advantage was found on the hard test items; both groups made many errors on the hard cases, and on those items optimized training actually yielded poorer results than regular training.

We need to explain why this strategy does not seem to work on hard cases. I suspect that in easy and medium cases, not much understanding is required. It is just a matter of pattern recognition, made easier because the training was more straightforward and less ambiguous. The learner is just making casual visual associations. For hard cases, a learner must know and understand the criteria needed to make distinctions. The subtle differences go unrecognized if diagnostic criteria are not made explicit in the training. In actual medical practice, many mammograms cannot be distinguished by visual inspection at all; they really are hard, and other diagnostic tests are needed.

The basic premise of such research is that learning objects or tasks should be pared down to the basics, eliminating extraneous and ambiguous information, which constitutes "noise" that confounds the ability to make correct categorizations.

In common learning situations, a major source of noise is extraneous information, such as marginally relevant detail. Reducing this noise is achieved by focusing on the underlying principle. I stumbled on this basic premise of simplification over 50 years ago when I was a student trying to optimize my own learning. What I realized was the importance of homing in on the basic principle of what I was trying to learn from instructional material. If I understood a principle, I could use that understanding to think through many of its implications and applications.

In other words, the principle is: "don't memorize any more than you have to." Use the principles as a way to figure out what was not memorized. Once core principles are understood, much of the basic information can be deduced or easily learned. This is akin to the standard practice of moving from the general to the specific, with the general ideas framed as explicit principles.

Textbooks are sometimes quite poor in this regard. Too many texts have so much ancillary information in them that they should be thought of as reference books. That is why I have found a good market for my college-level neuroscience electronic textbook, "Core Ideas in Neuroscience," in which each 2-3 page chapter is built around one of 75 core principles spanning the range from membrane biochemistry to human cognition. A typical neuroscience textbook by other authors can run up to 1,500 pages.

Friday, May 29, 2015

Guest Post: PSA screening: does it or doesn't it?

Marya Zilberberg, MD, MPH is an independent physician health services researcher with a specific interest in healthcare-associated complications and a broad interest in the state of our healthcare system. She is also a Professor of Epidemiology at the University of Massachusetts, Amherst. The following post was first published on her blog, Healthcare, etc.

**

A study in the NEJM reports that after 11 years of follow-up in a very large cohort of men randomized either to PSA screening every 4 years (~73,000 subjects) or to no screening (~89,000 subjects), there was both a reduction in prostate cancer deaths and no overall mortality advantage. How confusing can things get? Here is a screenshot of the headlines about it from Google News:

How can the same test cut prostate cancer deaths and at the same time not save lives? This is counter-intuitive. Yet I hope that a regular reader of this blog is not surprised at all. For the rest of you, here is a clue to the answer: competing risks.

What's competing risks? It is a mental model of life and death that states that there are multiple causes competing to claim your life. If you are an obese smoker, you may die of a heart attack or diabetes complications or a cancer, or something altogether different. So, if I put you on a statin and get you to lose weight, but you continue to smoke, I may save you from dying from a heart attack, but not from cancer. One major feature of the competing risks model that confounds the public and students of epidemiology alike is that these risks can actually add up to over 100% for an individual. How is this possible? Well, the person I describe may have (and I am pulling these numbers out of thin air) a 50% risk of dying from a heart attack, 30% from lung cancer, 20% from head and neck cancer, and 30% from complications of diabetes. This adds up to 130%; how can this be? In an imaginary world of risk prediction anything is possible. The point is that he will likely die of one thing, and that is his 100% cause of death.
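To make the arithmetic concrete, here is a minimal simulation sketch of the competing risks idea. This is my own illustration, with made-up hazards echoing the made-up percentages above: give each cause its own hypothetical time-to-event, and whichever event arrives first claims the person.

```python
# Competing risks, illustrated: every person dies of exactly one cause, so the
# observed cause-of-death proportions always sum to 100%, even when the
# standalone cause-specific risks would add up to more.
import random

CAUSES = {  # purely illustrative annual hazards, not real data
    "heart attack": 0.05,
    "lung cancer": 0.03,
    "head and neck cancer": 0.02,
    "diabetes complications": 0.03,
}

def first_cause(causes):
    """Draw a hypothetical time-to-event for each cause; the earliest one wins."""
    times = {cause: random.expovariate(hazard) for cause, hazard in causes.items()}
    return min(times, key=times.get)

random.seed(0)
n = 100_000
counts = {cause: 0 for cause in CAUSES}
for _ in range(n):
    counts[first_cause(CAUSES)] += 1

for cause, k in counts.items():
    print(f"{cause}: {k / n:.1%} of deaths")
```

Preventing one cause simply shifts deaths toward the competitors, which is exactly the point about the PSA data below.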

Before I get to translating this to the PSA data, I want to say that I find the second paragraph in the Results section quite problematic. It tells me how many of the PSA tests were positive, how many screenings on average each man underwent, what percentage of those with a positive test underwent a biopsy, and how many of those biopsies turned up cancer. What I cannot tell from this is precisely how many of the men had a false positive test and still had to undergo a biopsy -- the denominators in this paragraph shape-shift from tests to men. The best I can do is estimate: 136,689 screening tests, of which 16.6% (15,856) were positive. Dividing this by 2.27 average tests per subject yields 6,985 men with a positive PSA screen, of whom 6,963 had a biopsy-proven prostate cancer. And here is what's most unsettling: at the cut-off for PSA level of 4.0 or higher, the specificity of this test for cancer is only 60-70%. What this means is that at this cut-off value, a positive PSA would be a false positive (positive test in the absence of disease) 30-40% of the time. But if my calculations are anywhere in the ballpark of correct, the false positive rate in this trial was only 0.3%. This makes me think that either I am reading this paragraph incorrectly, or there is some mistake. I am especially concerned since the PSA cut-off used in the current study was 3.0, which would result in a rise in the sensitivity with a concurrent decrease in specificity and therefore even more false positives. So this is indeed bothersome, but I am willing to write it off to poor reporting of the data.
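For what it's worth, here is that back-of-the-envelope arithmetic laid out explicitly, my own restatement using the figures exactly as cited above:

```python
# Figures as cited in the paragraph above.
positive_tests = 15_856        # positive PSA screening tests
tests_per_subject = 2.27       # average number of screens per man
cancers = 6_963                # biopsy-proven prostate cancers

men_with_positive_screen = positive_tests / tests_per_subject   # ~6,985 men
implied_false_positives = men_with_positive_screen - cancers    # ~22 men

print(f"men with a positive screen: ~{men_with_positive_screen:,.0f}")
print(f"implied false positives: ~{implied_false_positives:,.0f} "
      f"({implied_false_positives / men_with_positive_screen:.1%})")
# ~0.3% false positives, versus the 30-40% expected from a specificity of
# 60-70%; hence the suspicion that the denominators are being misread.
```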

Let's get to mortality. The authors state that the death rates from prostate cancer were 0.39 in the screening group and 0.50 in the control group per 1,000 patient-years. Recall from my meat post that patient-years are roughly the number of subjects observed multiplied by the number of years of observation. So, to put the numbers in perspective, the absolute risk for an individual over 10 years drops from 0.5% to 0.39%, again a microscopic difference. Nevertheless, the relative risk reduction was a significant 21%. But of course we are only talking about deaths from prostate cancer, not from all other competitors. And this is the crux of the matter: a man in the screening group was just as likely to die as a similar man in the non-screening group, only causes other than prostate cancer were more likely to claim his life.
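To spell out how those rates translate into the risk numbers I just quoted, here is the arithmetic; it is my own rough approximation that treats the per-1,000-patient-year rates as 10-year risks, so it only ballparks the trial's published figures:

```python
# Rough arithmetic relating the cited rates (approximation only; the trial's
# published numbers come from actual cumulative incidence over ~11 years).
rate_screened = 0.39 / 1000   # prostate cancer deaths per patient-year, screening arm
rate_control = 0.50 / 1000    # deaths per patient-year, control arm
years = 10                    # rough horizon used above

risk_screened = rate_screened * years   # ~0.39% ten-year risk
risk_control = rate_control * years     # ~0.50% ten-year risk

arr = risk_control - risk_screened      # absolute risk reduction, ~0.11 percentage points
rrr = arr / risk_control                # relative risk reduction, ~22% (paper reports 21%)
nni = 1 / arr                           # ~900 men; the paper's NNI of 1,055 uses the
                                        # actual cumulative incidence, so this is only ballpark
print(f"ARR = {arr:.2%}")
print(f"RRR = {rrr:.0%}")
print(f"NNI ~ {nni:.0f}")
```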

The authors go through the motions of calculating the number needed to invite for screening (NNI) in order to avoid a single prostate cancer death, and it turns out to be 1,055. But really this number is only meaningful if we decide to get into death design, in something like an "I don't want to die of this, but that other cause is OK" kind of choice. And although I don't doubt that there may be takers for such a plan, I am pretty sure that my tax dollars should not pay for it. And thus I cast my vote for "doesn't."

Job Posting: Primary Care Health Policy Fellow

The Department of Family Medicine at Georgetown University School of Medicine is currently seeking qualified applicants for its one-year fellowship in Primary Care Health Policy. This is a unique, full-time program that combines experiences in scholarly research, faculty development, and clinical practice. Fellows have the opportunity to interact with local and federal policymakers in Washington, D.C. and pursue original research projects with experienced mentors at the Robert Graham Center for Policy Studies in Family Medicine and Primary Care. They will join a dynamic group of faculty (including me) at one of the flagship departments for urban family medicine on the East Coast. Past Health Policy Fellows have gone on to hold leadership positions in federal health agencies, community health organizations, and academia. Applicants should be graduates of an accredited residency program in family medicine or expect to graduate in 2012. Please e-mail me at KWL4@georgetown.edu for additional information.

Tuesday, May 26, 2015

5 quick ways to lose 5 pounds in 2 weeks

Excess weight drives many women to go on a diet. Who doesn't want her body to be svelte, beautiful, and admired? But following a diet isn't as easy as it sounds; you need a strong commitment to lose weight.

If you want to succeed at dieting and lose weight soon, then the diet tips below, excerpted from lolwot.com, might be worth trying right now.

1. Never eat junk food

Junk food includes all fried foods, such as fried chicken and French fries, as well as foods with little or no nutritional value, such as crackers, chips, and processed breads or biscuits. Replace these with fresh foods such as salads and steamed or boiled dishes.

2. Drink lots of water

The most powerful dietary way to lose weight fast is to drink plenty of water. Make it a habit to drink water before and after eating, before going to bed, and wherever you go. Drinking water keeps the stomach feeling full, prevents dehydration, helps the brain stay focused, and supports the body's metabolism so it burns calories better.

3. Burn more calories

Pay attention to your calorie intake each day, and do not eat more calories than you can burn. To limit your calorie intake, reduce your portions, substitute vegetables or fruit, and consider skipping dinner. To burn more calories, balance your intake with exercise such as cycling, running, swimming, or aerobics for approximately one hour every day.

4. Replace a carbohydrate with protein and fiber

Don't eat too many carbohydrates such as rice and bread. If you want to eat bread, choose whole wheat; if you want to eat rice, replace white rice with brown rice. Better still, reduce your carbohydrate intake and eat more protein and produce, such as fruit, eggs, fish, and lean meat.

5. Sleep

Pay attention to your nightly rest. Adequate sleep will help you lose weight faster. Be sure to sleep 8 hours per night, as this helps the body process calories and keeps your metabolism working better.

Follow these five points and apply them every day. If you can stick with all five points above, dieting will no longer be a hard thing to do.

Friday, May 22, 2015

Don't do stupid sh*t in cancer screening

The electronic medical record that my office uses features a clinical protocol button that we are encouraged to click during patient visits to remind us about potentially indicated preventive services, such as obesity and tobacco counseling and cancer screenings. I once tried it out while seeing a 90 year-old with four chronic health problems. The computer suggested breast cancer, colorectal cancer, and cervical cancer screenings - three totally inappropriate tests for the patient.


At the residency program where I precept one afternoon a week, we recently held a "chart rounds" on an elderly patient with advanced dementia: when should you stop cancer screening? The answer boils down to the patient's predicted life expectancy compared to the number of years needed for a patient to benefit from a test. Although forecasting how long someone has left to live is not a precise science, knowing averages is essential to deciding if the inconvenience, expense, and potential adverse effects of screening (and treatment, if an abnormality is discovered) can be justified by the potential benefit. Since advanced dementia is a terminal disease, with more than half of nursing home residents in a National Institutes of Health-sponsored study dying within 18 months, there is virtually zero chance that a patient with this condition would benefit from cancer screening of any type. The same statement applies to a healthy 90 year-old in the U.S., who is expected to live around 4-5 more years.

But as one might expect in our crazy health non-system, cancer screening in patients with limited life expectancies happens all the time. A study published last week in JAMA Internal Medicine found that one-third to one-half of surveyed Americans with a 9-year mortality risk of more than 75 percent reported receiving recent cancer screenings. 55% of men in this group had knowingly been screened for prostate cancer within the previous 2 years - a test that, if it works at all, requires a decade to show a mortality benefit. (Nonetheless, my late father-in-law, who passed away at age 75 from chronic obstructive pulmonary disease, faithfully went in for annual PSA tests up until the year of his death.)

Aside from making me want to pull my hair out (or turning even more of it prematurely gray), reading this study's findings brought to mind President Obama's much-mocked second-term foreign policy doctrine: "Don't do stupid sh*t." I agree with former Secretary of State Hillary Clinton that this pithy four-word directive ending with a four-letter word shouldn't be an organizing principle for foreign policy, health policy, or any national policy. But as an antidote to the ubiquitous practice of too much medicine, it could be a useful starting point: Don't do stupid sh*t in cancer screening. Think twice before reflexively doing things to elderly patients that can't possibly help and, therefore, can only hurt. And keep in mind that electronic clinical decision support should never, ever substitute for a physician's brain.

Should women start having mammograms before age 50?

The best answer to this question, I tell both my patients and loved ones, is: it depends on you.

As the U.S. Preventive Services Task Force affirmed this week in its updated draft recommendations on breast cancer screening, "The decision to start screening mammography in women prior to age 50 years should be an individual one. Women who place a higher value on the potential benefit than the potential harms may choose to begin biennial screening between the ages of 40 and 49 years." The Task Force went on to suggest that women with first-degree relatives who had breast cancer might be more motivated to start screening in their 40s.

What this decision shouldn't depend on is being bullied by one's doctor into getting a mammogram "just to be safe." Screening mammography's benefits and harms are closely balanced, and as two of my mentors in preventive medicine observed, some women might reasonably decide to say no:

Over the years we have learned more about the limited benefits of screening mammography, and also more about the potential harms, including anxiety over false-positive results and overdiagnosis and overtreatment of disease that would not have caused health problems. More and more, the goal for breast cancer screening is not to maximize the number of women who have mammography, but to help women make informed decisions about screening, even if that means that some women decide not to be screened.

Two women at "average risk" for breast cancer might make different decisions after they turn 40, depending on how concerned they are about dying from cancer, being diagnosed with cancer, and their tolerance for harms of screening. One well-informed female science journalist might choose to start being screened. Another female reporter, equally well-informed, might choose to opt out. Neither of these decisions is right or wrong on an individual or population level, regardless of the apocalyptic protests of self-interested radiology groups.

What concerns me is how current quality measurement and pay-for-performance approaches could end up pressuring more doctors to behave like bullies and drive up health care costs. Fee-for-service Medicare already spends about $1 billion each year on mammography; across all payers, about 70% of U.S. women age 40 to 85 years are screened annually at a cost of just under $8 billion. Doctor A is not necessarily a better doctor who deserves higher pay than Doctor B because more of Doctor A's patients get mammograms. In fact, the opposite might easily be true.

A recent study estimated that patients and insurers in the U.S. spend an additional $4 billion annually on working up false-positive mammogram results or treating women with breast cancer overdiagnoses. That's an extraordinary amount to spend for no health benefit, and it could be substantially less if physicians had the time and resources to explain difficult concepts such as overdiagnosis. But that doesn't appear to be where we're headed.

Finally, the notion, now written into law in nearly half of U.S. states, that women with dense breast tissue must be notified so that they can get supplemental testing for mammography-invisible cancers is particularly misguided. The USPSTF's review found no proof that breast ultrasound, MRI, or anything else improves screening outcomes in women with dense breasts, and a sizable percentage of women transition between breast density categories over time.

Too much medical care: do we know it when we see it?

The year after I moved to Washington, DC, I visited an ophthalmologist for a routine vision examination and prescription for new glasses. Since undergoing two surgical procedures to correct a "lazy eye" as a child, I hadn't had any issues with my eyesight. Part of my examination included measurement of intraocular pressures, a test used to screen for glaucoma. Although my work for the U.S. Preventive Services Task Force was in the future, I already understood the lack of evidence to support performing this test in a young adult at low risk. Not wanting to be a difficult patient, though, I went along with it.

My intraocular pressures were completely normal. However, the ophthalmologist saw something else on her examination that she interpreted as a possible early sign of glaucoma, and recommended that I undergo more elaborate testing at a subsequent appointment, which I did a couple of weeks later. The next visit included taking many photographs of my eyes as I tracked objects across a computer screen, as well as additional measurements of my intraocular pressures. These tests weren't painful or very uncomfortable, but they made me anxious. Glaucoma can lead to blindness. Was it possible that I was affected, even though no one in my family had ever been diagnosed with this condition? Fortunately, the second ophthalmologist who reviewed my results reassured me that the tests were normal, and admitted that they had probably been overkill in the first place. "Dr. X [the first ophthalmologist] is a specialist in glaucoma," he said, by way of explanation. "Sometimes we tend to look a little too hard for the things we've been trained to see." (I appreciated his candor, and he has been my eye doctor ever since.)

I was reminded of this personal medical episode while reading a commentary on low-value medical care in JAMA Internal Medicine by Craig Umscheid, a physician who underwent a brain MRI after questionable findings on a routine vision examination suggested the remote possibility of multiple sclerosis, despite the absence of symptoms. Although Dr. Umscheid recognized that this expensive and anxiety-inducing test was low-value, if not worthless, he went along with it anyway. "Despite my own medical and epidemiologic training," he wrote, "it was difficult to resist his [ophthalmologist's] advice. As my physician, his decision making was important to me. I trusted his instincts and experience."

If physicians such as Dr. Umscheid and I didn't object to receiving what we recognized as too much medical care when we saw it, it should not be a surprise that, according to one study, many inappropriate tests and treatments are being provided more often, not less. 5.7% of men age 75 and older received prostate cancer screening in 2009, compared to 3.5% in 1999. 38% of adults received a complete blood count at a general medical examination in 2009, compared to 22% in 1999. 40% of adults were prescribed an antibiotic for an upper respiratory infection in 2009, compared to 38% in 1999. (If you usually have complete blood counts done at your physicals or swear by the Z-PAK to cure your common cold, we can discuss offline why both of these are bad ideas.)

One of the obstacles to reducing unnecessary medical care (also termed "overuse") is that outside of a limited set of tests and procedures, physicians and policymakers may disagree about when care is going too far. The American Board of Internal Medicine Foundation's Choosing Wisely initiative is a good start, but these lists consist of low-hanging fruit accompanied by caveats such as "low risk," "low clinical suspicion," and "non-specific pain." To a clinician who feels for whatever reason that a certain non-recommended test or treatment is needed for his patient, these qualifications amount to get-out-of-jail-free cards. It's easy to say that payers should just stop paying for inappropriate and potentially harmful medical care, but as an analysis from the Robert Wood Johnson Foundation explained, this is much easier said than done. If a panel of specialists convened to review the medical care that Dr. Umscheid and I received, would they unanimously deem it to have been too much? I doubt it.

Similarly, although endoscopy for uncomplicated gastroesophageal reflux disease is widely considered to be unnecessary, that didn't stop an experienced health services researcher from undergoing this low-value procedure after a few days of worsening heartburn. Comparing her personal experience to the (superior) decision-making processes that occur in veterinary medicine, Dr. Nancy Kressin wrote in JAMA:

Until patients are educated and emboldened to question the value of further testing, and until human health care clinicians include discussions of value with their diagnostic recommendations, it is hard to foresee how we can make similar progress in human medicine. Patients may be fearful that there is something seriously wrong that needs to be identified as soon as possible, they are often deferential to their clinicians' greater knowledge of the (potentially scary) possibilities, and some patients want to be sure that everything possible is done for them, without recognizing the potential harms of diagnostic tests themselves, the risks of overdiagnosis, or the sometimes limited value in knowing the cause of symptoms in determining the course of therapy.

Regardless of future insurance payment reforms, both doctors and patients will have key roles to play in recognizing when medical care is too much. More widespread uptake of shared decision-making, while hardly a panacea, would call attention to the importance of aligning care with patients' preferences and values and the need for decision aids that illustrate benefits and harms of often-overrated interventions. Changing a medical and popular culture that overvalues screening tests relative to their proven benefits may be more challenging. A survey study published in PLOS One affirmed previous findings that patients are far more enthusiastic and less skeptical about testing and screening than they are about medication, even though the harms of the former are often no less than the latter. I agree with the authors' conclusions:

Efforts to address overuse must involve professional medical associations, hospital systems, payers, and medical schools in modifying fee-for-service payment systems, enabling better coordination of care, and integrating lessons about overuse into training and continuing education. But the preferences of active patients nonetheless merit attention. Both the mistrust of pharmaceuticals and the enthusiasm for testing and screening reflect individuals’ efforts to take care of their health. The challenge is to engage patients in understanding the connection between over-testing and over-treatment, to see both as detrimental to their health, and to actively choose to do less.

Thursday, May 21, 2015

Decision making: learning from one's mistakes.

Teenagers are notorious for poor decision-making. Of course that is inevitable, given that their brains are still developing, and they have had relatively little life experience to show them how to predict what works and what doesn’t. Unfortunately, what doesn’t work may have more emotional appeal, and most of us at any age are more susceptible to our emotions than cold, hard logic.
Seniors also are prone to poor decision-making if senility has set in. Unscrupulous people take advantage of such seniors because a brain that is deteriorating has a hard time making wise decisions.
In between the teenage years and senility is when the brain is at its peak for good decision making. Wisdom comes with age, up to a point. Some Eastern cultures venerate their old people as generally being especially wise. After all, if you live long enough and are still mentally healthy, you ought to make good decisions, because you have a lifetime of experience to teach you which future choices are likely to work and which are not.
Much of that knowledge comes from learning from one's mistakes. On the other hand, some people, regardless of age, can't seem to learn from their mistakes. Most of the time the problem is not stupidity but a flawed habitual process for evaluating options and making decisions. Best of all is learning from somebody else's mistakes, so you don't have to make them yourself.
Learning from your mistakes can be negative, if you fret about it. Learning what you can do to avoid repeating a mistake is one thing, but dwelling on it erodes one’s confidence and sense of self worth. I can never forget the good advice I read from, of all people, T. Boone Pickens. He was quoted in an interview as saying that he was able to re-make his fortune on multiple occasions because he didn’t dwell on losing the fortunes. He credited that attitude to his college basketball coach who told the team after each defeat, “Learn from your mistakes, but don’t dwell on them. Learn from what you did right and do more of that.”
It would help if we knew how the brain made decisions, so we could train it to operate better. "Decision neuroscience" is an emerging field of study aimed at learning how brains make decisions and how to optimize the process. Neuroscientists seem to have homed in on two theories, both of which deal with how the brain processes alternative options to arrive at a decision.
One theory is that each option is processed in its own competing pool of neurons. As processing evolves, the activity in each pool builds up and down as each pool competes for dominance. At some point, activity builds up in one of the pools to reach a threshold, in winner-take-all fashion, to allow the activity in that pool to dominate and issue the appropriate decision commands to the parts of the brain needed for execution. As one possible example, two or more pools of neurons separately receive input that reflects the representation of different options. Each pool sends an output to another set of neurons that feed back either excitatory or inhibitory influences, thus providing a way for competition among pools to select the pool that eventually dominates because it has built up more impulse activity than the others.

The other theory is based on guided gating, wherein input to pools of decision-making neurons is gated to regulate how much excitatory influence can accumulate in each given pool. The specific routing paths involve inhibitory neurons that shut down certain routes, thus preferentially routing input to a preferred accumulating circuit. The routing is biased by the estimated salience of each option, current emotional state, memories of past learning, and the expected reward value of the outcome of each option.
These decision-making possibilities involve what is called “integrate and fire.” That is, input to all relevant pools of neurons accumulates and leads to various levels of firing in each pool. The pool firing the most is most likely to dominate the output, that is, the decision.
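For readers who like to see the mechanics, here is a toy sketch of that accumulate-to-threshold race. It is my own illustration; the drift rates, noise level, and threshold are arbitrary made-up values, not parameters from any actual neural data.

```python
# Toy "integrate and fire" race: each option's pool accumulates noisy evidence;
# the first pool to reach threshold wins and determines the decision.
import random

def race_decision(drifts, threshold=10.0, noise=1.0, max_steps=10_000):
    """Return the index of the option whose accumulator crosses threshold first.

    drifts -- average evidence per time step favoring each option
    """
    totals = [0.0] * len(drifts)
    for _ in range(max_steps):
        for i, drift in enumerate(drifts):
            totals[i] += drift + random.gauss(0, noise)
            if totals[i] >= threshold:
                return i
    # If nothing crosses threshold, fall back to the largest accumulator.
    return max(range(len(totals)), key=lambda i: totals[i])

random.seed(1)
# Option 0 has slightly stronger support (e.g., a higher expected reward value).
wins = [0, 0]
for _ in range(1000):
    wins[race_decision([0.12, 0.10])] += 1
print(wins)  # option 0 wins more often, but the noise means it does not always win
```

The point of the sketch is simply that small differences in accumulated evidence, plus noise, are enough to produce decisions that are usually, but not always, the "better" choice.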
However circuits make decisions, there is considerable evidence that nerve impulse representations for each given choice option simultaneously code for expected outcome and reward value. These value estimates update on the fly. Networks containing these representations compete to arrive at a decision.
Any choice among alternative options is affected by how much information the brain has to work with for each option. When the brain is consciously trying to make a decision, this often means how much relevant information it can hold in working memory. Working memory is notoriously low-capacity, so the key becomes remembering the subsets of information that are most relevant to each option. Humans think with what is in their working memory. Experiments have shown that older people are more likely to hold the most useful information in working memory, and therefore they can think more effectively. The National Institute on Aging began funding decision-making research in 2010 at Stanford University's Center on Longevity. Results of that research are showing that older people often make better decisions than younger people.
As one example, older people are more likely to make rational cost-benefit analyses. Older people are more likely to recognize when they have made a bad investment and walk away rather than throwing more good money after bad.
A key factor seems to be that older people are more selective about what they remember. For example, one study from the Stanford Center compared the ability of young and old people to remember a list of words. Not surprisingly, younger people remembered more words, but when words were assigned a number value, with some words being more valuable than others, older people were better at remembering high-value words and ignoring low-value words. It seems that older people selectively remember what is important, which should make it easier to make better decisions.
Decision-making skills are important for learning achievement in school. Students need to know how to focus in general, and how to focus on what is most relevant in particular. They are not learning that skill, and their multi-tasking culture is teaching them many bad habits.
Those of us who care deeply about educational development of youngsters need to push our schools to address the thinking and learning skills of students. "Teaching to the test" detracts from time spent in teaching what matters most. Today's culture of multi-tasking is making matters worse. Children don't learn how to attend selectively and intensely to the most relevant information, because they are distracted by superficial attention to everything. Despite their daily use of Apple computers and smart phones, only one college student out of 85 could draw the Apple logo correctly.
Memory training is generally absent from teacher training programs. Despite my locally well-publicized experience in memory training, no one in the College of Education at my university has ever asked me to share my knowledge with their faculty or with pre-service teachers. The paradox is that teachers are trained to help students remember curricular answers for high-stakes tests. What could be more important than learning how to decide what to remember and how to remember it? And we wonder why student performance is so poor?

Tuesday, May 19, 2015

Everything but the kitchen sink in public health

The buzzwords many use in medicine today are "personalized," "individualized," or "targeted." Rather than doctors prescribing tests or treatments that work in most people but might not work for you, proponents argue, we should tailor medical interventions to unique patient characteristics, such as genomic data. (The White House's Precision Medicine initiative is an example of this kind of thinking.) Although I am skeptical that big data-driven genetic sequencing will soon trump the personalized experience of a physician sitting down and speaking with a patient, many areas of clinical medicine stand to benefit from an improved understanding of genetic and environmental causes of diseases in individuals.


On the other hand, public health problems rarely have a single cause or respond to a targeted intervention. Many policies and actions combined to lower the prevalence of smoking in U.S. adults from 45 percent in the 1960s to about 18 percent in 2013. Counseling and cessation medications played a role, but so did raising tobacco taxes; restricting advertisements; requiring warning labels; and banning smoking in airplanes, restaurants, parks, and other public places. These concurrent interventions drove a widespread culture change, making smoking "uncool" to the extent that many adults who smoke today are embarrassed by their habit.

Two stories I've read in the past month offer good examples of throwing "everything but the kitchen sink" at public health problems that defy straightforward solutions: high infant mortality and incarceration rates in African Americans. In Cincinnati, babies have been dying in the first year of life at more than twice the national average. The reasons are many: premature births, inadequate prenatal care, poor nutrition, exposure to tobacco smoke in the womb and in the cradle, to name a few. Cradle Cincinnati, the strategy that the city's medical and public health professionals created to reduce infant mortality, addresses three modifiable behavioral issues: smoking (stop), spacing (pregnancies at least 12 months apart), and sleep (baby alone, on its back, in an empty crib). Just as important was how to deliver these messages to prospective parents who were suspicious of the health system:

Using focus groups from African-American and Appalachian neighborhoods, the [nonprofit Center for Closing the] Health Gap found that many young mothers rely primarily on friends and family for maternity information: They simply don’t trust doctors and nurses. ... The distrust is often driven by not feeling valued “and not believing that the person who is giving them instruction about prenatal care even cares.” Now the Health Gap is looking for ways to train peers and neighborhood leaders to share accurate information. It’s also producing a video “letter” to community medical providers to school them in the day-to-day interactions that make a patient feel judged, devalued, and dismissed—interactions that may keep her from showing up to monitor her pregnancy or following up after her child is born.

Infant mortality in Cincinnati has fallen in each of the last few years; how much of a difference the three Ss campaign is making is hard to measure, but it does seem to be helping.

A few states to the northwest, in Milwaukee, Wisconsin, a controversial District Attorney has been tackling a completely different but no less urgent problem: the huge overrepresentation of African American men in prisons. The statistics are staggering: although they comprise only 6 percent of the state's population, African Americans represent 37 percent of the incarcerated population. A thirty-something African American man living in Milwaukee County is more likely to have served time than not. Not only are these sky-high rates devastating to relationships with partners and children, lengthy prison stays and subsequent criminal records make many former inmates unemployable.

The causes of this problem are far from obvious, certainly not as simple as the "racist police brutality" narrative that has swept the nation in recent days. Are African American men committing more crimes than others? More likely to be arrested? Being prosecuted and convicted at disproportionate rates? More likely to receive a jail term rather than parole? In Milwaukee, it turned out to be most of the above. Those who commit violent crimes should be jailed, but mandating years behind bars for shoplifting or drug possession turned teenagers and young men into hardened criminals and was ultimately counterproductive. (See my previous post for alternatives to the "war on drugs.") So D.A. John Chisholm changed the focus of his team from "winning every case" to ensuring that the punishment fit the crime, and sentencing selected nonviolent offenders to substance abuse treatment or educational programs rather than prison terms. He also charged his prosecutors with developing ways to prevent crime before it began by, for example, persuading community organizations such as Habitat for Humanity to renovate abandoned homes in low-income neighborhoods. As in Cincinnati, the jury is still out on Chisholm's methods, which have modestly shrunk the Milwaukee County prison population but don't seem to have affected crime rates.

Like patients with multiple complex medical conditions, public health problems require innovative solutions, but as the executive director of Cradle Cincinnati was quoted as saying, "There's no silver bullet. It's silver buckshot, and it all needs to be fired at once." Or maybe you prefer my metaphor: everything but the kitchen sink.

Monday, May 11, 2015

PSA screening by the numbers: no benefits, many harms

Previous studies found that two-thirds of men who receive prostate-specific antigen (PSA) screening for prostate cancer didn't have shared decision making with their physicians. If shared decision making occurred at all, patients were more likely to remember hearing about the advantages than the disadvantages of PSA screening, and many older men with a high probability of death within the next 9 years were screened nonetheless.

These findings, along with a Cochrane review and another systematic review (that I co-authored) which both found no pooled mortality benefits in several randomized controlled trials, led the U.S. Preventive Services Task Force to recommend against PSA-based screening for prostate cancer in 2012. Since then, the American Academy of Family Physicians and the American College of Preventive Medicine have added this service to their Choosing Wisely lists of tests and procedures that patients and physicians should question.

The Medicine by the Numbers review published in the May 1st issue of American Family Physician clearly illustrates that the harms of PSA screening exceed the benefits: 1 in 5 men who received PSA screening ended up undergoing a biopsy for a false-positive test, and 1 in 34 and 1 in 56 screened men, respectively, suffered erectile dysfunction or urinary incontinence as a result of prostate cancer treatment. In contrast, PSA screening prevented zero deaths from prostate cancer or from all causes. In other words, no benefits.
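Put another way, here is a rough per-1,000 translation of those figures; this is my own arithmetic, using the ratios exactly as quoted above for a hypothetical screened cohort:

```python
# Per-1,000 translation of the "1 in X" figures quoted above (illustrative).
n = 1000  # hypothetical men undergoing PSA screening

biopsies_for_false_positives = n / 5   # "1 in 5"  -> ~200 men
erectile_dysfunction = n / 34          # "1 in 34" -> ~29 men (after treatment)
urinary_incontinence = n / 56          # "1 in 56" -> ~18 men (after treatment)
deaths_prevented = 0                   # no prostate cancer or all-cause deaths prevented

print(f"Per {n} men screened: ~{biopsies_for_false_positives:.0f} biopsies for "
      f"false positives, ~{erectile_dysfunction:.0f} with erectile dysfunction, "
      f"~{urinary_incontinence:.0f} with urinary incontinence, "
      f"and {deaths_prevented} deaths prevented")
```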

This review raises the question of why clinicians should bother with shared decision making in most average-risk men, rather than simply telling them that this test is a bad deal.

**

This post first appeared on the AFP Community Blog.

Sunday, May 10, 2015

5 myths about cervical cancer that should not be believed


Cervical cancer is a type of cancer that is feared by women. This cancer is caused by HPV, a virus that can be transmitted through physical contact. You can prevent it with early detection. However, there are a few myths that leave women mistaken about this cancer. To set the record straight, here are five myths about cervical cancer that should not be believed, adapted from cancer.med.umich.edu.

Myth 1: cervical cancer cannot be prevented

Human papillomavirus (HPV) infections are transmitted sexually, but they can be prevented with the new vaccine. By preventing HPV infection, you can lower the risk of cervical cancer.

Myth 2: too young to get cervical cancer

The average age of cervical cancer patients is 48 years. However, women may be diagnosed with this cancer in their 20s.

Myth 3: if you have never had sex, you do not need the HPV vaccine

HPV can be transmitted from one partner to the other through sex. However, just because someone has never had sex does not mean she can never be infected. Experts believe that the vaccine should be given at a young age, before a woman becomes sexually active.

Myth 4: you do not need a Pap smear test

A woman's first Pap smear test should be done when she is 21 years old, or three years after she starts having sex. Even if you have received the HPV vaccine, you still need to have Pap smear tests.

Myth 5: too old for a Pap smear test

"We are seeing an increase in cervical cancer and HIV on the older population," said Lauren Zoschnick, MD, Clinical Assistant Professor of obstetrics and Gynecology at the UM Medical School. Therefore, women who have gone through menopause also need to do a Pap test.

Those are five myths about cervical cancer that should not be believed.