From the Pain Summit: The Word-Pair Top 50

For what’s likely to be my last summary of the 1811 tweets from the San Diego Pain Summit 2017 using various metrics of tweet importance, I created a ranked word-pair vocabulary from the body of tweets, where each word pair had to occur within a single tweet. With this pair vocabulary in place, I summed the word-pair scores for each tweet and sorted the tweets in order of decreasing score. The top 50 tweets under this measure of importance are listed below, in sorted order.
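As a rough sketch of the mechanics (not the exact script I ran), assuming `tweets` is a list of already-cleaned tweet strings, the pair scoring looks something like this:

```python
from collections import Counter
from itertools import combinations

def tweet_pairs(tweet):
    """Unordered word pairs co-occurring within a single tweet."""
    words = sorted(set(tweet.lower().split()))
    return set(combinations(words, 2))

def rank_by_pair_score(tweets):
    # Word-pair vocabulary: how many tweets contain each pair.
    pair_counts = Counter()
    for t in tweets:
        pair_counts.update(tweet_pairs(t))
    # Score each tweet as the sum of its pair counts, highest first.
    scored = [(sum(pair_counts[p] for p in tweet_pairs(t)), t) for t in tweets]
    return sorted(scored, reverse=True)

# top_50 = rank_by_pair_score(tweets)[:50]
```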

Note that this post is a follow-on to “Things” from the Pain Summit (Literally) and From the Pain Summit — Doubles from The One Hundred.

 

  • We see small sample sizes, and small effect sizes, but why do we expect large effect sizes out of a singular treatment approach?
  • Sleep hygiene, activity levels based on preference, set goals, graded activity (social), min of 4 times/wk for 30 min
  • Set small realistic goals to set up our patient for success. Small achievable goals to activate rewards systems in the brain.
  • The cautionary tale of morphidex: tested only on male rats where it made a difference; on women it made no difference.
  • Tenderness of the pelvic floor caused more disability than weakness of the pelvic floor
  • Farmer, MindBody: When addressing fear of movement it is critical to determine the specific core belief underlying the fear.
  • Pelvic floor rehab is orthopedics in a cave. It is not only a female problem. Everyone has a pelvic floor.
  • Lots more rats than mice used in animal research. Genetics: most research is used on 1 kind of mouse and 1 kind of rat.
  • Cognitive factors, emotional, social, physical lifestyle factors all influence pain. All are modifiable factors.
  • Why would we expect to see large effect sizes from a single intervention – lots of interventions have small effect size.
  • Koban: social factors that modulate pain/relief: presence, social stress, cultural beliefs, therapeutic alliance, social norms.
  • The issue of genetics: rats and mice are used in Pain research. And only on 1 kind of mouse or rat. Not a lot of diversity.
  • It matters whether human presence is female or male! If human is male, mice grimace less.
  • Lifestyle changes to work on: sleep hygiene, regular exercise to preference (activity), set goals, min 30m 4x wk.
  • Physical factors – postural extension + cognitive factors – fear – tend to increase muscle contraction of lumbar muscles during movement.
  • Barriers to exercise adherence: low self efficacy, envisaging lots of barriers to exercise.
  • Cortisol levels in response to stress predict musculoskeletal pain: low response leads to increased inflammation response and pain.
  • Lifting from a flexed position – back muscles work more efficiently. Lifting in lordosis increases work/load.
  • Ben – Minds change slowly. Rarely do patients have an epiphany. Sometimes you are never going to change people’s mind.
  • CFT [Cognitive Functional Therapy] Management Plan sample: making sense of pain, exposure with control, lifestyle changes, give control over pain.
  • Do people approach or avoid pain? Women sit away from people pretending to be sick and close to people in pain. Men could care less either way.
  • And to humans. Male mice are standing in for human women. And women are the “real clinical problem”.
  • Engram: the physical changes in the brain that represent a specific memory (memory trace).
  • Increase in dendrite spine dictates where info goes. Learning changes neuron structure, spine growth/pruning.
  • Rat grimace scale findings are skewed in presence of scent of male humans – simple things making reproduction of research difficult.
  • With female mice that know each other, mouse in middle will spend time with mice in pain, lowering pain.
  • “Pelvic floor dysfunction treatment belongs in the Orthopedic Division, not Women’s Health. This is orthopedics.” C Vandyken
  • Barriers to exercise: Pain and anxiety re worsening, low self-efficacy, envisaging barriers (time, pessimism, dislike)
  • Exercise: means of imparting progressive physical and psychological stresses; dose depends on bio, psych, and soc factors.
  • A bit of geek heaven. Microglia studies done on male mice are positive, on female mice they are negative.
  • With strange mice, stress goes up, blocking empathy. Blocking stress, empathy appears.
  • Exercise doesn’t have to suck for it to be effective – set activity goals based on patient preference.
  • Focus on finding gaps in others’ knowledge, not ignorance of knowledge. Identify opportunities to build confidence and trust.
  • You don’t want to have an orgasm while you pee, and you don’t want pee while having an orgasm. Pelvic floor pain often undermines.
  • Making sense of pain is really important for patients. Exposure with control. Lifestyle change. It’s a journey.
  • Patient “It doesn’t feel normal it just feels unnatural – slouchy to sit like this. But it feels better.”
  • Refer out for major depressive disorder, anxiety disorder, major social stressors, work with other HCPs, don’t abandon.
  • First muscles to contract in a threatening situation are muscles of the pelvic floor – makes sense for survival https://t.co/xB6n1Ohv9M
  • Play the “long game”: Establish therapeutic alliance, find gaps in knowledge, Identify chances to build trust, lead gently.
  • Never attempt to directly correct patients’ or ‘clients’ beliefs. Changing minds takes time. https://t.co/QA2p4F7EBP
  • So you see clusters. Times of rapid change – emergence at adolescence, physical, emotional, physiological change.
  • Prof Pete O’Sullivan PT uses iPhone photos of the patient to show that relaxed posture (no pain) is still “good posture” to the patient.
  • Do: Goal setting (get your interview skills up!), outcome measures (patient specific functional scale!), preferences, expectations.
  • Can we think of cognitive bias as sensitized beliefs? Treat communication like we treat movement. Graded communication?
  • The goal, to create frequent high quality communication and strong relationships. Open avenues of opportunity to be accurate.
  • Negative beliefs about back pain, fear of movement, and decreased self efficacy are predictive of disability.
  • Reviewing biomechanics of lifting – no clear benefit to different lifting methods in terms of load/shear. Kingma 2010
  • Spending time helping a person work out how they will plan for exercise is a worthwhile pursuit.  Address barriers.
  • Henderson Reframing exercise. Imparts progress in physical and psych Stress. Dosing is dependent upon a variety of factors.
  • C. Vandyken – Data indicates that a large percent of women with LBP had pelvic floor dysfunction. (63% of n=1636, 78% of n=200)

From the Pain Summit — Doubles from The One Hundred

This post is a follow-up to yesterday’s “Things” from the Pain Summit (Literally); another means of extracting information from the 1811 tweets from the San Diego Pain Summit 2017.

This time I took the 100 top words from the Pain Summit tweets, excluding the first word, which was pain. Other than as a consistency check, the presence of the word pain contains little information at a conference whose theme is pain. The tweets listed below are those extracted from the entire body of tweets based on containing at least two words from the top-100 vocabulary, a simple means of auto-summarizing a body of text. By the 100th word in the vocabulary, a given word is showing up in only 17 tweets, less than 1% of the total body of 1811 tweets.
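A sketch of this second extraction, again assuming `tweets` holds cleaned tweet strings (tokenization and stop-word handling are simplified here compared with what I actually ran):

```python
from collections import Counter

def top_vocab(tweets, n=100, skip=('pain',)):
    """The n most frequent words, skipping the conference theme word."""
    counts = Counter(w for t in tweets for w in t.lower().split())
    ranked = [w for w, _ in counts.most_common() if w not in skip]
    return set(ranked[:n])

def two_word_summary(tweets, vocab, min_hits=2):
    """Keep tweets containing at least min_hits distinct top-vocabulary words."""
    return [t for t in tweets if len(vocab & set(t.lower().split())) >= min_hits]

# selected = two_word_summary(tweets, top_vocab(tweets))
```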

It’s not a terribly profound method, but provides another digestible-size peek through the keyhole at the information tweeted from the 2017 summit.

 

  • Enhance placebo, avoid nocebo: How contextual factors affect PT outcomes https://t.co/U4aQoZMJvl Maybe of interest… https://t.co/KdpukTPcYW
  • Headache causes? tired, stress, etc. Risk factors for headache are same as for back pain. Sensitivities fluctuate.
  • Emerging research on systemic inflammation and gut health worth following in its relationship to pain.
  • Negative beliefs, fear of movement, and decreased self-efficacy are predictive of long term disability.
  •  We should be teaching healthy ways to respond to pain and injury in school health classes.
  • Psychosocial factors can affect specific hormones, such as HPD, that can affect severity of MS pain https://t.co/1dvxtPUNWj
  •  Psychosocial factors can affect specific hormones, such as HPD, that can affect severity of MS pain https://t.co/A246wazFa3
  • Distress, pain, and behavioral responses. Sleep deprivation and sleep cycle disruption are important also.
  • A lot of the timing of TrA [Transversus Abdominis] contraction studies have never been replicated.
  • Teach kids to be strong and resilient – walk or ride to school with backpack – load those bones while young!
  • People with chronic LBP [low back pain] tend to use double the amount of muscle activation to lift – increases load/work/force.
  • Many localized pain conditions do worse with continued direct treatment – less poking, rolling, needling maybe better.
  • Correlation between pain, disability, depression anxiety and increased trunk muscle contraction – protective, makes sense but unhelpful.
  • “You’re not going to end up like that (disability) because you’re not on that path, you don’t need to work so damned hard to avoid it”.
  • Total combined treatment effect: non-specific FX [function] of the clinical encounter and the specific FX of tx [treatment], e.g. of advice.
  • Disability burden of LBP rising in face of spending increasing amounts of $ on diagnostic imaging and procedures.
  • “Consider looking at exercise as a means of imparting progressive physical and psychological stress, not strengthening”.
  • Farmer – Can prime later learning and enhance encoding after (caffeine, amphetamine, scent of opposite sex).
  • Moyer: 2004 meta-analysis refutes claim that massage reduces stress hormones.
  • Diagnosis and classification of pelvic girdle pain disorders via case studies. https://t.co/JWhgUJ09wo and https://t.co/0zly6ZH52l
  • Mogil – We’ve settled on two species (mice and rats) and limited strains of those. Lacks genetic diversity.
  • How could we reasonably extrapolate to humans the results of a study on one strain of rat? #inconvenientquestion
  • 70% of chronic pain is reported by females – Irony: 79% of rats in studies are male only. #mismatch
  • 70% of chronic pain suffers are documented/reported to be female. To @jptlowy’s ?, do men suffer less, or more quietly? Impact on research?
  • C.Vandyken’s study: 182 invited to participate, n=100 post-exclusion criteria. [Major exclusion criteria were catastrophizing and refusal of pelvic exam]

“Things” from the Pain Summit (Literally)

A couple of weeks ago, I spent two days at the San Diego Pain Summit, to listen, tweet, and archive tweets from the conference and adjoining workshops. The purpose of the 2017 summit, as was the case for the prior two years, was to bridge pain research and manual therapy. I could say that it was a great conference, with much information conveyed in a lucid manner and with multiple opportunities to network and socialize. All of this is true, but not the focus of this post. Look instead to the 1811 tweets and the tweet vocabulary for the summit linked to my conference tweets index.

Bronnie Lennox Thompson noticed something in the tweet vocabulary list that had previously caught my eye, bringing it back to mind; a hat-tip to her. The root word ‘thing’, including thing and things, was flying high in the list at number eight. Now, thing and things are nouns of total vagueness, relying entirely on their referents to give them meaning. That also, however, means that ‘thing’ can be a key to multiple concepts. A few minutes of cut and paste and Python scripting and I had a file containing just the text in context of all the summit tweets containing the words thing or things (Python’s the king, with which to catch the context of a thing). These tweets provide an interesting keyhole through which to view the summit’s concepts. Before looking at those, however, I’ll take a foray into background.

If there is a theory common to those who discuss “pain science” (i.e., scientific research on the neurological and psycho-social basis for the experience of (often chronic) pain), it is that the final synthesis of the experience of pain occurs in the brain. It appears to be multi-faceted, including the system’s current state (which depends on history), with inputs from psycho-social-emotional beliefs, the immune system, and current multi-sensory input.

The neurological input often associated with pain serves, at a reflexive level, to remove tissue from a situation producing damage. An experience of pain then serves as a notification that damage has occurred, and as a disincentive to fully use an area that needs time to heal. However, there is a group survival advantage, under extreme need, in not feeling pain despite injury in order to still protect the group (or even oneself). This in itself conveys an evolutionary incentive for pain to be a mediated experience rather than a simple direct one.

There are adequate observations in the variety of the experience of pain, and of what modulates it, to make synthesis in the brain an inescapable conclusion, meaning that no simpler system of processing can explain the full scope of observations. This scope includes: pain directly related to a tissue injury, pain continuing after an injury has healed, an injury without pain, pain without an injury, and pain in a part amputated or never grown. Only a state-dependent synthesis in the brain can cover this range. There are, of course, details about this process that are unknown, just as there are details of how we integrate a body-sense that are unknown. Specific to pain, Melzack (2001) and Melzack and Katz (2013) are commonly cited. Ramachandran and Blakeslee (1999) and Blakeslee and Blakeslee (2008) give readable and more general research connecting body perception with neurology.

From my background in computer science, I also conceptualize and simplify the experience of pain as a finite state machine, a system with multiple states in which the response to an input depends on the state. For pain, the problem then becomes figuring out an input that will prompt a transition from a painful state to one more benign. Nadim et al (2008) delve deeper into state-dependent output from networks of neurons.
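To make the analogy concrete, here is a toy state machine; the states, inputs, and outputs are invented purely for illustration and do not model any real neurology:

```python
# Toy finite state machine: the same input produces different outputs
# depending on the current state (states and inputs are made up).
TRANSITIONS = {
    ('sensitized', 'graded activity'): ('recovering', 'less pain'),
    ('sensitized', 'threat message'):  ('sensitized', 'more pain'),
    ('recovering', 'graded activity'): ('recovering', 'little pain'),
    ('recovering', 'threat message'):  ('sensitized', 'more pain'),
}

state = 'sensitized'
for inp in ['threat message', 'graded activity', 'graded activity']:
    state, output = TRANSITIONS[(state, inp)]
    print(f"{inp} -> state: {state}, experience: {output}")
```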

Back to the tweets containing the words thing or things. In an informal qualitative analysis I separated them into six categories:

  • Empowering patients and getting them back to the things they love
  • Avoiding saying things that produce fear of activity
  • Extending things we measure beyond posture and structure
  • Things about the importance of movement
  • Things showing psycho-social effects in animal models
  • Things about provider flexibility in attitude and learning

The tweets within each category are listed below as an appendix. Please remember that this is not an overall summary of the pain summit, just a peek through a particular keyhole. That wraps ‘things‘ up. Thanks for reading.

 

References

Blakeslee, S. & Blakeslee, M., 2008. The Body Has a Mind of Its Own, Penguin Random House. Available at: http://www.penguinrandomhouse.com/books/14618/the-body-has-a-mind-of-its-own-by-sandra-blakeslee-and-matthew-blakeslee/9780812975277 [Accessed 23 Feb 2017], ISBN 9780812975277.

Melzack, R., 2001. Pain and the neuromatrix in the brain. Journal of Dental Education, 65(12), pp.1378–1382. Available at: http://www.jdentaled.org/content/65/12/1378.long [Accessed 24 Feb 2017]

Melzack, R. & Katz, J., 2013. Pain. Wiley Interdisciplinary Reviews: Cognitive Science, 4(1), pp.1–15. doi: 10.1002/wcs.1201

Nadim, F., Brezina, V., Destexhe, A. & Linster, C., 2008. State Dependence of Network Output: Modeling and Experiments. Journal of Neuroscience, 28(46), pp.11806–11813. doi: 10.1523/jneurosci.3796-08.2008

Ramachandran, V.S. & Blakeslee, S., 1999. Phantoms in the Brain, William Morrow Paperbacks. Available at: https://www.harpercollins.com/9780688172176/phantoms-in-the-brain [Accessed 23 Feb 2017].

 

Appendix: Categorized Tweets containing the words Thing or Things

Empowering patients and getting them back to the things they love

  • Re: low mood, get people back to the things they love and that will lift mood.
  • Get people back to the things they love – helps with identity.
  • We spend too much time treating symptoms and not returning people to things they love.
  • Focuses on movements and positions that are related to things she cares about most from the interview.
  • Cognitive approach – focusing on unhelpful beliefs and behaviors. Take people back to the things they love.
  • People get sad when they don’t do the things they love.
  • What is it that you love to do? Has pain been stopping you doing them? These are the things we need to focus on.
  • Stop doing dumb stuff: when I stopped doing dumb stuff, I started filling time with things important to the patient.
  • Your goal setting should be things that the patient really cares about! You get this from listening to the patient.
  • ‘Goal setting – Things that people really care about. In the US, can be tricky when pt goals aren’t covered by insurance.
  • Empowering people to do things themselves is one of the best things we can do
  • Validation is one of the most important things that you can give a patient.
  • Validation is one of the most important things to give to patients, their pain is real. Relay their story back to them”.
  • The things we study in clinical trials for pain are not the things people worry about. Can we make science human centered?
  • What’s the #1 thing for prediction of recovery? Expectation of getting better.
  • Engagement is the crucial thing. Doesn’t matter “what”.

Avoiding saying things that produce fear of activity

  • Fear has the habit of feeding the very thing we’re frightened of.
  • Disconnection between body parts, thinking things are “out of place”, is linked to increased stress and pain.
  • Stop saying “Wear and Tear”. This implies that things will get worse with increased activity.  It increases fear and anxiety.
  • Too many people come to me and have been told that the one thing that made them feel better is ‘bad’ for them and will make them worse.
  • The biggest obstacle we have is getting people to start doing things they are afraid of – but it’s critical.
  • Most back pain is managed by avoiding things – that pattern can start young, and persist (also doesn’t fix the pain!).
  • Key thing is therapist’s confidence. YOU have to really believe movement won’t damage the patient

Extending things we measure beyond posture and structure

  • Is back pain associated with slump sitting? Whole bunch of things were associated but mostly being male.
  • We use the back as a model but all these things go for pain experienced anywhere.
  • The unfortunate thing about directional preference (McKenzie) is that it was linked to anatomy and pathology. We need to shift that.
  • Change label system around back pain “the cool thing is, it doesn’t look like your scan is your problem” vs “Non-specific back pain”.
  • If you measure the wrong thing well, you still don’t know anything.
  • Measures still look at physical things. But in patients, we have to go past that.
  • Things related to the patient not regularly discussed in the literature: context, perception, emotion, self-efficacy, locus of control, education, reconceptualization.
  • The physical measures we target don’t have to get better for people to have less pain  (measuring the wrong thing?).
  • There is no such thing as perfect or ideal posture. Timing and variability of your posture matters more.
  • We assume at the bio level things should translate pretty well and not at psychosocial level, but this is backward. Pain researchers may be measuring the wrong thing.

Things about the importance of movement

  • Maladaptive or unhelpful movement patterns can often be in the direction of the very thing that triggers pain.
  • Nobody gets excited about ‘prone on elbows.’ They talk about interactions, new things they can do, how they can move.
  • The most difficult thing in being in pain is the commitment to activity, not activity itself”
  • Patient interview — Lots of reps of retrained movements in between discussing things – a lot of volume of new movements w/new pattern
  • Be strong – but only when you need it”. This is often the missing piece in the core stability thing.
  • Only two things that are really rested – major trauma or broken bones”. Everything else should be worked.
  • I’m biased about movement, which is a good thing. But exercise is not the answer.
  • The #1 thing preventing individuals from complying with regular exercise was inability to include it as part of daily routine.
  • Exercise really is pretty good. For general health it is a good thing. We need to educate patients about this.
  • We don’t have concrete ideas about how exercise works for pain – tricky thing to measure.

Things showing psycho-social effects in animal models

  • Rat grimace scale findings are skewed in presence of scent of male humans – simple things making reproduction of research difficult.
  • Lab mice have a social life! They do things even in a cage that can be measured/manipulated. Real paucity of research.

Things about provider flexibility in attitude and learning

  • Hearing all the awesome things going on at the 2017 Summit?
  • [Prof says controversial but accurate thing] “Oh you’re going to tweet that aren’t you?” Me: oh yeah
  • Two biggest things: patient education and behavioral experimentation.
  • These are basic things that should be taught in PT schools…how to listen, how to talk to patients.
  • The cool thing is that we get to learn, stop, move out of old beliefs as the evidence says to STOP.
  • ‘Research creep’ – what happens when you’re looking for literature on a topic and you get distracted by the other things to learn!
  • If we can change one thing, it might change everything. Thinking about the Why behind the What.
  • No singular intervention is going to provide the answer for complex things.
  • The best evidenced base thing is redundant if it never gets done.
  • “Whilst our Hearts are violently set upon any thing, there is no convincing us that we shall ever be of another Mind.” – Mary Astell
  • I’m not good at many things, but I’m great at procrastination.
  • Helpful to learn things that aren’t good – helps you compare to better things
  • Can be helpful to learn “not so good things” as student to appreciate those things that ARE good.
  • Folks who are curious are going to be fine; worry about the ones who are certain about things.
  • If you keep explaining things the same way and need to keep repeating yourself, you are doing it wrong [communicating].
  • You can’t just push people to see things your way and to value things as you do.

Science and Energy

We live in a strange age; an age in which science and technology are providing us new knowledge and capabilities in many areas of endeavor but also an age in which a significant number of people are rejecting scientific thinking in favor of belief-based narratives. While the scientific process stems from observations leading to conceptual models that yield predictions testable by new observations, belief systems lack such constraints.

We have groups rejecting the concept that seven billion humans could change the Earth’s climate at the same time that we see from orbit a globe covered by the lights produced by human activities. We have people rejecting the concept of evolution at the same time that cancer researchers are seeking means to keep cancer cells from evolving to become immune to treatments. It is a time in which the mass-conferring Higgs Boson has been found and a time in which signs of post Big Bang inflation have been spotted in the cosmic background radiation. It is also a time in which some, particularly in several health care professions, resist letting go of a belief that humans can willfully emit (or invoke) some form of “subtle healing energy” — energy undetectable by scientific methods, that is undefined as to its nature, and not specified as to how it interacts with matter and specifically with human tissue.

What I’ve observed in numerous discussions is a combination of tossing in “quantum mechanics” as an all-explaining meme while also falling back on the vitalistic beliefs of past centuries. A “life energy” is either proposed to be transferred from person to person or taken as something that “knows what is needed”, an implicit invocation of supposed energy entities. It is a belief in a world filled by thought-directed magic, but with far less thought given to implications and consistency than was evidenced by Larry Niven and J.K. Rowling in the creation of their fictional worlds of magic.

This article is my push-back against the pseudo-science of energy beliefs; my thoughts collected over a number of months of online discussions. On a spiritual level one is entitled to whatever beliefs one wishes. It is not credible, however, to extend the practice of such beliefs into the domains of state-regulated health professions.

Everything is Not Energy

For me, an inevitable “eye-roller” is the statement “Everything is energy and that’s all there is to it. Match the frequency of the reality you want”, often misattributed to Einstein as documented at Quote Investigator. This statement is generally trotted forth as justification for the idea that one’s thoughts affect the nature of reality, with energy being abstract and ethereal. In contrast, science has defined specific forms of energy.

It is true that everything was energy immediately after the Big Bang. It is still true that matter can be transformed into energy; in a nuclear reaction or when matter and anti-matter annihilate each other, the classic example being the two 0.511 MeV gamma rays emitted when an electron and positron collide and destroy each other. It is also true that Einstein gave us an equation for the energy equivalent of matter and that de Broglie related such energy to the wavelength or frequency of a particle, a concept of wave-particle duality. Thus we have:

$$E = m c^2 = h \nu = h c / \lambda \textrm{,}$$

where \(E\) is energy, \(m\) is mass, \(c\) is the speed of light, \(h\) is Planck’s constant, and \(\nu\) and \(\lambda\) are the de Broglie frequency and wavelength, respectively. Note that \(\nu\) and \(\lambda\) depend on a particle’s energy but not on how you think about it. This is physics.

Matter can be transformed into energy, although it has proven difficult to do so in a controlled manner (i.e. in controlled fusion). Such conversion is what powers the sun (via the proton-proton chain reaction), although, even there, the mass loss between the reactant protons and the helium produced is a very small fraction of the mass involved.

A conversion from matter to energy is not symbolic. It destroys any material structure existing in the original matter. The results are also not healthy for those nearby. The annihilation of a mere 46 µg (millionths of a gram) of matter yields the equivalent energy of a ton of TNT. It is not conducive to a continued professional health practice to incinerate your clients or yourself; far better to leave matter as matter and stick with the energy released in the chemical reactions of metabolism.
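The arithmetic behind that 46 µg figure is a quick check (constants rounded; a ton of TNT is 4.184 × 10⁹ J by convention):

```python
c = 2.998e8          # speed of light, m/s
m = 46e-9            # 46 micrograms, in kilograms
E = m * c**2         # joules
ton_TNT = 4.184e9    # joules per ton of TNT, by convention
print(f"E = {E:.2e} J, about {E / ton_TNT:.2f} tons of TNT")
```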

Quantum Mechanics doesn’t Justify It

As a physicist, it’s sad to see the word quantum, stemming from the meaning discrete or countable (i.e. not continuous), take on the anything-goes connotation of something magic; of something that allows one to “roll” one’s own cosmos as one wishes. In The Quark and the Jaguar, Nobel Laureate physicist Murray Gell-Mann calls this move to pseudo-scientific word-salad “quantum flapdoodle”. Particle physicist Victor Stenger just stuck with rebutting it as quantum quackery.

It’s unfortunate that the use of the term “the observer” for one doing a measurement has led to over-interpretation of the role of the observer and to a lack of focus on the process of measurement. Unlike experiments in the macro world, the observer of an experiment on the quantum scale, for example involving electron beams passing through a double slit, has no way of making an observation without disrupting what would have occurred otherwise. Richard Feynman provided a thorough discussion in Volume III, Chapter 1, Section 6 of the Feynman Lectures.

If a stream of (light) photons capable of localizing electrons is used, the electrons are perturbed and they act as particles. If the energy of the photons is decreased to where they don’t perturb the experiment, the wavelength (\(\lambda = h c / E\)) is too long to determine the electron path and wavelike behavior by the electrons is observed. It is not the thoughts of the observer that are important, but how or if the observer proceeds to make observations.
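A quick numerical illustration of that wavelength versus energy trade-off; the two photon energies below are chosen purely for illustration:

```python
# lambda = h c / E: energetic (localizing) photons are short-wavelength,
# gentle photons are long-wavelength.
h, c, eV = 6.626e-34, 2.998e8, 1.602e-19
for E_eV in (1e4, 1.0):
    lam_nm = h * c / (E_eV * eV) * 1e9
    print(f"{E_eV:g} eV photon -> wavelength {lam_nm:.3g} nm")
```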

Feynman also addresses, under philosophical implications, that effectively having free will does not require quantum mechanics, since even the classical world becomes unpredictable without absolute initial knowledge.

Of course we must emphasize that classical physics is also indeterminate, in a sense. It is usually thought that this indeterminacy, that we cannot predict the future, is an important quantum-mechanical thing, and this is said to explain the behavior of the mind, feelings of free will, etc. But if the world were classical—if the laws of mechanics were classical—it is not quite obvious that the mind would not feel more or less the same. … It is therefore not fair to say that from the apparent freedom and indeterminacy of the human mind, we should have realized that classical “deterministic” physics could not ever hope to understand it, and to welcome quantum mechanics as a release from a “completely mechanistic” universe. For already in classical mechanics there was indeterminability from a practical point of view.

If one wants to call upon quantum mechanics as a paradigm, it helps to learn something about it from reliable sources. One could do worse than Leonard Susskind’s The Theoretical Minimum. An introduction to quantum mechanics is one of the core courses and has an accompanying book.

Human Electromagnetic Fields aren’t “Out There”

Scientific research has pretty much characterized the EM fields and emissions from the human body. The dominant one is infrared radiation characteristic of temperature. We and all objects around us emit and absorb such (blackbody) radiation.

Because we contain water and water absorbs and emits in the microwave portion of the spectrum (used by atmospheric weather sensors), we also produce low level thermal emissions of microwaves.

We do emit a few biophotons in the UV-visible range (1-1000 \(\mathrm{photons\; cm^{-2}\, s^{-1}}\)), based on the presence of reactive oxygen species in our bodies and the electron energy-state transitions they undergo. Such photons are technically a form of bioluminescence. There have been proposals that such photons could be used as a health diagnostic. Imaging technology has also recently progressed to where such imaging is feasible in reasonable times. There are indications that emission of such photons may be reduced by meditation. There is no reason or research that I’m aware of that would suggest an ability of conscious control of the oxidative stress leading to such emissions. As noted in the articles above, formation of free radicals is one of the side effects of energy metabolism involving oxygen. Thus, you can forget anything you might read that says these are likely to be influenced by conscious mental processes.

We generate a very weak magnetic field in the immediate vicinity (i.e. near field) of our bodies because of small electric currents in our heart, long muscles, and brain. These are weak enough (at least 10,000 times weaker than ambient magnetic noise) that they require a shielded room and a cryogenic sensor (SQUID) to detect. These are also near-field effects, not EM wave emissions, which require an oscillating charge to generate (think antennas and radio waves). Although it anthropomorphizes the universe a bit, this writeup on antenna basics discusses how accelerating charges are fundamental to EM far-field radiation. The human body is simply not equipped to be an oscillating charge generator.

David Cohen, a pioneer in human biomagnetic research, gave an overview of such research in his Jones Seminar presentation to the Thayer School of Engineering at Dartmouth on 7 Nov 2008. The lecture is also available on YouTube. The following graph showing relative magnetic field strengths is taken from that lecture.


Human Magnetic Field Strengths. Taken from David Cohen’s “Jones Seminar” at the Thayer School of Engineering at Dartmouth, 7 Nov 2008.

Magnetic fields result from electric currents. Looking at the figure, we see that the urban background magnetic noise is about 0.002 Gauss. We can estimate the current that would be required in the human body to generate a magnetic field that strong using the equation for a long wire.

$$B = 0.2 \frac{I}{r} \textrm{,}$$

where \(B\) is the magnetic field strength in Gauss, \(I\) is the current in Amperes, and \(r\) is the radial distance of the measurement in cm. If we measure at 10 cm, the field will be 0.002 G when the current is 0.1 A. That’s a current large enough to electrocute; simply not something that is going to be generated in the body. We aren’t electric eels. Compared to background noise, the magnetic fields generated by the human body are very small because the currents generating them are equally small. You do need the shielded room and cryogenic sensor to detect them.
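For anyone who wants to redo the estimate:

```python
# Long-wire field: B = 0.2 * I / r, with B in gauss, I in amperes, r in cm.
B_noise = 0.002   # gauss, approximate urban magnetic background
r = 10.0          # cm, measurement distance
I = B_noise * r / 0.2
print(f"Current needed: {I:.2f} A")   # about 0.1 A, far beyond body currents
```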

All in all, we humans are well-equipped to communicate directly by sound (compression waves in air) and touch but direct communication in the electromagnetic spectrum is simply not a feasible part of human-to-human interaction, apart from nonverbal communication in visible light (i.e. gesture and body language). What we can do by the ingenuity of our minds to create technology is another matter.

References for Further Reading

Cohen, D, EA Edelsack, and JE Zimmerman. 1970. “Magnetocardiograms Taken Inside a Shielded Room with a Superconducting Point-Contact Magnetometer.” Applied Physics Letters 16:278-280. DOI: http://dx.doi.org/10.1063/1.1653195.

Cohen, D, and E Givler. 1972. “Magnetomyography: magnetic fields around the human body produced by skeletal muscles.” Applied Physics Letters 21:114-116. DOI: http://dx.doi.org/10.1063/1.1654294.

Cohen, D. 1975. “Magnetic fields of the human body.” Phys. Today 28(8):34-43. DOI: http://dx.doi.org/10.1063/1.3069110.

Cohen, D, Y Palti, B N Cuffin, and S J Schmid. 1980. “Magnetic Fields Produced by Steady Currents in the Body.” Proceedings of the National Academy of Sciences 77 (3): 1447–51. URL: http://www.pnas.org/content/77/3/1447.abstract

The Implausibility of Something Unknown

It’s generally at this stage of the discussion that someone advances the thought that “science doesn’t know everything” or that “science hasn’t proven that some other energy doesn’t exist”.

It is true that science can’t explain all things, for example, “Why should one attempt to live life well?”. One can weigh the costs and benefits of choices, but that never reaches the underlying philosophical question of purpose. It is also true that there are many areas, for example in biology, in which our knowledge is limited. However, as knowledge is slowly won, there is little likelihood of finding something that contradicts the well-established knowledge of chemistry and physics. That lack of contradiction is an important element of plausibility.

In contrast to biological matters such as networks of cell signalling, the areas of physics covering fundamental particles, the intrinsic types of forces, carrier particles, and types of energy, while not devoid of unknowns, are far simpler and far more mature. Thus, the level of evidence required to contradict current theory is a lot higher.

It is also true that science cannot detect things that are purely spiritual beliefs or phantoms of our neurological system and lack external reality. This is not going to change in the future except in the ability of science to measure and analyze the processes occurring within our brains. In mentioning future abilities of science to analyze the processes occurring within our brains, I am thinking of this research on retrieving visual images from brain activity.

Science does not take on proving that something doesn’t exist. One would be forever chasing wild postulations and trying to nail down special contexts and situations. Instead, science asks if there is positive evidence that something does exist and, if so, under what conditions is it significant. Such proof has not been forthcoming; nor is it likely.

Nature is frugal in her use of available resources. If some other energy existed and interacted with living beings, we would expect to see multiple uses of it around us. That we don’t see such uses is a failure of internal consistency.

The lack of observations from other areas of life, and the lack of evolution to specialize in the use of any potential extra form of energy (nature is impartial; both negative and positive uses would exist), indicate that there is nothing there to detect (imagine spiders lulling or luring flies with such an energy). Sci-fi writers tend to be much better at hypothesizing such self-consistency than health care providers; Chris Moore’s “The Lust Lizard of Melancholy Cove” comes to mind (comedy, sci-fi).

Finally, simply hypothesizing a new form of energy and leaving it at that is like placing a board in mid-air and expecting it to stay. Consistency demands support through multiple levels of existence. All the known forces depend on virtual particles to carry them (hence carrier particle) across space. For the electromagnetic force, the carrier particles are virtual photons. Electromagnetic radiation also is carried by photons. There has to be a physical means to get from here to there. A proposed new form of energy, a form of energy that interacts strongly with matter (of which human tissue is an instance), would require such a carrier particle. Reorganizing particle physics to include a new energy and its accompanying particle presumes that something that should have been obvious was overlooked in all the particle experiments analyzed over the years. I wouldn’t hold my breath.

In the end, it is much more likely that what is experienced is consistent with Edzard Ernst’s comments on the effectiveness of Reiki.

The results of this study were impressive: reiki did, in fact, make the patients feel better. Specifically, it increased the comfort and wellbeing of the patients in comparison with those who received no such intervention. Intriguingly, however, the sham reiki had exactly the same effects, and there were no differences between real and sham reiki.

In short, whatever is experienced, it is far more likely to be a psycho-social effect than some undetected form of energy communication. The extraordinary evidence required for the latter supposition isn’t there. As humans, we are social animals. We do respond to attention, touch, and caring — no unknown energy required.

References for Further Reading

Lee, M. S., M. H. Pittler, and E. Ernst. 2008. “Effects of Reiki in Clinical Practice: A Systematic Review of Randomised Clinical Trials.” International Journal of Clinical Practice 62 (6): 947–54. DOI: http://dx.doi.org/10.1111/j.1742-1241.2008.01729.x.

Hartman, SE. 2009. “Why do ineffective treatments seem helpful? A brief review.” Chiropractic & Osteopathy 17:10. DOI: http://dx.doi.org/10.1186/1746-1340-17-10.

About the (not so far) Far Infrared

When it comes to the electromagnetic spectrum, I tend to view things from the perspective of an atmospheric or astrophysicist. Normally, to me, the term far infrared means that part of the infrared spectrum at wavelengths longer than 25 µm (one µm being one-millionth of a meter). This page at Cal Tech gives the astrophysical use of the divisions for near, mid, and far infrared, and another Cal Tech page goes further in answering What is Infrared?.

Journeying into the domain of medical technology, however, we encounter a different set of divisions recommended by the CIE (International Commission on Illumination).

CIE Infrared Wavelength Divisions

| Band | Name | Wavelength Range |
|------|--------|-------------------|
| IR-A | near | 0.7 µm – 1.4 µm |
| IR-B | middle | 1.4 µm – 3.0 µm |
| IR-C | far | 3.0 µm – 1000 µm |

It’s this latter set of divisions that prompts the title of this piece. To this physicist/writer, the CIE “far infrared” just doesn’t seem to start all that far out.

If one searches on “far infrared” what comes up is a combination of pages with some actual science mixed with product promotion pages written by people who seemingly haven’t a clue about infrared. I’m going to ignore phrases like “quantum energetics” and “superconducting” that are simply word-salad hype and focus just on issues about the far infrared (FIR).

Infrared and the Solar Spectrum

One misconception that gets written is that the FIR band has some connection with photosynthesis. It doesn’t. Photosynthesis uses both red and blue visible light (shorter wavelengths than IR) as shown in this hyperphysics explainer.

Another, to some extent related, misconception is that the FIR band has a significant amount of solar energy. While about 53% of solar energy is in the infrared, most of it is in the IR-A and IR-B bands, with only about 2% in the IR-C band. Here’s a quick look at the solar emission spectrum. Note the red lines showing the IR band boundaries and, in particular, the tiny one on the right at 3µm.


Solar Blackbody Spectrum. The dashed black line shows the wavelength of maximum emission. The dash-dotted red lines show the CIE IR band limits.

The actual percentages of the solar emission are in the following table. These were obtained by doing a trapezoidal integration over the entire spectrum and, separately, over each band at 0.001 µm (1 nm) resolution (for those who want to know such details). It’s pretty obvious that the FIR band (IR-C) contains only a very minor part of the solar spectrum. Thinking otherwise is a rope you can’t push.

Percentage of Solar Emission in IR Bands

| Band | Wavelength limits (µm) | Percent |
|----------|------------------------|---------|
| IR-A | 0.7 – 1.4 | 37.1 |
| IR-B | 1.4 – 3.0 | 11.9 |
| IR-C | 3.0 – 20.0 | 2.1 |
| Total IR | 0.7 – 20.0 | 51.1 |
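Here is a sketch of the kind of band integration involved, approximating the solar spectrum by a 5778 K blackbody (close to what the figure above shows) rather than a measured spectrum, and normalizing over 0.15–20 µm, which captures essentially all of the emission:

```python
import numpy as np

h, c, k = 6.626e-34, 2.998e8, 1.381e-23   # SI constants

def planck(lam_m, T):
    """Blackbody spectral radiance per unit wavelength."""
    return 2 * h * c**2 / lam_m**5 / np.expm1(h * c / (lam_m * k * T))

lam = np.arange(0.15, 20.0, 0.001)        # wavelength grid, micrometres
B = planck(lam * 1e-6, 5778.0)            # nominal solar effective temperature

def band_percent(lo, hi):
    sel = (lam >= lo) & (lam <= hi)
    return 100 * np.trapz(B[sel], lam[sel]) / np.trapz(B, lam)

for name, lo, hi in [('IR-A', 0.7, 1.4), ('IR-B', 1.4, 3.0), ('IR-C', 3.0, 20.0)]:
    print(f"{name}: {band_percent(lo, hi):.1f} %")
```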

Far Infrared and Skin Penetration

Now, I’m going to shift over to two types of “health products”, one being infrared “saunas” that have emitters between 300ºC and 400ºC. The other would be some kind of IR “reflective” mat that would emit back heat absorbed from the person lying on it, at 37ºC (body temperature) or somewhat below. Vatansever and Hamblin (2012) provide a general review of this territory. Crinnion (2011) delves into the theme of saunas, including infrared ones.

While skin has a window of transparency in the near-IR, at wavelengths longer than one µm the absorption by liquid water increases rapidly. Since human tissue is about 70% water, that also means that such tissue rapidly becomes opaque to IR as the wavelength increases into the FIR. This is consistent with what Crinnion states.

According to research published in the 1930s, near-infrared (IR-A) has the greatest tissue penetration of the three, while far-infrared (IR-C) has practically no penetration. IR-A (700 nm – 1400 nm) has a tissue penetration up to 5 mm. This wavelength penetrates to the subcutaneous layer and provides the best dissipation of heat from the skin surface. Mid-infrared (IR-B; 1400 nm – 3000 nm) has the next deepest tissue penetration (about 0.5 mm). IR-C (3,000 nm – 1 mm) has a tissue penetration of about 0.1 mm.

I checked this out both at the sauna temperatures and at body temperature. I used the liquid water optical properties from Hale and Querry (1973) along with Planck function (spectral blackbody) calculations, integrating the incident energy and the energy penetrating to several depths, and expressing the latter as a percentage of the former. In doing this I adjusted the Hale and Querry data both for tissue being only 70% water and for non-normal incidence. For the incidence, I used a “diffusivity factor” of 1.66 to scale the depth.
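In outline, the calculation looks like the sketch below. The function `alpha_water`, giving the liquid-water absorption coefficient in cm⁻¹ as a function of wavelength, is assumed to be built by interpolating the Hale and Querry (1973) tables; loading that data is not shown, and the details of my own script differed somewhat.

```python
import numpy as np

h, c, k = 6.626e-34, 2.998e8, 1.381e-23

def planck(lam_m, T):
    return 2 * h * c**2 / lam_m**5 / np.expm1(h * c / (lam_m * k * T))

def penetration_percent(T, depth_cm, alpha_water,
                        water_fraction=0.7, diffusivity=1.66):
    """Percent of incident blackbody energy reaching depth_cm in tissue."""
    lam_um = np.arange(0.3, 50.0, 0.001)              # wavelength grid, micrometres
    B = planck(lam_um * 1e-6, T)                      # incident spectral shape
    alpha = water_fraction * alpha_water(lam_um)      # tissue absorption, cm^-1
    trans = np.exp(-diffusivity * alpha * depth_cm)   # Beer-Lambert, slant path
    return 100 * np.trapz(B * trans, lam_um) / np.trapz(B, lam_um)

# e.g. penetration_percent(273.15 + 37, 0.01, alpha_water)
```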

Skin Penetration Depths for Emission at Several Temperatures

| Depth (cm) | 37ºC | 300ºC | 350ºC | 400ºC |
|------------|----------|----------|----------|----------|
| 0.001 | 23.952 % | 43.243 % | 45.305 % | 47.059 % |
| 0.010 | 0.2361 % | 2.7760 % | 3.6335 % | 4.6449 % |
| 0.100 | 0.0000 % | 0.0733 % | 0.1475 % | 0.2663 % |
| 1.000 | 0.0000 % | 0.0001 % | 0.0005 % | 0.0017 % |

Why we get this very limited penetration of the radiation emitted at these temperatures becomes more apparent when we compare the blackbody spectra with the absorption coefficients. As a rough rule, the penetration depth in cm is one over the absorption coefficient in cm⁻¹. First for body temperature:


Infrared Emission at Body Temperature (top) along with Liquid Water Absorption Coefficients (bottom).

And next for temperatures in the range of those for IR saunas:


Blackbody Emission for Infrared Saunas along with Liquid Water Absorption Coefficients.

The IR at these emission temperatures will be absorbed at the surface. The short summary is that there’s no energy where there’s transparency and no transparency where there’s energy. The heat may reach deeper layers via circulation and conduction but it won’t penetrate directly. That’s a dog that won’t hunt.

I thank Alice Sanvito and Katie Stade for a Facebook discussion that stimulated this post.

Addendum (04/22): If you want penetrating heat, you need to be in the IR-A band. There’s a technology that does that — it’s called an incandescent heat lamp, such as these. Take a look at the charts on penetration depth and emission spectra.

Holding the Lines

You can observe a lot by just watching. — Yogi Berra

As a physicist, I’m a strong believer in energy balances. If, over a prolonged period, you have more energy coming into a system than is going out from the system, then it’s a pretty sure bet that the system is storing energy. For a complex system, like the Earth, how that storage is implemented may vary, but that such storage implies change to the system seems a no brainer. Statistical detection of such change in a noisy system with a lot of internal variability is another matter.

Thinking about the Earth’s energy balance, about 240 W/m2 is absorbed by the Earth on a yearly, global average at wavelengths shorter than 4 μm. Balance would imply that the same energy is emitted back to space and, given the Earth’s temperature, at infrared wavelengths longer than 4 μm. Absorption and emission occur in separable regions of the electromagnetic spectrum. Estimates from radiative transfer calculations are that doubling the concentration of carbon dioxide would reduce what’s emitted back to space by about 4 W/m2, which is referred to as the climate forcing from doubling CO2.

If the Earth is taking on more energy than it is emitting as infrared radiation, balance can be restored by an increase in the effective radiating temperature of the Earth. In terms of a simple model, the radiation emitted by the Earth depends on the Stefan-Boltzmann equation, \(E = \sigma T^4_e\), the subscript ‘e’ on the temperature indicating that it is an effective radiating temperature. We could start from a pre-industrial assumption of having an energy balance:
$$\sigma T^4_e = \frac{1 - \alpha}{4} S_0,$$
where \(S_0\) is the energy flux from the sun, \(\alpha\) is the albedo of the Earth (fraction of sunlight reflected), and the factor of 1/4 accounts for both geometry and daylight fraction. If the outgoing infrared is decreased by a fraction \(\beta = 4/240\) from a doubling of the CO2 volume mixing ratio (molecules of CO2 relative to molecules of air), then balance is restored when
$$(1 - \beta) \sigma (T_e + \Delta T)^4 = \frac{1 - \alpha}{4} S_0.$$
Keeping only the first-degree term in \(\Delta T\), that simplifies to
$$\Delta T = \frac{\beta}{1 - \beta} \frac{T}{4}.$$
Assuming the temperature change scales with the temperature along the vertical profile, and using an average surface temperature of 288 K, gives an estimated surface temperature change of 1.2 K if everything else stays constant, spot on with more detailed estimates.

However, other things don’t stay the same. In particular, according to the Clausius-Clapeyron relation, the amount of water vapor that air can hold increases with increasing temperature and water vapor itself is a strong greenhouse (i.e. infrared absorbing/emitting) gas. So, we end up with
$$\Delta T = \frac{1}{1-f} \frac{\beta}{1 - \beta} \frac{T}{4},$$
where, as Andy Dessler points out in one of his recent short videos, \(f\) is estimated to be about 0.6, resulting in \(\Delta T \approx 3\) K. Whatever the observations of what the planet does over time, that result would be my Bayesian a priori. The basic physics are, for me, compelling.
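The back-of-the-envelope numbers above, in code form:

```python
beta = 4.0 / 240.0     # fractional drop in outgoing IR from doubling CO2
T_surface = 288.0      # K, average surface temperature
dT_bare = beta / (1 - beta) * T_surface / 4          # no feedbacks: ~1.2 K
f = 0.6                # approximate combined feedback factor
dT_feedback = dT_bare / (1 - f)                      # with feedbacks: ~3 K
print(f"{dT_bare:.2f} K without feedbacks, {dT_feedback:.1f} K with")
```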

It’s important to understand that while water vapor is acting as a positive feedback to the greenhouse warming, the outgoing infrared still increases with temperature. The water vapor feedback decreases the slope of the relationship between temperature and outgoing infrared, but it does not make it negative or zero. Thus a new equilibrium temperature is still possible and there’s no runaway greenhouse warming. The sensitivity is also an estimate of the change in global surface temperature at which the planet will again be in radiative balance and stop taking on heat. It doesn’t say anything about the specifics of the path to equilibrium, heat storage, or whether the extra heat results immediately in warming.

A few further points. A radiative forcing, such as the 4 W/m2 from doubling CO2, is generally estimated allowing the stratosphere to come into radiative equilibrium. As noted by Brasseur and Solomon in Aeronomy of the Middle Atmosphere, the radiative relaxation time in the stratosphere varies from 15-20 days near the tropopause to several days higher up. The effects of the daily sunlight cycle are thus minimal. We can also assume that carbon dioxide and all other long-lived gases are evenly mixed, apart from relatively local variations near sources and sinks. This essentially is what defines the homosphere, which extends up to about 100 km. Finally, we can assume local thermodynamic equilibrium (LTE), implying that greenhouse gases are absorbing and radiating based on a well-defined local atmospheric temperature.

At a fundamental level, the infrared absorption by CO2 stems from it being a linear (no bends), triatomic molecule. That, the atomic masses of carbon and oxygen, and the strength of the bonds determine everything else. I’m not going to go quite that basic, grabbing the spectroscopic lines for CO2 from the HITRAN database instead. As noted on the HITRAN home page:

HITRAN is a compilation of spectroscopic parameters that a variety of computer codes use to predict and simulate the transmission and emission of light in the atmosphere. The database is a long-running project started by the Air Force Cambridge Research Laboratories (AFCRL) in the late 1960’s in response to the need for detailed knowledge of the infrared properties of the atmosphere.

HITRAN is, to my knowledge, one of two spectroscopic databases for atmospheric use. The other is GEISA from the Laboratoire de Météorologie Dynamique (LMD) in France.

When I plot the line strengths for CO2 out to 2500 wavenumbers (4 µm), what I see is this:


Carbon dioxide line intensities between 0 and 2500 wavenumbers (HITRAN2012)

I can also calculate the Planck (blackbody) shape for 255 K, the effective radiation temperature of the Earth:


Planck shape (blackbody) for 255 K

and see the effect of scaling the line strengths by the blackbody shape, weighting each line strength by the amount of energy available at its wavenumber:


Carbon dioxide line intensities from 0 to 2500 wavenumbers (HITRAN2012). Unweighted intensities are in red. Intensities weighted by the Planck (blackbody) shape for 255 K are shown in blue.
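The Planck weighting itself is simple. As a sketch, the two wavenumbers below are illustrative values near the CO2 bending and asymmetric-stretch bands; in the real calculation `nu_lines` and `S_lines` come from a HITRAN line-list extract (the file parsing is not shown):

```python
import numpy as np

h, c, k = 6.626e-34, 2.998e10, 1.381e-23   # c in cm/s so wavenumbers stay in cm^-1

def planck_wavenumber(nu_cm, T):
    """Blackbody emission versus wavenumber (arbitrary normalization)."""
    return nu_cm**3 / np.expm1(h * c * nu_cm / (k * T))

T_e = 255.0                             # K, effective radiating temperature of Earth
nu_lines = np.array([667.0, 2349.0])    # cm^-1, illustrative band-center values
S_lines = np.array([1.0, 1.0])          # placeholder intensities
weight = planck_wavenumber(nu_lines, T_e)
S_weighted = S_lines * weight / weight.max()
print(S_weighted)   # the 2349 cm^-1 band is heavily down-weighted at 255 K
```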

One of the rules of thumb is that radiation can escape to space from the altitude at which the optical path from the top of the atmosphere is about one, including a diffusivity factor to account for the radiation traveling at different angles relative to the vertical. That diffusivity factor is often taken to be 1.66, corresponding to an average angle of about 53 degrees. Using that rule of thumb, I’ve estimated the corresponding altitudes for 300 ppm of CO2 (blue) and 600 ppm of CO2 (red). I’ve added a black line representing the tropical tropopause. Any red seen below that line represents emission moved to a colder temperature due to doubling CO2. Since a colder temperature radiates less energy, there’s the basis for the CO2 greenhouse forcing.


Approximate altitudes, as a function of wavenumber, at which the carbon dioxide optical path from space is one. Shown in blue for 300 ppm CO2 and red for 600 ppm. Linear pressure scaling and a diffusivity factor of 1.66 were included. Lines were integrated with a Lorentz shape, an increment of 0.01 cm-1, and a cutoff of 20 cm-1.

Andy Dessler also hits this part of the physics in his recent video, How the greenhouse effect works.

One final area I wanted to touch on in this post is the amount of effort the Air Force put into understanding the atmosphere and atmospheric radiative transfer. I’ve already added a quote above, from the HITRAN site, to this effect. An interview with Larry Rothman adds to that history, as does the sequence of spectroscopic reports dating back to 1973. Bob McClatchey et al. put out a series of atmospheric profiles that became standards for testing and comparing radiative transfer models. In 1985, the Air Force Geophysics Laboratory published the 4th edition of their Handbook of Geophysics and Space Environments. Chapter 18, in particular, dealt with atmospheric radiative transfer.

Many of the people involved with these efforts continued on with them: Larry Rothman with HITRAN; Gail Anderson with the LOWTRAN and MODTRAN models; Tony Clough at AER with the water vapor continuum and the RRTM models; and Eric Shettle with characterization and satellite retrievals of atmospheric aerosols. I can’t emphasize too much the amount of history and effort behind all of this, nor the solidity and pragmatism of those who helped bring us to where we are today. The forward radiative transfer models, spectroscopic data, and algorithms are also an essential part of retrieving physical measurements from both civilian and military satellite observations; observations which have been subjected to many comparisons and in situ verifications.

It’s Here!

Last July (2012), at the Society for Mathematical Biology (SMB) conference in Knoxville TN, I attended two talks on modeling the spread of Huanglongbing (HLB), commonly known as citrus greening disease. The two related talks were given by Jillian Stupiansky and Karly Jacobsen, both from the University of Florida, Gainesville. Stupiansky and Jacobsen note that HLB is “a vector-transmitted bacterial disease that is significantly impacting the citrus industry in Florida and poses a great risk to the remaining citrus-producing regions of the United States.”

An insect known as the Asian citrus psyllid, Diaphorina citri, carries the organism that causes citrus greening, Candidatus Liberibacter asiaticus (Las). An infected psyllid carries the bacteria in its saliva and infects a healthy tree when it feeds. Similarly, a healthy psyllid can get the bacteria from a diseased tree. Observations suggest that once a tree becomes infected, it may remain asymptomatic for six months to six years, contributing to the difficulty in controlling the disease. While an adult psyllid can usually only fly a mile, citrus greening has been able to spread throughout almost the entire state of Florida due to the transportation of infected citrus stock by discount stores. Recent findings also support the idea that the disease is transmitted transovarially and sexually amongst the psyllid population. This would help to explain why the disease has spread so rapidly.

Based on a control strategy of “roguing” (removal and burning of infected trees), Stupiansky and Jacobsen discussed their disease transmission model, aiming to estimate the reproduction number, \(R_0\), obtainable under conventional control strategies.

The tree population is divided into susceptible, infectious and asymptomatic, infectious and symptomatic, and removed (considered to be dead) compartments. Roguing of symptomatic and dead trees occurs with a positive probability of the replanted tree directly entering the infectious state. Prior work also included calculation of the basic reproduction number, R0, and determining a condition for the existence of an endemic equilibrium.
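To make that compartmental structure concrete, here is a minimal sketch of such a model. It is not Stupiansky and Jacobsen's actual system: the transmission term and every parameter value are illustrative assumptions, and the "removed" (dead) compartment is omitted for brevity.

```python
# Minimal compartmental sketch in the spirit of the talks: susceptible (S),
# asymptomatic infectious (Ia), and symptomatic infectious (Is) trees, with roguing
# of symptomatic trees and replanting. NOT the authors' model; all values assumed.
import numpy as np
from scipy.integrate import solve_ivp

N = 100.0        # trees in the grove
beta = 0.004     # per-tree transmission rate (assumed; psyllid dynamics folded in)
sigma = 0.5      # asymptomatic -> symptomatic progression rate, per year (assumed)
rho = 1.0        # roguing rate of symptomatic trees, per year (assumed)
p = 0.1          # probability a replanted tree directly enters the infectious state (assumed)

def hlb_rhs(t, y):
    S, Ia, Is = y
    infection = beta * S * (Ia + Is)              # mass-action stand-in for psyllid-mediated spread
    rogued = rho * Is                             # symptomatic trees removed and burned
    dS = -infection + (1.0 - p) * rogued          # most rogued trees are replanted healthy
    dIa = infection - sigma * Ia + p * rogued     # some replants arrive already infected
    dIs = sigma * Ia - rogued
    return [dS, dIa, dIs]

# Back-of-the-envelope reproduction number for this sketch: a newly infected tree
# transmits at rate ~beta*N while asymptomatic (mean 1/sigma yr) and while
# symptomatic until rogued (mean 1/rho yr).
print("sketch R0 ~", beta * N * (1.0 / sigma + 1.0 / rho))

sol = solve_ivp(hlb_rhs, (0.0, 20.0), [N - 1.0, 1.0, 0.0], max_step=0.1)
print("infected trees after 20 years:", round(sol.y[1, -1] + sol.y[2, -1], 1))
```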

With the talks by Stupiansky and Jacobsen as my prior background, I was extremely interested in Amy Harmon's recent article: A Race to Save the Orange by Altering Its DNA. Following the science-writing precept of "write a story, not just an essay about a topic", Harmon crafts a narrative around Ricke Kress, president of Southern Gardens Citrus, and his battle to save the orange. Harmon captures well the race to create a resistant plant before production collapses, the lack of existing resistant species, and thus the decision to turn to DNA modification. She delves into the search for a gene that will convey resistance while providing both scientific background and the angst of potential public nonacceptance. A story well told and well balanced.

While the story of citrus greening and the U.S. citrus industry is the largest example economically, the all-too-often human-assisted spread of a blight and the search for a resistant variant are not unique. The helplessness of watching and waiting until the word comes down that "It's here!" has been with us for over a century. A recent article in The Economist opens with the plight of the American Chestnut.

Once upon a time, according to folklore, a squirrel could travel through America’s chestnut forests from Maine to Florida without ever touching the ground. The chestnut population of North America was reckoned then to have been about 4 billion trees. No longer. Axes and chainsaws must take a share of the blame. But the principal culprit is Cryphonectria parasitica, the fungus that causes chestnut blight. In the late 19th century, some infected saplings from Asia brought C. parasitica to North America. By 1950 the chestnut was little more than a memory in most parts of the continent.

The article continues on describing recent efforts, including genetic manipulation, to create a chestnut that is sufficiently American yet also resistant to the fungus. The article briefly notes that the Chestnut projects, if successful, could provide a model for restoring the American Elm. Bruce Carley in Saving the American Elm describes the spread of Dutch Elm Disease.

Many of us remember how painful it was for our communities to witness the tragedy that recurred throughout the eastern states during the 1950’s, 1960’s, and 1970’s. Many remember watching helplessly as countless main streets, parks, historic sites, and neighborhoods that had been so handsomely graced with fine elms were transformed within a few years into barren, urban-looking landscapes devoid of trees, the result of a frighteningly efficient epidemic that had appeared suddenly. We can imagine the profound dismay of the citizens of Portland and New Haven as each “City of Elms” was transformed rapidly into a “City of Firewood,” necessitating almost phenomenal removal expenses. Some may recall marveling at the futility of the “cut and burn campaigns” which were initiated to halt the spread of an epidemic that was killing trees literally by the millions each year.

Turning to another threatened crop, in Transgenic Papaya in Hawaii and Beyond Dennis Gonsalves describes the spread of papaya ringspot virus (PRSV) in Hawaii and the development of the resistant rainbow papaya — a development that he helped bring about. Without the rainbow papaya, Hawaiian papaya production would have collapsed much as orange production is threatened now. Ironically, the herd immunity provided by the rainbow papaya (getting R0 less than one) is helping to sustain some production of nonresistant varieties. As rapid spreaders of blights across the world, we humans either come up with new ways of developing resistant strains or sit by and watch favorite species get decimated.

Lest we become too smug about our own genetic purity, it's revealing to discover that we ourselves are genetically modified organisms. Carl Zimmer has written some fascinating articles on the bits of virus-injected DNA hiding in, and at times put to use within, our own genome. See: The Lurker: How A Virus Hid In Our Genome For Six Million Years, Mammals Made by Viruses, We are Viral from the Beginning, and Hunting Fossil Viruses in Human DNA. Let those who have no inserted DNA cast the first stone.

IMTRC 2013: The Opening Keynote

It’s far from an easy task to reshape something that is not exactly a health care profession toward being one. The task becomes even more challenging when the tool of choice is voluntary education. This, however, appears to be the challenge taken on by the Massage Therapy Foundation (MTF). As the MTF expresses it, “It is our vision that the practice of massage therapy is evidence-informed and accessible to everyone.” April’s International Massage Therapy Research Conference (IMTRC) in Boston upheld that vision. This is my first segment of observations from there.

Massage therapy has long been a profession that couldn’t define itself in terms of core competencies; a landscape of trademarked empires; a haven for those escaping the complexities and dehumanization of modern medicine by going to the land of magical thinking. Magical thinking, however, is a poor basis for gaining credibility as a health profession. But how does a profession move away from that?

One answer that the MTF has been actively pursuing is promoting research literacy. This doesn't mean that all practitioners are going to participate in research, but that they should, at a minimum, be able to read a research paper and assess its conclusions. Jeanette Ezzo pushed that theme forward with her conference-opening keynote talk.

Mechanisms and Beyond: What is Needed to Prove the Effectiveness of Massage?

Jeanette Ezzo led off IMTRC's plenary sessions talking about translating massage research into practice. Ezzo noted that there are now between 750 and 1000 randomized controlled trials (RCTs) and systematic reviews for massage therapy. "Research helps practitioners give evidence-based answers to clients' questions, but the coverage isn't uniform", she said.

Hierarchy of Evidence

To help conference participants understand evidence-informed practice, Ezzo showed a four-level hierarchy of evidence, ranging in order of increasing strength: from case reports or case series, to non-randomized controlled trials, to randomized controlled trials (RCTs), up to systematic reviews. The top two levels test hypotheses and show efficacy while the bottom two levels help generate ideas. The various types of studies are looking at clinical outcomes, which are measures of how a patient feels, functions, or survives.

The efficacy of an intervention is a measure of the intervention's effect under ideal conditions. RCTs generate the base data on efficacy, while systematic reviews collect, summarize, and assess that data. Such a meta-analysis, however, is only as good as the studies put into it. In contrast to efficacy, effectiveness is based on effects under everyday (i.e., real-world) conditions.

The efficacy of an intervention is separate from understanding its mechanism of action. In this, massage therapy joins the ranks of other branches of health care that have learned what works but do not yet know why. The 'legal' way to establish a mechanism, according to Ezzo, is to show an outcome first and then look for a mechanism. If you don't know that the intervention works, it's premature to think about mechanisms.

Ezzo talked about the progression from a survey-based Consumer Reports analysis of treatments for low-back pain to a Cochrane study on massage effectiveness for non-specific low-back pain. Consumer Reports reported that 48% of its readers found massage therapy to be very helpful. CR concluded that “There is not enough research to be certain about the benefit of massage in treating lower-back pain. But it might be beneficial for patients with nonspecific subacute or chronic lower-back pain lasting four weeks or more.”

The Cochrane analysis, updated in 2008, found that “massage was more likely to work when combined with exercises (usually stretching) and education.” The authors concluded, “In summary, massage might be beneficial for patients with subacute (lasting four to 12 weeks) and chronic (lasting longer than 12 weeks) non-specific low-back pain, especially when combined with exercises and education.”

Last year (2012), The Ottawa Panel published “Evidence-Based Clinical Practice Guidelines on Therapeutic Massage for Low Back Pain”, finding that “massage interventions are effective to provide short term improvement of sub-acute and chronic LBP symptoms and decreasing disability at immediate post treatment and short term relief when massage therapy is combined with therapeutic exercise and education.”

These guidelines indicate that massage therapy is effective at relieving patients' pain and improving their functional status. Ezzo noted that 90% of fresh low-back pain will resolve within six weeks if nothing is done. That leaves a definite consumer choice: do you want to wait six weeks?

Ezzo noted that the dose of massage is important. Lessons from the best research on low back pain suggest that the ideal treatment is 30-60 minutes weekly for 6-10 treatments. There's a "Goldilocks effect": if you plot gain versus dosage, you initially see greater benefit with more treatment, but the gain eventually saturates. Beyond that point, there's little or no benefit from a further increase in dosage.
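That saturation is easy to picture with a simple saturating dose-response curve. The short sketch below uses an Emax-style form with made-up numbers, purely to show the shape Ezzo described, not her data.

```python
# Illustrative "Goldilocks" curve: benefit rises with the number of treatments, then
# levels off. The Emax-style form and the numbers are invented to show the shape only.
import numpy as np

treatments = np.arange(1, 16)
Emax, half_dose = 1.0, 4.0                              # assumed maximum benefit and half-saturation dose
benefit = Emax * treatments / (half_dose + treatments)  # saturating dose-response

previous = 0.0
for n, b in zip(treatments, benefit):
    print(f"{n:2d} treatments: benefit {b:.2f}, marginal gain {b - previous:.2f}")
    previous = b
```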

Having factorial studies allows statements to be made about the combination of massage with postural education plus exercise. There's a high level of evidence that the combined protocol is more effective than either protocol alone.

Percentage reporting no pain a month after treatment

Treatment Groups                 | Without Massage | With Massage
Without Postural Ed + Exercise   | 0% (sham laser) | 27%
With Postural Ed + Exercise      | 14%             | 63%
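Using only the percentages in the table (sample sizes weren't reported in the talk summary, so this is arithmetic rather than a statistical test), one can compare the combined result with what simply adding the separate effects would predict:

```python
# Simple arithmetic on the table above (percent reporting no pain): does the
# combination beat the sum of the separate effects? Sample sizes weren't given,
# so no statistical test is attempted.
control = 0.0          # sham laser: no massage, no postural ed + exercise
massage_only = 27.0
posture_only = 14.0
combined = 63.0

additive_prediction = (massage_only - control) + (posture_only - control) + control
print("predicted if effects simply added:", additive_prediction, "%")
print("observed for the combination:", combined, "%")
print("apparent synergy beyond additivity:", combined - additive_prediction, "%")
```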

Ezzo related a number of deficiencies of past studies. Too often, studies are done in which the selection of practitioners was not recorded or was not based on knowledge and experience. In general, trials done with experienced MTs have had better results than those with less experienced MTs. This may seem obvious, but it's important. The choice of which areas to treat for a specific condition can also be an issue: not including important tissue areas leads to flawed evidence. Evidence from bad studies sticks around for years, stressed Ezzo. We need to step up and bring our A game.

On the Cochrane ladder of evidence (efficacy, effectiveness, and cost-effectiveness), research has too often stopped on the first rung. Ezzo stressed that we need studies with adequate follow-up time (e.g., one year) and cost-effectiveness measured over time, not just at immediate treatment. She cited the Sherman et al. neck pain trial as a stellar example of using experienced therapists with a time series of observations. "Give it 4 weeks!! Massage shows significant neck pain relief in the first 4 weeks", said Ezzo.

Massage therapy may be expensive up front, but not in the long run if it reduces health care utilization costs. Data for LBP show that health care costs can be 40% lower when massage therapy is included in the treatment [p=0.15]. Massage can also help address the problem of "once we get them there, how do we keep them there?" (i.e., pain-free and functional).

Finally, Ezzo encouraged, “Keep doing what you are doing, until all the evidence is in.”

Warning! Discipline Boundary

In climate science, there’s been a curious phenomenon of venerable physicists from other disciplines making pronouncements about climate change; curious in part because they seem to do this without familiarizing themselves with the particulars of climate science. This obvious lack of knowledge provokes immediate repudiations from climate scientists. Notable examples are William Happer and Freeman Dyson.

Discipline Boundary - Some Humility Required

Physics encompasses a large enough set of disciplines that it is relatively easy to wander away from one's specialization and experience yet stay within the bounds of physics. Having spent most of my career in atmospheric and computational physics, I would not be facile in quantum mechanics or shock waves in solids without substantial study. Yet some things are so basic to a physicist's education that you expect them to be universally present. Among those are the concepts of blackbody radiation and conservation of energy. While others might start with an a priori assumption of no climate change until observed beyond a doubt, physicists should, contrarily, expect that changes have to occur if the earth absorbs more energy than it emits; a different Bayesian starting point. As the above examples show, such has not always occurred. That is the second curious part.

Last night, I was reading through the transcript of a 1987 interview of Edward Teller by William F. Buckley for Firing Line. I think Teller hit upon the personality trait that can lead otherwise respected scientists into such poor scientific behavior.

DR. TELLER: I believe the scientists are hard to reach. Scientists—and I include myself—have a terrible habit. They believe they are right. Now, you know, I know about quantum mechanics. I know about relativity, you don't. And they are hard concepts and I know I am right, and indeed I am right. And most people don't even understand what I am talking about. So when next time I find myself in a discussion about whatever (abortion, defense, any of the popular issues) and a scientist has an opinion, he has gotten into the habit of being right. So he can no longer be convinced. Because in his specialty he was right, he is very hard to convince.

MR. BUCKLEY: That’s right.

None of this should discourage physicists from crossing discipline boundaries, as long as it is done with a modicum of humility and the realization that learning is required. There’s a great example in how cosmologist Paul Davies found himself probing the physics of cancer.

As best he can remember, says Paul Davies, the telephone call that changed his professional life came some time in November 2007, as he was sitting in the small suite of offices that comprise his Beyond Center at Arizona State University (ASU) in Tempe.

Until then, the questions that animated Davies’ research and 19 popular-science books had grown out of his training in physics and cosmology: how did the Universe come to exist? Why are the laws of physics suited for life? What is time? And how did life begin? But this particular call was nothing to do with that. The caller — Anna Barker, then the deputy director of the US National Cancer Institute (NCI) in Bethesda, Maryland — explained that she needed his help in the ‘War on cancer’. Forty years into the government’s multibillion-dollar fight, said Barker, cancer survival rates had barely budged. The hope now was that physicists could bring some radical new ideas to the table, and she wanted Davies to give a keynote address at an NCI workshop explaining how.

Ummm, sure, said Davies, who until that minute had been only vaguely aware that the NCI existed. “But I don’t know anything about cancer.”

“That’s okay,” Barker replied. “We’re after fresh insights.”

Knowing that we don’t know is a powerful tool. It is akin to the classic Zen story that a cup must be empty if new tea is to be poured into it. It allows bringing the tools and perspectives learned in one discipline appropriately across the boundary without arrogance. Some humility is required.

Reflections from SMB 2012 – One

Introduction

The recent Society for Mathematical Biology (SMB) annual meeting, 25-28 July in Knoxville TN, was an interesting interdisciplinary journey. Attendees had backgrounds in biology, medicine, mathematics, physics, engineering, computer science, ecology, and public health. Coming from my own computational physics background, I'm doing a few reflective posts on what struck me during the conference.

The meeting was hosted by NIMBioS and the University of Tennessee, Knoxville (UTK). NIMBioS has provided a post-meeting overview that includes comments, pictures, and a Storify of meeting tweets. There's also a compendium of abstracts. I've also done my own timeline Storify, searching on both the #smb2012 meeting hashtag and along the timelines of the principal meeting "tweeters". The logistics of the conference were great. The UTK conference center was a nice venue, and the conference provided both breakfasts and lunches, facilitating interstitial discussions. The Friday evening BBQ and contra dance was also great fun.

Major conference themes included modeling of tumors and spread of disease, and evolution of resistance. The conference schedule often had seven parallel tracks, so my individual reflections are unavoidably incomplete. The talks were run on a strict time schedule, so it was possible to move from one track to another to catch specific papers. This series of write-ups is an expansion of my tweeting at the conference, so inclusion or omission of any particular paper has no significance other than my being there and able to capture key points at the time.

Claire Tomlin

Claire Tomlin opened the conference with her plenary talk, Insights gained from mathematical modeling of HER2 positive breast cancer. Among her stage setting comments was, “We want to use mathematical models to make predictions about aspects of the biology we don’t understand.” This adds to the context from her talk abstract.

In studying biological systems, often only incomplete abstracted hypotheses exist to explain observed complex patterning and functions. The challenge has become to show that enough of a network is understood to explain the behavior of the system. Mathematical modeling must simultaneously characterize the complex and nonintuitive behavior of a network, while revealing deficiencies in the model and suggesting new experimental directions.

You can learn structure and identify phenotypes by static observations. A stringent test of understanding, however, comes in creating a model that matches the dynamics of the real world, evolving in accord with observations. As I’m writing this, my Twitter stream informs me that it’s Louis Armstrong’s birthday. By synchronicity, the link given matches the idea of capturing correct system dynamics. Putting Tomlin’s concepts into Armstrong’s vernacular, It Don’t Mean a Thing, If It Ain’t Got That Swing.

To drop down into specifics, Claire Tomlin is looking at the HER2/HER3 (Human Epidermal growth factor Receptor) system and its recovery following intervention. Part of the key comes in elucidating the signaling network for HER2/HER3. The abstract from Amin et al. (2012) puts the HER2/HER3 signaling system in context.

HER2-amplified tumors are characterized by constitutive signaling via the HER2-HER3 co-receptor complex. While phosphorylation activity is driven entirely by the HER2 kinase, signal volume generated by the complex is under control of HER3 and a large capacity to increase its signaling output accounts for the resiliency of the HER2-HER3 tumor driver and accounts for the limited efficacies of anti-cancer drugs designed to target it. Here we describe deeper insights into the dynamic nature of HER3 signaling. Signaling output by HER3 is under several modes of regulation including transcriptional, post-transcriptional, translational, post-translational, and localizational control. These redundant mechanisms can each increase HER3 signaling output and are engaged in various degrees depending on how the HER3-PI3K-Akt-mTor signaling network is disturbed. The highly dynamic nature of HER3 expression and signaling, and the plurality of downstream elements and redundant mechanisms that function to ensure HER3 signaling throughput identify HER3 as a major signaling hub in HER2-amplified cancers and a highly resourceful guardian of tumorigenic signaling in these tumors.

Consistent with Tomlin’s engineering background, she’s considering control methodology, using different drugs at different times to “steer” the HER3 network to maximize treatment efficacy. Tomlin and Axelrod (2005) do a nice job of describing control theory applied to biology. Tomlin was not the only one mentioning control theory at the conference. It’s become an important enough topic in understanding biological systems to have prompted a book, Feedback Control in Systems Biology. In short, control theory deals with providing input to a system to “steer” it along a desired path or series of states. When a feedback loop is included, deviations from the desired path are detected and additional corrections are made. The trick is to understand response lag and to avoid over-correcting.

Sometimes before tackling a hard problem, it's wise to practice with a simpler one. As a step toward working with such signaling systems, Tomlin shifts to a simpler model for Drosophila wing hairs. The wing cells know how they are oriented and grow a single wing hair on the lateral side. Chemically disrupting one cell disrupts adjacent cells, indicating transfer of information from one cell to the next. She models (Ma, 2008) concentrations of four core signaling proteins known as Frizzled (Fz), Disheveled (Dsh), Prickle (Pk), and Van Gogh (Vang). As outlined in the supplemental material appendix of Ma (2008), a reaction-diffusion system of ten Ordinary Differential Equations (ODEs) is solved for a combination of cells and cell edges.

The ODEs were solved using CVODES. I note this largely because CVODES is now part of SUNDIALS (SUite of Nonlinear and DIfferential/ALgebraic equation Solvers), a project I was once part of while at LLNL.
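To give a feel for the shape of such a cell-based system, the sketch below solves a single-species reaction-diffusion toy on a ring of cells. Ma (2008) tracks ten equations per cell for Fz, Dsh, Pk, and Vang; here one bistable species and SciPy's BDF integrator stand in, the latter only as a rough analogue of a stiff solver like CVODES.

```python
# Toy cell-based reaction-diffusion system: one species on a ring of cells with
# nearest-neighbor exchange and bistable kinetics. Parameters are invented; this is
# only a structural illustration, not Ma's model.
import numpy as np
from scipy.integrate import solve_ivp

n_cells = 30
D = 0.5                                           # exchange rate between neighboring cells (assumed)

def rhs(t, u):
    diffusion = D * (np.roll(u, 1) + np.roll(u, -1) - 2.0 * u)  # discrete Laplacian on the ring
    reaction = u * (1.0 - u) * (u - 0.3)                        # bistable reaction term (assumed)
    return diffusion + reaction

rng = np.random.default_rng(0)
u0 = 0.3 + 0.05 * rng.standard_normal(n_cells)    # start near the unstable state, with noise
sol = solve_ivp(rhs, (0.0, 50.0), u0, method="BDF")
final = sol.y[:, -1]
print("cells settled high:", int((final > 0.5).sum()), "low:", int((final <= 0.5).sum()))
```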

Transforming Biology Education

One of the sessions I dropped in on was on transforming first-year biology education. This session, convened by Carrie Diaz Eaton (Unity College), drew its motivation from the BIO 2010 report, aimed at transforming biology education to develop 21st century biomedical research skills. In its own way, this session was a self-referential exercise in control theory: how to steer biology students through acquiring calculus without triggering latent math phobias.

Erin Bodine (Rhodes College) talked about adding matrix math and Matlab use to biology major courses. Her longer term goal is to launch a biomath major. In her biomath course, Bodine introduces the concepts of feedbacks, derivatives, discrete models, and continuity.

A problem Bodine faced was that her biomath course was not being counted as an elective by either the math or the biology department. This in-between existence, while unfortunate, is far from unique. There have long been roadblocks to efforts outside the traditional disciplinary "cell walls", with articles by Austin (2003), Rhoten and Parker (2004), and Paytan and Zoback (2007) being examples. This topic also spawned the NAS/NAE/IOM report Facilitating Interdisciplinary Research (2004).

As I listened to Bodine, I remembered Gil Strang’s (MIT) comments in the Recitation 1 video of his Computational Science & Engineering class.

This is the one and only review, you could say, of linear algebra. I just think linear algebra is very important. You may have got that idea. And my website even has a little essay called Too Much Calculus. Because I think it's crazy; all the U.S. universities do this, pretty much: you get semester after semester of differential calculus, integral calculus, ultimately differential equations. You run out of steam before the good stuff, before you run out of time. And anybody who computes, who's living in the real world, is using linear algebra. You're taking a differential equation, you're taking your model, making it discrete and computing with matrices. The world's digital now, not analog.
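Strang's "making it discrete and computing with matrices" fits in a dozen lines. The example below is my own illustration, not from the lecture: discretize -u'' = f with zero boundary values and solve the resulting tridiagonal system.

```python
# Strang's point in miniature: discretize -u''(x) = f(x) with u(0) = u(1) = 0 on a
# grid, and the differential equation becomes the matrix equation K u = h^2 f with
# the classic tridiagonal second-difference matrix.
import numpy as np

n = 100
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)

K = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # 2 on the diagonal, -1 off-diagonal
f = np.ones(n)                                           # constant load
u = np.linalg.solve(K, h**2 * f)                         # the "computing with matrices" step

exact = 0.5 * x * (1.0 - x)                              # analytic solution of -u'' = 1
print("max error vs. exact solution:", np.abs(u - exact).max())
```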

Strang also developed a Highlights of Calculus course for high school. Just to fill out the bill, Cornette and Ackerman’s Calculus for Life Sciences is also available online under a Creative Commons license.

Moving onward, Sarah Hews (Hampshire College) teaches Calculus in Context. She remarks that, unlike many, she is largely free to create a class as she wishes. Part of her approach in the first couple of weeks is to use research articles to introduce biomath concepts. In particular, she uses Mumby et al.'s 2007 letter to Nature, Thresholds and the resilience of Caribbean coral reefs. Hews spoon-feeds this first article to students, using worksheets to guide their reading and to give them a sense of an ODE.

Timothy Comar (Benedictine University) talked about the transition from biocalculus to undergraduate research — i.e. getting our hands dirty. Comar mentioned getting students to understand stability and bifurcation. He discussed predator/prey modeling with impulses (e.g., spraying pesticide), and he uses a network model (nodes, weighted edges) to study the human spreading of an invasive plant, for example along railroad lines.
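A minimal version of "predator/prey with impulses" might look like the sketch below: Lotka-Volterra dynamics integrated between periodic sprayings, each spray removing a fixed fraction of the pest population. All rates and fractions are invented for illustration; this is not Comar's model.

```python
# Sketch of predator/prey dynamics with impulsive spraying: integrate Lotka-Volterra
# between sprays, then knock the pest (prey) population down by a fixed fraction at
# each spray. All rates and fractions are invented for illustration.
import numpy as np
from scipy.integrate import solve_ivp

a, b, c, d = 1.0, 0.5, 0.75, 0.25       # growth, predation, conversion, predator death (assumed)
spray_period = 5.0                      # time between sprays
spray_kill = 0.6                        # fraction of pests killed per spray

def lotka_volterra(t, y):
    pest, predator = y
    return [a * pest - b * pest * predator,
            c * b * pest * predator - d * predator]

state = np.array([2.0, 1.0])
for k in range(6):
    sol = solve_ivp(lotka_volterra, (0.0, spray_period), state, max_step=0.01)
    state = sol.y[:, -1].copy()
    state[0] *= (1.0 - spray_kill)      # the impulsive pesticide event
    print(f"after spray {k + 1}: pest = {state[0]:.2f}, predator = {state[1]:.2f}")
```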

Listening to Comar, I’m reminded of Steven Strogatz’s book and videos on YouTube on nonlinear dynamics and chaos.

So, I’ll stop here for this first post, with more to come shortly in reasonable increments.