Reflections from SMB 2012 – One

Introduction

The recent Society for Mathematical Biology annual meeting, 25-28 July in Knoxville, TN, was an interesting interdisciplinary journey. Attendees had backgrounds in biology, medicine, mathematics, physics, engineering, computer science, ecology, and public health. Coming from my own computational physics background, I’m doing a few reflective posts on what struck me during the conference.

The meeting was hosted by NIMBioS and the University of Tennessee, Knoxville (UTK). NIMBioS has provided a post-meeting overview that includes comments, pictures, and a Storify of meeting tweets. There’s also a compendium of abstracts. I’ve done my own time-line Storify as well, searching both on the #smb2012 meeting hashtag and along the time-lines of the principal meeting “tweeters”. The logistics of the conference were great. The UTK conference center was a nice venue, and the conference provided both breakfasts and lunches, facilitating interstitial discussions. The Friday evening BBQ and contra dance was also great fun.

Major conference themes included modeling of tumors, the spread of disease, and the evolution of resistance. The conference schedule often had seven parallel tracks, so my individual reflections are unavoidably incomplete. The talks were run on a strict time schedule, so it was possible to move from one track to another to catch specific papers. This series of write-ups is an expansion of my tweeting at the conference, so inclusion or omission of any particular paper has no significance other than my being there and able to capture key points at the time.

Claire Tomlin

Claire Tomlin opened the conference with her plenary talk, Insights gained from mathematical modeling of HER2 positive breast cancer. Among her stage-setting comments was, “We want to use mathematical models to make predictions about aspects of the biology we don’t understand.” This adds to the context from her talk abstract.

In studying biological systems, often only incomplete abstracted hypotheses exist to explain observed complex patterning and functions. The challenge has become to show that enough of a network is understood to explain the behavior of the system. Mathematical modeling must simultaneously characterize the complex and nonintuitive behavior of a network, while revealing deficiencies in the model and suggesting new experimental directions.

You can learn structure and identify phenotypes by static observations. A stringent test of understanding, however, comes in creating a model that matches the dynamics of the real world, evolving in accord with observations. As I’m writing this, my Twitter stream informs me that it’s Louis Armstrong’s birthday. By synchronicity, the link given matches the idea of capturing correct system dynamics. Putting Tomlin’s concepts into Armstrong’s vernacular: “It Don’t Mean a Thing, If It Ain’t Got That Swing.”

To drop down into specifics, Claire Tomlin is looking at the HER2/HER3 (Human Epidermal growth factor Receptor) system and its recovery following intervention. Part of the key comes in elucidating the signaling network for HER2/HER3. The abstract from Amin et al. (2012) puts the HER2/HER3 signaling system in context.

HER2-amplified tumors are characterized by constitutive signaling via the HER2-HER3 co-receptor complex. While phosphorylation activity is driven entirely by the HER2 kinase, signal volume generated by the complex is under control of HER3 and a large capacity to increase its signaling output accounts for the resiliency of the HER2-HER3 tumor driver and accounts for the limited efficacies of anti-cancer drugs designed to target it. Here we describe deeper insights into the dynamic nature of HER3 signaling. Signaling output by HER3 is under several modes of regulation including transcriptional, post-transcriptional, translational, post-translational, and localizational control. These redundant mechanisms can each increase HER3 signaling output and are engaged in various degrees depending on how the HER3-PI3K-Akt-mTor signaling network is disturbed. The highly dynamic nature of HER3 expression and signaling, and the plurality of downstream elements and redundant mechanisms that function to ensure HER3 signaling throughput identify HER3 as a major signaling hub in HER2-amplified cancers and a highly resourceful guardian of tumorigenic signaling in these tumors.

Consistent with Tomlin’s engineering background, she’s considering control methodology, using different drugs at different times to “steer” the HER3 network to maximize treatment efficacy. Tomlin and Axelrod (2005) do a nice job of describing control theory applied to biology. Tomlin was not the only one mentioning control theory at the conference. It’s become an important enough topic in understanding biological systems to have prompted a book, Feedback Control in Systems Biology. In short, control theory deals with providing input to a system to “steer” it along a desired path or series of states. When a feedback loop is included, deviations from the desired path are detected and additional corrections are made. The trick is to understand response lag and to avoid over-correcting.
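To make the “steering” idea concrete, here is a minimal sketch of proportional feedback control of a first-order system with measurement lag. This is my own illustration, not Tomlin’s model, and all rates and gains are invented. It shows exactly the over-correction warned about above: the controller that settles smoothly with a modest gain oscillates when the gain is cranked up against a lagged measurement.

```python
import numpy as np

# Toy first-order system under proportional feedback. The controller
# only "sees" the state `lag_steps` samples in the past, mimicking
# response lag. All values are invented for illustration.
def simulate(gain, lag_steps, n_steps=200, dt=0.1, target=1.0):
    x = np.zeros(n_steps)
    for t in range(1, n_steps):
        observed = x[max(t - 1 - lag_steps, 0)]  # stale measurement
        u = gain * (target - observed)           # proportional correction
        x[t] = x[t - 1] + dt * (-x[t - 1] + u)   # first-order response
    return x

gentle = simulate(gain=2.0, lag_steps=5)   # settles, with the offset
                                           # typical of pure P control
twitchy = simulate(gain=12.0, lag_steps=5) # lag + high gain: oscillation
print(gentle[-1], np.round(twitchy[-4:], 1))
```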

Sometimes before tackling a hard problem, it’s wise to practice with a simpler one. As a step toward working with such signaling systems, Tomlin shifts to looking at a simpler model for Drosophila wing hairs. The wing cells know how they are oriented and grow a single wing hair on the lateral side. Chemically disrupting one cell disrupts adjacent cells, indicating transfer of information from one cell to the next. She models (Ma, 2008) concentrations of four core signaling proteins known as Frizzled (Fz), Disheveled (Dsh), Prickle (Pk), and Van Gogh (Vang). As outlined in the supplemental material appendix of Ma (2008), a reaction-diffusion system of ten Ordinary Differential Equations (ODEs) is solved for a combination of cells and cell edges.

The ODEs were solved using CVODES. I note this largely because CVODES is now part of SUNDIALS (SUite of Nonlinear and DIfferential/ALgebraic equation Solvers), a project I was once part of while at LLNL.
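I don’t have the Ma (2008) equations in front of me, but for flavor, here is a hedged toy of the same general shape: two adjacent “cells” exchanging a signaling protein by diffusion while each produces and degrades it locally, integrated with SciPy’s ODE machinery as a stand-in for CVODES. All rates are invented.

```python
from scipy.integrate import solve_ivp

# Two adjacent "cells" exchange a signaling protein by diffusion while
# each produces and degrades it locally. Rates are made up.
def rhs(t, y, production=(1.0, 0.2), k_deg=0.5, k_diff=0.3):
    c1, c2 = y
    dc1 = production[0] - k_deg * c1 + k_diff * (c2 - c1)
    dc2 = production[1] - k_deg * c2 + k_diff * (c1 - c2)
    return [dc1, dc2]

# BDF is the same stiff-integrator family that CVODES implements.
sol = solve_ivp(rhs, (0.0, 20.0), [0.0, 0.0], method="BDF")
print(sol.y[:, -1])  # steady concentrations; a gradient persists
```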

Transforming Biology Education

One of the sessions I dropped in on was on transforming first-year biology education. This session, convened by Carrie Diaz Eaton (Unity College), drew its motivation from the BIO 2010 report, aimed at transforming biology education to develop 21st century biomedical research skills. In its own way, this session was a self-referential exercise in control theory: how to steer biology students through acquiring calculus without triggering activation of latent math phobias.

Erin Bodine (Rhodes College) talked about adding matrix math and Matlab use to biology major courses. Her longer-term goal is to launch a biomath major. In her biomath course, Bodine introduces the concepts of feedbacks, derivatives, discrete models, and continuity.

A problem Bodine faced was that her biomath course was not counted as an elective by either the math or the biology department. This in-between existence, while unfortunate, is far from unique. There have long been roadblocks to efforts outside of the traditional disciplinary “cell walls”, articles by Austin (2003), Rhoten and Parker (2004), and Paytan and Zoback (2007) being examples. This topic also spawned the NAS/NAE/IOM report Facilitating Interdisciplinary Research (2004).

As I listened to Bodine, I remembered Gil Strang’s (MIT) comments in the Recitation 1 video of his Computational Science & Engineering class.

This is the one and only review, you could say, of linear algebra. I just think linear algebra is very important. You may have got that idea. And my website even has a little essay called Too Much Calculus. Because I think it’s crazy for all the U.S. universities do this pretty much, you get semester after semester in differential calculus, integral calculus, ultimately differential equations. You run out of steam before the good stuff, before you run out of time. And anybody who computes, who’s living in the real world is using linear algebra. You’re taking a differential equation, you’re taking your model, making it discrete and computing with matrices. The world’s digital now, not analog.

Strang also developed a Highlights of Calculus course for high school. Just to fill out the bill, Cornette and Ackerman’s Calculus for Life Sciences is also available online under a Creative Commons license.

Moving onward, Sarah Hews (Hampshire College) teaches Calculus in Context. She remarks that, unlike many, she is largely free to create a class as she wishes. Part of her approach in the first couple of weeks is using research articles to introduce biomath concepts. In particular, she uses Mumby et al.’s 2007 letter to Nature, Thresholds and the resilience of Caribbean coral reefs. Hews spoon-feeds this first article to students, using worksheets to guide their reading and give them a sense of an ODE.

Timothy Comar (Benedictine University) talked about the transition from biocalculus to undergraduate research — i.e. getting our hands dirty. Comar mentioned getting students to understand stability and bifurcation. He discussed predator/prey modeling with impulses (e.g. spraying pesticide), and he uses a network model (nodes, weighted edges) to study human spreading of an invasive plant, for example, along railroad lines.
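A minimal sketch of that impulsive flavor of model (my own toy, not Comar’s course material): Lotka-Volterra predator-prey dynamics integrated piecewise, with the pest population knocked down by a fixed fraction at each periodic “spraying” event.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Lotka-Volterra predator-prey; all rate constants are invented.
def lv(t, y, a=1.0, b=0.5, c=0.5, d=0.2):
    pest, predator = y
    return [a * pest - b * pest * predator,
            d * pest * predator - c * predator]

y = [2.0, 1.0]
period, kill_fraction = 5.0, 0.6
history = []
for k in range(10):                      # ten spray cycles
    seg = solve_ivp(lv, (k * period, (k + 1) * period), y, max_step=0.05)
    history.append(seg.y)
    y = seg.y[:, -1].copy()
    y[0] *= (1.0 - kill_fraction)        # impulse: spray removes 60% of pests
print(np.hstack(history).shape)          # full trajectory, ready to plot
```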

Listening to Comar, I’m reminded of Steven Strogatz’s book and videos on YouTube on nonlinear dynamics and chaos.

So, I’ll stop here for this first post, with more to come shortly in reasonable increments.

Models and Insight

In reading through Ian Stewart’s The Mathematics of Life, I came across an interesting statement on models and modeling (pp. 273-274). It speaks both to the approximate nature of models and to the observation that exactness is neither the prerequisite for usefulness nor even always desirable.

These three models of the foot-and-mouth epidemic show how mathematics can help to answer biological questions. Each model was much simpler than any truly ‘realistic’ scenario. The models did not always agree with one another, and each did better than the others in appropriate circumstances, so a simple-minded verdict on their performance would be that all of them were wrong.

However, the more realistic the model was, the longer it took to extract anything useful from real-world data. Since time was of the essence, crude models that gave useful information quickly were of greater practical utility than more refined models. Even in the physical sciences, models mimic reality; they never represent it exactly. Neither relativity nor quantum mechanics captures the universe precisely, even though these are the two most successful physical theories ever. It is pointless to expect a model of a biological system to do better. What matters is whether the model provides useful insight and information, and if so, in which circumstances. Several different models, each with its own strengths and weaknesses, each performing better in its own particular context, each providing a significant part of an overall picture, can be superior to a more exact representation of reality that is so complicated to analyse that the results aren’t available when they’re needed.

The complexity of biological systems, often presented as an insuperable obstacle to any mathematical analysis, actually represents a major opportunity. Mathematics, properly used, can make complex problems simpler. But it does so by focusing on essentials, not by faithfully reproducing every facet of the real world.

Business Survival Rates

This post was spurred by a recent discussion of the massage therapy Entry-Level Analysis Project (ELAP). The project itself concerns expanding a job task analysis into a set of competencies. However, the description of project motivations contains the statement that inconsistencies in massage education have resulted in “too many massage school graduates who experience short, unsuccessful careers”.

While spa employers of massage school graduates are becoming more prevalent, I believe that single-person practices still dominate massage careers. Thus, it’s worth asking if the survival rate of such practices is vastly different from that for small businesses in general. With that as a goal, I’ll put aside anything specific to massage for a bit and first look at small businesses in general.

Looking for business survival rates brings us to the Bureau of Labor Statistics (BLS) and the table of Survival of private sector establishments by opening year. Scott Shane (see resources) already did an article on survival rates of the 1992 cohort of businesses and another one on the mortality rates of ‘infant’ businesses using the 1994 cohort. I’m going to look at the 1994 to 1999 cohorts — the businesses for which I have at least twelve years of data from the BLS tables. My starting point is the percentage of businesses that survive at a particular age given that they’ve survived prior years (i.e. incremental survival rates). These data make up the first seven columns of Table 1 and are displayed as a scatter plot in Figure 1.

The next column of Table 1 is a nonlinear least-squares fit to the incremental survival rates (s1). The fit, shown as the solid line in Figure 1, is of the form:

where a, b, and c are 2.65, 2.77, and 0.151, respectively. The only motivation for this particular equation is that it naturally lends itself to the shape of the data, changing rapidly when years is small and saturating when years is large.
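As a sketch of the fitting step, here is scipy.optimize.curve_fit applied to the 1994 cohort column with one plausible saturating three-parameter form. The specific form below is my illustration, not necessarily the equation actually used for the Fit column.

```python
import numpy as np
from scipy.optimize import curve_fit

# Incremental survival rates for the 1994 cohort (Table 1, years 1-10).
years = np.arange(1, 11, dtype=float)
rates = np.array([79.8, 85.8, 89.3, 89.6, 91.5,
                  91.3, 91.8, 92.5, 93.3, 94.2])

# One plausible saturating form (an assumption): changes rapidly at
# small t and levels off at large t, as the data do.
def model(t, a, b, c):
    return 100.0 * (1.0 - a / (b + t**c))

params, _ = curve_fit(model, years, rates, p0=(1.0, 4.0, 1.0))
print(dict(zip("abc", np.round(params, 3))))  # fitted a, b, c
```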

The last two columns are the incremental loss rates (100 minus the survival rates) and the net survival rates over the span of years. These data are plotted in Figure 2.
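Concretely, the Yearly Loss column is 100 minus the fitted incremental rate, and the net Survival column is the running product of the incremental rates — a couple of lines of NumPy:

```python
import numpy as np

# Fit column of Table 1, ages 1-12 (%): incremental survival rates.
incremental = np.array([79.7, 85.4, 88.2, 89.9, 91.1, 91.9,
                        92.6, 93.1, 93.5, 93.8, 94.1, 94.4])

yearly_loss = 100.0 - incremental                      # Yearly Loss column
net_survival = 100.0 * np.cumprod(incremental / 100.0) # Survival column
print(np.round(net_survival, 1))  # 79.7, 68.1, 60.0, 54.0, ... -- matches
                                  # Table 1 up to rounding
```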

Table 1: Businesses — Yearly and Net Survival Rates
Age      1994    1995    1996    1997    1998    1999    Fit     Yearly    Survival
(years)  (%)     (%)     (%)     (%)     (%)     (%)     (%)     Loss (%)  (%)
 0                                                                         100
 1       79.8    79.2    79.0    78.8    80.6    79.6    79.7    20.3       79.7
 2       85.8    86.6    85.6    87.1    85.7    84.9    85.4    14.6       68.0
 3       89.3    88.2    89.3    88.2    87.2    87.3    88.2    11.8       60.0
 4       89.6    90.5    89.6    88.3    88.9    90.1    89.9    10.1       54.0
 5       91.5    90.5    90.2    89.9    90.9    91.6    91.1     8.9       49.1
 6       91.3    91.0    91.2    91.9    92.5    92.3    91.9     8.1       45.2
 7       91.8    91.9    92.5    92.8    93.1    93.5    92.6     7.4       41.8
 8       92.5    93.3    93.5    93.2    94.0    93.5    93.1     6.9       38.9
 9       93.3    94.0    93.6    94.7    93.9    93.5    93.5     6.5       36.4
10       94.2    93.8    94.4    94.1    94.0    92.0    93.8     6.2       34.1
11       94.6    94.7    94.5    94.3    92.7    93.5    94.1     5.9       32.1
12       95.6    94.7    94.6    92.9    93.9    95.3    94.4     5.7       30.3
13       94.7    94.9    94.0    94.2    95.4            94.6     5.5       28.7
14       94.8    93.7    94.2    95.3                    94.7     5.3       27.2
15       93.7    94.6    95.5                            94.9     5.1       25.8
16       94.5    95.6                                    95.0     5.0       24.5
17       95.7                                            95.2     4.8       23.3
18                                                       95.3     4.7       22.2
19                                                       95.4     4.6       21.2

Figure 1: Percent of businesses surviving that survived the prior year

Figure 2: Percent of businesses surviving from opening (blue) and yearly closure rates of businesses that survived prior years (red)

What these data make clear is that the first years are critical ones for a business — about 20% of businesses close in their first year, and 77% of the closures occurring by year 10 happen in the first 5 years. These are total U.S. statistics and vary between economic sectors and geographic areas. While the 5-year survival rate for the total private sector is 49.1%, healthcare and social assistance businesses have a 5-year rate of 59.5%, averaged over the same set of cohorts. Retail trade, averaged the same way, comes in at 52.3%.

In a report analyzing business closures, Headd (2001) makes several points. About two-thirds of employer firms survive at least 2 years and about half survive at least 4 years. Half of non-employer businesses (self-employed without employees) would survive about 2 years, and about a third would survive 4 years. Non-employer survival rates are thus about 70% of employer survival rates (page 6).

Closures are not synonymous with business failures. Of the business closures, about a third are considered to be successful at closure (page 15). Successful closures could be a result of finding value in the learning experience and/or being enticed to close a business and work for an employer (page 12).

Being a relatively young owner, being in services or retail trade, not having any capital, and being in an urban/suburban area led to a higher likelihood of closure. Part of these closures likely are a result of availability of alternative job opportunities (page 9).

With all the above in place, it’s time to return to some considerations specific to massage therapy. For a 2004 report on massage therapy status and training trends in California, I had obtained survival cohort data from Associated Bodywork & Massage Professionals (ABMP) for massage school graduates as a function of initial education hours (pp. 20-21, Table 10, Figure 9 of that report). Figure 3 here displays the entry and survival data for the 500+ hour graduate cohort: 92.0%, 71.0%, 56.6%, 48.1%, 40.9%, and 34.7%, from zero to five years. The first number, 92.0%, is the percentage of graduates initially entering practice. Also shown are the net survival curve from Figure 2 and Headd’s two- and four-year survival estimates for non-employer businesses.

It’s notable that the massage therapy survival data fall between the two other curves. Also, as seen in the training trends report, survival upon entering practice was not strongly dependent on training hours. Of those actually entering practice, 5-year survival was 37.7% for the 500+ hour training cohort and 34.3% for the 250-300 hour cohort — a 3.4 percentage-point increase in 5-year survival for a doubling of training hours.

Figure 3: Percentage of businesses surviving over time (as in Figure 2; blue), non-employer survival, and massage graduate business survival.

The data do not reveal that massage practice survival is anomalously poor compared to businesses in general, particularly non-employer businesses. While massage therapy is generally considered to be healthcare, the economic structuring of practice likely differs. Joining an existing group practice, for example, is far less available as a means of entering the business side of the profession. Those creating a practice are less likely to have substantial prior experience in practice, and less likely to have working capital to “buy time” for a business to become successful.

Nor is a self-employed creator of a new business as likely to be able to provide for their own health insurance as a person entering an established healthcare practice. While the Affordable Care Act (ACA) may somewhat relieve this (and/or allow those up to 26 to remain under parent-provided health insurance), the structure of healthcare in the U.S. is known to create an entrepreneurial disincentive. While education, particularly realistic education in business, plays a part, business survival rates have many more facets than just education to consider, including, for massage therapy, the physical and close interpersonal aspects of the work.

Oat Hulls and Wheat Chaff. Not for everyone. Submit your resume to see if you qualify. — Garrison Keillor

Resources

Brian Headd, 2001. Business Success: Factors Leading to Surviving and Closing Successfully, Working Papers 01-01, Center for Economic Studies, U.S. Census Bureau.

Bureau of Labor Statistics. Business Employment Dynamics – Entrepreneurship and the U.S. Economy.

Bureau of Labor Statistics. Business Employment Dynamics – Establishment Age and Survival Data.

Bureau of Labor Statistics. Survival of private sector establishments by opening year.

Shane, Scott. Startup Failure Rates — The REAL Numbers, Small Business Trends. 28 April 2008. [Based on 1992 cohort data.]

Shane, Scott. Businesses Face High Rates of Infant Mortality, Small Business Trends. 14 May 2012. [Based on 1994 cohort data.]

Slamdunk: How a Good Idea for Outreach was Soured by Yellow Journalism

Much of the impetus for the collapse of the top newsroom managers was credited to the Internet on which many of the Times employees posted the complaints that had been ignored. Staff members who used the open architecture of the new medium to become “the outside voice” provided a check on internal behavior. Along with others, they realized that the Web had assumed an important role in opening new channels through which values and standards could be questioned and judged by a large community that depends upon the integrity of the press. In the end journalism is an act of character. …

As Chicago newscaster Carol Marin told the Committee of the Concerned Journalists at its first forum, “I think a journalist is someone who believes in something that they would be willing to quit over.”

—Bill Kovach & Tom Rosenstiel: 2007. The Elements of Journalism. pp. 229-232.

First, a disclosure: I am a massage educator, a retired physicist, a freelance science writer, and a current member of the board of directors of the California Massage Therapy Council (CAMTC). I was also, until I resigned it last night, a columnist for Massage Today (since January 2001). The opinions expressed here are solely my own. Period. Second, a definition: I use the term yellow journalism in the sense of writing designed not to promote oversight and scrutiny but to stir up fear and outrage against a situation that, in my belief, is largely delusional. In the end, readers can judge for themselves.

This piece is about clarity and responding to the spread of misinformation. It is not about whether I agreed or disagreed with a particular board vote. For example, here’s a situation in which I voted in the minority yet did what I could to clear up confusion and misinterpretation. It is also in response to an article in Massage Today by Kathryn Feather, Updated: CA Massage Board Votes to Send “Roving Ambassadors” to San Diego Convention, and to an editorial by Donald Petersen called The CAMTC Money Grab.

As a bit of background, CAMTC is a nonprofit certifying board created under California state law (SB 731; Oropeza, 2007-2008 session). That enabling law, with some more recent clean-up adjustments, is in Business & Professions Code Sec. 4600 and following. CAMTC, while a creature of the state, is not a state agency. It does not license (a function reserved for agencies of the state). It does not “state certify”. What CAMTC does do is provide a state-wide system of education and background checks as part of two tiers of certification that exempt certified massage professionals from local licensing laws. Certification, at least at the state level, is voluntary. A local agency, city or county, may also have its own local licensing (for those not certified by CAMTC) or may require CAMTC certification to practice.

So, time to move onward to the core of controversy. In, I believe, early March, I was asked by CAMTC CEO Ahmos Netanel whether I would be willing to volunteer my time to do outreach for CAMTC at the upcoming American Massage Conference in San Diego. Prior to that moment, I had no plans to attend the AMC yet I agreed that this sounded like a positive thing to do. Let me explain why.

CAMTC started accepting applications in Fall 2009, expecting several thousand by the end of the year. It received about 12,000, totally swamping its administrative operations and resulting in long delays. On top of that deluge, the state agency overseeing private post-secondary and vocational schools (BPPVE) had been sunsetted (i.e. killed) in July 2007. While a new agency (BPPE) came into being in 2009 along with CAMTC, neither its capabilities nor its focus (use of loan funds) was up to verifying the massage education provided by massage schools. Thus CAMTC had to develop its own procedures for verifying that schools provided the massage training claimed on transcripts. More delay for some, as several thousand applications went on hold while this was done for schools deemed uncertain.

In 2010, the California Police Chiefs Association (CPCA) sponsored AB-1822, initially designed to dismantle CAMTC’s state-level regulation. I created a video of one of the hearings on that bill; a lot of misinformation was put forth. Note that the California League of Cities and the California Association of Counties have always had the ability to appoint representatives to the CAMTC board. I’ll note here that, in shepherding CAMTC’s survival to this point, Ahmos Netanel, the CEO for CAMTC, has had a far more challenging and demanding set of duties than the normal regulatory board executive director would encounter. That CAMTC is still functional is, at least in good part, a tribute to his effectiveness. His position as CEO is also an interim, at-will appointment — essentially that of a pilot to get CAMTC past the snags and sandbars and into open water.

This history is one of the reasons I believe that outreach to the community CAMTC regulates is important. Given a novel structure of regulation and a rough start-up, being seen as accessible rather than remote matters. CAMTC, by the way, does have to observe the requirements of the Bagley-Keene open meeting act. I was asked by Kathryn Feather of Massage Today whether I planned to attend AMC and whether I would request reimbursement for expenses. Part of my reply was quoted in her recent article, but I’ll include my entire response here.

I’m attending the AMC conference at the request of Ahmos Netanel and to network on behalf of CAMTC, not to take or give workshops there. I will be requesting reimbursement for my expenses. My time I’m donating even though it means canceling an online class I normally teach on Thursday evenings and missing an online Q&A session I’m normally part of on Sunday evenings. I also found out today that there will be an initial get together of a Science Online Bay Area on the 19th, a new follow up to a Science Online conference held in January on science communication. My prior commitment to Ahmos stands.

I would not be attending AMC otherwise. I’m already planning on attending a mathematical biology conference in Knoxville TN in July and a science writers conference in Raleigh NC in October, both on my own “dime”. My dimes only extend so far. I’ve got 16 hours of teaching kinesiology coming up the following weekend (that I need to prep for) and am working on analyzing an initial subset of transcripts from a symposium on massage practice guidelines prior to a Massage Therapy Foundation board meeting in early May. If the CAMTC board vote had gone the other way, I could have used the time freed productively. It was only adding to the things I have stacked to vote yes.

My yes vote was motivated both by my concurrence with a comment Mark Dixon made during the phone call and by my belief that management by walking about (MBWA) is a good idea. Mark’s comment was that this is a major conference and it is happening in CAMTC’s jurisdiction — much like an event happening in a congress-person’s home district. I would not be in favor of it if the meeting were outside of California or exorbitantly priced (the room rate is about the same as that for a typical scientific conference).

In short, I believe it behooves CAMTC to be seen as accessible and not remote — almost literally “in touch” with those regulated, particularly, given several factors in the start-up of CAMTC: a much greater response than was envisioned for the first 6 months; having to create procedures to handle schools and transcripts that weren’t as claimed, and having to fight back against the California Police Chief’s Associations initial attempt to dismantle CAMTC (that relationship has changed dramatically). Again, in short, it was a rough start and I believe there are still people out there that need a chance to vent and be listened to face-to-face.

I see a major conference with educators and professionals in CAMTC’s home terrain as an opportunity to correct misconceptions (we are not a licensing board, we are not a state agency, we don’t “state license” or “state certify”) and to take responsibility for and apologize to those who experienced delays in certification or have questions that I can answer or facilitate.

I have let it be known on Twitter and Facebook that I will be there and available to do such networking.

Note that the above are my personal beliefs and motivations, not board policy.

That’s about it.

…Keith

There never was, in the board approval of reimbursement on 10 April, an intent that any board members other than the seven specifically asked by CEO Netanel would attend. If the motion was unspecific, it was from assuming too much common sense. Nor could, as had been suggested by Massage Today, the entire board have shown up without violating the Bagley-Keene law. The same goes for the idea that reimbursement would fall along the same lines used for board meetings: the board assumed common sense would prevail.

Now, let’s do the numbers. Back in 2008 I looked at How much does a California Regulatory Board cost per Licensee? My fit to the least expensive boards was $900,000 per year plus about $55 per licensee per year. For CAMTC, now with about 30,000 certificants, that would be about $30+$55=$85 per year per certificant or $170 for a two-year renewal. What CAMTC actually charges is $150. That’s $20 saved per certificant per renewal or a total savings to the profession of about $600,000 per renewal cycle.
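In code form, that back-of-envelope comparison (the $900,000-plus-$55-per-licensee model is from my 2008 fit; the rest is arithmetic):

```python
# Cost model from my 2008 fit to the least expensive California
# regulatory boards: a fixed cost plus a per-licensee cost, per year.
fixed_cost, per_licensee = 900_000, 55
certificants = 30_000

per_year = fixed_cost / certificants + per_licensee  # 85.0  ($/year)
per_renewal = 2 * per_year                           # 170.0 ($/2-yr cycle)
savings = (per_renewal - 150) * certificants         # vs the actual $150 fee
print(per_year, per_renewal, savings)                # 85.0 170.0 600000.0
```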

The AMC hotel at $125 compares with the Medbiquitous conference in Baltimore at $189, Society for Mathematical Biology at $112 in Knoxville, Society for Industrial and Applied Math in Anaheim at $149, and another SIAM conference in Philadelphia at $179. These are all negotiated conference rates, and AMC comes out on the cheaper side.

How about the number of board members for which reimbursement was expected (after travel, via normal forms)? AMC expects about 2100 attendees. If seven board members were to network with all of these, dividing them between us, we would each network with 300 in 3 days. Let’s say we networked or conversed with half, for 10 minutes each. That’s 25 hours of networking, explaining, facilitating for each board member over three days. By the way, CAMTC board members are never reimbursed for their time, only for expenses. When I fly down to a board meeting in L.A., it’s generally up at 3:30am and back home at about 8:30pm.

As it turns out, despite still considering this outreach to be a good idea, Ahmos and I concurred that it has become more trouble than it’s worth. I canceled my flights. For the record, if I really had wanted CAMTC to send me on a trip for personal benefit, it would not have been to AMC (which is not to knock AMC). I’m interested in research, and having funding for the Third International Fascia Research Conference in Vancouver would have been nice. I didn’t ask. I do hear that the organizers did a wonderful job.

You’ll have to decide for yourself if this is a “Money Grab”. That’s not my personal take on this nor do I believe it was the intent of any of my fellow board members.

Addendum 20 April 2012 — After having discussed networking for CAMTC at the AMC with Ahmos Netanel earlier in March (phone conversation), I had pinged him by email about logistics on 31 March. On 2 April, he had requested my estimated travel expenses. What the entire board saw, and based their approval of reimbursement on, was this table of estimated expenses. Amounts apart from travel were filled in by Mr. Netanel. At least in my case, the understanding was that these were the only board members involved relative to the motion to reimburse.

Maintaining Core Competency vs Continuing Professional Development

In March, the Federation of State Massage Therapy Boards (FSMTB) proposed a significant change to state requirements for continuing education (CE). They have termed this proposal MOCC, for Maintenance Of Core Competence. The FSMTB is accepting feedback on the proposal through 30 April.

Looking at reactions to the MOCC proposal, the Associated Bodywork & Massage Professionals (ABMP) have come out in support of it. As Les Sweeney stated:

I have for a long, long time argued with chapters, organizations, and individuals that we need to stop using state regulation of our profession as a means for professional development. We can’t and shouldn’t legislate professional development; we can and should legislate competence.

In contrast, the American Massage Therapy Association (AMTA), has taken a strong stance against the MOCC proposal. Laura Allen has now written two pieces, on 5 March and on 15 April, both strongly critical of the MOCC proposal.

I am going to add my voice to those supporting the MOCC proposal. Going back to the Supreme Court ruling of Dent v. State of West Virginia, 129 U.S. 114 (1889), the legal purpose of state occupational regulation is to protect the public from harms of incompetence and malfeasance. States pursue this goal of protection both by enacting requirements for entry to practice and by continuing oversight of those practicing.

As part of continuing oversight, states do have an interest in ensuring that licensed practitioners stay current on information that might have changed, such as regulations and jurisprudence, and that such practitioners maintain skills and knowledge deemed necessary for entry but rarely used. Recurring training in CPR is a good example of both aspects, catching changes in recommendations since the last training and reinforcing skills and knowledge. So, we need to ask, “Does continuing education, the way it has been implemented in the past, fulfill this purpose?”

The unfortunate answer is that simply requiring CE hours has little or no predictable effect on actual practice. I’d noted that in a column I wrote a while back on Why Most CE Courses are Dead on Delivery. The studies I cited there on CE course ineffectiveness have since been reinforced by an Institute of Medicine (IOM) report on Redesigning Continuing Education in the Health Professions.

For health professionals, continuing education encompasses the period of learning from postlicensure to career’s end. CE is intended to enable health professionals to keep their knowledge and skills up to date, with the ultimate goal of helping health professionals provide the best possible care, improve patient outcomes, and protect patient safety.

The reality of continuing education, however, is far different. Although there are instances of programs focused on those goals, on an overarching level the U.S. approach to CE has many flaws. …

Requirements that are based on credit hours rather than outcomes—and that vary by state and profession—are not conducive to teaching and maintaining these core competencies aimed at providing quality care.

In light of such research and conclusions, it does not surprise me at all that the FSMTB is proposing a policy that would move away from requiring x hours of CE to renew a license and toward providing specific information and training. I would both hope and expect that the material presented would come from the ethics lapses and observed injuries that state boards see in their process of oversight.

The state boards, and thus the FSMTB, have an interest in these matters because a number of state laws contain requirements for CE hours for renewal yet provide no guidance on what those hours should address to further public protection and benefit. Requirements for licensing renewal are both the beginning and end of state board interests. The boards can’t ignore such a requirement, but neither do they have jurisdiction to extend beyond it. Note, however, that adoption of a policy or proposal by the FSMTB does not change any state regulatory laws. It simply sets a common direction.

Ultimately, the entire massage profession needs to move away from thinking about hours and toward thinking about objectively determined core competencies. Until we have clarity on such competencies and on the contexts of practice, it is next to impossible to assess whether or not licensing is fulfilling its responsibilities to the public. As recommended both by Les Sweeney and by the IOM report, we need to encourage more thinking in terms of Continuing Professional Development (CPD) and get away from forcing practitioners to chase last-minute hours for renewal. It is the role of the states to protect the public, and that of the individual and the professional organizations to foster professional development. I see the FSMTB’s MOCC proposal as an overdue step in this direction.

Verner Suomi – The Need for Climate Monitoring

I noticed in the tweets coming out of the American Meteorological Society meeting in New Orleans that NASA and NOAA have renamed the recently launched polar orbiter, the NPOESS Preparatory Project, as Suomi NPP, after the late Verner Suomi.

This brought back a memory confirming in my own mind the suitability of the choice. Years ago in writing my PhD thesis, I quoted from a section by Suomi on The Need for Climate Monitoring from a larger National Academies Press Report on Energy and Climate: Studies in Geophysics (1977). Looking back through Suomi’s writing, I’m amazed by how pertinent his plea for adequate monitoring remains today.

The sentences I’d quoted remain particularly pertinent. They focused on the need to gain information from the immense potential flows of data.

An extremely important aspect of the entire climate monitoring activity is the data-processing effort required. It is possible for the secrets of nature to be hidden in a flood of data as well as in nature. Clearly, we need information more than we need data.

Thanks for the insights, Verner Suomi.

Accounting for Total Costs

One of the economic shortcomings that’s more or less obvious in “just letting the market handle it” is that the market often doesn’t include the total costs of use. The situation is akin to letting someone buy supplies for a large party, holding the party on common land, and then simply walking off, leaving the trash and costs of clean-up to others. The costs of producing the party supplies are included in their price, but not the costs of their use. The market can only achieve an equitable balance, one that does not foist costs onto third parties, if a way is found to include all costs within the price. In short, the costs of use need to be borne by those receiving the benefits of use.

The Economist column Do economists all favour a carbon tax? addresses just this situation applied to use of fossil fuels and carbon emissions.

Carbon emissions represent a negative externality. When an individual takes an economic action with some fossil-fuel energy content—whether running a petrol-powered lawnmower, turning on a light, or buying a bunch of grapes—that person balances their personal benefits against the costs of the action. The cost to them of the climate change resulting from the carbon content of that decision, however, is effectively zero and is rationally ignored. The decision to ignore carbon content, when aggregated over the whole of humanity, generates huge carbon dioxide emissions and rising global temperatures. The economic solution is to tax the externality so that the social cost of carbon is reflected in the individual consumer’s decision.

The concept is far more general than just fossil fuel use. It applies to any situation in which side-effect costs of production or costs of disposal are not included in the cost accounting, be it slag from mining or disposal of chemicals or sewage into streams. Regulation and/or taxation is required, not to interfere with the market, but to ensure that the market correctly accounts for total costs.

Santorini Topography – From Shuttle to Sim

This past week I had a rare opportunity to explore using the virtual world of Second Life (SL) as an immersive means of visualizing and exploring real-life topography. Generally, an SL sim already has set terrain, and arbitrarily replacing that would break everything developed on it. When a friend is obtaining a new sim (Numantia Maris) and wants something with islands and a lot of water, however, it’s a fresh canvas on which to paint data and create an immersive experience of the topography.

The topography data were part of the dataset produced by the Shuttle Radar Topography Mission (SRTM). The location used was the Greek islands of Santorini. Santorini is interesting both historically and geologically in being the site of a massive volcanic eruption about 1600 BCE. The eruption is thought to have sent tsunami waves up to 75 ft high into the coast of Crete (68 miles distant), contributing to the end of the Minoan civilization. This explosion also may have been the source of the Atlantis legend. Because it was buried by the eruption, the excavation site of Akrotiri was exceptionally well-preserved and has yielded insights into both artwork and architecture.

The finished global data product is at three arc-second resolution (roughly 90 m) delivered in one-degree tiles. The area of interest is selectable using the Earth Explorer. Selecting an area, choosing the SRTM elevation data set, and requesting data makes one or more tile files available. In the case of Santorini, only the N36E25 tile was needed. This included Santorini and all or parts of several other islands as shown in this reduced resolution picture of the tile.

Topo N036E025

I used Python 2.6 to read in the BIL (Band Interleaved by Line) file and the Python Imaging Library (PIL) to display it. Once read as binary data, the 1,442,401 16-bit integers of the data tile could be unpacked into a numpy integer array and then reshaped into a 1201 by 1201 array. It was then trivial to sub-select the portion of the array containing Santorini. I then filled in the small number of missing data points by diffusing in data from the sides of the missing area.
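A sketch of that read-and-crop step with current NumPy. The file name, byte order, and crop indices below are placeholders; the tile’s accompanying .hdr file is the authority on byte order.

```python
import numpy as np

# Read a 3-arc-second SRTM tile stored as BIL: 1201 x 1201 signed
# 16-bit integers (1,442,401 values). File name is a placeholder;
# check the .hdr file for the actual byte order ('<i2' vs '>i2').
tile = np.fromfile("N36E25.bil", dtype="<i2").reshape(1201, 1201)

# Voids (missing radar returns) carry a sentinel value, commonly
# -32768; the post fills them by diffusing in values from the sides.
voids = tile == -32768
print(tile.shape, int(voids.sum()))

# Sub-select the window holding Santorini (indices are placeholders).
santorini = tile[700:956, 200:456].astype(float)
```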

Under its “Miscellaneous Functions” category, SciPy contains several PIL interface routines that allow resizing and rotating images stored as arrays. These made it straightforward to resample and rotate the selected data portion to a 256 by 256 array with the desired orientation. By happenstance, it turned out that the desired region for Santorini contained 256 by 256 measurements and didn’t have to be resampled.
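Those routines lived in scipy.misc at the time; in current SciPy the equivalent resample-and-rotate lives in scipy.ndimage (my substitution, with a made-up rotation angle):

```python
import numpy as np
from scipy import ndimage

santorini = np.zeros((300, 300))  # stand-in for the cropped tile above

# Rotate to the desired orientation (bilinear, size preserved), then
# resample to the 256 x 256 sim grid. The angle here is invented.
rotated = ndimage.rotate(santorini, angle=15.0, order=1, reshape=False)
resampled = ndimage.zoom(rotated, 256.0 / rotated.shape[0])
print(resampled.shape)  # (256, 256)
```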

topo_santorini_sw_v02_max50_100pc

The 256 by 256 image size was chosen because an SL sim is 256 by 256 meters. Given that measurements are about 90 m apart, mapping the measurements onto a sim one-for-one creates about a 1:90 scale model horizontally. Vertically, I took the log of the elevations and then scaled the result to 50 m. This compresses the vertical scale in a manner appropriate for SL terrain. Since the SRTM data did not provide ocean bottom values, I created a set of fractal noise varying about an initial value of 7.5 m. Note that SL uses 20 m as its nominal sea-level height. The noise was produced by successive doubling from an initial grid of one square, where the scale of vertical noise was reduced by a factor of 2^(-1/2) at each iteration. The noise was then added in where the topography height was zero. Finally, the data were written out as an SL raw file.
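Here is a sketch of the vertical log-compression and the seabed noise. The 2^(-1/2) amplitude decay per doubling is from the text above; the midpoint-style implementation details are my reconstruction, and writing the SL raw file itself is left out.

```python
import numpy as np
from scipy import ndimage

santorini = np.zeros((256, 256))  # stand-in for the resampled SRTM crop

# Vertical compression: log of the elevations, scaled to a 50 m maximum.
land = np.log1p(np.maximum(santorini, 0.0))
land_scaled = 50.0 * land / max(land.max(), 1.0)

def fractal_seabed(steps=8, base=7.5, amp=2.0, seed=42):
    # Successive doubling from a single square (2 x 2 corner heights),
    # adding noise whose vertical scale shrinks by 2**-0.5 each step:
    # 2 -> 3 -> 5 -> ... -> 257 points per side, then crop to 256.
    rng = np.random.default_rng(seed)
    grid = base + amp * rng.standard_normal((2, 2))
    for k in range(steps):
        n = 2 * (grid.shape[0] - 1) + 1
        grid = ndimage.zoom(grid, n / grid.shape[0], order=1)
        grid += amp * 2.0 ** (-0.5 * (k + 1)) * rng.standard_normal(grid.shape)
    return grid[:256, :256]

# Splice the noise in wherever the topography height is zero (ocean).
terrain = np.where(land_scaled > 0.0, land_scaled, fractal_seabed())
```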

One uploads the raw file onto the waiting sim. Nothing happens for a minute or two. Then, suddenly, the land takes form with the shape of Santorini. A bit of playing with the four terrain textures that the SL sim software interpolates by height, and there is reddish volcanic rock rising from the sea before us. It’s very definitely a scale model, but one large enough to sail a boat into the caldera.

SantoriniPost_01_480

SantoriniPost_02_480

My appreciation to Stonehedg Magic, owner of Numantia Maris, and to Desmond Shang, governor of the Victorian, steampunk realm of Caledon, for making this effort possible.

Just the Terrain, Ma’am

I’ve been playing with uploading simulator (sim) terrain from Second Life and displaying it with Python and Matplotlib. The figure displays two snapshots of a sim taken a year apart. The major difference is that a deep hole has been “nuked” into the ridge near the northwest corner to make room to create caverns, caves, dungeons, and the like in the right-hand figure.

The sim terrain at two times

The basic terrain is of two ridges with a river valley between them. The western ridge angles slightly from NW to SE, leaving room for a beach in the SW corner. The eastern ridge eases into a flat plateau in the south. In the figure, just the terrain is shown (hence the post title), without the standard water level at 20 meters. I used the Matplotlib ‘copper’ color-table for both plots. The data for the left figure came from a Second Life raw file. The data for the right figure were obtained by scripting a surveyor prim to upload data via HTTP to a PHP script. Both plots are of a full sim of 256m by 256m at 2m by 2m resolution.
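The display side is compact. A minimal sketch of the rendering, with random data standing in for the heightfield (the raw-file and HTTP-survey parsing are left out):

```python
import numpy as np
import matplotlib.pyplot as plt

# Random data stands in for the 256 x 256 heightfield parsed from the
# raw file or the HTTP survey.
heights = np.random.default_rng(0).random((256, 256)) * 40.0

coarse = heights[::2, ::2]                  # 2 m x 2 m sampling
plt.imshow(coarse, cmap="copper", origin="lower",
           extent=[0, 256, 0, 256])         # axes in meters
plt.colorbar(label="height (m)")
plt.title("Sim terrain")
plt.show()
```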

Given all that technical detail, the result captures an almost Rubenesque sense of the terrain.

The Pervasiveness of Models

Models and simulations of many kinds are tools for dealing with reality; they are as old as humanity itself. Humans have always used mental models to better understand reality, to make plans, to consider different possibilities, to share their ideas with others, to try out changes and alternatives, to develop blueprints for realization of some ideas, or to convince themselves and others that certain ideas cannot be realized. — Hartmut Bossel, Modeling and Simulation

The thought thread for this post started from the amount of flak that’s been tossed out attacking the usefulness and results of climate models. Reflecting on this situation, I decided that one of the reasons that such negative characterizations can take hold is that few people realize the pervasive use of models in everyday life, thus thinking of climate models as something apart from what we, as human beings, do day-in and day-out. This is the first in what I plan as a series of posts on models and modeling.

Marvin Minsky, in The Society of Mind, defines a model as: “Any structure that a person can use to simulate or anticipate the behavior of something else.”

Gene Bellinger, on his website on Systems Thinking, defines a model as: “a simplified representation of a system at some particular point in time or space intended to promote understanding of the real system.” Bellinger goes on to say:

The most important question to ask should relate to the extent to which the models we develop promote the intentioned development of our understanding. The extent to which a model aids in the development of our understanding is the basis for deciding how good the model is. In developing models there is always a trade off. A model is a simplification of reality, and as such, certain details are excluded from it. The question is always what to include and what to exclude.

Bellinger’s statement that a model is a simplified representation of a system is important. Polish-American scientist and philosopher Alfred Korzybski stated this as “the map is not the territory,” encapsulating his view that an abstraction derived from something, or a reaction to it, is not the thing itself.

In Empirical Model-Building and Response Surfaces (1987), Box and Draper noted that “essentially, all models are wrong, but some are useful” (p. 424) and “remember that all models are wrong; the practical question is how wrong do they have to be to not be useful” (p. 74).

Gregory Bateson, in “Form, Substance and Difference,” from Steps to an Ecology of Mind, elucidates the essential impossibility of knowing what the territory is, as any understanding of it is based on some representation:

We say the map is different from the territory. But what is the territory? Operationally, somebody went out with a retina or a measuring stick and made representations which were then put on paper. What is on the paper map is a representation of what was in the retinal representation of the man who made the map; and as you push the question back, what you find is an infinite regress, an infinite series of maps. The territory never gets in at all. […] Always, the process of representation will filter it out so that the mental world is only maps of maps, ad infinitum.

Bateson also points out that the usefulness of a map (a representation of reality) is not necessarily a matter of its literal truthfulness, but its having a structure analogous, for the purpose at hand, to the territory.

Before going further and deeper into modeling in the abstract, I wanted to bring up an example of mental modeling from Gary Klein’s book Sources of Power, in which he talks about field research on the nature of expertise and expert decision making. He also goes into this same material in a video on adaptive decision making. In the following, Klein is commenting on the observation that firefighter commanders don’t compare different solutions to a problem but pick one that matches the pattern of the current situation and that they expect to be a sufficient solution. Part of arriving at this expectation is a process of mental simulation, i.e. execution of a mental model.

The commanders’ secret was that their experience let them see a situation, even a nonroutine one, as an example of a prototype, so they knew the typical course of action right away. Their experience let them identify a reasonable reaction as the first one they considered, so they did not bother thinking of others. They were not being perverse. They were being skillful. We now call this strategy recognition-primed decision making.

To evaluate a single course of action, the lieutenant imagined himself carrying it out. Fire-ground commanders use the power of mental simulation, running the action through in their minds. If they spot a potential problem, like the rescue harness not working well, they will move on to the next option, and the next, until they find one that seems to work. Then they carry it out. As the example shows, this is not a foolproof strategy. The advantage is that it is usually better than anything else they can do.

Klein’s observation that the strategy is not foolproof coincides with models being approximations of reality, not reality itself. The commander runs a mental simulation, but his or her model does not contain all details of the situation, only the ones that have been unconsciously flagged as relevant. Nonetheless, in most cases the model is useful.

A simulation is a model-based experiment. An experiment is done using the model with the anticipation that the result will increase our understanding of how the real system would respond. This requires that the experiment be within the set (or space) of valid experiments for a given model, i.e. within the design range of the model. François Cellier, in Continuous System Modeling (pp. 5-6), stresses the connection between model and experiment.

If people say that “a model of a system is invalid” (as can be frequently read), they don’t know what they are talking about. A model of a system may be valid for one experiment and invalid for another, that is: the term “model validation” always relates to an experiment or class of experiments to be performed on a system rather than to the system alone. Clearly, any model is valid for the “null experiment” applied to any system (if we don’t want to get any answers out of a simulation, we can use any model for that purpose). On the other hand, no model of a system is valid for all possible experiments except the system itself or an identical copy thereof.

A model may take many forms: a mental model, a map or sketch on a piece of paper, a reduced-scale physical model of the system, an electrical circuit or hydraulic system that is an analog of the real system, or a computer model. The latter might be equation-based or agent-based or a hybrid of the two. Delving deeper into types of models will be another post. I’ll close this overview with a quote from Richard Hamming.

The purpose of computing is insight, not numbers.