Understanding Scientific Results: Fundamental Building Blocks
Somatic Research

By Ravensara S. Travillian

Originally published in Massage & Bodywork magazine, March/April 2008. Copyright 2008 Associated Bodywork and Massage Professionals. All rights reserved.

Last month in this space, we discussed how we planned to mentor the development of skills in research literacy. As with any other skill, we'll start with the fundamental building blocks to create a solid foundation, and we'll build on that foundation step by step. So, right now, if reading something like the following text about aromatherapy massage for anxiety and depression in cancer patients makes you "want to turn and run away" (a quote from a massage therapist I know), that's totally understandable. The researcher who wrote the article is speaking a very specialized language.

Most of us have not been exposed to this language in the course of our education, so at first it can naturally appear a little intimidating. But this language has meanings and purposes, and we'll tease those meanings and purposes out in a methodical and easily digestible way.


Results: patients who received aromatherapy massage had no significant improvement in clinical anxiety and/or depression compared with those receiving usual care at 10 weeks postrandomization (odds ratio [OR], 1.3; 95% CI, 0.9 to 1.7; P = .1), but did at six weeks postrandomization (OR, 1.4; 95% CI, 1.1 to 1.9; P = .01). Patients receiving aromatherapy massage also described greater improvement in self-reported anxiety at both six and 10 weeks postrandomization (OR, 3.4; 95% CI, 0.2 to 6.7; P = .04 and OR, 3.4; 95% CI, 0.2 to 6.6; P = .04), respectively. [Wilkinson SM, Love SB, Westcombe AM, Gambles MA, Burgess CC, Cargill A, Young T, Maher EJ, Ramirez AJ. 2007. Effectiveness of aromatherapy massage in the management of anxiety and depression in patients with cancer: a multicenter randomized controlled trial. J Clin Oncol 25, no.5:532-9.]

If we were to reword the passage above to say the following, it would mean much the same thing: "The improvement that patients who received aromatherapy massage, compared to the patients receiving usual care, showed in anxiety and depression, as measured by particular clinical measurement scales, could not be demonstrated at 10 weeks after the study to be so much greater than the results we would expect by chance alone that we could be reasonably confident that it was due to the massage, but the improvement at six weeks was much greater than the results we would expect by chance alone. Also, patients receiving aromatherapy massage reported a greater improvement in anxiety, as measured by what the patients report feeling, and this demonstrated improvement is so much greater than what we would expect to see by chance alone, that we are reasonably confident that the improvement is due to the massage."

But it would also be quite a bit longer, it would be even more difficult to read, and it would lose some of the nuanced detail communicated by the numbers in the abstract above. So the research jargon serves a purpose: to efficiently communicate a great amount of detailed information.
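To see where numbers like "OR, 1.3; 95% CI, 0.9 to 1.7" come from, it can help to work one through by hand. The sketch below is purely illustrative: the counts are invented for the example and are not the data from the Wilkinson study, and the confidence interval uses the standard log-odds approximation.

```python
import math

# Hypothetical 2x2 counts (NOT the actual study data): how many patients
# improved or did not improve, under massage versus usual care.
improved_massage, improved_usual = 40, 30
not_improved_massage, not_improved_usual = 60, 70

# Odds ratio: the odds of improvement with massage, relative to usual care.
odds_ratio = (improved_massage * not_improved_usual) / (improved_usual * not_improved_massage)

# 95% confidence interval via the standard log-odds approximation.
se_log_or = math.sqrt(1/improved_massage + 1/improved_usual +
                      1/not_improved_massage + 1/not_improved_usual)
lower = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
upper = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)

print(f"OR = {odds_ratio:.2f}, 95% CI {lower:.2f} to {upper:.2f}")
```

Run against the real counts from a published table, this same calculation reproduces the kind of odds ratios and confidence intervals reported in a Results section.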

Once you have the translation to that jargon, you'll have access to those details as well. You'll gain access to the information the researcher is trying to communicate and can decide if that information is useful to your practice, and if so, how it is useful. Let's get started on our path.


The Scientific Method
At the heart of all research lies a set of techniques known as the scientific method. This term is shorthand for a procedure that has been developed and refined through observation, experimentation, and communication. Although the application of the scientific method can be very complicated, involving multiple research centers, a large team of researchers, and a multistage research design, the basic concept is straightforward. In fact, we've all done it many times, even if we didn't call it by its formal name. Every time you work with a client and modify what you are doing in response to feedback (more/less pressure, that feels good/do more, that doesn't feel so great/change it, etc.), you're responding to your observations and testing an intervention to see whether it works. In a nutshell, that is the core of the scientific method.

Have you ever watched a young child discover something new and exciting about the physical world around us--maybe dropping a pan on the floor to make a loud noise or flushing a toilet to watch the water swirl and disappear? Delighted with the new discovery, the child does it again and again to see if the desired effect keeps happening.

So what is the child actually doing?

1) Making an observation about an action in the physical world (for example, dropping the pan) and the outcome of that action (it makes noise).

2) Coming up with a hypothesis (a proposed connection between the action and the outcome): "If I drop this pan on the floor, it makes a loud noise."

3) Testing the hypothesis: drop the pan on the floor.

4) Observing the results: see if dropping the pan does indeed make a loud noise.

5) Modifying the hypothesis, if necessary. Here, it's not necessary, because the predicted outcome resulted, but maybe on carpet the outcome would be different, for example. In that case, the hypothesis would have to be modified: "If I drop this pan on a hard floor, it makes a loud noise."

6) Repeating the experiment to verify the results.

So while that scene looks like--and certainly is--child's play, it's also an example of the scientific method at work. A huge clinical trial across multiple research centers will of course be much more complex than our example, and its research design will contain many features to account for that complexity, but it operates with the same basic principles behind it.

- First, make an observation about the world around us.
- Come up with an explanation (a hypothesis) for what you've observed.
- Test that explanation.
- See if the outcome can be repeated reliably.
- If necessary, modify the explanation to account for what the results of your experiment showed.
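For readers who like to see a process written out mechanically, the observe-hypothesize-test-modify cycle can be sketched as a small loop. Everything here is a toy: the run_trial function is an invented stand-in for the child's pan-dropping experiment, with a made-up rule that only hard floors produce a loud noise.

```python
def run_trial(surface):
    """Stand-in for the experiment: did dropping the pan make a loud noise?"""
    return surface == "hard floor"  # hypothetical rule for this illustration

# Step 2: the initial hypothesis predicts a loud noise every time.
hypothesis = "dropping the pan makes a loud noise"

for surface in ["hard floor", "carpet"]:
    prediction = True                # what the hypothesis predicts
    outcome = run_trial(surface)     # steps 3-4: test and observe
    if outcome != prediction:
        # Step 5: the prediction failed, so modify the hypothesis
        # to account for the new observation.
        hypothesis = "dropping the pan on a hard floor makes a loud noise"

print(hypothesis)
```

The carpet trial contradicts the original prediction, so the loop ends with the narrower, modified hypothesis--exactly the revision described in step 5 above.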

Once you understand the scientific method, you understand the essence of reading and understanding research articles. However, please don't misunderstand me--we still have a lot of details to discuss and understand. Much of the jargon in a research article is simply shorthand to communicate those details in a limited space, so we still have some real work to do before it will all fall into place. But once you understand the purpose of each step in the scientific method, you have passed a major milestone on the path to research literacy.

Each step in the scientific method contributes to the purpose of understanding whether or not there is a connection between an action (for us, some form of massage or bodywork) and a result (for us, some form of improvement for the client). In order to understand whether there is a connection, the question has to be testable, and the test has to provide some way of distinguishing whether the connection occurred or not. In other words, there must be a way to ask yourself, "If my hypothesis is wrong, how can I tell?" This is a very important point, and one we'll return to later for discussion.

A published research article is the researcher's way of communicating about his or her work. Since we said that the scientific method lies at the heart of research, we would naturally expect communications about that work to reflect that structure--and they do. There is a predictable structure for research articles that links them to the scientific method. We can also use that familiar structure to navigate our way through an article.

The structure of an experimental research article can be remembered by the abbreviation IMRaD:

- "I" stands for the Introduction, which lays the groundwork (explains the observation and introduces the hypothesis or research question).
- "M" stands for the Methods, which explains the details of how the hypothesis will be tested.
- "R" stands for Results, which reports what happened when the hypothesis was tested.
- "a" is just to help with the pronunciation of the abbreviation.
- "D" stands for Discussion, where the researcher discusses the meaning of the results: Does the hypothesis need to be modified? Is future research on this question justified? What does this mean for clinical practice?

In future columns, we'll return to this structure with examples that show how it is applied. For now, if you can see at an abstract conceptual level how the scientific method operates, and how the different sections of a research article line up with parts of the scientific method, you're in good shape for our upcoming discussions.


Thinking Statistically
A lot of the jargon in the abstract above consists of very specific statistical measures with particular relevance to research. I promise to keep the statistics to a minimum, because it's not my job to try to turn you into a biostatistician, but a certain amount of statistical familiarity is unavoidable, because it's essential to understanding the results of a study.

The use of statistics becomes necessary because the complexity and diversity of life sometimes make it hard to tell exactly what's going on with a client. Statistics is a tool for describing and predicting what happens in populations or groups. Knowing what to expect in a group will not always tell us exactly what is going on with an individual, of course, but it can give us a context to know what to look for and what to compare against.

Remember the question, "If my hypothesis is wrong, how can I tell?" Let's look at a few examples.

Imagine a scenario where you are treating a client living with cancer. The client asks for massage to relieve pain, and you ask the client before and after the massage to rate the pain on a scale of 1 to 10, where 1 is the least pain and 10 is the most. Your client reports that pain was a 7 before the massage and a 4 afterward. (In Chart 1, the numbers at the left running up vertically from 0 to 8 represent reported pain numbers; the first bar represents the pain reported before massage, and the second bar represents the pain reported after massage.)

From Chart 1, we can clearly see that the pain the client felt before massage decreased after the massage. It seems pretty reasonable to proceed with the assumption that massage may have reduced the pain. We haven't repeated the test, of course, but for now, based on what we already know about the research showing massage helps lower pain, it's not unreasonable to provisionally accept the hypothesis, awaiting further evidence. Working with the information we have, it seems reasonable for the moment to go with the assumption that massage helps pain in cancer patients.

But what about a more complex situation? If we have 10 patients who each show the same kind of improvement, that gives us more confidence that our hypothesis is correct. But what if our 10 patients report something like the following? (In Chart 2, the numbers at the left running up vertically from 0 to 9 represent reported pain numbers. The numbers at the bottom running from left to right represent the patient by a number from 1 to 10. For each of those patients, the first bar represents the pain reported before massage, and the second bar represents the pain reported after massage.)

As the bars show, patients 1, 2, 4, 5, 7, 8, and 10 report the expected decrease in pain, but patients 3 and 9 report no change, and patient 6 actually reports an increase in pain after the massage. So what's going on here? Is there something wrong with your massage for these particular patients or is something else going on that is separate from your massage? How can you tell the difference?

Statistics gives us a set of tools for describing groups and making predictions about trends in those groups, even though no single individual in a group is exactly like any other. It gives us ways of evaluating and understanding what is going on in a group, even when its members differ from one another. So in making statements such as "massage improves pain in cancer patients," statistics permits us to describe which patients show improvement in response to treatment, how much improvement they show, and which patients do not. From that information, we can start to make predictions about what these trends mean for our own clients in practice.
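To make that concrete, here is a small sketch using invented pain ratings that follow the pattern described for Chart 2 (the actual chart values are not given in the text): seven patients improve, two are unchanged, and one worsens.

```python
# Hypothetical before/after pain ratings (0-10) for ten patients,
# loosely matching the pattern described for Chart 2.
before = [7, 6, 8, 7, 9, 5, 6, 8, 4, 7]
after  = [4, 3, 8, 5, 6, 7, 4, 5, 4, 3]

# A positive change means the patient's pain went down after massage.
changes = [b - a for b, a in zip(before, after)]

improved  = sum(1 for c in changes if c > 0)
unchanged = sum(1 for c in changes if c == 0)
worsened  = sum(1 for c in changes if c < 0)
mean_change = sum(changes) / len(changes)

print(f"{improved} improved, {unchanged} unchanged, {worsened} worsened")
print(f"mean change in pain score: {mean_change:.1f}")
```

Counts and averages like these are exactly the kind of group-level description that statistical reporting in a research article compresses into a few numbers.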


Research Overview--Massage And Cancer
I just got back from an informative and well-presented program at the Fourth International Conference of the Society for Integrative Oncology. An entire afternoon was devoted to the presentation of clinical and basic science research on massage and cancer, and the November 2007 issue of the Journal of the Society for Integrative Oncology is devoted entirely to articles about massage for people living with cancer.

The topics range from measuring how massage helps reduce the anxiety of patients awaiting radiation therapy to a theoretical paper about appropriate research design for studying massage. Rounding out the issue is a review of training programs for massage therapists who desire to work in clinical oncology settings and a proposal for possible mechanisms for further research on why massage helps cancer patients. So, there is a nice balance in this issue between the theoretical and the practical, the practice and the teaching, and the emphasis on the patient and on the caregiver. This is a welcome emphasis in a greater context where there is evidence that patients already regard massage as useful and use it widely, and where many of those perceptions of benefit are being borne out in actual practice.

The available evidence demonstrates that anxiety, pain, fatigue, nausea, and depression respond well to massage in the short term, and that massage can enhance quality of life and the sense of empowerment and of taking back control over some aspects of the experience of living with cancer. Many of the treatments used for cancer, such as chemotherapy, surgery, radiation, and bone marrow transplants, have distressing physical side effects, and patients report that massage is an effective complementary therapy in dealing with those side effects and the stress from them. In studies, it has been shown to reduce the effects mentioned above, as well as to promote sleep quality. The research shows that massage is one of the most widely used complementary therapies among people with cancer, but many of the studies have methodological problems, such as small sample sizes, that interfere with the reliability of their results.

By addressing questions such as appropriate research design for massage and possible mechanisms of how massage helps people with cancer, this journal is taking steps to make the next generation of research on massage for cancer higher in quality, and thus more reliable and useful, than the research to date. Together with other emerging research of better quality than before, it is contributing to our knowledge base about the benefits of massage in clinical and home settings for cancer.

Ravensara S. Travillian is a massage practitioner and biomedical informatician in Seattle, Washington. She has practiced massage at the former Refugee Clinic at Harborview Medical Center and in private practice. In addition to teaching research methods in massage since 1996, she is the author of an upcoming book on research literacy in massage. Contact her at researching.massage@gmail.com with questions and comments.



