4

In our last chapter, we addressed the ways in which practice evaluation techniques apply to situations in which a social worker is working with individual clients or client systems. In this chapter, we move to a focus on evaluating practice with groups of clients or client systems, returning to our example of the child welfare social worker. As you will recall, she has twenty clients, all with the same goal: family reunification. In order to support her clients in obtaining their goal, she is running a parenting skills group.

As her clients all have the same goal, and similar objectives, we can evaluate the effectiveness of practice for the entire group at once. If all child welfare social workers in a given unit conduct practice evaluation of this nature, comparisons can be made, leading to potentially important discussions about the improvement of case practice and/or the identification of a best practice. These data will speak to the effectiveness of our social workers’ intervention, but may also help identify confounding factors that are impeding case progress. (Insert callout on confounding factors)

In order to conduct practice evaluation for a group of clients, we need to learn the basics of research design (when data are collected) and sampling (who is included in the practice evaluation). These are two very “research-y” sounding words, but we are going to break them down into something simpler and more palatable. Practice evaluation requires both of these research methods (i.e., sampling and research design), even though research and evaluation are different things. Remember, evaluation uses research methods.

Research or evaluation design

Let’s start by taking a simple approach to explaining research or evaluation design for group practice evaluation. Remember, design is a term that refers to decisions about when we collect data.

Perhaps the simplest research design in evaluation is a cross-sectional design, in which data are collected at one point in time, often as a post-test only. In this scenario, data are collected after an intervention. These data are then analyzed to see what scores clients obtained (if quantitative) or what experiences clients reported (if qualitative). Some social workers like to use a notation system to create a visual representation of a research design. In this notation system, an “O” indicates an observation, or a point of data collection, and an “X” indicates a social work intervention. So, a cross-sectional, post-test only design would be written as XO. Analyses are usually of the whole group, and data are reported as percentages, frequencies, and means/standard deviations.
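To make the whole-group analysis concrete, here is a small sketch using only Python’s standard library; the post-test scores and the cutoff of 30 are hypothetical placeholders, not data from our case example.

```python
import statistics

# Hypothetical post-test scores (e.g., a parenting skills scale scored 0-40)
# collected from the whole group after the intervention: the "O" in XO.
post_scores = [28, 31, 25, 34, 29, 27, 33, 30, 26, 32]

mean = statistics.mean(post_scores)
sd = statistics.stdev(post_scores)  # sample standard deviation

# A frequency and a percentage, using a hypothetical cutoff score of 30.
above_cutoff = sum(score >= 30 for score in post_scores)
pct_above = 100 * above_cutoff / len(post_scores)

print(f"Mean: {mean:.1f}, SD: {sd:.1f}")
print(f"{above_cutoff} of {len(post_scores)} clients ({pct_above:.0f}%) scored 30 or above")
```

Notice that with only one observation point, we can describe how the group looks after the intervention, but we cannot say how much anyone changed.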

Perhaps the most common research design in evaluation, and the one that you will likely use the most, is the pre and post-test, a simple form of longitudinal research design. This can be used in either qualitative or quantitative scenarios, but it is more common in the latter. In this scenario, data are collected before an intervention and after an intervention. These data are then compared to see whether any improvements are noted; the ability to compare before and after is this design’s great strength over the post-test only approach. In a pre-post test evaluation design, you would see this notation: OXO. This implies that an observation starts the process, followed by an intervention, followed by another observation. In chapter 10, you will read about how an evaluation using this research design could be tested through the use of a t-test, an odds ratio, or a chi-square test, for example, depending on the outcome measure used.
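As a preview of the before-and-after comparison (the statistical details come in chapter 10), here is a paired t-test computed by hand with the standard library; the pre and post scores below are hypothetical, and in real work you would use a statistics package rather than hand-rolling the formula.

```python
import math
import statistics

# Hypothetical scores for the same ten clients at the two "O"s in OXO;
# higher scores indicate stronger parenting skills.
pre = [22, 25, 19, 28, 24, 21, 26, 23, 20, 27]
post = [30, 28, 26, 33, 25, 29, 31, 24, 28, 32]

# A paired t-test works with the within-client change scores.
diffs = [after - before for before, after in zip(pre, post)]
mean_d = statistics.mean(diffs)
sd_d = statistics.stdev(diffs)
t = mean_d / (sd_d / math.sqrt(len(diffs)))

print(f"Mean improvement: {mean_d:.1f} points")
print(f"t = {t:.2f} with df = {len(diffs) - 1}")
```

The key idea is that each client serves as their own baseline: the test asks whether the average change score is larger than we would expect from chance alone.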

Another design is referred to as a longitudinal or repeated measures design. In this setup, data are collected at multiple points in time after the pre-test, including at or after the end of the intervention. This can be used in either qualitative or quantitative scenarios, but it is more common in the latter. While this is truly the gold standard design in evaluation, as it allows measures to be tracked over time after the intervention ceases, it is often hard to use because tracking clients is logistically complicated and expensive. Using our notation system again, we could conceptualize a longitudinal design as OXOO, for example. Here, the process starts with an observation, continues with an intervention, and concludes with two later observation points. In chapter 10, you will read how an evaluation like this could be tested through the use of an ANOVA statistical test. Just hold your horses, we will get to that!

In applying the pre-post research design to our case example, our social worker could collect outcome measures from parents and caregivers before and after they engaged in group work on parenting skills (outcome measures to be compared before and after). If she was interested in how the process of the intervention was going, she would track process measures during the intervention only.

If our child welfare social worker did a post-test only, she would only gather such data after her clients had finished the parenting skills curriculum, with a focus on the outcome. If our social worker took a longitudinal approach, she might collect her data at the start of the intervention, during the intervention, at the end of the intervention, and several months after the intervention stopped. The latter approach is the ideal, as it captures information before, during and after the intervention, but it is often impossible for social workers to do given funding constraints and logistical challenges in tracking clients.

Sampling approaches

Research design and sampling are closely connected. Sampling refers to who will participate in the evaluation, and from whom data will be collected according to the research design’s timing. Let’s start by thinking about the two groups involved in a classic pre-post-test design.

In an ideal world, a pre and post-test research design would be given to a group of clients receiving the same intervention as well as a group of similar clients not receiving any intervention (or possibly receiving a different intervention).

The group receiving the intervention is often referred to as the treatment group or the sample group. This is the group of primary interest to the social worker conducting the practice evaluation. The group that is not receiving an intervention, or that is receiving a different intervention, is usually referred to as the control or comparison group. Different disciplines use different terms for the same groups, so we want you to be familiar with the language you might see in the literature, even though that is a little bit confusing.

Next, let’s think about who ends up in the treatment/sample and control/comparison group during the sampling process, or the process of choosing evaluation participants. Every practice evaluation will have selection criteria, or characteristics that must be present in participants sampled for the study. For people in the treatment/sample group, one basic selection criterion (note the singular form) would be participation in the social worker’s intervention; for the control/comparison group, non-participation. For both groups, another selection criterion might be a particular age range (such as anyone aged 18 or older) or geographic location (such as receipt of treatment in town A or town B). Taken together, these individual criteria (the plural of criterion) make up the evaluation’s selection criteria.
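Applying selection criteria is essentially a filtering step. The sketch below, with entirely hypothetical client records, shows criteria like those just described (age 18 or older, treatment in town A or town B) sorting eligible clients into the two groups.

```python
# Hypothetical client records; names, ages, and towns are placeholders.
clients = [
    {"name": "Ana", "age": 34, "town": "town A", "in_program": True},
    {"name": "Ben", "age": 17, "town": "town A", "in_program": True},   # too young
    {"name": "Cal", "age": 41, "town": "town C", "in_program": False},  # wrong location
    {"name": "Dee", "age": 29, "town": "town B", "in_program": False},
]

def meets_criteria(client):
    """Selection criteria shared by both groups: 18+ and treated in town A or B."""
    return client["age"] >= 18 and client["town"] in ("town A", "town B")

eligible = [c for c in clients if meets_criteria(c)]

# The group-specific criterion: participation in the intervention or not.
treatment_group = [c["name"] for c in eligible if c["in_program"]]
comparison_group = [c["name"] for c in eligible if not c["in_program"]]

print("Treatment/sample group:", treatment_group)
print("Control/comparison group:", comparison_group)
```

Note that the shared criteria screen everyone first, and only then does the participation criterion decide which group an eligible client belongs to.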

In quantitative practice evaluations, evaluators often have only enough clients to sample the entire population in the program, which is fine (actually, it’s really great!). Sometimes evaluators instead use convenience samples based on who is available to them, say, whoever walks in on a Tuesday. In that situation, it would not be clear whether the sample was generalizable to the larger population served by the program. However, evaluators in larger agencies or with larger programs may hope to use random samples to lessen their workload and lower their evaluation costs (where the number of clients available and willing to participate makes that possible). This takes us back to a basic research methods principle: researchers have determined that random sampling gives you a better chance of ending up with a sample that reflects the larger population it seeks to represent.

In this situation, a random sample can also be thought of as more likely to be generalizable, or to have external validity, to the practice setting in question (as opposed to the larger community or society). Using random sampling helps a practice evaluation be as accurate as possible in its findings. It is important to remember that in qualitative practice evaluations, we are not concerned with generalizability outside of our practice setting; instead, we are focused on learning about the experiences of our purposive, or purposefully selected, sample as it relates to our program population. Purposive sampling is not the same as convenience sampling.

In our case example, parents or caregivers receiving parenting skills training would be the treatment/sample group, and parents waiting for a space in the same group would be the control/comparison group.

Now, we need to connect our thinking about sampling back to our discussion on research design, because the two overlap. In an experimental practice evaluation design, clients are ideally assigned at random (a process called random assignment) into a treatment/sample group that receives an intervention and a control/comparison group that does not receive it. However, it is unethical to withhold an available intervention from a client, meaning that most often, group practice evaluations use people on the waiting list as a control group, or have no control group at all. In these situations, researchers would term the practice evaluation quasi-experimental. In our child welfare case example, a quasi-experimental design would be used in order to avoid an ethics problem. So, when deciding on sampling, you also need to make choices about group assignment.
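The random split into two groups can be sketched in a few lines with the standard library; the pool of twenty client labels below is a hypothetical stand-in, and the fixed seed is there only so the sketch is reproducible.

```python
import random

# Hypothetical pool of twenty clients who all meet the selection criteria.
clients = [f"client_{i:02d}" for i in range(1, 21)]

random.seed(42)  # fixed seed so this illustrative split can be reproduced

# Randomly assign half to the treatment group; the rest form the control group.
treatment = random.sample(clients, k=len(clients) // 2)
control = [c for c in clients if c not in treatment]

print("Treatment/sample group:", sorted(treatment))
print("Control/comparison group:", sorted(control))
```

Every client has the same chance of landing in either group, which is what makes the design experimental; replace the random split with a waiting list and the same evaluation becomes quasi-experimental.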

Now that you have seen how these two research methods apply to evaluation, let’s think about best practices you should consider when planning group-focused practice evaluation.

Best practices in the ethical conduct of client group/program tracking to inform your practice:

These best practices apply in group settings where individual clients all receive the same intervention for the same problem, and in other group interventions.

  • Each social worker participating in the practice evaluation should explain to their clients that they will be evaluating their practice during the time they will be working together (i.e., the intervention period and perhaps after) in order to determine whether the intervention is effective.
  • Explicate that your practice evaluation data will be kept confidential and will not be reported outside of the agency unless a funder requires it, but may be used in supervision.
  • Explain the observable and measurable objectives that will be used as process and/or outcome measures during the intervention (these will be the same for all in the program).
  • Commit to sharing your practice evaluation results with your client during the course of the intervention and after. This can lead to good conversations.

Any practice evaluation designed for publication or conference presentations:

  • Run your proposal through an Institutional Review Board, a process that will include the creation of an informed consent form (see exemplar, on the next page).
  • Explain to clients that you/your agency would like to evaluate your practice during the time you will be working together (i.e. the intervention period and maybe after) in order to determine whether the intervention is effective.
  • Explicate that your practice evaluation data will be de-identified and kept confidential and read through an informed consent document (designed for your study, see example on next page) together.
  • Commit to sharing your practice evaluation results with your client during the course of the intervention and after. This can lead to good conversations.

Informed Consent Form for Study Title (insert study title)

INTRODUCTION: Please read this form carefully. If you consent to take part, as a participant, in the studies being undertaken by (Principal Investigator’s name), then you should sign the consent form. If you have any questions, or are unsure about anything, then you should not sign until your concerns have been resolved and you are completely happy to volunteer. (In plain language, describe the reason for the study, the techniques being used, and the practical details of participating in the study from the subjects’ perspective. Focus on points relating to the subject’s likely experience, and emphasize any risks involved and how you will minimize those risks.)

PARTICIPATION: You may at any time withdraw from the study. You do not have to give any reason, and no one can attempt to dissuade you. If you ever require any further explanation, please do not hesitate to ask.

RISKS: (Choose from the following statements as applicable to your individual study and delete the rest): There are no foreseeable risks involved in participating in this study other than those minimal risks encountered in day-to-day life [OR] There is the minimal risk that you may find some of the questions to be sensitive in nature [OR] There is the minimal risk that some questions may cause emotional discomfort [OR] Some of the survey questions ask about (insert information here) and may be distressing to you as you think about your experiences [OR] In order to mitigate (this/these risk/s), the research team will (insert mitigation plan here).

BENEFITS: The benefits of your participation in this survey are (insert information here). The benefits of this study in general are (insert information here).

ANONYMITY/CONFIDENTIALITY: (Choose one of the following: anonymous or confidential, you can’t have both) Anonymity: Data obtained during this study will not be able to be linked to your identity. Confidentiality: Any personal data obtained during this study will remain confidential as to your identity. If personal information can be specifically identified with you, your permission will be sought in writing before it will be published. Other data, which cannot be connected to you, will be published or presented at meetings with the aim of benefiting others.

For questions or concerns about this study, please contact (insert principal investigator name, title, contact information).

Initial if in agreement

I confirm that I have read and understood the attached information sheet for the above study. I confirm that I have had the opportunity to consider the information and ask questions and that these have been answered satisfactorily.

I understand that my participation is voluntary and that I am free to withdraw at any time, without negative consequences and without giving any reason.

I agree to take part in this study.

I understand that the results of this study may be published and/or presented at meetings and may be provided to research sponsors or regulatory authorities. I give my permission for my (Choose one: anonymous/confidential) data, which does not identify me, to be disseminated in this way.

In summary, when conducting practice evaluation with groups of clients, we collect process and outcome measures, but we also have to think about research design and sampling. Research design refers to when data will be collected, or at which points in time. Sampling refers to the people from whom data will be gathered. In thinking about sampling, we must consider our selection criteria. We have to think about research design and sampling together in determining whether we are using an experimental or a quasi-experimental design in our practice evaluation.

Discussion questions for chapter 4:

  • Explain the difference between experimental and quasi-experimental practice evaluation designs, and why social workers should care about the difference.
  • In thinking about evaluating your own field placement, which research design seems most appropriate: a pre-post test, a post-test only, or a longitudinal approach?
  • What are the key elements of an informed consent document?
  • What is the difference between anonymity and confidentiality?