
Clint Evans, MA, BCBA
Author: “Use of Preference Assessments in the Field of Applied Behavior Analysis”
Contact for correspondence, revision, and commentary: thebehaviorchef@gmail.com

Stimulus preference assessments are a vital piece of the puzzle of identifying which stimuli could function as reinforcers for a given individual under a particular set of circumstances (Cooper, Heron, & Heward, 2007). There is a plethora of literature on the use of preference assessments to identify potential reinforcers. How can we know which assessment to use in each situation, and furthermore, how do we relate the preferences identified to the magnitude of reinforcement they may produce for an individual client? Looking at the literature, we can surmise what is considered best practice and how far the attitudes of our science concerning replication and growth of preference assessment knowledge have come.

Before 1985, practitioners would choose preferred stimuli for clients without first testing the reinforcing effects of the observed preferred choice, or would arbitrarily select a suspected reinforcer without study (Fisher, Piazza, & Roane, 2011). That changed when a cornerstone article was released by Pace, Ivancic, Edwards, Iwata, and Page detailing a strategy that tested the reinforcer potential of preferred stimuli chosen by persons with disabilities. The article examined whether an item identified as preferred actually functioned as a reinforcer. They found that several of the choices identified as preferred during the preference assessment were not directly useful in increasing desired behavior when applied as reinforcers. Conversely, they found that some of the non-preferred stimuli had a higher magnitude of reinforcer value than the preferred stimuli chosen. This led to a discussion of the arbitrary relation between preferred choice and direct reinforcer value. The article has since opened the door for many methods of evaluating preference and reinforcer value both independently and interdependently. This speaks to the growth of our science and the use of the scientific method to choose best-practice procedures when conducting analysis (Pace et al., 1985).

Pace and colleagues were not without limitations in their single-stimulus preference design. Although they showed a need to assess both preference and reinforcement value, they did so with only a single-stimulus approach. Mazaleski, Iwata, Vollmer, Zarcone, and Smith (1993) conducted a study in which two of the three participants approached all the stimuli that were presented, marking every item as a possible "reinforcer"; this suggested that a hierarchy of preference may be present when assessing stimuli. These findings shed more light on the need to develop other strategies that efficiently identify genuinely preferred stimuli and then test those stimuli for their reinforcing value when creating an effective treatment plan. False-positive stimuli are another possible detractor from single-stimulus presentation; Paclawskyj and Vollmer (1995) found that environmental issues play a factor as well. They tested preference assessments with participants who were disabled, some of whom were visually impaired, and concluded that preference assessments conducted with individuals with specific impairments such as blindness may report false positives because the participant approaches every tangible item presented, giving the impression that each is, in effect, preferred. These limitations of single-stimulus preference assessments gave birth to the notion of exploring other avenues of preference assessment.

Philosophic doubt would dictate that there may be a better solution to preference assessment than a single-stimulus design. In the years following 1985, at least four significant protocols were developed for preference assessment, each with variations. The four main types in wide use are the paired-choice preference assessment, the multiple-stimulus assessment, the multiple-stimulus-without-replacement assessment, and the free-operant assessment (Fisher, Piazza, & Roane, 2011). An attitude of the scientific method is to use experimentation to further currently accepted theories and protocols. Fisher, Piazza, Bowman, Hagopian, Owens, and Slevin (1992) compared the original single-stimulus preference assessment against an updated paired-choice stimulus preference assessment. Conducting both protocols showed that all stimuli were marked as preferred, but when the single-stimulus assessment was used the subject chose nearly every stimulus as highly preferred, whereas the paired-choice assessment differentiated the level of preference between two concurrently approachable stimuli. In doing this, the article evaluated not only the preference for the stimuli but their inherent reinforcing value as well. The article went on to say that it may be beneficial to use the new paired-choice assessment over the single-stimulus assessment because it showed more concurrent validity, but the single-stimulus assessment may still be preferred with subjects who are severely disabled due to its novel approach to assessing preference (Fisher et al., 1992). As with most therapeutic approaches in the field of Applied Behavior Analysis, the more specific practices advance, the more the answers depend on the individual with whom a behavior change agent is working.
Fisher and associates gave the field a new approach that in some cases had more validity than the original, pure single-stimulus preference assessment. Even though advancements had been made, there were still limitations with the paired-choice assessment that led to new developments in the field of preference assessment.
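To make the paired-choice logic concrete, the assessment's output can be summarized as the percentage of presentations on which each item was selected, which is what differentiates levels of preference between items. The sketch below is a minimal illustration in Python with hypothetical stimulus names and trial data, not a reproduction of Fisher et al.'s actual procedure:

```python
def score_paired_choice(trials):
    """Rank stimuli by the percentage of presentations on which each was chosen.

    `trials` is a list of (stimulus_a, stimulus_b, chosen) tuples, one per
    paired presentation. Returns (stimulus, percent_chosen) pairs sorted
    from most to least preferred.
    """
    presented = {}
    chosen_counts = {}
    for a, b, chosen in trials:
        for s in (a, b):
            presented[s] = presented.get(s, 0) + 1
            chosen_counts.setdefault(s, 0)
        chosen_counts[chosen] += 1
    return sorted(
        ((s, 100.0 * chosen_counts[s] / presented[s]) for s in presented),
        key=lambda pair: pair[1],
        reverse=True,
    )

# Hypothetical session: each pair of three items presented twice.
trials = [
    ("bubbles", "music", "bubbles"),
    ("bubbles", "puzzle", "bubbles"),
    ("music", "puzzle", "music"),
    ("music", "bubbles", "bubbles"),
    ("puzzle", "bubbles", "bubbles"),
    ("puzzle", "music", "puzzle"),
]
print(score_paired_choice(trials))
# -> [('bubbles', 100.0), ('music', 25.0), ('puzzle', 25.0)]
```

Unlike a single-stimulus assessment, where approaching nearly every item yields a flat profile, forcing a choice between two items spreads the percentages apart and exposes the relative preference.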

According to the Handbook of Applied Behavior Analysis, paired-choice preference assessments are not without their limitations. They are a noted improvement over the single-stimulus preference assessment, but they are potentially limited in at least two areas. The administration of a paired-choice assessment may take longer to complete than that of other assessment tools, and unwanted behavioral side effects can arise in some subjects from the repeated introduction and removal of preferred stimuli (Fisher, Piazza, & Roane, 2011). Direct observation of these effects over repeated administrations of the paired-choice assessment led to yet another protocol being developed.

Windsor, Piche, and Locke (1994) saw the limitations of the paired-choice preference assessment and wanted to take it a step further. They presented multiple stimuli at once to determine preference for subjects with severe disabilities. A therapist presented six items to a subject over five sessions, with each session holding ten trials. Each trial began with the prompt "which one do you want?" The therapist allowed a twenty-second latency period, and the subject gained twenty seconds of interaction with an item if they reached for or grasped it. For trials in which the subject did not choose to interact with an item, observers would score "no response," the trial would end, and they would move to the next trial. The results showed that this method was very useful in identifying preferred items, but it did not test the reinforcement value of the items deemed preferable (Windsor, Piche, & Locke, 1994).

Although the multiple-stimulus assessment was an apparent improvement over the single-stimulus assessment, there was room to enhance it further. In 1996, DeLeon and Iwata evaluated an extension of the multiple-stimulus assessment. In the original study, Windsor et al. (1994) replaced the chosen stimulus in each trial with a new stimulus in the next trial. In DeLeon and Iwata's extension, known as the multiple-stimulus-without-replacement (MSWO) assessment, a chosen stimulus was not replaced in the following trial; instead, the remaining items were presented again, and the order of selection across trials yielded a discrete preference hierarchy. As a result, the subjects showed increases in responding when the experimenter presented the preferred stimuli contingently (DeLeon & Iwata, 1996). Not only did DeLeon and Iwata further the previous research, but they also compared the assessments along three dimensions: rank order of preferred stimuli, the time required for administration, and the number of potential reinforcers identified. Four of the seven participants showed a top preference for the same stimulus in all three assessments, and the remaining three showed a high correlation as well. The multiple-stimulus-with-replacement (MSW) assessment, as outlined by Windsor et al., proved the quickest to administer, followed by the MSWO and the paired-choice assessments respectively. One drawback of the MSW is that it does not rank items into a hierarchy as the other two (paired-choice, MSWO) do. Another aspect that makes the MSWO attractive is its generality to the general-education setting (Fisher, Piazza, & Roane, 2011).
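The hierarchy that makes the MSWO useful falls out of the selection order: because a chosen item is removed, the position at which an item is picked is its rank for that session, and ranks can be averaged across sessions. The following sketch uses hypothetical item names and session data to show that scoring idea, not DeLeon and Iwata's exact analysis:

```python
def mswo_hierarchy(sessions):
    """Average each stimulus's selection rank across MSWO sessions.

    `sessions` is a list of selection orders; each inner list gives the order
    in which the remaining stimuli were picked (first pick = rank 1). A lower
    mean rank means a higher preference. Returns (stimulus, mean_rank) pairs
    sorted from most to least preferred.
    """
    ranks = {}
    for order in sessions:
        for position, stimulus in enumerate(order, start=1):
            ranks.setdefault(stimulus, []).append(position)
    return sorted(
        ((s, sum(r) / len(r)) for s, r in ranks.items()),
        key=lambda pair: pair[1],
    )

# Hypothetical data from three MSWO sessions with the same four items.
sessions = [
    ["juice", "ball", "book", "blocks"],
    ["juice", "book", "ball", "blocks"],
    ["ball", "juice", "book", "blocks"],
]
print(mswo_hierarchy(sessions))
# -> juice first (mean rank ~1.33), blocks last (mean rank 4.0)
```

An MSW assessment, by contrast, has no such ordering to exploit, which is why it cannot produce a hierarchy.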

Building upon the previous research in preference assessment, Roane, Vollmer, Ringdahl, and Marcus (1998) developed a "free-operant" preference assessment. In this assessment, participants had free and continuous access to an array of stimuli for five minutes and could manipulate any of the stimuli in the room without replacement or withdrawal of items. Roane et al. (1998) compared these findings with those of a paired-choice preference assessment and found that the mean time to administer the free-operant assessment was less than that of the paired-choice. Also, 84.6 percent of the participants engaged in higher levels of problematic behavior during the paired-choice assessment than during the free-operant assessment (Fisher, Piazza, & Roane, 2011). Other research has indicated that free-operant assessments show even less problematic behavior during implementation than the MSWO (Fisher, Piazza, & Roane, 2011).
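Because nothing is presented or removed in a free-operant assessment, preference is inferred from how the participant allocates time across the array rather than from discrete choices. A minimal sketch of that duration-based scoring, with hypothetical items and observation data rather than Roane et al.'s figures:

```python
def free_operant_preference(engagement_seconds, session_length=300):
    """Rank stimuli by the share of a five-minute (300 s) free-operant
    session the participant spent engaging with each one.

    `engagement_seconds` maps stimulus -> total seconds of observed
    interaction (hypothetical data). Returns (stimulus, percent) pairs
    sorted from most to least engaged-with.
    """
    return sorted(
        ((s, 100.0 * t / session_length) for s, t in engagement_seconds.items()),
        key=lambda pair: pair[1],
        reverse=True,
    )

# Hypothetical session: 15 s of the 300 s were spent unengaged.
engagement = {"tablet": 180, "blocks": 75, "crayons": 30}
print(free_operant_preference(engagement))
# -> [('tablet', 60.0), ('blocks', 25.0), ('crayons', 10.0)]
```

The absence of repeated item removal is also what the cited research credits for the lower rates of problematic behavior relative to paired-choice and MSWO formats.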

Preference assessments have evolved since the early days of ABA, and in doing so there have been suggestions to adapt each of the significant assessments discussed here for individuals with other disabilities and needs. Fisher, Piazza, and Roane detail, in many sub-sections, ideas for adapting these assessments in such areas as activity restriction, duration, vocal report, caregiver nomination, pictorial representation, concurrent chains, and even group arrangements. With a multitude of research based around preference assessments and the introduction of assessments such as the MSWO and paired-choice that give us a preference hierarchy, the question then shifts from preference to the efficacy of reinforcement for the preferred items.

Research into the realm of reinforcer efficacy shows that not all preference assessments accurately predict levels of reinforcement. According to Fisher, Piazza, and Roane, a preference assessment of four participants with severe behavioral problems was conducted by Piazza, Fisher, Hagopian, Bowman, and Toole in 1996 to differentiate high, medium, and low preference among stimuli. While the assessment examined level of preference, it concurrently assessed the reinforcer value of the preferred choices, and the preference rankings predicted the reinforcer values from low to high. Although this was a great find in the study of preference assessments, others, such as DeLeon, Iwata, and Roscoe (1997) and Taravella, Lerman, Contrucci, and Roane (2000), showed that in some circumstances lower-ranked preferred items can have reinforcing value as well (Fisher, Piazza, & Roane, 2011).

While there have been successful studies into the efficacy of reinforcement after preference has been identified, methods for evaluating reinforcer effects were born from these investigations. Most investigators use a free-operant approach to test the reinforcer value of preferred items. Fisher, Piazza, and Roane (2011) suggest that most studies in reinforcer assessment choose free-operant procedures for several reasons. The goal of the assessment is to see whether the stimulus in question acts as an appropriate reinforcer rather than to teach a specific response; therefore, simple responses are ideal for these types of assessments. Because of this approach, the application of free-operant reinforcer assessments is easily generalized across a wide array of disabilities and participants (Fisher, Piazza, & Roane, 2011). The use of a simple response is typically more time-efficient, and it can reveal deficiencies in behavioral repertoires for more complex responses, shedding light on areas for improvement. Moreover, using a simple response target can rule out a skills deficit when reinforcement effects are absent, pointing instead to a motivational deficit directly related to reinforcement (Fisher, Piazza, & Roane, 2011).

Evaluating the effectiveness of simple versus complex responses in reinforcer assessment strengthened the validity of using simple responses for more precise results, which led to the development of comparing concurrent operants of reinforcement. Fisher et al. used a concurrent-operant arrangement to evaluate the efficacy of reinforcement for preferred stimuli. The advantage here is that the magnitude of responding on each operant is a function of the magnitude of the reinforcement and the schedule in place, and is not related to response competition or interference (Fisher, Piazza, & Roane, 2011). The matching law is observed in the concurrent-operant study because of the competing reinforcers being tested; in most natural environments this is a commonly occurring phenomenon due to the prevalence of potential reinforcers available at any given time.

As the studies on preference and reinforcer efficacy grew, the idea of progressive schedules of reinforcement emerged. Fisher, Piazza, and Roane (2011) discuss how progressive-ratio schedules were used in landmark fashion during a 2001 study conducted by Roane, Lerman, and Vorndran. This study used a progressive-ratio schedule of reinforcement within a single observation instead of across multiple sessions. The schedule would begin with a simple FR 1 requirement for a participant to access a preferred item and potential reinforcer. As the session went on, the criterion to obtain reinforcement increased (from FR 1 to FR 2, FR 3, etc.) until a breaking point was observed. The observers reported that the progressive-ratio assessment was more time-efficient than other methods because only one session was needed to garner results. They could also see differences in reinforcer effects under the changing criteria and a correlation with lower rates of problematic behavior. This input was used for treatment design because it displayed data on the magnitude of reinforcement relative to performance for each subject and individual stimulus (Fisher, Piazza, & Roane, 2011).
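The quantity a progressive-ratio schedule yields is the breakpoint: the largest response requirement the participant still completes before responding stops, which indexes how much work the reinforcer will sustain. The sketch below illustrates that calculation on hypothetical single-session data, not on Roane, Lerman, and Vorndran's results:

```python
def pr_breakpoint(requirements, responses):
    """Return the largest ratio requirement the participant actually met.

    `requirements` and `responses` are parallel lists: the escalating
    response criterion at each step, and the number of responses the
    participant emitted before that step ended. A higher breakpoint means
    the stimulus sustained more work.
    """
    met = [req for req, resp in zip(requirements, responses) if resp >= req]
    return max(met) if met else 0

# Hypothetical session data: criterion escalates FR 1, FR 2, FR 3, ...
requirements = [1, 2, 3, 4, 5, 6]
responses    = [1, 2, 3, 4, 2, 0]  # responding breaks down at FR 5
print(pr_breakpoint(requirements, responses))  # -> 4
```

Comparing breakpoints across stimuli is one way such data can feed treatment design, since a stimulus with a higher breakpoint sustained responding under a leaner schedule.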

Research has come a long way since the mid-1980s. The scope of preference assessments has grown, and the identification of the need for reinforcer assessment, along with schedules of reinforcement in direct relation to the magnitude of reinforcement and treatment development, have been major milestones. Alongside those methods and procedures, evidence has accumulated over the years that other factors may be at play. Fisher, Piazza, and Roane (2011) discuss a myriad of other influences on reinforcement, such as choice as reinforcement, schedules and dimensions of reinforcement, changes in preference over time, and stimulus variation. Stimulus variation has proven, through study, to be an effective way to enhance the effectiveness of reinforcement: placing a hierarchically chosen list of items on a varied rotation avoids satiation or deprivation with respect to the reinforcer and the accompanying behavioral effects (Fisher, Piazza, & Roane, 2011).

When assessing environmental factors, research shows that simple things like availability of access to preferred items play a role in the effect of reinforcers. Gottschalk, Libby, and Graff (2000) compared the impact of deprivation (an establishing operation) and satiation (an abolishing operation) on the reinforcer effects of preferred items for four individuals with developmental disabilities. They used a paired-choice assessment to determine preference and then arranged access under three different conditions. In the control condition, for 24 hours before the assessment, participants were allowed access to their preferred edibles in small portions on a controlled timetable. In the deprivation condition, participants were allowed access to three of the four preferred items for a 24-hour period and were deprived of the fourth for 48 hours. Finally, in the satiation condition, participants were allowed access to the four preferred edible items in the same 24-hour structure as the control condition but then received ten minutes of free-operant access to one of the highly preferred items. Four assessments were conducted under both satiation and deprivation and three under the control condition. Following these conditions, a direct correlation was observed between satiation, deprivation, and choice: during satiation, the one item available in free-operant access before the assessment was chosen less frequently than the other three, and the opposite effect was observed during the deprivation condition (Gottschalk, Libby, & Graff, 2000). Many factors play a part in the choice and efficacy of reinforcers identified through preference assessments. As the field continues to grow, the question becomes: what is the next addition to the research on preference assessments?

With the increase of readily available technology and the reduced response effort to gain information, video is being used at a faster pace in ABA than ever before. Preference assessments are also evolving to use the benefits of video. Snyder, Higbee, and Dayton (2012) compared the correspondence between tangible stimuli and video representations of tangible stimuli for six participants. The results showed that the top-ranked item corresponded across the hierarchical assessments for five of the six participants, and the high- and low-ranked items corresponded for four participants (Snyder, Higbee, & Dayton, 2012). This study shed light on the use of video for displaying particularly complex items in a social environment and showed that video can be a viable resource for preference selection. Some limitations were discussed, such as choice bias and a lack of the prerequisite skills needed to advocate for a choice, but the researchers are enthusiastic about future research.

Preference assessments have grown by leaps and bounds. The research shows that as our world changes, the approach to preference assessment adapts with it. Scientifically, there has been a move from arbitrary and archaic methods of choosing preferences to advancements and refinements that now demonstrate reinforcer efficacy and treatment productivity. The science is even breaking into the realm of video as a tool for reinforcer-class identification. The future is bright for the study of preference assessments, and for the subsequent spill-over into the broader realm of ABA, because of this growing science.

Preference assessments have become an integral part of the assessment and development process for effective treatment of behavioral issues in participants of ABA therapy. The science has grown by leaps and bounds since the inception of preference assessments, and along with that scientific growth, ABA practitioners are held to an ethical standard in the practice of these assessments and reinforcer methods. According to the BACB Professional and Ethical Compliance Code, the very first article regarding ethical practice is reliance on scientific knowledge. Scientific knowledge is the basis of all practice in the ABA world. According to Van Houten, Axelrod, Bailey, Favell, Foxx, Iwata, and Lovaas (1988), those who provide behavior-analytic services are to have a grasp of the foundational knowledge they are using to effect behavior change in the participants who are part of their practice. Their article analyzed the foundational beginnings of Applied Behavior Analysis as it relates to the rights and welfare of the individuals receiving services. It was observed that the early stages of the science were not rooted in a scientific foundation and therefore caused harm to some individuals who received services. As discussed earlier, the issues that occurred early in the application of ABA in Florida led to the formation of an ethics committee and governing board for the science and practice of ABA. Van Houten et al. said very poignantly:

Behavior analysts have a responsibility to ensure that their clients' rights are protected, that their specialized services are based on the most recent scientific and technological findings, that treatment is provided in a manner consistent with the highest standards of excellence, and that individuals who need service will not be denied access to the most effective treatment available. (Van Houten et al., 1988, p. 384)

That article was released in 1988 and was written by some of the foundational names in the recent history of the field. It states the need for services to be evaluated under scientific scrutiny, as well as with professional care for the clients who are under the supervision of certified analysts. Practitioners not only must make sure that their services fall under the umbrella of scientific knowledge; they also must include the client in the process of creating treatment protocols. Each of the articles reviewed here in relation to the preference-assessment scope of practice was written after gaining consent from the parties involved and ensuring a safe environment, not only for the conduct of the study but for the clients themselves. Given the history of abuse and neglect reported before the board of ethics was created, care is now the utmost priority when dealing with clients. These notions can be found in the second section of the BACB Professional and Ethical Compliance Code, which addresses the behavior analyst's responsibility to clients.

The Association for Behavior Analysis International (ABAI) has a set of guidelines for student development, reviewed in 1990 and available on its website. This is important to the study of ABA because not all instances of treatment take place in a clinical setting. A student learning in a typical classroom has the same rights as a client receiving services in a treatment facility. The Professional and Ethical Compliance Code (PECC) and the guidelines set forth by the ABAI both treat the needs of the client as imperative, but through different lenses. The ABAI list includes six categories of student rights to adequate education, each centered on the success and safety of the learner. The guidelines presented by both the ABAI and the PECC record that the student or client should be a part of the planning and implementation process to effect lasting and ethical behavior change.

Outside of the ethical considerations of scientific basis and client involvement, the behavior analyst is also held responsible to the Behavior Analyst Certification Board. The BACB has a strict set of guidelines that one must follow to attain the title of BCBA. According to BCBA Eligibility Option 1, applicants who wish to seek the BCBA credential must have a master's degree in a field related to ABA. They must also complete coursework specifically tailored to the foundational knowledge of ABA and a rigorous practicum in which they are observed and supervised by BCBAs who hold the supervision accreditation (BCBA Option 1). On top of that, applicants must finish their coursework satisfactorily, submit their completed coursework and application to the BACB, and then pass the certification board exam to qualify for the title of BCBA.

With these safeguards in place, one would think that the field of ABA, as measured by certificants, would grow slowly, but current data suggest the opposite. According to the Behavior Analyst Certification Board, Board Certified Behavior Analysts alone have increased from a recorded 28 in 1999 to a staggering 26,879 as of 2017. This suggests that as the science of ABA grows and the need for services increases, the number of caring individuals who seek to make a significant impact in the world of behaviorism is not slowly increasing but growing at a remarkable rate.

There are many guidelines in place for one to become a BCBA. The reason for these guidelines is to ensure that all clients who receive behavior-analytic services do so under the watchful eye of credentialed, vetted, and competent practitioners, as stated by Van Houten et al. Relating all of this back to the literature on the preference-assessment scope of practice is simple: the articles submitted for publication were peer-reviewed and formatted for discussion, implementation, and replication of results as they relate to the field of ABA and the science that governs the practice. The Journal of Applied Behavior Analysis is overseen by BCBAs, for BCBAs and the growth of the science of ABA. According to its author guidelines, the purpose of the journal is to discuss socially appropriate functional relations in the field. Everything done in the current science of ABA has grown from the early growing pains and tragedies of the field into a reputable scientific discipline designed to help those with socially significant behavioral needs receive treatment that enhances their independence in daily living.

Ethics of Preference Assessments

One of the core tenets of applied behavior analysis is social validity. In everything practitioners do, they have to make sure that the programs they are writing are socially valid for the client and the team involved. According to an article by Montrose Wolf in 1978, social appropriateness is one of the forms of social validity to which one must give attention. It sounds like a novel idea, but a good deal of the Ethics Code deals with client-centered practice because that is the root of what practitioners do: they care for a client's needs. This core principle does not skip over the topic of preference assessments.

When a BCBA conducts a preference assessment, there are several things to consider and gather before carrying out the process. One of the first things a practitioner will need to do is talk to the client or his or her guardians directly. This helps narrow the field of potentially preferred items that can be used during the assessment (Meller & Runyon, 2017). Having an idea of what may or may not be a preferred item may seem like a small feat, but it is crucial to pinpointing the most effective potential reinforcers available for the client's success. Everything a practitioner does, from choosing a list of items to deciding what type of assessment to conduct, comes down to collaboration among team members.

There is an adage that "it takes a village to raise a child," and the same principle can be applied to the services of a BCBA. A practitioner is not a one-person operation and certainly cannot do everything that is necessary alone. Without a collaborative team effort, it would be unethical for a practitioner to act in any realm of ABA, even in conducting a preference assessment. According to the Professional and Ethical Compliance Code for Behavior Analysts, section 2.0, every licensed practitioner has an obligation to the client first, including the use of third-party consults. Beyond third-party collaboration, there is the exploration of documents and other indirect assessments, as well as direct assessments and interviews, as part of the process of creating a competent treatment package. In creating a responsible treatment package, a preference assessment is a large part of figuring out what will be useful as a potential reinforcer for a client to build their behavioral repertoire in a socially significant manner.

A useful skill for any practitioner is the ability to collaborate with others. Inflated egos born of a wealth of knowledge hamper interpersonal skills. As practitioners of ABA grow in competence and confidence, they must realize that working with other professionals is a necessity. This idea runs throughout the field of ABA; peer-reviewed journal articles are one of the backbone informational pieces of the entire science. More than just working alongside other practitioners of various expertise, the practitioner must work even more closely with the client and the nuclear family of caregivers around them. In the applied setting, the BCBA is unique because of the assessment protocol and the way practitioners approach behavior-specific situations.

When working with a team of professionals from other fields focused on the same client, interpersonal skills are ever necessary. BCBAs are scientific minds who approach behavioral issues with an assessment-first approach. In the field, BCBAs may work with other disciplines that do not share the same level of expertise or honed, thoughtful approach. When a BCBA finds that a colleague from another discipline is offering a solution to a behavioral issue that the BCBA may not agree with or understand, they are confronted with an ethical dilemma. Brodhead (2015) tells us that there are ways a BCBA can approach an unfamiliar situation analytically and still decide what is best practice while maintaining professional integrity with the colleague with whom they disagree. Brodhead proposed that the BCBA use a checklist of sorts to determine whether the issue at hand is worth concern; just because something is unfamiliar does not make it inherently wrong. The proposed checklist begins with client safety and proceeds through a litany of other analytical questions to determine whether the BCBA should question the use of the non-behavioral tactic suggested by others.

The protocol described by Brodhead would be useful in assessing ideas from other facilitators regarding preference assessments. Although a preference assessment is primarily a behavior-analytic tool, that does not mean others may not have suggestions on how one should apply it. Consider the free-operant preference assessment: if an analyst is new to a setting where the client has been receiving in-home services from a speech-language pathologist for months, it would not be unethical to collaborate with that clinician to conduct a more naturalistic free-operant assessment based on the knowledge the speech therapist has of the client. In the field of ABA, practitioners must consider the relationships that their non-behavioral counterparts have with clients and how to utilize them well. Citing the second section of the Compliance Code again, any behavioral practitioner under the watch of the BACB must do right by clients first, and that includes the ethical responsibility to listen to non-behavioral colleagues.

The concept that the client comes first is nothing new. Around that client there is a community of involved people, both professional and civilian. The first thing to consider in the assessment of helping a client is the nuclear family. According to an article from simplypsychology.org written by McLeod, love and belonging form the third tier of Maslow's hierarchy of needs, including feelings of love and intimacy from family and friends and a sense of connection. Without considering the feelings of the client concerning the family and those around them, the efficacy of treatment may be in question. BCBAs are trained to examine more than just behaviors; they are taught to consider environmental factors that may bear upon the observed behavioral responses of clients. Tying together the information from Maslow and the directive for the BCBA to examine the environment, including the community both big and small, in any assessment is crucial to the success of any plan of action. A big part of connecting with a client on a personal level is understanding their cultural heritage.

Culture is an essential element in constructing a treatment plan for a client. Different cultures interact in different ways, and some cultures carry expectations that others do not. When a BCBA is working with a culture they are not familiar with, they should do the research required to understand it. Liao, Dillenburger, and Buchanan (2017) investigated whether there were marked differences in the way behavior-analytic services were delivered in the UK and China. In short, they found several differences in both culture and policy surrounding the implementation of early-intervention ABA principles in the two countries. Understanding culture matters.

Working with a family in the field of ABA is a unique experience. One does not simply show up at a family's home, conduct assessments, write protocols, train staff, and leave. Working in this field involves a personal connection, unique to the discipline, with the client and their family. Each family a practitioner works with has a culture all its own, and the practitioner becomes a part of that culture. What is unique to services like ABA is the personal approach that a scientific process must take on. Because of the environments in which practitioners work, personal interactions increase over time, and the culture of a client and their family takes center stage in the treatment and care being provided. This is true of the entire field, and especially true of the use of preference assessments in any client's treatment protocol.

During a preference assessment, the client's culture must be considered because of the variability in the items and activities in which one can participate. Ignorance of a cultural difference could cause environmental stress if the practitioner implemented something that was not first studied and observed, and that would break the ethical obligation that the client comes first. Cultural responsiveness is a more significant concern for the BCBA than one may first think; without proper tools to guide culturally responsive practice, the practitioner may fail to be successful.

ABA is a field fundamentally based on scientific research. Cultural significance and support must be grounded in research as well, but where would one find the needed resources? Fong and Tanaka (2013) offer a set of cultural-competence guidelines that all ABA practitioners should adopt. They noted that over the next four decades the cultural diversity of the USA will change dramatically, and as a result the applied sciences will have to adapt to work effectively with a changing clientele.

Regarding the principle of parsimony, the simplest adequate answer is preferred, and the simplest answer to the question of cultural diversity is to ask questions. A danger for the applied sciences is moving away from personal interaction in favor of acquired knowledge alone. In other words, if a practitioner has questions about a cultural issue, they should talk to the family directly alongside doing their research. Being educated by the family is part of the bonding experience and will only build professional equity. As stated earlier, the BCBA must consider environmental factors, and if the BCBA can learn about the family's culture from the family itself and use that knowledge to build assessments, compliance from the family should increase because of the attention given by the therapist. In short, listen to people and consider their culture before making any significant changes.

Along with examining the communal aspect of behavior analysis, one must consider the functionality of the tools used in those communal settings. For preference assessments, several tried and validated methods already exist in the literature. One of the more popular choices is the Multiple Stimulus Without Replacement (MSWO) assessment. The MSWO has been designed and implemented in many settings, one of the most applicable being the classroom. One study evaluated the use of the MSWO with a general education population (Resetar & Noell, 2008). In that research, the classroom teacher selected students they had observed and documented doing poorly in class work. The study used an MSWO with these typically developing children alongside a preference survey completed by a teacher who was not present in the study, in order to see whether teacher-chosen preferred items were comparable to the choices made by the children. The results showed that child-selected items and teacher-selected items were equally effective for the present study (Resetar & Noell, 2008). Research like this is significant because it reaches outside the special education classroom and generalizes into the general education setting. The scientific methodology that ABA uses is not confined to a specific group with specific needs; anyone can benefit from ABA, and generalizability is a vital feature of its assessment process.
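The hierarchy an MSWO yields can be quantified with percentage-selection scoring in the style described by DeLeon and Iwata (1996): an item counts as "presented" on every trial up to and including the one on which it is chosen, and its score is selections divided by presentations. The sketch below is illustrative only; the function name and item names are hypothetical, and it assumes every item in a session is eventually selected.

```python
def mswo_hierarchy(sessions):
    """Turn MSWO selection orders into a preference hierarchy.

    Each session is a list of item names in the order they were chosen.
    An item chosen on trial N was present for trials 1..N, so it counts
    as N presentations and 1 selection for that session. The returned
    score is the percentage selected (selections / presentations * 100).
    """
    presented = {}
    selected = {}
    for order in sessions:
        for trial, item in enumerate(order, start=1):
            presented[item] = presented.get(item, 0) + trial
            selected[item] = selected.get(item, 0) + 1
    return {item: round(100 * selected[item] / presented[item], 1)
            for item in presented}

# Two hypothetical sessions with the same three items.
sessions = [["bubbles", "music", "blocks"],
            ["bubbles", "blocks", "music"]]
print(mswo_hierarchy(sessions))  # bubbles rank highest at 100.0
```

An item chosen first in every session scores 100 percent, and scores fall off for items chosen later, reproducing the hierarchy a therapist would graph for visual inspection.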

There are limitations to the use of preference assessments, however. According to Verriden and Roscoe (2016), preference assessments are limited with respect to inherent reinforcing value. During the MSWO and paired-stimulus (PS) sessions of their research, they obtained a hierarchy of preferred versus non-preferred items, but during the reinforcer assessment the non-preferred items proved more reinforcing in some instances (Verriden & Roscoe, 2016). A significant limitation of any preference assessment is its inability to accurately gauge the reinforcer value of the observed preferred item. Although a preferred item may not have inherent reinforcer value, reinforcer assessments are designed to test the reinforcing capacity of preferred items, as previously noted.

Chazin and Ledford (2016) provide an excellent summary of the types of preference assessment in use and explain them in layman's terms. According to their literature, there are five distinct preference assessments: multiple stimulus without replacement, multiple stimulus with replacement, free-operant, paired stimulus, and single stimulus. Each of these assessments has distinct variables and is adaptable to fit the needs of each setting. The versatility of these tools is essential to the overall scope of ABA as a science. In keeping with the client-first approach designated by section 2 of the Compliance Code, practitioners are to use the tools at their disposal to create treatment packages that benefit everyone they serve.

One way preference assessments are evaluated is through the hierarchy that the MSWO and PS assessments provide (Cooper, Heron, & Heward, 2007). An often-overlooked measurement issue is the rise of problem behavior during these assessments. Kang, Lang, O'Reilly, Davis, Machalicek, Rispoli, and Chan (2010) assessed the occurrence of problem behavior across three preference assessment formats (PS, FO, and MSWO). Their findings showed that problem behavior can occur during these assessments when the preferred items are related to the proposed function of the behavior (Kang et al., 2010). They suggested that practitioners ensure they have the resources needed to complete these assessments in a time-sensitive manner, so that assessment duration does not evoke unwanted behavior. They also observed that if high levels of problem behavior occur during an initial assessment, it may be beneficial to switch to a free-operant assessment so that the preference assessment can be completed without high rates of problem behavior. Kang et al. (2010) focused on access to tangibles, but the lesson generalizes to the rest of the research: make sure environmental factors are taken care of before starting any form of assessment.

Preference assessments are unique with respect to experimental design because they are fluid depending on the client or situation. Each assessment can be used individually or in combination to give a broader picture of the preferred items in question. Paramore and Higbee (2005) conducted a brief MSWO assessment with three young boys using an alternating treatments design to show the preference and reinforcer value of stimuli. Research using preference assessments has demonstrated that the graphic display of data varies with the type of assessment used.

As discussed in Kang et al. (2010), some preference assessments produce a hierarchy that can be displayed graphically so that a therapist can visually inspect the data and decide which preferred item may be reinforcing. Once a potential reinforcer is chosen, a reinforcer assessment can take place, wherein the preferred item is applied within a treatment package or a contrived situation to show its overall effect on the desired behavioral outcome (Verriden & Roscoe, 2016). Preference assessments play a pivotal role in the assessment process overall: they narrow the field of potential reinforcers and shed light on conditions under which problem behavior can arise, as noted by Kang et al. (2010). They are also unique in their adaptability to every social setting, cultural difference, and client repertoire. Any quality treatment package will use a preference assessment to help bring about behavior change in a socially valid way.

Leadership Standards and Future Directions

The research on preference assessments is relatively new, but its impact on professionals in the field has been formidable. One of the pillars of science is the ability to replicate and expand upon current and past research, and preference assessments are no different. The last few years have seen a boom in certificants entering the field, and with them more quality research (Behavior Analyst Certification Board). With so many people entering the field so rapidly, however, there is the possibility of professional drift due to increased demand. Every area of education should use best-practice procedures based on a culmination of current and past research; this is equally important in the realm of ABA.

According to the Missouri Leader Standards, the first thing an educational leader must do is establish the mission, vision, and goals. This is remarkably similar to the best-practice approach in applied behavior analysis. The good news for the field is the strict code that holds professionals to effective practice. Evidence-based practice is the common starting point for all interventions created in ABA. According to Slocum et al. (2014), practitioners in applied behavior analysis are not alone in understanding the need for evidence-based practice, and they embrace it for the sake of quality interventions. One way the field of ABA can grow in confidence is by following the literature available through resources like the Journal of Applied Behavior Analysis and other prominent outlets, all of which use a peer-review process before an article is accepted and ultimately published. This matters because every decision regarding a client's intervention needs a scientific basis. The ultimate goal of any programming is for the client receiving services to get the best and most appropriate care for their needs, which requires that the practitioner stay current on best-practice intervention information.

Slocum et al. (2014) discussed how the problem of evidence-based intervention is not isolated to behavior analysis but extends to similar fields as well. Reviewing Kazdin's 2000 book Psychotherapy for Children and Adolescents: Directions for Research and Practice, they noted large differences between what is scientifically valid and what is widely accepted in the application of treatments. Slocum et al. (2014) pointed out that Kazdin (2000) reported that only about 10 percent of accepted therapeutic processes are evidence-based and therefore have scientific grounding for their usage. That is a staggering figure. It seems, however, that more of the psychological disciplines are moving away from the old guard and adopting an evidence-based practice model that starts from current scientific research to support interventions.

It is not enough to be able to state the mission, vision, and goals of a treatment plan; the practitioner must also demonstrate competence with the interventions in place. According to the Missouri Leadership Standards: Standard 1, Quality Indicator 2, an effective leader must be able to “implement the mission, vision, and goals that they have created for effective teaching.” Likewise, a competent practitioner in the field of ABA must have the knowledge and skill necessary to implement any treatment plan. That raises the question: how does one efficiently implement the mission, vision, and goals?

Thankfully for professionals in applied behavior analysis, there are governing agencies that pride themselves on the continual growth and guidance of ABA practice. The board of the Association for Behavior Analysis International (ABAI) publishes a strategic plan for ensuring that practitioner competence keeps growing in many areas, one of which is the practice of applied behavior analysis. In the ABAI strategic plan, the objective under Practice is “to develop, improve, and disseminate best practices in the application of behavior analysis.” Groups like ABAI and the BACB are committed to equipping certified professionals with accurate literature through avenues like the many peer-reviewed journals accessible to the field.

Out of that plethora of available information comes the call for best-practice procedures specific to each client. For example, a practitioner working with a student known to exhibit high levels of self-aggression would most likely not use a Multiple Stimulus Without Replacement assessment, due to the possible increase in problem behavior when preferred stimuli are removed (Chazin & Ledford, 2016). The knowledge available to practitioners is presented so that the client's safety remains paramount in all areas of competency. Preference assessments can and should be used to create a positive environment for everyone on the team involved in client care.

Point 6.02 of the Professional and Ethical Compliance Code for Behavior Analysts states that every practitioner is responsible for making knowledge of what they are doing, and how they are doing it, readily available to the public. Simply put, a practitioner is responsible for teaching the public about the ins and outs of ABA. This holds true for the use of preference assessments as well. If a family is new to the therapeutic approach of ABA and a practitioner is working with their child, that family should be taught why the professional chose the techniques being used with the client. Creating a favorable environment for the whole team and client (family included) is essential to the overall success of any treatment plan put in place.

Significant resources have been allocated to introducing professionals to topics in ABA that may be foreign to them. One such organization is Autism Speaks. The foundation's website offers a plethora of resources for professionals to use with the families they work with (Tools for Professionals, 2012). Autism Speaks is a global organization dedicated to raising awareness of the issues facing families affected by autism and to bringing about lasting change in the world.

According to the Missouri Educational Leader Standards: Standard 2, Quality Indicator 2, a leader must be able to “provide an effective instructional program.” What does that mean for the professional working in ABA? There is more to the field than working in homes with clients; much of it also contributes to education, especially classroom management. In St. Louis, Missouri, the Special School District of St. Louis County works with all schools in the greater St. Louis County area to provide services for those with special needs (District Overview, 2017). According to the “About Us” section of the SSD website, the district serves one in six students in the greater St. Louis County area (District Overview, 2017).

One of the ways SSD supports its students is through Positive Behavior Interventions and Supports (PBIS: The Home-School Connection, 2017). PBIS is a tiered system of support that can be implemented in a wide variety of ways and applied to a range of students at the same time. Each tier represents a deeper level of support depending on the student's needs: tier 1 is a general support system designed to meet the needs of all students at the school; tier 2 is more individualized, for students at greater risk of negative social interactions; and tier 3 is for students who still need additional support after exposure to the other two tiers (What is SW-PBS?, 2018). The application of ABA fits directly into the scope of PBIS by merging evidence-based practices with the support every student needs. In this way the ABA practitioner can ensure a sufficient instructional program is implemented to school standards while still maintaining the professional's ethical obligations to the BACB.

Great support systems like PBIS exist in the school setting, and ethical guidelines make the private setting supportive as well, but how does a practitioner efficiently teach a team of workers to implement programming in these supportive areas? One way to build efficient, competent staff is competency-based training (CBT). According to Parsons, Rollyson, and Reid (2012), there are six distinct steps to using CBT effectively for staff to gain mastery of a topic, and teaching a preference assessment plugs directly into this framework. Step one is for the professional to describe the target skill to the staff member: explain which preference assessment technique is being taught. Step two is to provide detailed written instructions: a step-by-step guide to implementing the chosen assessment. Step three is to demonstrate the assessment for the staff in detail. Step four is for the team to practice the assessment, and step five is direct feedback from the instructor. Finally, in step six, the instructor repeats the cycle of observing practice and giving feedback until mastery of the assessment is attained (Parsons, Rollyson, & Reid, 2012). It may seem arduous to break a skill into so many steps, but as the literature states, this is an evidence-based practice supported by many trials.
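The repeat-until-mastery cycle at the heart of those six steps can be sketched as a simple loop. This is an illustrative model only: the function, the simulated fidelity scores, and the 90 percent mastery criterion are hypothetical and not taken from Parsons, Rollyson, and Reid (2012).

```python
# The six CBT steps, paraphrased; wording here is illustrative.
STEPS = [
    "Describe the target skill",
    "Provide detailed written instructions",
    "Demonstrate the skill",
    "Trainee practices the skill",
    "Deliver performance-based feedback",
    "Repeat practice and feedback until mastery",
]

def train_to_mastery(practice_scores, mastery=90):
    """Walk one skill through the CBT sequence.

    practice_scores simulates the implementation-fidelity score (0-100)
    observed on each practice attempt; steps four and five repeat until
    a score meets the mastery criterion or attempts run out.
    """
    log = STEPS[:3]  # describe, written instructions, demonstrate
    for attempt, score in enumerate(practice_scores, start=1):
        log.append(f"Practice attempt {attempt}: scored {score}")
        if score >= mastery:
            log.append("Mastery reached")
            return log
        log.append("Feedback delivered; repeating practice")
    log.append("Mastery not yet reached; schedule another session")
    return log

print(train_to_mastery([70, 95])[-1])  # prints "Mastery reached"
```

The loop makes the key design point concrete: the first three steps happen once, while practice and feedback recur until the trainee meets a measurable criterion rather than a fixed number of repetitions.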

Teaching a skill to a staff member is essential; ensuring the right motivation is in place is paramount. One of the big ideas discussed in the training literature is “buy-in,” the layman's term for the various establishing operations that lead staff to improve their repertoire and performance in implementing ABA procedures. One way a practitioner can set up an establishing operation for the team to perform well is to enrich the environment. According to Parsons, Rollyson, and Reid (2012), a practical element of on-the-job training is making sure the setting is appropriate for the training. Applied to training preference assessments, this means the practitioner would provide proper knowledge of the skill, resources for collecting data, an array of stimuli for the specified assessment, and feedback, all of which are components of competency-based training (Parsons, Rollyson, & Reid, 2012).

Supporting staff so they can best fulfill their roles is essential, but it is also necessary to gain buy-in from other stakeholders. According to an article published by the Behavioral Health Center of Excellence (BHCOE) in 2016, caregiver involvement is a priority when considering the success of any behavioral program. The impact a parent or guardian can have on the process is evident, especially in the early stages of life (The Role of Caregiver Involvement in ABA Therapy, 2016).

Even though buy-in from caregivers and parents is crucial, it is not without obstacles. According to the BHCOE article from 2016, stress is one of the main reasons caregivers and parents do not buy in to treatment programs. When looking at the whole treatment package, a practitioner should watch for environmental issues not only for the staff but for the families involved as well. When consulting with a family, the practitioner must consider the family's means and circumstances for the treatment package to be effective. Returning to evidence-based practice, one must create an environment best suited to the programming ahead. According to standard 2.0 of the Professional and Ethical Compliance Code for Behavior Analysts, all treatment is to be in the client's best interest. If an issue in the family environment is not conducive to the treatment package a practitioner employs, the package will be not only unethical but ineffective. Caregiver involvement is as much a part of the process as competent, confident staff, and this applies to every area of ABA, including preference assessments. A protocol and environment are only as effective as the practitioner implementing them.

A competent practitioner can build a qualified staff, and a skilled team needs to be reinforced just like the clientele receiving analytic services. The topic of preference assessment applies not only to the clients who receive services but also to the staff who deliver them. A good team is built on positive supports for one another, and a practitioner must understand what is reinforcing for the team and employ it. According to Aubrey Daniels and Jon Bailey in their book Performance Management: Changing Behavior that Drives Organizational Effectiveness, one of the most common questions managers ask is how to choose effective reinforcers.

Finding effective reinforcers for a team is about understanding the team on a personal level. Just as with clients, a practitioner should conduct an indirect assessment with the staff. Asking questions is a great way to start discovering what motivates staff members. It may be seeing a client adapt to the programming, or positive social praise from supervisors; the only way to know honestly is to ask the person. Daniels points out that one of the most significant mistakes managers of any kind make in choosing possible reinforcers for staff is assuming that staff want the same things management would want (Daniels & Bailey, 2014).

A competent team is a holistic team. A practitioner should be able to use the tools available to the field to teach and implement behavior-change principles with staff efficiently. The guidelines and principles set out by the field's governing bodies protect first the client and then the family and staff working alongside the practitioner. A correctly implemented protocol is not carried out by one individual; it comprises many parts that together help ensure the client gets the best available care. A practitioner should make sure the environment in which they are working is suitable for the client and the whole team. The research available through avenues like ABAI, the BHCOE, and the BACB suggests that environmental supports are essential to any successful treatment plan. A preference assessment is just one part of the overall scope of a successful intervention, and understanding current research and environmental factors helps ensure that best practice is followed by all agents involved.

 

References

Association for Behavior Analysis International Strategic Plan. (2016). ABAI.

Association for Behavior Analysis International. Students' rights to effective education (1990). Retrieved March 1, 2018, from https://www.abainternational.org/about-us/policies-and-positions/students-rights-to-effective-education,-1990.aspx

Bailey, J., & Burch, M. (2016). Ethics for Behavior Analysts, 3rd Edition. Florence: Taylor and Francis.

Behavior Analyst Certification Board. (n.d.). BCBA Option 1. Retrieved from https://www.bacb.com/bcba/bcba-option-1/

Behavior Analyst Certification Board. (n.d). BACB certificant data. Retrieved from https://www.bacb.com/BACB-certificant-data.

Behavior Analyst Certification Board: Professional and Ethical Compliance Code for Behavior Analysts. (2016, March 21). Retrieved from https://www.bacb.com/wp-content/uploads/2017/09/170706-compliance-code-english.pdf

Brodhead, M. T. (2015). Maintaining Professional Relationships in an Interdisciplinary Setting: Strategies for Navigating Nonbehavioral Treatment Recommendations for Individuals with Autism. Behavior Analysis in Practice, 8(1), 70–78. http://doi.org/10.1007/s40617-015-0042-7

Chazin, K. T., & Ledford, J. R. (2016). Preference assessments. In Evidence-based instructional practices for young children with autism and other disabilities.

Chazin, K. T., & Ledford, J. R. (2016). Multiple stimulus without replacement (MSWO) preference assessment. In Evidence-based instructional practices for young children with autism and other disabilities.

Cooper, J. O., Heron, T. E., & Heward, W. L. (2007). Applied Behavior Analysis (2nd ed.). Pearson.

Daniels, A. C., & Bailey, J. S. (2014). Performance Management: Changing Behavior that Drives Organizational Effectiveness. Atlanta, GA: Performance Management Publications.

DeLeon, I. G., & Iwata, B. A. (1996). Evaluation of a multiple-stimulus presentation format for assessing reinforcer preferences. Journal Of Applied Behavior Analysis, 29(4), 519-533.

District Overview (2017). Retrieved from www.ssdmo.org/about_us/district_overview.html

Fisher, W. W., Piazza, C. C., & Roane, H. S. (Eds.). (2011). Handbook of Applied Behavior Analysis (pp. 151-160). New York: Guilford Press.

Fisher, W., Piazza, C. C., Bowman, L. G., Hagopian, L. P., Owens, J. C., & Slevin, I. (1992). A comparison of two approaches for identifying reinforcers for persons with severe and profound disabilities. Journal of Applied Behavior Analysis, 25(2), 491–498. http://doi.org/10.1901/jaba.1992.25-491

Fong, E. H., & Tanaka, S. (2013). Multicultural Alliance of behavior analysis standards for cultural competence in behavior analysis. International Journal of Behavioral Consultation and Therapy, 8(2), 17-19. doi:10.1037/h0100970

Gottschalk, J. M., Libby, M. E., & Graff, R. B. (2000). The effects of establishing operations on preference assessment outcomes. Journal Of Applied Behavior Analysis, 33(1), 85-88

Journal of Applied Behavior Analysis. Author guidelines. Wiley Online Library. Retrieved from onlinelibrary.wiley.com/journal/10.1002/(ISSN)1938-3703/homepage/ForAuthors.html

Kang, S., Lang, R. B., O’Reilly, M. F., Davis, T. N., Machalicek, W., Rispoli, M. J., & Chan, J. M. (2010). Problem behavior during preference assessments: An empirical analysis and practical recommendations. Journal of Applied Behavior Analysis, 43(1), 137-141.

Kazdin, A. E. (2000). Psychotherapy for Children and Adolescents: Directions for Research and Practice. New York: Oxford University Press

Leader Standards: Missouri’s Educator Evaluation System [PDF]. (2013, June). Missouri Department of Education.

Mazaleski, J. L., Iwata, B. A., Vollmer, T. R., Zarcone, J. R., & Smith, R. G. (1993). Analysis of the reinforcement and extinction components in DRO contingencies with self-injury. Journal of Applied Behavior Analysis, 26(2), 143-156.

McGill, P. (1999). Establishing operations: implications for the assessment, treatment, and prevention of problem behavior. Journal of Applied Behavior Analysis, 32(3), 393-418.

Mcgimsey, J. F., Greene, B. F., & Lutzker, J. R. (1995). Competence in aspects of behavioral treatment and consultation: implications for service delivery and graduate training. Journal of Applied Behavior Analysis, 28(3), 301-315.

McLeod, S. Maslow’s Hierarchy of Needs. Retrieved April 10, 2018, from https://www.simplypsychology.org/maslow.html

Meller, D., & Runyon, P. (2017). Pass the Big ABA Exam! ABA Exam Prep Study Manual (8th ed.), pp. 273-275.

Missouri’s Educator Evaluation System Teacher Standards. (2013, May). Retrieved from https://dese.mo.gov/sites/default/files/TeacherStandards.pdf

Pace, G. M., Ivancic, M. T., Edwards, G. L., Iwata, B. A., & Page, T. J. (1985). Assessment of stimulus preference and reinforcer value with profoundly retarded individuals. Journal of Applied Behavior Analysis, 18(4), 249-255.

Paclawskyj, T. R., & Vollmer, T. R. (1995). Reinforcer assessment for children with developmental disabilities and visual impairments. Journal Of Applied Behavior Analysis, 28(2), 219-224.

Paramore, N. W., & Higbee, T. S. (2005). An evaluation of a brief multiple-stimulus preference assessment with adolescents with emotional-behavioral disorders in an educational setting. Journal of Applied Behavior Analysis, 38(3), 399–403. http://doi.org/10.1901/jaba.2005.76-04

Parsons, M. B., Rollyson, J. H., & Reid, D. H. (2012). Evidence-Based Staff Training: A Guide for Practitioners. Behavior Analysis in Practice, 5(2), 2–11.

PBIS: The Home-School Connection. (2017). Retrieved from www.ssdmo.org/rotate_features/12_12/PBIS_tips.html

Resetar, J. L., & Noell, G. H. (2008). Evaluating Preference Assessments for Use in the General Education Population. Journal of Applied Behavior Analysis, 41(3), 447–451. http://doi.org/10.1901/jaba.2008.41-447

Roane, H. S., Vollmer, T. R., & Ringdahl, J. E. (1998). Evaluation of a brief stimulus preference assessment. Journal Of Applied Behavior Analysis, 31(4), 605-620.

Slocum, T. A., Detrich, R., Wilczynski, S. M., Spencer, T. D., Lewis, T., & Wolfe, K. (2014). The Evidence-Based Practice of Applied Behavior Analysis. The Behavior Analyst, 37(1), 41–56.

Snyder, K., Higbee, T. S., & Dayton, E. (2012). Preliminary Investigation of a Video-Based Stimulus Preference Assessment. Journal of Applied Behavior Analysis, 45(2), 413–418. http://doi.org/10.1901/jaba.2012.45-413

The Role of Caregiver Involvement in ABA Therapy. (2016). Retrieved from https://bhcoe.org/2016/07/the-role-of-caregiver-involvement-in-aba-therapy/

Tools for Professionals. (2012). Retrieved from https://www.autismspeaks.org/family-services/resource-library/tools-professionals

Turner, L. B., Fischer, A. J., & Luiselli, J. K. (2016). Towards a Competency-Based, Ethical, and Socially Valid Approach to the Supervision of Applied Behavior Analytic Trainees. Behavior Analysis in Practice, 9(4), 287–298.

Van Houten, R., Axelrod, S., Bailey, J. S., Favell, J. E., Foxx, R. M., Iwata, B. A., & Lovaas, O. I. (1988). The right to effective behavioral treatment. Journal of Applied Behavior Analysis, 21(4), 381–384. http://doi.org/10.1901/jaba.1988.21-381

Verriden, A. L., & Roscoe, E. M. (2016). A comparison of preference-assessment methods. Journal of Applied Behavior Analysis, 49(2), 265-285. doi:10.1002/jaba.302

What is SW-PBS? (2018). Retrieved from http://pbismissouri.org/what-is-swpbs/

Windsor, J., Piché, L. M., & Locke, P. A. (1994). Preference testing: A comparison of two presentation methods. Research in Developmental Disabilities, 15(6), 439-456.

Wolf, M. M. (1978). Social validity: The case for subjective measurement or how applied behavior analysis is finding its heart. Journal of Applied Behavior Analysis, 11(2), 203-214. doi:10.1901/jaba.1978.11-203

Liao, Y., Dillenburger, K., & Buchanan, I. (2017). Does culture matter in ABA-based autism interventions? Parent and professional experiences in the UK and China. European Journal of Behavior Analysis.

License


Special Topics in Behavior Analysis Copyright © by Lauren Milburn, MAT, Ed. S, BCBA, LBA; Madison Wilkinson, MA, BCBA, LBA; Sadiqa Reza, MA, BCBA; Margaret Dannevik Pavone; Brandon K. May; Behavior Analyst (Washington University in St. Louis); Doctoral Candidate (Southern Illinois University-Carbondale); President and CEO (Elite ABA Services); Daniel M. Childress, BCBA; Jordyn Roady, M.A.; Kodi A. Ernewein, M.A., BCBA; Victoria Spain, MA; Amber McCoy; Katie Harris; Jamie Zipprich; Clint Evans; and Amy Ehnes is licensed under a Creative Commons Attribution 4.0 International License, except where otherwise noted.
