Can Derision Enhance Teaching and Learning?

As educators, we assume that we must be kind and encouraging to establish a safe, supportive learning environment. We might even embrace the trite saying, “there are no dumb questions.” However, in environments such as League of Legends and other multiplayer online video games, as well as online discussion forums (e.g., FatWallet, Reddit), dumb questions exist, and the individuals who ask them are criticized, demeaned, and mocked by their peers.

At least for some subset of learners, might such a “toxic” environment actually enhance learning by motivating learners to demonstrate due diligence? Might it enhance teaching by preventing wasted time on frivolous questions? At what threshold does a “supportive” learning environment cross over into an unproductive learning environment that rewards incompetence and encourages mediocrity?

One issue is that condescension toward newcomers might impede their integration. However, this may matter less if the learner is highly motivated to persist. For example, someone addicted to a video game such as World of Warcraft is unlikely to cease playing due to being derided—in fact, derision might motivate one to perform better and avoid being called a “noob.” Derision from peers might actually enhance learning.

However, in the typical classroom, derision must be applied carefully. It might shut down learning for some learners, while others may find it more motivating than supportive comments. Also, being derided by someone “in charge” (i.e., the teacher) is different from being derided by a peer. Therefore, I am not advocating that educators deride their students; I am only exploring the possibilities.

As a Ph.D. student who has completed a Master’s degree and seven full-time years of college education, I have noticed that practically every class starts out with a discussion of the syllabus. Instead, what if instructors expected students to read the syllabus and derided them for asking questions that were answered in it? Instead of giving them the answer and needlessly pulling up the syllabus on the screen, tell them: “if you had actually read the syllabus, you would not have wasted our time with this question.” Similarly, throughout the semester there are perennial questions from students who are simply lazy, failing to read assigned readings, directions, et cetera. Instead of offering derision, instructors typically enable and reward these students’ laziness by serving up easy answers. Conversely, students who exercised due diligence are penalized by having their time wasted. If an instructor spends two minutes on a frivolous question in a class of 30, that is an entire hour of collective time wasted. At the University of Central Florida (UCF), some classes in other departments (e.g., business, engineering) have as many as 1,000 students, so a single two-minute question could waste as much as 2,000 person-minutes (more than 33 hours)!

When I was a psychology student at UCF Daytona Beach, professors such as Ed Fouty had rather ostentatious “three before me” policies for their students. Specifically, this meant that when asking a question of the professor or teaching assistant, students had to list three actions they had taken to figure out the answer on their own (e.g., consulting the syllabus, readings, or Google Search). In a way, this is derision—it communicates that there are dumb questions and that instructor time is inherently more valuable than student time. And yet, mustn’t it be? Professors, in particular, must juggle teaching dozens to hundreds of students among many other professional obligations. There is simply no way to do this if one’s time is consumed with trivialities. (Note that I never actually took a course with Dr. Fouty, because other professors taught the same courses at easier levels of difficulty—although I had enrolled in one of Dr. Fouty’s courses, I dropped it immediately after the first meeting.)

Here are several examples of how participants are derided on the FatWallet Finance forum:

1. In a topic about tipping, the first reply, receiving many upvotes, says: “OK – why is this a difficult concept? If you feel like they did a great job, leave them a tip. If not, don’t. It’s very simple.” This derides the original poster (OP) by implying (s)he lacks critical thought for asking a frivolous question.

2. An OP asks for a simple explanation of Bitcoin, and the first response, receiving several upvotes, is merely “https://en.wikipedia.org/wiki/Bitcoin.” This derides the OP for asking a question that they could easily have figured out on their own. However, the OP is arguably deserving of derision for being lazy and wasting others’ time, which shows a lack of respect. Let Me Google That for You (LMGTFY) is a website that can similarly be used to deride individuals who ask questions that could be answered via a simple Google Search query—it provides a link that shows an animation of typing the question into Google and then loads the search results. Deriding learners in this manner can enhance their learning by encouraging them to take personal responsibility, while also enhancing teaching by eliminating a particularly insidious type of time-wasting question.

3. An OP asks about doing a chargeback for canceling a hotel reservation that lost its Best Western branding, but admits to having canceled for other reasons and that loss of branding is a “convenient excuse.” One commenter says: “Stop using the brand change as a way to scumbag you’re [sic] way out of it. It’s pretty pathetic. If you had a problem with the room then that would be the time for a chargeback. The room is exactly the same as the one you were paying for. They didn’t hack it to bits and throw garbage all over the floor bc of the brand change. Saying you want to cancel on the off chance their [sic] is a problem you can’t complain to Best Western management is an absurd stretch.” Although this commenter received more downvotes than upvotes, the sentiment of derision was echoed by several other commenters and might discourage the OP from asking similar questions in the future.

In other forums, derision commonly is incited by “reposting”—that is, posting about a topic that has already been covered elsewhere. OPs for such topics are ridiculed for their lack of due diligence—they could easily have searched for the prior topic. Here, derision potentially elicits a social norm of avoiding duplication of questions and content, which increases the efficiency of the forum.

Derision can enhance teaching by making it abundantly clear that the instructor, or a peer group, will not accept unproductive behaviors. For instance, in the realm of financial literacy education, instead of coddling individuals who continue to incur overdraft fees or resort to the services of payday lenders, we might mock, demean, and ridicule them for their lack of financial competence. “You know your actions are financially disastrous, and yet you persist—you have no one to blame but yourself for your situation, and you will find no sympathy here.”

Derision might encourage “lurking” or “participatory spectatorship” instead of active participation, particularly in games or activities with steep learning curves. Just because some activity is difficult to learn does not necessarily mean it is the responsibility of others to aid that learning. In environments where incompetence is derided, effective learners might avoid derision and exercise due diligence by observing and learning from the behaviors of others (social norms), and even by researching and implementing meta-cognitive strategies to aid their performance. Instead of “spoon-feeding” learners, may we not expect them to take at least a modicum of personal responsibility for their learning rather than behaving as lazy, impetuous children?

Tech Insights for Educators #1: Special typographic characters and alt codes

This is the first in a new series of Technology Insights for Educators, which I will use as supplemental material for my students in EME 2040: Introduction to Technology for Educators at the University of Central Florida; it may also be of general interest. As I enter my second year of the Education Ph.D. program in Instructional Design and Technology, I am becoming a Graduate Teaching Associate and will be teaching two mixed-mode sections of EME 2040 (Monday 10:30 A.M. – 1:30 P.M. and Wednesday 1:30 – 4:20 P.M.) as Instructor of Record in Fall 2017. At a later time, I will make a landing or index page for these insights.


When preparing documents and other materials, there are many typographic characters that are not available on a standard keyboard, yet are supported by Unicode and can be used in most applications (e.g., Microsoft Office).

On Microsoft Windows, if a numeric keypad is available (found on the right side of the keyboard), such characters can be typed directly with alt codes. With the Num Lock key enabled, hold down one of the Alt keys, type a sequence of numbers on the numeric keypad, and then release Alt; the special character will appear. I found a list of many alt codes in this blog post by “Techno World 007.” Here are some of the most important ones:

Symbol   Alt Code     Description
•        Alt + 0149   Bullet point
–        Alt + 0150   En dash
—        Alt + 0151   Em dash
¢        Alt + 0162   Cent sign
°        Alt + 0176   Degree symbol
×        Alt + 0215   Multiplication sign
÷        Alt + 0247   Division sign
′*       Alt + 8242   Prime symbol
″*       Alt + 8243   Double prime symbol

* Alt code works in Microsoft Office, but not most other programs.
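If you prepare materials programmatically, the same characters can be produced in any language via their Unicode code points. Decimal alt codes above 255 (e.g., 8242) are simply Unicode code points, while the 0-prefixed codes (e.g., 0149) go through the Windows-1252 code page. Here is a minimal sketch (in Python, my choice of language for illustration) that prints each symbol from the table above with its code point:

import unicodedata

# Minimal sketch: print each symbol from the table above with its
# Unicode code point. Codes above 255 (e.g., 8242) are decimal Unicode
# code points; 0-prefixed codes (e.g., 0149) map via Windows-1252.
CODE_POINTS = [0x2022, 0x2013, 0x2014, 0x00A2, 0x00B0,
               0x00D7, 0x00F7, 0x2032, 0x2033]

for cp in CODE_POINTS:
    print(f"{chr(cp)}  U+{cp:04X}  {unicodedata.name(chr(cp)).title()}")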

If a numeric keypad is unavailable (e.g., on a laptop), or you are in a non-Windows environment, there are other options. In Microsoft Word, there is the Symbol menu on the Insert tab. Another option is simply copying and pasting the symbol into the target document; in Microsoft Word, this should be done with the “keep text only” paste option to prevent inheriting conflicting font size or formatting from the source.

What you see in many academic manuscripts, books, and other materials is frequently incorrect. Using a hyphen in a number range (e.g., 10-99) is not correct—an en dash should be used (e.g., 10–99). When an author speaks of a two-by-two interaction, calling it a 2*2 or 2x2 is typographically incorrect; instead, the multiplication sign should be used (i.e., 2×2). When writing heights or distances, one should use the prime and double-prime symbols rather than the single- and double-quote symbols, respectively (e.g., not 5’10”, but rather, 5′10″).
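To illustrate the first two rules in code (a rough sketch of my own, not an established copyediting tool), two regular expressions catch the most common offenses. Real manuscripts would need more context, since dates, phone numbers, and hexadecimal literals are all false positives for these patterns:

import re

def fix_typography(text: str) -> str:
    # A hyphen between digits becomes an en dash (10-99 -> 10–99).
    text = re.sub(r"(?<=\d)-(?=\d)", "\u2013", text)
    # An "x" or "*" between digits becomes a multiplication sign (2x2 -> 2×2).
    text = re.sub(r"(?<=\d)\s?[x*]\s?(?=\d)", "\u00d7", text)
    return text

print(fix_typography("We used a 2x2 design; see pages 10-99."))
# Output: We used a 2×2 design; see pages 10–99.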

In some cases, Microsoft Word will help you. For example, if you type two hyphens between words, it automatically converts the two hyphens to an em dash (—).

Personally, I am so used to using some of the symbols that I have memorized the alt codes for an en dash, an em dash, the cent sign, and the multiplication sign (–, —, ¢, ×). This way, when I am typing in an online discussion, et cetera, and must employ these symbols to be typographically correct, there is no need for me to copy-and-paste from an external source or consult a character map.

You can impress or annoy your colleagues with your knowledge of typography. Surprisingly, I have found that knowledge of the en dash, in particular, is sparse. Most people, including full professors, incorrectly use hyphens where en dashes are required. I suppose many academic journals correctly employ en dashes only because the editors make corrections to the authors’ manuscripts.

Why UCF should allow faculty and staff to change Windows 10 taskbar display settings

June 21, 2017

My bid to get the University of Central Florida (UCF) I.T. department to allow education faculty and staff to change taskbar settings, so they could ungroup Windows 10 taskbar items and display labels in addition to icons, was shot down. I am told this issue does not affect job performance in any way and that there is no need for changes because work is not being impeded. My concluding remarks:

Thanks, [redacted], for your help! I disagree with [redacted]—faculty in the education department are provided with dual monitors, even though by this standard, single monitors would not impede work. I believe that like dual monitors, being able to ungroup items on the taskbar and being able to display labels instead of icons would improve productivity. However, I will take no further action.

EME 6646 Assignment on Measuring Creativity, Neuroimaging, Psychometrics, and Methods

Assignment 5, Part A: Individual Explanation of Imagination and Creativity
For EME 6646: Learning, Instructional Design, and Cognitive Neuroscience
By Richard Thripp
University of Central Florida
June 15, 2017

Measuring Creativity: Neuroimaging or Psychometrics?

When researchers using neuroimaging techniques seek to compare brain activity between people who are especially creative and people who are of average creativity, how do they do so? One might think neuroimaging itself would be used to determine who is more creative. However, the pretty pictures of brain activity we see in many journal articles are actually the result of averaging and subtraction (Sawyer, 2011). In truth, most of the brain is active almost all the time—what we are really looking at is whether particular regions are comparatively more or less active than others, and this difference is often only 3% if we are lucky (Sawyer, 2011). Brain scans in which certain “creative” regions of the brain are shown in bright red may lead readers astray, because nothing about them suggests such a tiny differential in brain activity.
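To make subtractive averaging concrete, here is a toy numerical sketch of my own (not any study’s actual pipeline): every simulated voxel is strongly active in both conditions, yet the contrast reports only the small between-condition difference.

import numpy as np

rng = np.random.default_rng(42)

# Hypothetical data: 10,000 voxels, 40 trials per condition. All voxels
# are highly active in both conditions; a small cluster is ~3% more
# active during the task (the magnitude Sawyer, 2011, describes).
n_voxels, n_trials = 10_000, 40
rest = 100 + rng.normal(0, 1, (n_voxels, n_trials))
task = 100 + rng.normal(0, 1, (n_voxels, n_trials))
task[:500] += 3  # ~3% signal increase in 5% of voxels

# "Subtractive averaging": average across trials, then subtract conditions.
contrast = task.mean(axis=1) - rest.mean(axis=1)

print(f"typical signal in either condition: ~{rest.mean():.0f}")
print(f"largest condition difference: {contrast.max():.1f} (roughly 3% of baseline)")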

Perhaps because our current ability to measure actual brain activity is not a useful indicator of creativity, neuroimaging cannot yet be directly used to determine an individual’s level of creativity. Thus, even studies employing neuroimaging typically fall back on psychometric measures. For example, Jaušovec’s (2000) empirical investigation is titled “Differences in cognitive processes between gifted, intelligent, creative, and average individuals while solving complex problems: An EEG study” (p. 213). At first glance, one might think electroencephalography (EEG) is being used to determine whether someone fits into the four categories of “gifted,” “intelligent,” “creative,” or “average.” However, Jaušovec actually used the Wechsler Adult Intelligence Scale (WAIS or “IQ test”) and the Torrance Tests of Creative Thinking (TTCT) to organize participants into these categories, defining “gifted” as doing well on both tests, “average” as not doing well on either, and the other categories as doing well on one test but not the other. Then, he found minor differences in EEG readings when participants solved open- or closed-problem tasks, and concluded that intelligence and creativity are probably different, and that patterns of brain activity are related to creativity and intelligence. Knowing that even the best psychometric tests have substantial measurement error (e.g., IQ tests measure not only intelligence, but familiarity with written language and academic environments), that grouping people as Jaušovec (2000) did introduces further error (I have reproduced his grouping table below), and that EEG itself lacks spatial resolution, Jaušovec’s methods seem so muddy as to be unfit to produce any conclusions. However, it is not as though I have cherry-picked an unknown, dubious study—according to Google Scholar, his article has an impressive 239 citations! With recent arguments further suggesting that EEG’s temporal resolution is overblown (Burle et al., 2015), our ability to draw confident conclusions diminishes further.

Figure 1. Grouping table for intelligence and creativity categories, reproduced from Jaušovec (2000, Table 4).
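The grouping logic is simple enough to restate in code; this is my reconstruction from the description above, not code from the study:

def classify(high_iq: bool, high_ttct: bool) -> str:
    # Reconstruction of Jaušovec's (2000) grouping scheme: WAIS and TTCT
    # scores are dichotomized, and the four cells get one label each.
    if high_iq and high_ttct:
        return "gifted"
    if high_iq:
        return "intelligent"
    if high_ttct:
        return "creative"
    return "average"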

While EEG is not in the same vein of neuroimaging as magnetic resonance imaging (MRI), near-infrared spectroscopy (NIRS), or positron emission tomography (PET), the use of psychometrics as an organizing device, and of subtractive averaging as a method to present pretty pictures implying big results, remains applicable. I have difficulty seeing the ethical difference between subtractive averaging and removing the zero axis on a bar chart to show bars of vastly different heights that would otherwise have been only slightly different.

Neuroimaging and Psychometrics in Creativity Research: A Corroboration Model

Psychometrics, the science of mental measurement, is by definition messy and imprecise. However, corroborating psychometric instruments with neuroimaging techniques may help us more accurately understand creativity. This is what Arden, Chavez, Grazioplene, and Jung (2010) advocate in their literature review and position piece on neuroimaging creativity. Researchers are all using different criteria to measure and interpret creativity, but there has been no concerted effort toward detailing the “psychometric properties of creative cognition” (Arden et al., 2010, p. 152), which is needed to be able to compare studies to each other. Nevertheless, employing neuroimaging has already allowed us to debunk, or at least fail to find support for, common hypotheses such as creativity being linked to the right brain or to improved neural function (Arden et al., 2010). If we continue to improve the reliability and validity of creativity research along both psychometric and neuroimaging dimensions, we will improve our limited understanding of creativity, which is particularly needed in areas such as novelty and originality (Fink, Benedek, Grabner, Staudt, & Neubauer, 2007). Limited spatial resolution prevents us from accurately isolating brain activity, while at the same time, the prevailing paradigm of neuroscience creativity research remains oriented toward finding which specific areas of the brain are associated with creativity (Arden et al., 2010; Sawyer, 2011), when the correct answer may be that all of them are, although some more so than others. Modern techniques as reviewed by Jung, Mead, Carrasco, and Flores (2013), such as structural magnetic resonance imaging (sMRI), diffusion tensor imaging (DTI), and proton magnetic resonance spectroscopy (1H-MRS), are critical to isolating the structural characteristics of creative cognition, and might be seen as a complement to, rather than a replacement for, the proxy measures that psychometrics constitute. Finally, lesion studies reveal that areas of the brain may actually compete in parallel to reach creative solutions, with the right medial prefrontal cortex (mPFC) winning out in healthy subjects even though it produces inferior results (Jung et al., 2013). When corroborated with psychometric measures, this may lead us to an amusing finding whereby high creativity might be associated with brain problems (i.e., lesions in the left language areas).

Methodological Issues in Neuroscience-Based Creativity Research

Even recent creativity research is often devoid of neuroimaging. For example, Anderson, Potočnik, and Zhou’s (2014) “Innovation and creativity in organizations: A state-of-the-science review, prospective commentary, and guiding framework,” published in the Journal of Management and focused on 2002–2013 research, defines creativity as “idea generation” and looks at studies that solely use observational and self-report data. In an organizational context, it is still unheard of to use MRI, DTI, 1H-MRS, et cetera, and even EEG is rare. Moreover, the research corpus itself is scattered and disjointed (Batey & Furnham, 2006). Consequently, sound methods are even more important for the few researchers who are able to use neuroimaging methods.

A big issue, exemplified in Jaušovec (2000) and reiterated by Arden et al. (2010), is the case-control design whereby subjects are unnecessarily dichotomized into high- and low-creativity buckets instead of respecting the continuous nature of creativity. Even psychometric measures such as the Torrance tests do not classify people in a binary fashion, but rather across a range of scores. Respecting this continuity can improve statistical power, as the sketch below illustrates.
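A quick simulation makes the point; the particulars (n = 40 per simulated study, a true correlation of r = .3, 5,000 replications) are hypothetical choices of mine, not figures from the cited papers:

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Simulate many small studies in which creativity truly correlates
# r = .3 with a brain measure, then compare detection rates for a
# continuous analysis (Pearson r) versus a median split into
# "high"- and "low"-creativity groups (independent-samples t test).
n, r, reps, alpha = 40, 0.3, 5_000, 0.05
hits_continuous = hits_split = 0
for _ in range(reps):
    creativity = rng.normal(size=n)
    brain = r * creativity + np.sqrt(1 - r**2) * rng.normal(size=n)
    _, p_continuous = stats.pearsonr(creativity, brain)
    hits_continuous += p_continuous < alpha
    high = creativity > np.median(creativity)
    _, p_split = stats.ttest_ind(brain[high], brain[~high])
    hits_split += p_split < alpha

print(f"power, continuous analysis: {hits_continuous / reps:.2f}")
print(f"power, median-split groups: {hits_split / reps:.2f}")

With these settings, the continuous analysis detects the true effect noticeably more often than the median-split comparison does, which is the cost of throwing away the variance within each bucket.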

Expensive and cumbersome technologies such as PET and fMRI require the subject to lie down, perfectly still, amid loud whirring noises (Sawyer, 2011). Even EEG requires electrodes attached to one’s head, which impairs many creative activities. Methodologically, this is a large problem that is presently insurmountable. There is no way to measure creativity with fMRI while a subject plays a violin (except, perhaps, a pizzicato performance). Moreover, neuroimaging studies do not measure novelty or usefulness, unlike common definitions of creativity used by non-neuroscience researchers (Sawyer, 2011).

Lastly, although there are many other methodological issues, neuroscience creativity research would be furthered by accurate reporting and disclosure of averaging, subtraction techniques, and the actual activation levels that were observed temporally and/or spatially (Sawyer, 2011). Speculation about causation should be clearly marked as such. Researchers should also refrain from labeling a region of the brain as a center for any specific creative task, or for creativity in general (Arden et al., 2010). Even though it generates popular press, such determinations are typically inaccurate.

References

Anderson, N., Potočnik, K., & Zhou, J. (2014). Innovation and creativity in organizations: A state-of-the-science review, prospective commentary, and guiding framework. Journal of Management, 40, 1297–1333. http://doi.org/10.1177/0149206314527128

Arden, R., Chavez, R. S., Grazioplene, R., & Jung, R. E. (2010). Neuroimaging creativity: A psychometric view. Behavioural Brain Research, 214, 143–156. http://doi.org/10.1016/j.bbr.2010.05.015

Batey, M., & Furnham, A. (2006). Creativity, intelligence, and personality: A critical review of the scattered literature. Genetic, Social, and General Psychology Monographs, 132, 355–429. http://doi.org/10.3200/MONO.132.4.355-430

Burle, B., Spieser, L., Roger, C., Casini, L., Hasbroucq, T., & Vidal, F. (2015). Spatial and temporal resolutions of EEG: Is it really black and white? A scalp current density view. International Journal of Psychophysiology, 97, 210–220. http://doi.org/10.1016/j.ijpsycho.2015.05.004

Fink, A., Benedek, M., Grabner, R. H., Staudt, B., & Neubauer, A. C. (2007). Creativity meets neuroscience: Experimental tasks for the neuroscientific study of creative thinking. Methods, 42, 68–76. http://doi.org/10.1016/j.ymeth.2006.12.001

Jaušovec, N. (2000). Differences in cognitive processes between gifted, intelligent, creative, and average individuals while solving complex problems: An EEG study. Intelligence, 28, 213–237. http://doi.org/10.1016/S0160-2896(00)00037-4

Jung, R. E., Mead, B. S., Carrasco, J., & Flores, R. A. (2013). The structure of creative cognition in the human brain. Frontiers in Human Neuroscience, 7, 1–13. http://doi.org/10.3389/fnhum.2013.00330

Sawyer, K. (2011). The cognitive neuroscience of creativity: A critical review. Creativity Research Journal, 23, 137–154. http://doi.org/10.1080/10400419.2011.571191

EME 6646 Assignment on Moral Emotions, Self-Regulation of BOLD Signals, and Monetary Rewards

Assignment 4, Part A: Individual Explanation of Rewards and Emotions
For EME 6646: Learning, Instructional Design, and Cognitive Neuroscience
By Richard Thripp
University of Central Florida
June 8, 2017

Moral Emotions

Using a task in which participants passively viewed morally charged pictures (e.g., starving children, scenes of war), Moll et al. (2002) found that while such images activated the amygdala, thalamus, and upper midbrain, just as images evoking basic emotions did, the images evoking “moral emotions” additionally activated the orbital and medial regions of the prefrontal cortex, as well as the superior temporal sulcus region, both of which were previously known to be important for perception and social behavior. Moll et al. (2002) argue that these functional magnetic resonance imaging (fMRI) results indicate that humans automatically assign moral values to social events, and that this is an important function of human social behavior.

While the prevailing paradigm has traditionally been that moral judgments are guided by reason, neuroimaging evidence has shown us that emotion plays a vital role. For example, Greene, Sommerville, Nystrom, Darley, and Cohen (2001) tackled the issue by presenting participants undergoing fMRI with a battery of moral dilemmas, with moral–personal (e.g., pushing a bystander off a bridge to stop a trolley that would kill five people), moral–impersonal (e.g., voting in favor of a referendum that would result in many deaths), and non-moral conditions (e.g., whether to stack coupons at the grocery store). Moral–personal dilemmas activated brain regions (i.e., the medial frontal gyrus, posterior cingulate gyrus, and angular gyrus) that were significantly less active in the other conditions. Moreover, reaction times were longer when a participant responded that an action was “morally appropriate” (a dichotomous choice between this and “morally inappropriate”) and this response was emotionally incongruent—for example, when participants said “appropriate” to sending the bystander to his or her death to stop the trolley from killing five people. The authors specifically compared this to the Stroop test, contending that it was a similar phenomenon in that it required extra processing time. Overall, emotions can help us understand why a majority will say it is acceptable to flip a switch that changes the direction of the trolley, killing a bystander to save five others, while a majority will say it is unacceptable to push the bystander in front of the trolley to stop it, even though the outcome is the same. Greene et al. (2001) say the latter is more emotionally salient. While the trolley problem is a philosophical paradox if one considers only reason, adding emotion resolves it.

If moral dilemmas light up different parts of the brain, and if emotional salience is important to judging whether an issue is morally unacceptable, educators can use this to design instruction that engages moral emotions. For instance, the music industry has long argued that illegally downloading a song is no different than shoplifting the CD from Target. The former might be compared with flipping the trolley switch, while the latter is like pushing the bystander in front of the trolley—far fewer people would shoplift than would illegally download a song. Casting academic integrity in a similar light could help promote ethical and prosocial behaviors among students. Marketing research implies that most people are mostly honest—they would not be grossly dishonest to get ahead, but if they can profit while continuing to believe they are righteous, they will do so (Mazar, Amir, & Ariely, 2008). In addition to promoting academic honesty, moral emotions can be evoked in instruction through vignettes, case studies, or probing questions (e.g., “What would you do if you could save five people by harvesting the organs of a cerebrally dead 22-year-old who is an organ donor but whose family actively protests?”). Integrating these as both individual and group activities may be useful; because group activities invite going along with the group, individual completion might precede group discussion. Sadly, while Walt Disney Studios appeals to our moral emotions, and emotions of all forms, in its motion pictures, instructors typically leave this engagement opportunity untapped.

Self-Regulation of BOLD Signals and Monetary Rewards

Recently, Sepulveda et al. (2016) combined measurement via real-time fMRI neurofeedback (NF) with instructing participants to increase their blood-oxygen-level-dependent (BOLD) signals (i.e., self-regulation of brain physiology) in a between-groups study with four groups (N = 20, with five per group), which received NF only (a.k.a. contingent feedback), NF plus motor-imagery training, NF plus monetary reward, or NF plus motor-imagery training plus monetary reward. The BOLD signal is a proxy for “volitional control of supplementary motor area” (Sepulveda et al., 2016, p. 3155)—this ability can improve “planning and execution of motor activity” (p. 3155), and may be important to self-regulation, learning, academic success, et cetera. Interestingly, while all groups were successful at up-regulating their BOLD signals, monetary reward resulted in the greatest increase, while motor-imagery training did not even result in a statistically significant enhancement. That is to say, the participants who were evidently the most motivated to increase their BOLD signals were the ones who received NF and an on-screen dollar amount that increased in proportion to their real-time increase in BOLD signal. While the authors were careful to note that monetary rewards—which are by definition an extrinsic motivator—lose their effectiveness over time and thus should be used as an initial motivator that is withdrawn over time (hopefully giving way to intrinsic motivation), their discussion does not mention that this neuroimaging evidence may be important to the use of monetary rewards for academic and organizational success.

Monetary Rewards May Be Ineffective in Academic Settings

In contrast to Sepulveda et al. (2016), Mizuno et al. (2008) found that while learning motivated by monetary rewards activated the putamen bilaterally, much like learning motivated by self-reported motivation to learn, the intensity of activity (measured via fMRI BOLD signals) increased with higher levels of motivation for learning, but not with increased monetary rewards. This may suggest that, at least in an academic context, greater monetary rewards do not increase motivation. While it did not employ neuroimaging, a study of 300 middle schoolers by Springer, Rosenquist, and Swain (2015) may be relevant. They offered no incentive, $100, or a “certificate of recognition signed by the district superintendent” (p. 453) to students who attended tutoring regularly. While the preceding fMRI research might lead us to believe that the monetary incentive would be effective, in fact it produced no significant difference from the control group, while the certificate of recognition was a highly effective motivator. Therefore, for academic motivation, financial rewards may be inferior to other forms of extrinsic motivation (e.g., a certificate), or to intrinsic motivation. Nevertheless, they may be a useful tool for the unimaginative instructor, particularly in contexts where a grading scheme cannot be implemented (e.g., some forms of organizational training). In a more typical academic setting, grades and “extra” credit opportunities (which, ironically, are available even to students who achieve far less than 100% on their work) may essentially take the place of what would have been monetary rewards in another setting.

References

Greene, J. D., Sommerville, R. B., Nystrom, L. E., Darley, J. M., & Cohen, J. D. (2001). An fMRI investigation of emotional engagement in moral judgment. Science, 293, 2105–2108. http://doi.org/10.1126/science.1062872

Mazar, N., Amir, O., & Ariely, D. (2008). The dishonesty of honest people: A theory of self-concept maintenance. Journal of Marketing Research, 45, 633–644. http://doi.org/10.1509/jmkr.45.6.633

Mizuno, K., Tanaka, M., Ishii, A., Tanabe, H. C., Onoe, H., Sadato, N., & Watanabe, Y. (2008). The neural basis of academic achievement motivation. NeuroImage, 42, 369–378. http://doi.org/10.1016/j.neuroimage.2008.04.253

Moll, J., de Oliveira-Souza, R., Eslinger, P. J., Bramati, I. E., Mourão-Miranda, J., Andreiuolo, P. A., & Pessoa, L. (2002). The neural correlates of moral sensitivity: A functional magnetic resonance imaging investigation of basic and moral emotions. Journal of Neuroscience, 22, 2730–2736.

Sepulveda, P., Sitaram, R., Rana, M., Montalba, C., Tejos, C., & Ruiz, S. (2016). How feedback, motor imagery, and reward influence brain self-regulation using real-time fMRI. Human Brain Mapping, 37, 3153–3171. http://doi.org/10.1002/hbm.23228

Springer, M. G., Rosenquist, B. A., & Swain, W. A. (2015). Monetary and nonmonetary student incentives for tutoring services: A randomized controlled trial. Journal of Research on Educational Effectiveness, 8, 453–474. http://doi.org/10.1080/19345747.2015.1017679
