How Can Researchers Circumvent Cognitive Biases in Their Data Analysis?

Recently I’ve found myself becoming more and more interested in how to conduct unbiased, objective research. My interest stems from my academic and career goal of conducting research one day. While I consider honesty to be of the utmost importance, especially in research, I also realize that even the most honest researcher can fall victim to their own cognitive biases. Unconscious influences can result in self-deception, which in turn affects how even the best scientists analyze their data. This matters because misinterpreted data can have long-term consequences for the broader scientific community.

In the article “How Scientists Fool Themselves – and How They Can Stop,” Regina Nuzzo discusses cognitive biases that can distort the analysis of data. She opens with the story of Andrew Gelman, a statistician who had the wrong sign on one of his variables but, because the results seemed reasonable, never went back to check his data. He published the work, and years later, when a graduate student discovered the error, Gelman had to publish a correction stating that the findings were essentially wrong. There’s no way of knowing how many people read the original study and never saw the correction. This goes to show how important producing objective research is: once data is out there for all to see, you can’t really erase it. With the brain so easily fooled, how can researchers reduce their biases, preserve objectivity, and publish reliable data?

The first part of a solution to this problem is to acknowledge that no one is exempt from cognitive bias and to understand which biases are at play when analyzing data. According to Nuzzo, there are four biases that can cause researchers to misconstrue their results. Hypothesis myopia occurs when researchers become so focused on collecting evidence that supports their hypothesis that they fail to consider evidence that refutes it, along with alternative explanations. The issue here is fairly clear: when we fail to consider other explanations, we create one-sided stories that can proliferate and distort the literature on a topic. Nuzzo explains that the next bias is “illustrated by the fable of the Texas sharpshooter: an inept marksman who fires a random pattern of bullets at the side of a barn, draws a target around the biggest clump of bullet holes, and points proudly at his success.” This cognitive bias causes researchers to ignore the big picture and focus on the “most agreeable and interesting” results. It shows up in p-hacking (exploiting researcher degrees of freedom until p < 0.05 is reached; the simulation below illustrates why this inflates false positives) and in something called HARKing (Hypothesizing After the Results are Known). The third bias is asymmetric attention, or disconfirmation bias. As seen in the case of Andrew Gelman, this happens when unexpected results are scrupulously checked over while “reasonable” results receive little scrutiny. It surprises me how common this is. According to Nuzzo:

“In 2011, an analysis of over 250 psychology papers found that more than 1 in 10 of the p-values was incorrect— and that when the errors were big enough to change the statistical significance of the result, more than 90% of the mistakes were in favour of the researchers’ expectations, making a non-significant finding significant.”

This really goes to show how often this occurs and how important it is to overcome this bias. Last but not least is just-so storytelling, in which researchers construct a narrative to rationalize their results after the fact. As Nuzzo explains, the problem is that such stories can be bent to support “anything and everything.”
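To make the p-hacking trap concrete, here is a small simulation I put together; it is my own illustration, not something from Nuzzo’s article, and it uses standard numpy and scipy tools. It shows that if a researcher measures twenty outcomes with no real effects and reports whichever one clears p < 0.05, “significant” findings appear far more often than the nominal 5% error rate would suggest:

```python
# Illustrative simulation (my own, not from Nuzzo's article): when a
# researcher tests many outcomes and reports only the best one, the
# false-positive rate balloons far above the nominal 5%.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_studies, n_outcomes, n_per_group = 1000, 20, 30

lucky_studies = 0
for _ in range(n_studies):
    # Two groups with NO true difference on any of the 20 outcomes.
    group_a = rng.normal(size=(n_outcomes, n_per_group))
    group_b = rng.normal(size=(n_outcomes, n_per_group))
    p_values = [stats.ttest_ind(a, b).pvalue for a, b in zip(group_a, group_b)]
    # The p-hacker keeps whichever outcome happens to clear p < 0.05.
    if min(p_values) < 0.05:
        lucky_studies += 1

print(f"Studies with at least one 'significant' result: "
      f"{lucky_studies / n_studies:.0%}")  # roughly 64%, not 5%
```

With twenty independent tests, the chance of at least one false positive is 1 − 0.95²⁰, about 64%, which is exactly the sharpshooter drawing his target after the fact.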

As Nuzzo describes, each bias is a trap sprung by the brain’s rush to identify seemingly salient relationships. She states that the key to avoiding these traps is to “slow down, be skeptical of findings and eliminate false positives and dead ends”; in essence, use the brakes. The article proposes a variety of techniques researchers can use to keep these fallacies from interfering with how they look at data.

A technique called strong inference, in which researchers design experiments to rule out alternative hypotheses and explanations, can combat hypothesis myopia. To do this, one must first gather the alternative explanations. In my eyes, weighing competing hypotheses is a no-brainer that any thorough researcher would do; yet with our brains hunting for easy answers, it’s easy to see why this failure persists.

The next solution presented in the article is transparency and open science. Researchers have a lot of leeway when analyzing and presenting their findings, so this method encourages them to share all of their methods, data, computer code, and results so that others can analyze and confirm the findings. This helps ensure that what a researcher reports really reflects what the data indicates. Another way of collaborating to avoid bias is to work with researchers who hold opposing views. On this point, Daniel Kahneman states, “You need to assume you’re not going to change anyone’s mind completely, but you can turn that into an interesting argument and intelligent conversation that people can listen to and evaluate.” This seems like it could be the most useful technique mentioned in the article: hypothetically, the competing researchers would cancel out or reduce the effects of hypothesis myopia, asymmetric attention, and just-so storytelling.

The final suggestion for avoiding bias is blind data analysis, in which a program produces alternative data sets that are analyzed alongside the real data; the actual results are not revealed until the researcher completes the analysis. This strikes me as one of the best methods, because researchers who are blind to the actual data have no way of knowing during analysis whether the results are significant. Biases are kept at bay and objectivity remains intact. A toy sketch of the blinding idea follows below.
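The article doesn’t spell out a blinding recipe, and real blinding protocols vary by field, so treat this as a minimal sketch of the idea only. Here the analyst works on a perturbed copy of the data, and the hidden offset is removed only after the analysis pipeline is frozen:

```python
# Hypothetical sketch of blind data analysis (my illustration; actual
# blinding procedures are usually more elaborate). The analyst never
# sees the true values until the analysis is locked in.
import numpy as np

rng = np.random.default_rng(7)

def blind(data, rng):
    """Return a perturbed copy plus the secret offset needed to unblind."""
    secret_offset = rng.normal(scale=data.std())
    return data + secret_offset, secret_offset

real_data = rng.normal(loc=2.0, scale=1.0, size=100)
blinded_data, secret = blind(real_data, rng)

# ... the analyst develops and finalizes the analysis on blinded_data ...
blinded_mean = blinded_data.mean()

# Only after the pipeline is frozen is the offset removed.
true_mean = blinded_mean - secret
print(f"Blinded mean: {blinded_mean:.2f}, unblinded mean: {true_mean:.2f}")
```

Because the analyst cannot tell whether the blinded numbers are “significant,” there is no temptation to stop checking once the results look agreeable.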

Insight into these fallacies offers a glimpse into the many biases people bring to decision-making. In research, these biases can distort the information available to scientists, which in turn shapes future research and scientific debates. Learning about them not only gives me real-world examples of how cognitive biases could one day affect my work, but also shows what I can do to overcome them. I can’t stress enough the importance of applying these techniques in my own research so that it is not only objective and transparent, but reproducible. While we can’t get rid of the inherent biases we carry with us, these methods offer a good start toward easing the damage they can cause to research, scientific data, and a researcher’s reputation.

“Science is an ongoing race between our inventing ways to fool ourselves, and our inventing ways to avoid fooling ourselves.”

Saul Perlmutter, astrophysicist at the University of California, Berkeley

References

Nuzzo, Regina. “How Scientists Fool Themselves – and How They Can Stop.” Nature, vol. 526, no. 7572, 2015, pp. 182–185, doi:10.1038/526182a.

America’s Opioid Epidemic Is No Longer About Prescription Painkillers Alone

My whole life I’ve been witness to what drug addiction does to individuals and families. My father has been a heroin addict my entire life, and five years ago I watched my brother struggle with an addiction to painkillers. Needless to say, I’ve been following America’s opioid crisis hoping to see a change, but the problem has only gotten worse, and to my surprise heroin and fentanyl have become more popular among opioid abusers. I can’t help but wonder: have these drugs replaced prescription opioid painkillers amid this epidemic, or are they just adding to the death toll? And why have they become so popular?

When most people think of America’s opioid crisis, the first thing that generally comes to mind is prescription pain pills. Over the last two decades the U.S. has faced a deadly epidemic. An article on Vox.com asserts that it was started by something intended to change pain management for the better: legal prescription drugs such as OxyContin and Percocet, prescribed by your family doctor. Painkiller deaths have been on the rise since 1999, but the U.S. is now facing bigger problems amidst this crisis. In 2010 the epidemic started to change, and something interesting happened: heroin-related deaths began to sharply increase. By 2013 deaths from another drug, fentanyl, a synthetic opioid, had started to rise. By 2016 heroin and fentanyl together contributed to more deaths than opioid painkillers alone, and the fentanyl death rate had even surpassed the heroin death rate, rising 264% between 2012 and 2015. The graph below is a visual representation of the lives taken during the U.S. opioid epidemic over the last 20 years.

Drug overdose deaths in America
*The numbers for 2016 are preliminary estimates

While the graph is said to have been made using data from the CDC and the National Institute on Drug Abuse (NIDA), it only aligns with the data and graph found on the CDC website. A similar graph from the NIDA shows the heroin death rate never rising above the prescription opioid death rate, which I find peculiar because the NIDA cites the CDC as the source for its graph. The discrepancy between these two well-established and trusted entities makes me wonder what the actual heroin-related death rate is and who has it right. The article mentioned above states that heroin deaths are currently higher than prescription opioid deaths in the U.S., and the CDC cites a verifiable source for its data claiming the same thing, so my bet is on the CDC for accuracy.

While the graph above shows what happened, it doesn’t really show why. According to the Vox article, many opioid users switched to heroin because it is so cheap, often cheaper than a candy bar. The graph below shows that heroin cost $3,260 per pure gram in 1981 and dropped to $465 per gram by 2012, making the drug far more available and cheaper than prescription painkillers. The CDC states that individuals who use prescription opioid painkillers are 40 times more likely to use heroin, and that 45% of heroin addicts are also addicted to prescription opioid painkillers. Many states have even set a seven-day limit on opioid prescriptions, which has driven people to find other means of satisfying their need, leading many to start using heroin. Others have turned to fentanyl. Not only is it more potent than other opioids (50 to 100 times more potent than morphine), but when illegally made it is even cheaper than heroin. Many deaths from this drug, however, are due to it being laced into heroin and cocaine because of its low cost and increased euphoric effects; people who buy the laced drugs are often unaware of the fentanyl and accidentally overdose.

The price of heroin per pure gram, in inflation-adjusted dollars

While it’s evident that heroin and fentanyl now account for more deaths, overdose deaths from opioid painkillers are still at an all-time high. The introduction of cheaper heroin and illegally made fentanyl didn’t replace painkillers in America’s opioid crisis; thanks to their low cost and availability, they added to the problem and significantly increased drug-related deaths across the U.S. As death rates from all three substances continue to rise, accounting for more deaths each year than car crashes or gun deaths, it becomes ever more important to reduce prescription opioid abuse, increase access to treatment programs, and ensure access to naloxone (a drug that can reverse opioid overdose).

References

Lopez, German, and Sarah Frostenson. “How the Opioid Epidemic Became America’s Worst Drug Crisis Ever, in 15 Maps and Charts.” Vox, Vox, 29 Mar. 2017, http://www.vox.com/science-and-health/2017/3/23/14987892/opioid-heroin-epidemic-charts.

“Opioid Overdose.” Centers for Disease Control and Prevention, Centers for Disease Control and Prevention, 19 Dec. 2018, http://www.cdc.gov/drugoverdose/opioids/fentanyl.html.


Is Schizophrenia Just an Extreme Personality Type?

Schizophrenia is a largely misunderstood mental illness. For many people, their only exposure to the disease is what they see in films, where schizophrenics are often portrayed as violent individuals who abuse drugs and/or alcohol and had troubled childhoods. The reality is that, in general, these things don’t apply. Individuals with schizophrenia suffer from a long list of cognitive, psychotic, and emotional problems caused by deficits in the brain. Although schizophrenia has been researched for over 100 years, there is still a lot that experts don’t understand about the disease. A recent study led by the University of Nottingham in the United Kingdom argues that we are looking at schizophrenia all wrong: its results suggest that schizophrenia may be an extreme personality type that exists on a spectrum with healthy personality types.

A previous study found that, compared to healthy controls, individuals with schizophrenia exhibit a weakened brain response during visuomotor tasks. The researchers of the Nottingham study wanted to know whether this also applied to individuals with schizotypal personality traits. Schizotypy refers to a set of personality traits that lie on a spectrum, ranging from mild features in normal, healthy individuals to full-blown schizotypal personality disorder (SPD), which is characterized by brief psychotic episodes. The study was conducted at the University of Nottingham and Cardiff University in Wales. The researchers recruited a total of 166 healthy volunteers from imaging centers on their campuses. Each volunteer filled out the Schizotypal Personality Questionnaire (SPQ), a self-report survey that measures normal to abnormal degrees of schizotypy. The researchers then used magnetoencephalography (MEG), a technique that measures the magnetic fields generated by the brain’s neuronal activity, to look at brainwaves while participants moved their index finger. Correlations between the MEG and SPQ results were then computed. The study found a significant negative correlation between the two, suggesting that schizophrenia lies on a continuum with schizotypy as an extreme form with more severe neural deficits.

This study seems very well thought out and the methods very thorough. The researchers applied strict exclusion criteria to volunteers. The sample included both males and females, and while there were more females, the researchers ran a statistical analysis and determined that the difference was not significant, thereby controlling for participant sex. The choice of MEG to measure brain function also seems sound: MEG can measure brain activity extremely fast, it localizes brainwaves better than EEG, and unlike fMRI it measures brain function directly. The researchers also used a sampling-with-replacement (bootstrap) technique in their data analysis, which allowed them to check the reliability of their results; a sketch of the idea follows below.
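The article I read doesn’t give the exact resampling recipe, but a bootstrap applied to a correlation typically looks something like the following sketch. The data here are placeholders I generated to stand in for the real SPQ and MEG measurements, with a negative relationship built in:

```python
# Hypothetical sketch of a bootstrap (sampling with replacement) used to
# check the reliability of a correlation. Data are simulated stand-ins,
# not the study's actual SPQ/MEG values.
import numpy as np

rng = np.random.default_rng(0)
n = 166  # sample size reported in the study

spq_scores = rng.normal(size=n)
meg_measure = -0.3 * spq_scores + rng.normal(size=n)  # built-in negative link

boot_correlations = []
for _ in range(10_000):
    idx = rng.integers(0, n, size=n)  # resample rows with replacement
    r = np.corrcoef(spq_scores[idx], meg_measure[idx])[0, 1]
    boot_correlations.append(r)

low, high = np.percentile(boot_correlations, [2.5, 97.5])
print(f"Bootstrap 95% CI for r: [{low:.2f}, {high:.2f}]")
```

If the confidence interval stays clear of zero across resamples, the negative correlation is not being carried by a handful of unusual participants.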

Even though this study is methodologically strong, I question the small sample size and some issues surrounding the SPQ. Sample size determines a study’s power to detect an effect of a given size. With a small sample there is an increased risk of a Type II error, in which a real effect goes undetected because the study lacks the power to find it; the simulation below illustrates how quickly that risk grows. Because of this, I would expect to see the study repeated to strengthen its conclusions. I also have multiple concerns with the SPQ. While self-report surveys can help measure constructs of personality that would otherwise be difficult to capture, they come with a multitude of problems: not only do they rely on the honesty of participants, but not all participants will understand or interpret the questions the same way. Another concern is how the SPQ was administered. At one site the survey was conducted in person under the supervision of an experimenter, while at the other, participants filled it out online before meeting with experimenters. People often behave differently when being watched, so could this difference have affected answers? My last concern is the difference in the scales used for the SPQ at the two sites: one site used a 5-point Likert scale while the other used a binary scale. The article does state that participants’ total scores were normalized using means and standard deviations, but I still wonder how the difference in scale could have influenced how participants answered the survey questions.
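To illustrate the sample-size worry, here is a quick simulation of my own. The effect size is an assumption chosen for illustration, not a number from the study; it shows how often a real but modest correlation goes undetected at different sample sizes:

```python
# Simulation of Type II error risk: a real but modest correlation is
# often missed at small sample sizes. The effect size is assumed for
# illustration and is not taken from the study.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
true_effect = 0.15  # assumed effect, inducing a correlation of ~0.15
n_trials = 2000

for n in (30, 166, 1000):  # 166 is the study's actual sample size
    misses = 0
    for _ in range(n_trials):
        x = rng.normal(size=n)
        y = true_effect * x + rng.normal(size=n)
        r, p = stats.pearsonr(x, y)
        if p >= 0.05:
            misses += 1  # the real effect went undetected
    print(f"n = {n:4d}: Type II error rate ~ {misses / n_trials:.0%}")
```

Under these assumptions, a study of 30 people misses the effect most of the time, and even n = 166 misses it roughly half the time, which is why replication with a larger sample matters.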

Overall, I’d say this is a fairly strong study; however, before I jump on the bandwagon I’d like to see it repeated with more uniform questionnaire practices and a larger sample size. With that said, the researchers bring to light a very thought-provoking point about how we view schizophrenia, and mental illness as a whole, in our society. I think this study does a good job of humanizing people with schizophrenia and showing that they exist on a spectrum of human experience with everyone else, not on another plane entirely. This can go a long way in helping to destigmatize mental illness and change the way schizophrenia is portrayed in Hollywood.

References

Liddle, et al. “Attenuated Post-Movement Beta Rebound Associated With Schizotypal Features in Healthy People.” OUP Academic, Oxford University Press, 18 Sept. 2018, academic.oup.com/schizophreniabulletin/advance-article/doi/10.1093/schbul/sby117/5095481.

Does Peer-Reviewed Mean Reliable Data or Should We Be Skeptical?

Part of being a college student is learning to use peer-reviewed articles to support ideas and claims in papers and research. Many of us hold the expectation that these articles are thoroughly researched and present trustworthy data. However, this is not always the case, as explored in the Vox.com article “Let’s Stop Pretending Peer Review Works.” In the article the authors question why scientists even bother with peer review if it doesn’t always produce reliable information, and they highlight multiple studies showing that peer review works little better than chance at allowing only high-quality studies to be published. This leads me to question how we as students can determine the reliability of peer-reviewed articles and data.

“The idea behind peer review is simple: It’s supposed to weed out bad science” – Julia Belluz and Steven Hoffman

Many believe that peer review is a necessary means of quality control in research, but others strongly disagree. In the aforementioned Vox article, Lancet editor Richard Horton is quoted as saying that peer review “is unjust, unaccountable … often insulting, usually ignorant, occasionally foolish, and frequently wrong.” There are many things that peer review does and doesn’t do, so I think both sides make valid arguments; but without a peer review process, who knows what kind of data would be published? Assessing the quality of data cannot be easy. Peer reviewers don’t repeat studies or dig deep into every aspect of a submission, and they surely can’t uncover every act of misconduct. That doesn’t mean, however, that peer review is unnecessary.

Even though peer review doesn’t always work, the good news is that bad science does get caught and retractions are made. The catch is that by then the data is already out there, and that can have lasting effects. For example, the research article claiming that vaccines cause autism was retracted, but the information had already been read and spread. Every once in a while I still hear someone cite that study as a reason for not vaccinating their children; once information is out there, it’s hard to take it back.

“Let’s stop pretending that once a paper is published, it’s scientific gospel” – Ivan Oransky, Medical Journalist

Many college students, including me, will spend countless hours lost in peer-reviewed articles, so it’s important to keep in mind that the data presented may not be factual. Peer review isn’t perfect, and we shouldn’t take research that has been vetted through the process at face value, but that doesn’t mean we should disregard it either. With any data, peer-reviewed or not, we should remember that the information can be wrong. The best way to use the data we find in peer-reviewed articles is to remain skeptical and critically assess anything we intend to use and reference.

References

Belluz, Julia, and Steven Hoffman. “Let’s Stop Pretending Peer Review Works.” Vox.com, Vox Media, 7 Dec. 2015, http://www.vox.com/2015/12/7/9865086/peer-review-science-problems.


80,000 Americans Died From Influenza Last Year? It’s Time to Rethink CDC Influenza Death Estimates.

While searching through science and health articles on Vox, I came across one that caught my eye. In large bold letters across the screen, the title read, “80,000 Americans died of the flu last winter. Get your flu shot.” I thought to myself: wow, that many? In my circles I know only a couple of people who even had the flu over the last year. How could this number be so big, and where did it come from?

The article asserts that influenza claimed more lives last year than traffic accidents, gun violence, or opioid overdoses. It explains that due to the variability in virus types, flu vaccinations are not always as effective as they should be: the effectiveness of immunization is 54% against influenza A (H1N1), 67% against influenza B, and only 33% against influenza A (H3N2). This is where the problem lies with flu vaccinations. In years when influenza types H1N1 and B are the main strains, vaccinations have a good chance of being effective; in years when H3N2 is the dominant strain, like last year, flu vaccines are less likely to help. This is what the article claims could explain the high incidence of flu-related deaths in the 2017–18 flu season.

Even with the vaccination explanation I still felt uncertain about the reported number of deaths, this 80,000, so I went to the CDC website, where the data originated. I found a slew of information on influenza surveillance. The website indicates that influenza-associated deaths are tracked through two systems: the National Center for Health Statistics (NCHS) mortality surveillance data and the Influenza-Associated Pediatric Mortality Surveillance System. The NCHS collects data from death certificates across the U.S., using different codes to identify cause of death, and the CDC uses the data on pneumonia and influenza (P&I) deaths to calculate a seasonal baseline. The other system monitors all confirmed influenza deaths in children, which are reported to the CDC. To estimate influenza-associated deaths in a given season, the CDC uses a ratio of deaths to hospitalizations in its statistical model (a back-of-the-envelope version of this scaling appears after the graph below).

I found it interesting that the website states they don’t just use P&I data but also a combination of other respiratory and circulatory (R&C) causes and other non-respiratory, non-circulatory causes of death that are not specified on the CDC website. Yet even though they say they use all these categories, the only graph I could find shows data from the P&I category alone (see graph below). I found this really confusing. Furthermore, why combine so many causes of death to estimate flu deaths? The CDC claims that “deaths related to flu may not have influenza listed as a cause of death” and that “seasonal influenza may lead to death from other causes, such as pneumonia, congestive heart failure, or chronic obstructive pulmonary disease.” But association does not imply causation. How can they be so sure that so many of the cases from these other causes of death are related to the flu? Surely many deaths from congestive heart failure and pneumonia are not due to influenza. I’m no statistician, but this seems like a formula for biased data; lumping all of these deaths together strikes me as seriously misleading and, in my opinion, a gross overestimation.

Source: Centers for Disease Control and Prevention
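For what it’s worth, the kind of scaling the CDC describes can be sketched in a few lines. The inputs below are purely hypothetical placeholders I chose so the arithmetic lands near 80,000; the real model adjusts for testing rates, under-reporting, and the multiple cause-of-death categories discussed above:

```python
# Back-of-the-envelope sketch of a deaths-to-hospitalizations scaling.
# All inputs are hypothetical placeholders, NOT actual CDC figures.
estimated_hospitalizations = 500_000  # assumed flu hospitalizations in a season
deaths_per_hospitalization = 0.16     # assumed ratio from surveillance data

estimated_deaths = estimated_hospitalizations * deaths_per_hospitalization
print(f"Estimated influenza-associated deaths: {estimated_deaths:,.0f}")
# -> 80,000 with these assumed inputs; the estimate is only as good as
#    the ratio, which is where the cause-of-death lumping comes in.
```

The point of the sketch is that the headline number is a model output, not a count, and it inherits whatever bias sits in the ratio.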

I found other things problematic as well. Throughout the CDC’s webpages on flu data, the terms “influenza-associated deaths” and “influenza deaths” seem to be used interchangeably. This too can be misleading, as seen in the title of the Vox article: 80,000 Americans did not die from the flu alone last year; their deaths were influenza-associated, and may not all have actually been due to the flu. In fact, only 54,982 influenza-positive tests were reported to the CDC by public health laboratories (national summary), which further shows how the numbers just don’t add up.

The issue at large here is that when the CDC releases information, the majority of the American public is going to believe and trust what it says. It is important that the data the CDC uses and publishes be unbiased and not misleading, because it will be repeatedly used and referenced. This data is widely accepted; not only does it affect what people think and the health choices they make, it can also affect the medical industry and public health policy. I’m not trying to say that the CDC is fudging numbers, using scare tactics to get people to vaccinate, or in cahoots with pharmaceutical companies. However, I find its use of data irresponsible and ambiguous. Perhaps with some rewording on its website and changes to the data it uses in estimating deaths from influenza, misleading articles like the one posted on Vox won’t be put out there to misinform the public.

References

Belluz, Julia. “80,000 Americans Died of the Flu Last Winter. Get Your Flu Shot.” Vox.com, Vox Media, 20 Dec. 2018, http://www.vox.com/2018/9/27/17910318/flu-deaths-2018-epidemic-outbreak-shot

“Influenza (Flu).” Centers for Disease Control and Prevention, Centers for Disease Control and Prevention, 19 Oct. 2018, http://www.cdc.gov/flu/weekly/overview.htm

“Influenza (Flu).” Centers for Disease Control and Prevention, Centers for Disease Control and Prevention, 25 Oct. 2018, http://www.cdc.gov/flu/about/burden/how-cdc-estimates.htm.

“Influenza-Associated Pediatric Mortality.” Centers for Disease Control and Prevention, Centers for Disease Control and Prevention, gis.cdc.gov/GRASP/Fluview/PedFluDeath.html.

“National Center for Health Statistics Mortality Surveillance System.” Centers for Disease Control and Prevention, Centers for Disease Control and Prevention, gis.cdc.gov/grasp/fluview/mortality.html.

“National, Regional, and State Level Outpatient Illness and Viral Surveillance.” Centers for Disease Control and Prevention, Centers for Disease Control and Prevention, gis.cdc.gov/grasp/fluview/fluportaldashboard.html.

Making Informed Decisions – Genetic Components of Gender Dysphoria.

For the many who identify as transgender, gender identity is not merely a psychological condition to be treated. It is deeply rooted within who they are or feel they are meant to be. Many of these individuals believe that their brains are male, female, or somewhere in between, regardless of their genitals and the gender they were born as. Not too long ago this was the topic of a conversation I had with a friend who had undergone therapy for gender dysphoria. When the discussion turned toward the treatment of transgender children, I was asked my opinion on conversion therapy. Although I hold a very strong opinion against it, I had no scientific data, only a weakly supported argument based on psychological well-being, to back up my point of view. This got me thinking: is there scientifically grounded information out there that parents can use to make well-informed decisions concerning their trans children?

In the past it was believed that gender dysphoria was largely psychological, linked to how a person was raised or to traumatic childhood events. Although this view has been largely left behind, scientific data backing a biological basis for gender dysphoria did not really exist until recently. Research of this type is still in its infancy and there is much work to be done; however, current data does support biological differences in transgender individuals.

In an article from New Atlas, Rich Haridy reports that a study published last year in the Journal of Clinical Endocrinology and Metabolism suggests a genetic association with gender dysphoria. A team of researchers from the Hudson Institute of Medical Research in Australia looked at variants in genes associated with sex hormone signalling. The idea behind the study came from a recent pool of evidence indicating an increase in gender dysphoria among individuals exposed to abnormal concentrations of androgenic hormones while in the womb. This led the researchers to hypothesize that variants in certain genes could alter sex hormone signalling and change how the developing brain sexually differentiates, thus causing a person to experience gender dysphoria.

To test this theory, the DNA of 380 transgender women and 344 non-transgender men was analyzed. The researchers looked for repeated DNA sequences in genes associated with sex hormone signalling. Their data identified 12 genetic variants in transgender women associated with the processing of the male and female hormones androgen and estrogen. On the Hudson Institute of Medical Research website, the study’s lead author, Vincent Harley, explains, “these genetic variations could make some males less able to process androgen, causing the brain to develop differently – with areas that are less ‘masculine’ or more ‘feminine’ – which may contribute to gender dysphoria in transgender women”. This theory is supported by recent research finding that the MRIs of individuals with gender dysphoria show brain structures more like those of their desired gender than of the gender assigned to them at birth.

In light of these findings, I can’t help but wonder: what about transgender men? Do they also exhibit differences in genes involved in hormone processing? This is a major limitation of the study. Only transgender women were assessed, so a fully accurate picture can’t be painted with the limited data the study provides; what it does do is open the door for future research. Although the search for a biological cause of gender dysphoria has received backlash from some in the transgender community, Professor Harley sums it up quite eloquently: “While it should not hinge on science to validate people’s individuality and lived experience, these findings may help to reduce discrimination, lend evidence towards improving diagnosis or treatment, promote greater awareness and acceptance and reduce the distress experienced by transgender people in our communities”.

So back to my original question: does current research offer valuable information to parents making decisions about what’s best for their transgender children? The answer is yes and no. Yes, there is scientific data indicating that people experience gender dysphoria because of a biological phenomenon and not some psychological defect. This, in theory, could help a parent make better decisions concerning the therapeutic and medical treatment of their child. However, the data is incomplete; the research, as mentioned, is still in its infancy.

One final thought: even though this data exists, much of the research may not be easily accessible to the majority of the population. Parents looking for information may not have adequate access to research data, and even if they do, they may have difficulty understanding the material. This in turn leaves us with another issue to ponder concerning the accessibility of scientific data. How do we make conscientious decisions concerning our health and the health of our children if we don’t have access to current research?

References

Haridy, Rich. “New Study Probes the Genetic Roots of Transgender Identity.” New Atlas – New Technology & Science News, New Atlas, 4 Oct. 2018, https://newatlas.com/transgender-genetic-study-hudson-institute/56631/.

Foreman, et al. “Genetic Link Between Gender Dysphoria and Sex Hormone Signaling.” OUP Academic, Oxford University Press, 21 Sept. 2018, academic.oup.com/jcem/advance-article-abstract/doi/10.1210/jc.2018-01105/5104458?redirectedFrom=fulltext.

“Written in DNA – Study Reveals Potential Biological Basis for Transgender.” Hudson Institute of Medical Research, hudson.org.au/latest-news/written-in-dna-study-reveals-potential-biological-basis-for-transgender/.

“Transgender Brains Are More like Their Desired Gender from an Early Age.” ScienceDaily, ScienceDaily, 24 May 2018, http://www.sciencedaily.com/releases/2018/05/180524112351.htm.