Your Brain on Blogs
From the Neuroscience Graduate Forum at the University of Southern California
Artwork by Stefanie Walker (Instagram: @stfwlkr)
(By Lily Zou, NGP 2016 Cohort)
One major goal in neuroscience is to decipher the neural code at work in the brains of different species. To achieve this goal, we need to 1) record neural responses during specific behaviors and 2) reactivate the recorded neurons to test whether those responses play a causal role in the behaviors studied. Over the past twenty years, this methodology has propelled the creation of various neural recording and manipulation toolboxes to better establish integral, rather than incidental, links between the neural code and behavior (Fig. 1)1.
Figure 1. Toolboxes for recording and manipulating neurons (cited from Emiliani et al).
With the development and optimization of various probes, expression methods, and optics techniques, researchers have made enormous improvements in reading the neural code. One of the most popular families of probes for investigating neuronal activity is the genetically encoded calcium indicators (GCaMPs). GCaMP62, currently the most widely used variant, enables neuroscientists to image hundreds of neurons simultaneously in awake, behaving animals. Apart from enabling large-scale monitoring of neural activity, GCaMP6 also gives scientists access to genetically defined neurons: for example, placed under a CaMKII promoter, GCaMP6 is expressed only in excitatory neurons. Although GCaMP6 already has a relatively good signal-to-noise ratio (SNR), scientists are continuously developing new GCaMP variants with better SNR and temporal resolution to improve the monitoring of neural activity in vivo. Within the past two years alone, GCaMP7 and GCaMP8 were developed3 (Fig. 2). With machine learning assisting rational design, better GCaMP variants will become available at an astonishing rate compared to previous generations. One day, new GCaMP variants may even be released faster than new iPhones.
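The ΔF/F metric used to compare indicator variants is simple to compute from a raw fluorescence trace. Here is a minimal sketch in Python; the baseline window and the trace values are purely illustrative, not from any real recording, and real pipelines usually estimate the baseline with a rolling percentile rather than a fixed window:

```python
def delta_f_over_f(trace, baseline_frames=10):
    """Convert a raw fluorescence trace into dF/F.

    Baseline fluorescence F0 is estimated here as the mean of the first
    `baseline_frames` samples (a simple, common choice); dF/F is then
    (F - F0) / F0 for each frame.
    """
    f0 = sum(trace[:baseline_frames]) / baseline_frames
    return [(f - f0) / f0 for f in trace]

# Illustrative trace: baseline ~100 a.u., then a decaying calcium transient.
trace = [100.0] * 10 + [150.0, 130.0, 115.0, 105.0, 100.0]
dff = delta_f_over_f(trace)  # peak dF/F = (150 - 100) / 100 = 0.5
```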
So how exactly do GCaMPs help researchers investigate neural activity in vivo? GCaMPs report changes in intracellular calcium, an indirect measure of the action potential, and the spike rate is non-linearly related to the change in calcium concentration. As a result, the temporal resolution of GCaMP is not good enough to catch short-latency spikes, and the GCaMP signal cannot accurately reflect differences in spike rate. These are important caveats to keep in mind when analyzing calcium imaging data. Concurrently, several groups have developed genetically encoded voltage indicators (GEVIs), such as ASAP34, ArcLight5, and QuasAr6, to monitor membrane voltage changes directly. However, due to their low brightness and low SNR, GEVIs are not yet as widely used as GCaMPs. We have faith that new GEVIs with good SNR and brightness will be developed in the near future and become as widely used as GCaMPs.
Figure 2. Comparison of the fluorescence intensity change (ΔF/F) evoked by one action potential (AP) and by three APs across different GCaMP variants (cited from the Janelia GENIE Twitter account).
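The indirectness of the calcium readout can be made concrete with a toy forward model: each spike adds a fast rise that decays slowly, so closely spaced spikes blur into one transient. A sketch, where the time constant and amplitude are illustrative placeholders rather than measured GCaMP kinetics:

```python
import math

def calcium_from_spikes(spike_times, t_end, dt=0.01, tau=1.0, amplitude=1.0):
    """Toy model of a calcium indicator: each spike adds an instantaneous
    jump of `amplitude`, and the signal decays exponentially with time
    constant `tau` (seconds) between spikes."""
    n = int(t_end / dt)
    trace = [0.0] * n
    for i in range(1, n):
        trace[i] = trace[i - 1] * math.exp(-dt / tau)  # exponential decay
        t = i * dt
        if any(abs(t - s) < dt / 2 for s in spike_times):
            trace[i] += amplitude  # spike-triggered rise
    return trace

# Two spikes 50 ms apart produce one merged transient, illustrating why
# short-latency spikes are hard to resolve from the calcium signal alone.
merged = calcium_from_spikes([0.50, 0.55], t_end=2.0, tau=0.5)
```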
As mentioned previously, to decipher the neural code we also need to manipulate the identified neural ensembles that are correlated with behaviors. To control specific neural ensembles, two prerequisites must be met. First, we need tools that allow us to bidirectionally manipulate (excite and inhibit) neurons with millisecond and single-cell resolution. Second, we need access to neural ensembles in three dimensions according to their activity patterns, not just to cell types defined by their genetic information. This is because the neural ensembles that respond to a given stimulus are likely spatially intermingled and might not share the same genetic identities. For example, as shown in Fig. 37, neural ensembles tuned to different stimulus angles are intermingled in space in the somatosensory cortex.
Figure 3. Neurons tuned to different angles, imaged 299 μm below the dura in barrel cortex. Each colored circle indicates a neuron; color indicates the angle to which the neuron is tuned (cited from Kim et al).
Figure 4. Optogenetic tool families (cited from Fenno et al).
In addition to the advancements in GCaMP imaging, the rapid development of optogenetics over the past 15 years has brought us much closer to the dream experiment of “controlling the mind”. Optogenetics leverages the natural optical responses of opsins, membrane-bound light-sensitive proteins of microbial origin that generate light-induced inward or outward currents (Fig. 4)8. These opsins give us the ability to bidirectionally control neurons with cell-type specificity. Numerous labs are still hard at work optimizing these opsins: increasing their sensitivity, increasing their photocurrent, decreasing their jitter[1], confining their subcellular trafficking to somatic or axonal domains, and so on. As with the rapid evolution of GCaMPs, new and improved opsins appear every year or two. But to recreate the neural code, we also need the second prerequisite: controlling neural ensembles in 3D according to activity patterns. To meet this challenge, several groups have taken advantage of the spatial light modulator (SLM) to perform holographic optogenetics9 10. An SLM is composed of a high-density array of liquid-crystal pixels; by changing the orientation of the liquid crystals, it can finely modulate the phase of light. By combining SLMs with different microscopes, such as the two-photon microscope9, scientists can now perform volumetric imaging and optogenetics simultaneously. Thus, with an SLM we can not only generate customized patterns of illumination but, combined with different opsins, also simultaneously photoactivate or inhibit the neurons of a given ensemble in 3D (Fig. 5).
Figure 5. Diagram of holographic 2-photon microscope (cited from Yang et al).
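The phase pattern an SLM displays for a given target is typically computed with an iterative Fourier-transform method such as the classic Gerchberg-Saxton algorithm. Below is a minimal NumPy sketch under simplifying assumptions: the grid size and the two target spots are invented for illustration, and real holographic optogenetics systems add spot weighting, 3D propagation, and temporal focusing on top of this:

```python
import numpy as np

def gerchberg_saxton(target_intensity, n_iter=50, seed=0):
    """Find a phase-only SLM pattern whose far-field (focal-plane)
    intensity approximates `target_intensity`.

    The loop alternates between the SLM plane, where the amplitude is
    fixed at 1 (a phase-only device), and the focal plane, where the
    amplitude is replaced by the target; only the phase is kept at each step.
    """
    target_amp = np.sqrt(target_intensity)
    rng = np.random.default_rng(seed)
    phase = rng.uniform(0, 2 * np.pi, target_amp.shape)
    for _ in range(n_iter):
        slm_field = np.exp(1j * phase)                     # unit amplitude, free phase
        focal = np.fft.fft2(slm_field)                     # propagate to focal plane
        focal = target_amp * np.exp(1j * np.angle(focal))  # impose target amplitude
        phase = np.angle(np.fft.ifft2(focal))              # back-propagate, keep phase
    return phase

# Target: two bright spots, standing in for two neurons to photoactivate.
target = np.zeros((32, 32))
target[8, 8] = target[20, 24] = 1.0
mask = gerchberg_saxton(target)
achieved = np.abs(np.fft.fft2(np.exp(1j * mask))) ** 2  # light lands on the two spots
```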
Figure 6. Comparison between what we know and what we don’t know.
Even with all these tools for recording and manipulating neural activity, we are still only scratching the surface of the intricacies of the neural code. After all, what we currently know is only the area within a circle, while what we don’t know is the infinite blank space outside of that circle (Fig. 6). However, with the continued improvement of these techniques, we are slowly expanding our circle. Compared to the space outside the circle, the expansion might never seem significant. But as Einstein put it, “The most beautiful experience we can have is the mysterious. It is the fundamental emotion that stands at the cradle of true art and true science”.
Footnotes
[1] Jitters: the variation of neural response latencies.
References
1. Emiliani, Valentina, et al. "All-optical interrogation of neural circuits." Journal of Neuroscience 35.41 (2015): 13917-13926.
2. Chen, Tsai-Wen, et al. "Ultrasensitive fluorescent proteins for imaging neuronal activity." Nature 499.7458 (2013): 295-300.
3. Zhang, Y., et al. "jGCaMP8 fast genetically encoded calcium indicators." (2020). doi:10.25378/janelia.13148243.
4. Villette, Vincent, et al. "Ultrafast two-photon imaging of a high-gain voltage indicator in awake behaving mice." Cell 179.7 (2019): 1590-1608.
5. Jin, Lei, et al. "Single action potentials and subthreshold electrical events imaged in neurons with a fluorescent protein voltage probe." Neuron 75.5 (2012): 779-785.
6. Hochbaum, Daniel R., et al. "All-optical electrophysiology in mammalian neurons using engineered microbial rhodopsins." Nature methods 11.8 (2014): 825-833.
7. Kim, Jinho, et al. "Behavioral and neural bases of tactile shape discrimination learning in head-fixed mice." Neuron 108.5 (2020): 953-967.
8. Fenno, Lief, Ofer Yizhar, and Karl Deisseroth. "The development and application of optogenetics." Annual review of neuroscience 34 (2011).
9. Yang, Weijian, et al. "Simultaneous two-photon imaging and two-photon optogenetics of cortical circuits in three dimensions." Elife 7 (2018): e32671.
10. Pégard, Nicolas C., et al. "Three-dimensional scanless holographic optogenetics with temporal focusing (3D-SHOT)." Nature communications 8.1 (2017): 1-14.
Lily is a fifth-year Neuroscience PhD student in the Andrew Hires lab. Two words to summarize her research interest — neuromodulators enthusiast. Apart from doing research, she also loves traveling around the globe, going to live concerts, and exploring different restaurants in LA. She is also a big fan of Arsenal Football Club, COYG! Follow her on Twitter @LilyZou1.
Hey, nobody said neuroscientists were good at digital art okay?
(By Rita Barakat , NGP 2016 Cohort)
In late November of 2020, the pharmaceutical companies Pfizer (in collaboration with BioNTech) and Moderna announced early efficacy and safety results from their respective coronavirus vaccine candidate clinical trials. This news was resoundingly welcomed by both the general public and the scientific community at the height of a truly devastating pandemic season. Both vaccines were developed using novel messenger RNA (mRNA) technology, and both were reported by various media sources to be 90-95% effective at preventing symptomatic COVID-19, significantly higher than the 50% benchmark many public health officials cited as necessary for a truly effective vaccine.
Figure 1. Original press release (excerpts) from Moderna (top) and Pfizer-BioNTech (bottom). These announcements preceded any formal data releases/ peer-reviewed publications, which some scientists have criticized as undermining the scientific review process.
However, notably absent from the initial presentation of these results were the preliminary data from the clinical trials themselves. Instead, in mid-to-late November, Pfizer-BioNTech and Moderna issued press releases (abridged copies in Figure 1) giving the general public and media only basic information about each vaccine candidate’s efficacy in a small subset of the full study populations. This lack of transparency surrounding the raw safety and efficacy data, while not necessarily a top concern for the general public at the moment, remains troubling, especially for scientists trying to understand the full picture of how these mRNA-based vaccines work in practice and the potential long-term ramifications of an expedited approval process. In addition, the absence of any substantial peer-reviewed studies from these clinical trials raises questions about the statistical robustness of the early results, particularly regarding how effective the vaccines will be in the populations excluded from the trials [1] (see Table 1).
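The headline efficacy figure itself comes from a simple comparison of attack rates between the vaccinated and placebo arms. Here is a sketch of that point estimate; the counts below are approximately those later reported by Polack et al. for the Pfizer-BioNTech trial and are used only for illustration (the trial's actual analysis also produced credible intervals, which this sketch omits):

```python
def vaccine_efficacy(cases_vaccinated, n_vaccinated, cases_placebo, n_placebo):
    """Point estimate of vaccine efficacy:
    VE = 1 - (attack rate among vaccinated) / (attack rate among placebo).

    A VE of 0.95 means the vaccinated arm had 95% fewer cases per
    participant than the placebo arm over the same follow-up period.
    """
    attack_vaccinated = cases_vaccinated / n_vaccinated
    attack_placebo = cases_placebo / n_placebo
    return 1.0 - attack_vaccinated / attack_placebo

# Roughly the Pfizer-BioNTech phase 3 split: ~8 cases among ~18,198
# vaccinated participants vs ~162 among ~18,325 placebo recipients.
ve = vaccine_efficacy(8, 18198, 162, 18325)  # ~0.95
```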
Finally, we in the scientific community must make an effort to anticipate and account for the variable of time, specifically with respect to any foreseeable (and perhaps unforeseeable) adverse effects that may result from an accelerated vaccine development and approval process (Operation Warp Speed in the United States). We must be prepared to deal with the (granted, highly unlikely) consequences should any significant side effects emerge that were not detected during the shortened window in which study participants were evaluated. This concern about unforeseen side effects is, however, largely mitigated by history: when vaccine candidates have presented significant side effects, they have typically done so during the acute stages of testing (i.e., within weeks or months of an ongoing clinical trial, rather than years later).
“Next Generation” Vaccines as compared to the “Traditional Methods”
The Pfizer-BioNTech and Moderna vaccines both rely on mRNA as the active biological agent, a departure from the more traditional approaches of using inactivated or live-attenuated virus (or other viral components) to confer immunity. In both vaccines, the engineered mRNA (named mRNA-1273 by Moderna and BNT162b2 by Pfizer-BioNTech) codes for the spike protein of the SARS-CoV-2 virus; this protein is critical for cell membrane fusion and subsequent infection of host cells (Huang et al., 2020).
As the body’s natural “template” for creating proteins, mRNA is transcribed from a precursor genetic code (usually DNA) and is then translated into a protein, which folds into its active conformation and performs a specific function within or around the cell. An mRNA-based vaccine thus interacts with the body’s cells in a predictable fashion to create copies of the SARS-CoV-2 spike protein (but, importantly, not the entire functioning virus). This protein provides the body with just enough “information” to generate a robust immune response capable of creating long-term immunity (via the “adaptive” arm of the human immune system), without presenting any risk of infection or disease symptomatology to the vaccine recipient.
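The “template” role of mRNA can be illustrated with the genetic code itself: the ribosome reads the message three bases at a time and emits one amino acid per codon until it hits a stop codon. A toy translator follows; the codon table here is a deliberately tiny subset of the real 64-entry table, and the input sequence is invented for illustration:

```python
# Minimal subset of the standard genetic code (single-letter amino acids).
CODON_TABLE = {
    "AUG": "M",   # methionine; also the start codon
    "UUU": "F",   # phenylalanine
    "GGC": "G",   # glycine
    "UAA": None,  # stop codon: no amino acid, translation ends
}

def translate(mrna):
    """Translate an mRNA string codon by codon until a stop codon."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        amino_acid = CODON_TABLE[mrna[i:i + 3]]
        if amino_acid is None:  # stop codon releases the peptide
            break
        protein.append(amino_acid)
    return "".join(protein)

peptide = translate("AUGUUUGGCUAA")  # AUG-UUU-GGC-UAA -> "MFG"
```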
Figure 2. Image from the Mahalingam and Taylor piece in The Conversation illustrating the five major mechanisms used in past and current vaccines. Mechanism 1 (top): vaccines that rely on genetic material, such as DNA or RNA, to generate an immune response. Mechanism 2 (second from top): vaccines that deliver a component of the virus, resulting in production of a key protein (for SARS-CoV-2, the spike protein) to generate an immune response. Mechanism 3 (middle): vaccines that rely on an inactivated form of the entire virus to confer immunity. Mechanism 4 (second from bottom): vaccines that rely on a live-attenuated form of the entire virus to confer immunity. Mechanism 5 (bottom): vaccines that rely on direct injection of the key protein (for SARS-CoV-2, the spike protein or a subunit of it) to generate an immune response. It is worth noting that for SARS-CoV-2 in particular, the spike protein appears to be a critical component of conferring sufficient immunity to the virus as a whole, and it remains unclear which (if any) of the above methods will prove most effective in providing long-term, population-wide immunity over time.
One of the strongest benefits of DNA- and/or RNA-based vaccines, dubbed “next generation vaccines” by many experts in the field, is that they can be quickly engineered and manufactured thanks to modern genetic sequencing technologies. Considering that the SARS-CoV-2 viral genome was sequenced and released to the scientific community in January of 2020 [2], it is perhaps no surprise that an RNA-based vaccine was developed and ready for Phase 1 clinical trials within a matter of months.
Suresh Mahalingam and Adam Taylor of The Conversation draw up a comprehensive comparison of these next generation vaccines and their more traditional counterparts and argue that having a diversity of vaccine mechanisms will be key for “ensuring vaccination is safe and effective for all members of society” (Mahalingam, S., & Taylor, A., 2020). This message is encouraging for vaccine development, and positive news for the general public, as it implies that multiple different vaccine mechanisms (such as those illustrated in Figure 2) will be effective at conferring long-term immunity and protection of the population in which they are deployed.
The “who, when and how” of the Pfizer and Moderna Clinical Trials
In addition to the novelty of the genome-based vaccine technology, more questions about long-term efficacy of the Pfizer-BioNTech and Moderna vaccines arise from the limited study populations, constrained timeline for testing, and methods for evaluating “effectiveness” of each vaccine candidate.
Two sides of the same coin when it comes to selecting a study population
Both vaccine clinical trials included a lengthy set of strict inclusion/exclusion criteria, which, while by no means abnormal for clinical trials, is nevertheless troubling given the short-circuited approval process that followed each trial. Notable exclusions from both trials include children under the age of 12, individuals over the age of 85, women who are pregnant or planning to become pregnant during the course of the trial [3], individuals who are immunocompromised, and individuals with significant health complications and/or pre-existing conditions.
Table 1. Non-exhaustive summary table of the participants and exclusion criteria for each of the mRNA vaccine candidate clinical trials. See references to NIH clinical trial webpages for more detailed information.
The primary argument for maintaining such restrictive participation criteria is a valid one: researchers must be able to disentangle the outcome measures in the experimental and control groups from other potentially confounding variables (such as pre-existing conditions or a compromised immune system). This is necessary to ensure that the trial’s outcomes can reliably be attributed to the study intervention (administering a vaccine candidate). However, given that the approval process shifted from a standard timeline to an emergency-use timeline in early December 2020, one could argue that such a non-representative study population hinders our ability, as scientists and as a society, to determine whether these vaccine candidates will truly be safe and effective in individuals who would not have qualified for the trials, but who will undoubtedly seek the approved vaccines in large numbers across the country and the world.
From two years follow-up to two months follow-up
Another significant challenge, which has yet to be addressed, pertains to the “warp-speed” timeline that investigators and pharmaceutical companies have followed when it comes to the development and testing of vaccine candidates. Under normal (non-pandemic) circumstances, a vaccine clinical trial could take several years to complete due to the extended follow-up period required to ensure long-term efficacy and safety of the vaccine in question. In the case of the two mRNA vaccine candidates from Pfizer-BioNTech and Moderna, the subject follow-up has taken place over the course of the last three to four months, providing researchers with a limited number of time points to perform pre- and post-interventional analyses.
This raises the (again, unlikely) possibility that adverse effects which have yet to be detected in either of these two clinical trials may be discovered after an extended period of follow-up. At this point, assuming the current proposed timeline for distribution is followed, several million Americans and even more people in other countries will have already received one of these vaccines as a result of emergency-use authorization. While the urgency for effective public health intervention and relief supersedes many of these concerns, nevertheless, it is important that the scientific and public health community be prepared for the possible outcomes of a partially complete post-intervention analysis from one or more of these vaccine clinical trials. In addition, we must all be cognizant of the ethical ramifications should it be discovered that the vaccine study and approval process was fast-tracked at the expense of public health and public safety.
Do these concerns mean vaccine approval should be delayed? Not necessarily.
A fundamental tenet of public health practice centers around the idea that many of the decisions individuals make have broader implications for the other members of their community. A specific example that bears a close resemblance to the current state of controversy surrounding mask-wearing in the United States is the increased restrictions over the last two decades on smoking in public spaces.
While the decision to smoke seems at first glance a personal one, the consequences from second-hand smoke lingering in the atmosphere, combined with the lack of consideration for or approval by other people nearby, means that this action does not simply concern one person. For this reason, one can make the argument that the behavior of smoking in public spaces requires public health intervention and regulation for the safety of the community at large. Similarly, choosing to not wear a mask is not only a decision that concerns the individual’s health and safety, but it also poses a threat to the other people that individual may encounter, and it is the latter that necessitates enforcement of mask-wearing in public spaces during the current pandemic climate.
Vaccines fall into a similar, yet admittedly much murkier area of public health practice. Widespread vaccination has been shown to lead to widespread immunity, illustrating how an individual’s decision to be vaccinated not only concerns that individual’s health, but also the ability of that individual to spread disease and impact the health of others in their community who may or may not have been vaccinated for the same virus/disease.
One critical caveat of the experimental design of the SARS-CoV-2 mRNA vaccine trials is that they included no measures to determine whether the vaccine candidates prevent continued spread of the novel coronavirus (i.e., whether vaccinated individuals can still transmit the virus to others). However, public health experts have emphasized that even if these vaccines do not prevent transmission, inoculating a sufficiently high percentage of the population would effectively limit the number of viable hosts remaining. This would result in a gradual disappearance of the virus over time (and thus “herd immunity” would be achieved in the population).
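The herd-immunity logic has a standard back-of-envelope form: if each case infects R0 others in a fully susceptible population, immunizing a fraction p of people reduces that to R0·(1 − p), and spread dies out once this falls below 1. A sketch, with R0 values chosen purely for illustration (estimates for SARS-CoV-2 vary):

```python
def herd_immunity_threshold(r0):
    """Fraction of the population that must be immune so that each
    infection causes, on average, fewer than one new infection:
    p > 1 - 1/R0.

    This simple SIR-style estimate assumes immunity blocks transmission
    and ignores heterogeneity in contact patterns.
    """
    return 1.0 - 1.0 / r0

# e.g. if R0 = 2.5, roughly 60% of the population must be immune.
threshold = herd_immunity_threshold(2.5)
```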
Under normal circumstances, scientists and public health researchers would have the luxury of time to critically evaluate and if necessary, reassess the early results from these very promising vaccine candidate trials. However, at the time of this writing, the pandemic has reached a fever pitch and shows no signs of slowing down, leaving hospitals across the country and the world crippled, unable to fully attend to and care for the increasing number of severely ill and dying patients flooding intensive care units. For this reason alone, the moral imperative to provide some kind of relief, even if this relief comes in the form of a technologically-novel, expedited and emergency use-authorized vaccine, effectively squashes the concerns about early data and post-interventional statistics.
And yet, we as human beings have proven ourselves capable of multitasking (to some degree). So here I argue that the scientists, clinicians, and public health officials responsible for disseminating these vaccine candidates should do just that: focus their efforts on providing the general public with desperately needed relief while also preparing for the possible (though improbable) long-term health consequences of this unprecedented, warp-speed vaccine approval.
Another ethical dilemma: can the placebo group receive the vaccine once it’s approved?
In addition to the potential issues resulting from an expedited clinical trial process, another ethical dilemma researchers face is the question of whether clinical trial subjects that received a placebo should be inoculated with the approved candidate vaccines. The problem lies in the inherent conflicts that often exist between scientific interest and human morality: many of the subjects in the placebo group will undoubtedly seek vaccination, but by doing so, their participation in the ongoing clinical trial is voided, preventing researchers from collecting more data for the purposes of group comparison.
Dr. Anthony Fauci, director of the National Institute of Allergy and Infectious Disease (NIAID), proposed the idea of continuing to collect data from placebo group subjects up until the point they receive the actual vaccine candidate, as an attempt to compromise between the scientific integrity of the clinical trials and the public health obligation to vaccinate as many people as possible to ensure population-wide immunity (Weintraub, 2020).
Delicate balance between the research status quo and human morality (but morality wins)
It is important to be explicitly clear about the intent of this piece in the current pandemic climate: all of the questions and concerns raised regarding these vaccine candidates should in no way minimize or deter the ongoing vaccination effort in the United States and around the world. In fact, our moral obligation to one another as human beings requires that this effort be undertaken regardless of remaining questions and concerns surrounding these (and possibly other) vaccine candidates.
With that said, the issues of data transparency and communication, as well as ethical practices in disseminating the vaccines, must remain a top priority for all those involved in the unprecedented push to vaccinate a large enough subset of the population to achieve herd immunity. There is also growing optimism about large-scale public health education efforts that aim, as this piece does, to provide the general public with clear and reliable information about the available vaccines, as a means of promoting widespread vaccination.
Footnotes
[1] As of December 10th, 2020, Pfizer-BioNTech published the safety and efficacy results from their vaccine phase 3 clinical trials thus far (though the trial remains ongoing as of publication of this blog post). See reference #5 for more information.
[2] The virus is known to have been circulating before the pandemic became widespread.
[3] Individuals who are not participating in an approved contraceptive regimen outlined by the study protocol.
References
1. A Study to Evaluate Efficacy, Safety, and Immunogenicity of mRNA-1273 Vaccine in Adults Aged 18 Years and Older to Prevent COVID-19. https://clinicaltrials.gov/ct2/show/NCT04470427
2. Huang, Y., Yang, C., Xu, X., Xu, W., & Liu, S. (2020, August 3). Structural and functional properties of SARS-CoV-2 spike protein: Potential antiviral drug development for COVID-19. https://www.nature.com/articles/s41401-020-0485-4
3. Mahalingam, S., & Taylor, A. (2020, December 2). From adenoviruses to RNA: The pros and cons of different COVID vaccine technologies. https://theconversation.com/from-adenoviruses-to-rna-the-pros-and-cons-of-different-covid-vaccine-technologies-145454
4. Polack, F. P., et al. (2020, December 31). Safety and Efficacy of the BNT162b2 mRNA COVID-19 Vaccine. https://www.nejm.org/doi/full/10.1056/NEJMoa2034577
5. Study to Describe the Safety, Tolerability, Immunogenicity, and Efficacy of RNA Vaccine Candidates Against COVID-19 in Healthy Individuals. https://clinicaltrials.gov/ct2/show/NCT04368728
6. Weintraub, K. (2020, December 4). Continuing COVID-19 vaccine trials may put some volunteers at unnecessary risk. Is that ethical? https://www.usatoday.com/story/news/health/2020/12/04/vaccine-ethics-does-continuing-covid-19-trials-put-volunteers-risk/6473436002/
Rita Barakat is a fifth-year Ph.D. Candidate and NSF Graduate Research Fellow in the Neuroscience Graduate Program (NGP) at the University of Southern California. Her research focuses on understanding the behavioral and neuroanatomical differences between children with dyslexia and their typical-reader counterparts. In addition to her disciplinary research, Rita has also applied her theoretical interests in language and learning to teaching and program administration for the Young Scientists Program (YSP) and Neighborhood Academic Initiative (NAI), two educational partnerships that provide supplementary STEM education to K-12 underrepresented minority students (URMs) in the Los Angeles Unified School District (LAUSD). Rita has been a contributing writer and editor for the NGP’s “Brain on Blogs” since 2019.
Organization Statement
The Neuroscience Graduate Program (NGP) at the University of Southern California is an interdisciplinary, research-based doctoral (Ph.D.) program. Graduate students and Ph.D. Candidates in the program come from a variety of academic and research backgrounds, and are encouraged to explore both “traditional” and “non-traditional” post-graduate careers that promote scientific inquiry and excellence. Alumni have continued on to careers in academic research and teaching, clinical and/or government research, industry and development, education, policy, and science communication.
Disclosure Statement
The author (Rita Barakat) and the NGP have no financial or other conflicts of interest in reporting on the aforementioned vaccine candidates. The author and the NGP neither endorse nor discourage vaccination with either of the aforementioned vaccine candidates, or any other candidates not explicitly mentioned.
Image from: https://www.istockphoto.com/
(By Zachary Murdock , NGP 2017 Cohort)
From wildfires and impeachment to protests and pandemics, the beginning of 2020 has taken the “quantity over quality” approach to global events. While there are many sources that describe all of the current protests [1], pandemic projections [2], and attacks on civil rights [3], I’m afraid I have yet another rising issue that we, now more than ever, need to address: you, or more specifically, your face.
The privacy of your online persona -- from social media to scientific publications -- has always been a nightmare to understand. Some websites and services can instantly claim your content as their own to be used however they see fit [4]. While there’s minimal harm in the exchange of cat memes, major issues begin to develop when you learn this ownership also applies to photos of you, from your silhouette to your face. Datasets of millions of images, likely including yours, have been built using publicly posted images for use in image recognition tasks [5].
Recent developments in computer vision have led to the rise of highly specialized tools for a variety of visual tasks. One such task is “facial recognition”: the ability of a computer to scan, store, and recognize human faces in order to identify people. Tools like Apple’s Face ID [6] and Facebook’s predictive labeling use advanced algorithms designed to detect, analyze, and identify your face from several key physical attributes [7]. This technology powers simple systems like Snapchat’s facial filters, but it has also evolved to the point that it can streamline security checkpoints for authorized personnel at major events such as the 2021 Summer Olympics in Tokyo [8].
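Under the hood, most modern systems reduce a face image to a numeric “embedding” vector and identify people by comparing vectors. A toy sketch of just the comparison step in pure Python; the 4-dimensional vectors and the threshold below are invented for illustration, whereas real embeddings come from a trained deep network and have hundreds of dimensions:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors
    (1.0 = same direction, 0.0 = orthogonal)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def is_same_person(query, enrolled, threshold=0.8):
    """Declare a match when the query embedding is close enough to an
    enrolled one. The threshold trades false matches against misses."""
    return cosine_similarity(query, enrolled) >= threshold

# Invented embeddings: a new photo vs. an enrolled photo of the same person.
match = is_same_person([0.9, 0.1, 0.4, 0.2], [0.8, 0.2, 0.5, 0.1])
```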
While these applications sound relatively harmless and convenient, they also open the door to serious personal violations. As these algorithms continue to improve, apps and search engines are being developed that let someone identify an individual from a photo in real time. That means a stranger could take a photo of you and, within moments, be given all your social media links, where you work, and more. The privacy issues rapidly escalate as one moves away from front-line consumers. The Chinese government is in the process of installing 626 million surveillance cameras to monitor its citizens and enforce its social credit system [9]. Effectively, individuals can now be tracked and monitored continuously without their knowledge. Similar systems are being planned in other countries in the region, including India and Russia. The United States has also seen a dramatic increase in the use of this technology, through security contracts and private companies. Leaders from all of these countries claim that these systems are being developed to “help catch criminals, find missing people, and identify dead bodies.” Or, as Clearview.AI [10] puts it, “Computer vision for a safer world.” (Feeling the Orwellian "Big Brother" tones yet [11]?)
In reality, this technology is quickly becoming a method for powerful entities to identify and target individuals. As the Hong Kong protests continue, China has significantly up-regulated its facial recognition systems to identify and “prosecute” protesters [12]; however, it has not been used on the state’s police force, even during the well-documented violent outbreaks. The United States has also recently seen a sudden surge in facial recognition being used by law enforcement: specifically in identifying, charging, and prosecuting members of the recent BLM protests [13]. Yet, conveniently, the states aren’t using it to identify officers who initiate violence or abuse. Other state-run services, such as ICE and the NSA, are also looking to incorporate this technology to "better investigate persons of interest." Fortunately, many of the major contributors to this realm of computer vision have taken a step back to prevent this abuse of their systems [14, 15]. Furthermore, some jurisdictions are also beginning to institute stricter privacy regulations [16, 17]; however, how long these regulations will last remains to be seen.
There are a few things that make these last developments worrying. First, these systems are effectively weaponizing recent scientific breakthroughs in the realms of computer science and visual neuroscience with minimal turnaround time. Second, the rapid up-scaling of recognition systems will only increase as their cost continues to drop along with the computational requirements. Together, these shifts should encourage more insight into the ethical and societal impacts of research. And third, all of these developments are coming from a small specialized area of computer vision research: imagine the great and terrible things to come as more discoveries are integrated into our technologies.
All of this information can leave one wondering "what can I do now?" Unlike other current events, donations and individual contributions won't help. Only through vocalizing your concerns -- to friends, to co-workers, to your elected representatives -- can public interest and subsequent regulations be put in place to prevent more harm from being done. And in case you don't feel confident explaining it, you can leave that to our friend, John Oliver [18]:
Zach is a 3rd Year NGP Student as a part of the iLab under Dr. Laurent Itti. His current work focuses on translating biological attention mechanisms into convolutional neural networks. Follow him on Twitter @zwmurdock, or connect with him on LinkedIn here!
(By Kristina Shkirkova, NGP 2017 Cohort)
A new silent killer emerged unrecognized and quickly roamed around the world with no regard for continents, nations, or borders. Although at different paces and with different levels of preparedness, we try to stay united in fighting the coronavirus as a common enemy.
We are still far from beating it, but efforts of social distancing to prevent the spread of coronavirus have highlighted a complicated relationship with another silent killer: air pollution.
The number of deaths related to poor air quality is estimated to be 7 million annually worldwide[1] and about 100,000 in the United States[2]. This number is not a projection; it is a reality, one that has been part of our lives for decades.
Every time we take a breath, we are inhaling toxic pollutants, and our lungs are the first to suffer. Fine and ultra-fine particulate matter (PM) are major constituents of urban air pollution. Numerous studies have shown that PM negatively affects practically every organ in our body[3,4]. Based on data from a new Harvard study released ahead of peer review, there is an increased risk of death in COVID-19 patients who were exposed to moderate and high levels of PM2.5[5]. PM causes lung and systemic inflammation. The smallest particles of PM have been shown to penetrate into the bloodstream, threatening all body organs, including the brain[6].
There is an extensive body of evidence regarding the potential harmful effects of air pollutants on our nervous system[7]. In adults, evidence shows that long-term exposure to PM causes cognitive decline[8]. Animal models suggest an increase in white matter damage with sustained exposure to high levels of PM[9]. Furthermore, vulnerable populations with concurrent or preexisting cardiovascular conditions are more susceptible to air pollution damage due to blood-brain barrier abnormalities[10].
Global efforts of social isolation resulted in a significant drop in air pollution levels across the world[11]. Although some places are more polluted than others, it’s important to realize that, like the spread of coronavirus, the issue of air pollution is global and no individual, group, or country will be able to solve it alone.
Kristina Shkirkova is a 3rd Year NGP Student at Dr. William Mack's neurosurgery laboratory studying vascular inflammation, air pollution, and stroke. Follow her on Twitter: @kshkirkova!
(By Dakarai McCoy, NGP 2017 Cohort)
*For those who wish to effectively navigate time -- skip to the end and read the bold text for the algorithm.
Mastering time will be your best asset. It can make all the difference when trying to submit a journal article, or your latest grant proposal, on-time. The ability to manage time is a key component in building your foundation as an independent researcher.
Managing time comes in different shapes: from breaking down big goals into smaller, more doable chunks, to planning vacations by researching hotel and airfare prices months in advance. Most successful time lords take pride in designing their schedule, closely managing how they allocate their time and often turning down opportunities that lead to distractions down the road. Just take a look at any person you admire, or any person on the Time 100 most influential people list. Look them up. Find out how they manage and organize their schedules and use that as an early model. But in case you don’t have time to look up how the rich and famous manage their calendars, I would like to share some of the time hacks that have helped me so far. Keep in mind that what works for me may not work for you. For example, those of us who are lab rats have schedules determined by the length and stages of our experiments, while coders have schedules depending on the flow of programming and bug fixing, or the availability of collected data from collaborators. In times like these, when all of us feel the impact of the coronavirus pandemic, it may be helpful to have some tips handy when it comes to managing our time, and hence, our sanity.
One of the things I find most helpful is identifying the type of environment that gets me in the zone. Some people need a little background noise, a cup of tea/coffee, a conversation with a friend or colleague, a clean environment, or even complete silence. I try to avoid reading emails, blogs, scrolling on Instagram, Pinterest, Facebook, or anything that puts images in my head that are difficult to forget. When I do fall prey to social media, it usually leaves my mind distracted - processing all the passive images recently viewed. Instead of allocating precious passive cognitive abilities to visual candy, through years of ninja training, I have learned to warm up my brain with visual cues that facilitate an optimal context for productivity. Activities such as reviewing lab notes, code, experimental results, slides, or even the latest papers of the field usually do the trick. Take some time to rework why the project you work on is important to the world, to your field, and to your own research. Can you come up with a new perspective that gives you more clarity and places your research in a broader context? In moments like these, I like to review the observations that have led to my hypothesis and ask myself if the current test / experiment is the best fit to answering my particular research question. If this approach doesn’t work well for you, think of something that evokes the same feeling as when you’re ‘sciencing’, or when a thought triggers your curiosity. For example, sometimes physical exercise, or doodling, will put me in the right headspace. Other times, origami - a physically integrated mental exercise - helps warm up my brain. But what works for you may end up clashing with schedules and deadlines imposed externally. How does one deal with such curveballs?
Circa 2013, I attended the “Guaranteed 4.0” workshop led by Donna O. Johnson, at the fall regional conference for the National Society for Black Engineers. The workshop used the latest research in cognitive science to provide attendees with techniques that make learning easier both in and out of academia. The “Time Management-Plan for Success” seminar provided me with the necessary tools to begin forming a personalized and effective schedule. Motivated by what I learned at the 4.0 workshop, I decided to buy Dr. Johnson’s “Guaranteed 4.0 Workbook”. Through its application, I continued to hone my time management skills by sticking to “my plan”. After changing my study and testing habits, I learned how my brain converted information from short to long term memory most efficiently. With this tool, I began to improve my ability to access stored information as it related to the context at hand. I felt I was finally in the zone. And then, I got into graduate school.
In graduate school, I quickly learned that syllabi didn’t exist for day-to-day research, and lab-related deadlines were more fluid. This dramatic shift made it difficult to apply Dr. Johnson’s “Guaranteed 4.0” learning process. As an undergraduate, or master’s, student a course’s syllabus is typically given to you on the first day of the course. This syllabus is a contract between you and the professor, and usually includes weekly homework assignments and readings for each class. Since the schedule, or scope of work, has already been predetermined, applying the “Guaranteed 4.0” method is straightforward. But as a graduate student, there is no syllabus! There are only program deadlines designed to ensure timely progress. The day-to-day, and weekly planning is left up to the individual. So, what does one do when everything seems to be up in the air? Well, here are some things that have worked for me as a PhD student, given my personal struggles on the road to mastering my time.
Work in 20-minute intervals [the Pomodoro Technique: a famous time-management method developed by Francesco Cirillo in the late 1980s; pomodoro is Italian for tomato, after the tomato-shaped kitchen timer Cirillo used] with a 5-minute break. Repeat 4 times and take a 20-minute break before beginning the next cycle. Do at least 3 per day. The goal here is to read, code, and write every day. Task variety can help you stay engaged and keep a sharp mind.
Set your objectives for the day. List about 5-8 tasks. Use the S.M.A.R.T. [Specific, Measurable, Achievable, Relevant, Time-bound] goals framework to aid in setting your tasks. Write tasks that are achievable within the time interval you choose.
Choose a task, set a timer, and focus on it for the next Pomodoro interval.
Work on the task at hand with no distractions.
Stop when the time is up and record your progress. Get up and walk, drink some water, get some tea, and get your blood flowing. If you haven’t completed the task, ask yourself why. Is it because you didn’t stay focused, was the scope too large, or was it that the goal wasn’t specific enough? Regardless of the reason, find something that shows progress, and if necessary, adjust the S.M.A.R.T. goal.
Repeat steps 2-5, three more times. After the fourth time, take a 20-minute rest. This is when you can engage in emails, or other admin work. It is also a good time to supplement your task objectives. For example, if you’re coding, perhaps read a book. If you’re doing a literature review, write down your thoughts. If you’re doing an analysis, watch a tutorial. This is also a good time to do some peripheral reading, like catching up on the latest news, or talking to a friend or family member. If there are lab mates around, maybe engage in a conversation about their projects.
Have a tangible output at the end of every 20-minute Pomodoro. Having a tangible output will make it much easier to track progress toward long term goals.
A suggested pattern for the 4 rounds of Pomodoro cycles could be to read and review recent literature. The next cycle would be to formulate and test hypotheses based on a new set of observations. The third cycle could be completing the tests and interpreting the results. The last cycle could be a review of all the above.
At the end of the day, assess how productive you have been based on task completion. If you haven’t finished more than a few of the assigned tasks, consider thinking more deeply about the steps necessary to complete each task. If you have accomplished all of your tasks, be more risk taking. Rinse and repeat the next day.
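For the programmatically inclined, the routine above can be sketched as a short script. This is a hypothetical sketch of the post's scheme (20-minute work intervals, 5-minute breaks, a 20-minute rest to close each cycle of four), not part of the Guaranteed 4.0 material; the function and task names are illustrative.

```python
import time

def build_cycle(rounds=4, work=20, short_break=5, long_break=20):
    """Return one Pomodoro cycle as a list of (activity, minutes) steps:
    four work intervals, each followed by a short break, with the final
    break extended into the 20-minute rest before the next cycle."""
    schedule = []
    for _ in range(rounds):
        schedule.append(("work", work))
        schedule.append(("short break", short_break))
    schedule[-1] = ("long break", long_break)  # 4th break becomes the long rest
    return schedule

def run_cycle(tasks, sleep=time.sleep):
    """Walk through one cycle, pairing each work interval with the next
    S.M.A.R.T. task. `sleep` is injectable so the loop can be exercised
    without actually waiting."""
    task_iter = iter(tasks)
    for activity, minutes in build_cycle():
        if activity == "work":
            # Steps 2-4: pick a task, set the timer, focus with no distractions.
            print(f"Focus {minutes} min on: {next(task_iter, 'review progress')}")
        else:
            # Step 5: stop, record progress, move around, adjust the goal.
            print(f"{minutes}-minute {activity}: stand up, hydrate, log progress.")
        sleep(minutes * 60)
```

Calling `run_cycle(["skim two papers", "draft methods section", "debug analysis", "summarize results"])` would walk through one full cycle; the script is, of course, only scaffolding for the habit, not a substitute for it.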
Again, this method is for all the brave PhD students who must first develop their own semester syllabus and stick to it just as they would during undergraduate studies. This is an approach that doesn’t use a syllabus, or a rigid plan, but gives immediate feedback necessary to know the value of your time. I hope you will find some of this helpful as you navigate your own journey towards the completion of your PhD. Time for me to take my 20-minute “tomato” break.
Dakarai McCoy is a 3rd year student in NGP studying longitudinal structural and functional brain development in adolescents. His research interests lie at the intersection of psychology, neuroscience and sociology to understand how psychological mechanisms and social contexts affect brain development.
Stock Image from: DepositPhotos.com
(By Alexander Markowitz, NGP 2014 Cohort)
(Originally Posted in February 2020) If you’re a graduate student interested in staying in academia, then the road forward is clearly laid out for you from the start of graduate school: (1) find a lab; (2) pick a project; (3) gather funding; and (4) promote and publish your project. If you do all four of these during your time in grad school, you will inevitably be in line for a solid post-doc offer following grad school, which will likely be a launch pad for a career as a research professor.
However, if you’re interested in a career outside of academic research, then things are not so clearly laid out. Navigating a job search outside of academia is challenging because the resources most available to us are academia-focused and are in line with the stated objective of our graduate program to create the next generation of academic scientists.
After deciding that a career in academic or industry research was not for me, I started to dip my toe in the water of the industry job market. What I’ve learned in this initial stage is that hiring a fresh PhD out of school is a risky bet for a company. Breaking down the stigma of what a PhD represents and building up connections into industries are the keys to making these hiring bets less risky.
As a neuroscience PhD, our perceived skill set ranges from the areas of computer science, mechanical engineering, electrical engineering, microbiology, sociology, psychology, physiology, oncology, genetics, clinical pathology, and all the combinations between these fields. The skill set we bring to the table changes between individuals and, to a job recruiter, must feel like a roulette table of choices to bet on which skills we’ve acquired during our time in school.
Industries spend a lot of resources on recruiting new talent. They want to find, isolate, and recruit the best and brightest for their team. They are not in the market of losing bets and will make sure they find what they are looking for by only recruiting from trusted areas. Industries in management consulting, data science, software engineering, product analytics, and project management all recruit outside of our reach in the Marshall School of Business and Viterbi School of Engineering because they believe the likelihood of finding a student that fits their needs is higher in those schools than in the neuroscience program.
The System—the politics of the university—makes it an uphill climb for academic departments to intermix resources. However, if the Neuroscience Graduate Program (NGP) is truly an interdisciplinary program, we should have interdisciplinary access to these university resources. Neuroscience graduate students have the same analytical and problem-solving skills as our fellow graduate students in the engineering program. Furthermore, we have strong communication and marketing skills needed to be awarded grants and get our research published, much like what is taught in business school. It is unfortunate that a student in the engineering school could work in the same lab (and even on the same project) as a neuroscience grad student but will get disproportionate access to university resources. To remedy this imbalance, interdepartmental faculty need to build a bridge for their students into the already well-established career-development infrastructure of the other departments.
University resources are not the only resources available for job seeking students. In fact, the personal network of an individual is their best bet for getting a job. “80% of people find jobs through people they know” according to a weekly spam email I receive from LinkedIn job alerts. The reason why this statistic is high is that having someone vouch for you lowers your perceived risk.
From personal experience, my best opportunities came from people I knew, but not directly. NGP hosts workshops for students on “Building your LinkedIn page”—a great resource for students—but NGP needs to get faculty to participate as well. This is the tool we need to search the NGP professional network. For example, when you type a job or company name into LinkedIn, you will see people who currently have those positions/work in that company. As students, there’s a lower probability that we know someone in a company outside of academia. However, there is a higher probability that someone that we know knows someone in that company because a faculty member’s professional network is larger and older than our student body’s. These second-connection networks are critical for those of us looking into industry positions because we can ask these faculty members to introduce us to people who work in the companies we are interested in, and thus lower our perceived hiring risk to those companies.
A third way to lower your hiring risk and to get career exposure is by doing an internship. Internships promise exposure to job opportunities that may be available to students once they are done with school. An intern can meet hiring managers and company team members, learn about the type of problems they deal with, and most importantly, learn the skill set that a company values/uses on the job.
A recent survey of NGP reported that 70% of students believe their PIs would not allow them to take an internship during their time in school. From my experience in grad school, I have found that faculty in the program are deeply invested in their students’ success and would be willing to allow students interested in careers outside academia to pursue other avenues and gain exposure in other fields. However, this statistic may point to the lack of clear career-development infrastructure for students. We need to have a transparent plan for us to take internships with the help of our PhD advisors and program. As much as our PIs may disagree, taking a summer off your 3rd or 4th year to participate in an internship should not derail the productivity of the lab. My suggestion would be to start exploring internships after passing the qualification exams. By then, a PhD candidate has proven that they have acquired the skills and have a committee-approved plan to finish their PhD thesis project. Therefore, this is the perfect time to explore career options, start building a professional network, and identify the translational skills that they can develop in grad school and bring to the table during job interviews.
One last statistic from the NGP survey is that a plurality of students feel as though they are “suffering through grad school.” All hyperbole aside, I believe that many of us grad students feel anxiety from not having a solid career development plan. Graduate school is supposed to be a means to an end for professional success. By building a solid career development plan within the program, we can show students that there’s light at the end of the tunnel, that the likelihood of finding a secure job is within our reach, and that the NGP community will help to make these options a sure bet.
Alex is a sixth-year graduate student in the Kalluri laboratory studying the physiology of the inner ear. In his free time, he enjoys baking, boxing, and participating in a healthy debate.
Connect with Alex on LinkedIn! (https://www.linkedin.com/in/alexander-markowitz-0063351b/)
(By Erin Ryan, NGP 2017 Cohort)
(Originally Posted in November 2019) Changing fields: it happens for a lot of reasons. Maybe your interests have changed, maybe your original field is in a funding drought, maybe you’re looking to incorporate new perspectives into your work. Whatever the cause, changing fields in grad school is no easy feat, so here are some of the tips and things I’ve learned from my experiences!
Know that it’s going to be hard. Both academically and emotionally. By the time you’re applying to grad school, you’ve likely gained a fairly solid scientific skillset and knowledge base. So it’s going to be pretty challenging having to start from scratch again, but this time with higher expectations and work demands, and with less direct help and oversight than at the undergrad level. It’s also going to be pretty frustrating, feeling like you’re constantly behind the eight ball, and playing catch up. But be patient with yourself. You’ll learn the skills you need, and build up a new knowledge base faster than you think.
Find the right lab! When picking a lab, be sure that the PI understands that you are changing fields, is willing to work with you, and will be patient with you while you get up to speed. A good PI will also be willing to help you develop your own interests within the context of the type of work their lab does.
Resist the urge to compare yourself to others. I’d say this goes for all grad students, and it's also important for PIs to remember. It’s easy to feel down about yourself/be harsh to your students because so-and-so in a different lab has more data, or is farther along in their projects, or works faster, etc. etc. But good science is not a race. Instead of looking at how everyone else is doing, look instead at how far you’ve come. Did you finally master a new technique? Teach yourself to code AND get it to run? Read enough papers to have an informed conversation about your new field? Congratulate yourself! Those aren’t trivial tasks, even if they are “basic expectations.” Keep working hard, and focus on doing the best science possible.
Learn the value of an interdisciplinary background. Seriously, we need more interdisciplinary science. While you may have a steeper learning curve than others starting grad school, having knowledge of not just the state of other scientific areas, but how other types of scientists work, solve problems, and what tools are available, will allow you to provide unique perspectives when you design your projects. It also allows you to ask more unique questions, which can be more impactful to your field.
Spread out your coursework. Classes can be a great way to get up to speed with the techniques and background knowledge of your new field! But remember, if the class is covering material you aren’t familiar with, you’re going to need more time than everyone else to study, learn, and get coursework done. So it may be best to work with your PI to make a plan to spread your courses out over a longer period of time, so that you can keep up with your research as well, and keep classes from becoming overwhelming.
Good luck! Grad school is hard for everyone. Just keep moving forward!
Erin is a third year student in Jason Zevin's lab, and a former marine biologist. She is currently studying physiological reactions to hate speech, and is widely considered to be the World's Okayest Grad Student. She enjoys baking, gardening, sewing, and training her dogs in her free time.
Original artwork by: M. Lisenby
(By Kasey Rose, NGP 2017 Cohort)
(Originally Posted in October 2019) Most mornings, soon after I sit down to enjoy a delicious almond-milk cappuccino with a side of Greek yogurt and honey, a staring contest ensues with a visitor perched across from my kitchen window. On top of a tiny wire that connects two comically close power poles sits Humphrey, the friendly neighborhood hummingbird. After a full morning of devouring nectar from the overgrown California Fuchsia outside my window, little Humphrey partakes in some rest and relaxation, coupled with some solid eye contact with the human equivalent of a Loon. As you can imagine, all that attention from one of the most prodigious flyers in the universe is quite flattering. But recently, I have begun to wonder whether Humphrey can actually see me in my unicorn onesie, or if, all this time, I’ve been having a one-way unrequited stare-off with an unknowing participant. As a neuroscience graduate student at USC studying photoreceptor cells in the retina, this got me thinking: How do hummingbird eyes see the world? Is it similar to how my eyes see it?
To answer these questions, I will need to briefly describe certain elements of the human visual system before exploring what is known about hummingbird vision. First off, humans have camera-type eyes where the cornea (the transparent part of the eye in front of the pupil) and the lens work in concert to focus light from the environment directly onto the retina located at the back of the eye. Focused light travels through a clear jelly (the vitreous humor) inside your eyeball and then proceeds to pass through all the layers of the retina, a weird inside-out configuration.
Once light reaches the very back of the retina, the neurons that capture light (photoreceptors) become excited and convert light into an electrical signal, indicating to downstream retinal cells the basic properties of the visual space, such as color, brightness, and contrast. In humans, there are two major classes of photoreceptors: rods and cones. Rods, which get their name from their rod-like structure, help humans see in dim-light conditions and are especially helpful when snatching that midnight cookie. Cones, on the other hand, have a conical shape and are essential for color vision and “tasting the rainbow”. Moreover, unlike Skittles, cones come in only three different flavors, with each cone containing a specific photo-pigment (red, green, or blue — think RGB) that is excited by certain wavelengths of light. Thus, primary colors and the many other hues we are familiar with (such as gold, pink, teal, etc.) are perceived by stimulating various combinations of these three cone types.
Cones are located near the center of the retina, a region known as the fovea centralis, which is the area responsible for visual acuity. For example, when I look at an object, such as Humphrey sitting on a power line, my eyes orient in a specific way so that the image of Humphrey falls directly onto the fovea centralis in each eye. This process allows me to clearly see this regal and multi-colored hummingbird in all its glory. Rods, on the other hand, lie on the periphery of the retina, just outside of the fovea centralis. To illustrate how rods help us see, imagine that Humphrey is sitting on the same power line at 9:00 pm during a lunar eclipse. I can actually view him better out of the corner of my eyes using my peripheral vision. In this very low light environment, my cones are no longer active and my more sensitive, but color-blind, rods take over the show, allowing me to see Humphrey in many shades of grey.
But what about Humphrey? Do his eyes and photoreceptors work similarly to mine? Can he see me (day or night), or is he more focused on the vibrant red California Fuchsia outside?
Like humans, hummingbirds rely tremendously on their vision for survival, specifically for finding mates and locating food. Since vision is so significant, hummingbirds developed some unique characteristics for processing visual information. But until quite recently, very little was known about their visual system. Fortunately for me and you, research articles discussing hummingbird ‘eye morphology’ and ‘retinal topography’ were recently published. Using information presented in these papers, I will attempt to answer the questions posed at the beginning of this post.
At first glance, hummingbirds have similar eyeball morphology to humans. They possess camera-type eyes and have relatively small corneas, a trait found in most diurnal animals, including humans. This adaptation improves visual acuity by increasing the size of the image projected onto the retina1. On the other hand, in contrast to human retinas, hummingbird retinas are avascular. Once light passes through the hummingbird's cornea, lens, and clear jelly and is subsequently focused onto the retina, the lack of blood vessels works to prevent light scattering. This adaptation endows hummingbirds (as well as all other avians) with even greater visual acuity than humans. Looks like Humphrey has 20/10 vision, even without contact lenses. How unfair is that? And to supply nutrients and oxygen to the energy-needy retina, hummingbirds have evolved a comb-like vascularized structure (known as pecten oculi), which projects into the clear jelly structure from the optic nerve2.
While a hummingbird’s basic eyeball structure and retinal cellular composition are similar to what is found in humans, that does not mean we see the same vibrant world. When I look at the sky during the day, I see blue (brownish-gray blue here in Los Angeles). Humphrey, on the other hand, will see the sky differently even though the same wavelengths of visible and non-visible light are hitting both of our retinas. This is because hummingbirds are tetrachromatic: they have 4 flavors of cones (ultra-violet, violet, green, and red3). In addition to the 4 main types of cones and 1 type of rod (as found in humans), hummingbirds have red double cones3. This weird cell type allows two cones to directly communicate with each other, but their exact function in color vision isn’t currently understood.
Another major difference between the cones in Humphrey’s retinas and mine is that hummingbirds have oil droplets in a specific compartment of their cones. These oil droplets absorb some of the longer wavelengths passing through the retina, effectively restricting the absorption spectrum of each cone. This property, unique to avians, enhances color discrimination (less color overlap) and expands their color palette, allowing them to see a multitude of colors beyond what humans see. It has been postulated that seeing ‘beyond the rainbow’ is important for feeding behaviors (i.e. attracting them to specific flowers), as well as for mating (i.e. using their flashy iridescent feather colors to attract potential mates)1. Maybe this explains why Humphrey is drawn to the world inside my kitchen window: I probably look like a being from another dimension in my purple unicorn kigurumi.
Besides perceiving the world in amazing technicolor, hummingbird eyes also sport two centers of acute vision, each packed with cones: a fovea centralis (like humans) and an area temporalis4. The central fovea helps hummingbirds view distant objects that are laterally displaced from their beak in full color and with high resolution1. Therefore, when Humphrey is sitting on the power wire facing my kitchen window, his central fovea in each eye is informing him about the left and right visual fields lateral to his beak, such as the pigeon flying by. The area temporalis, on the other hand, is positioned in the retina to sharply resolve detailed objects that are nearby and directly in front of Humphrey (i.e. me at the kitchen table). Thus, both high acuity regions of the hummingbird retina work in concert to combine monocular vision (from the central fovea) and binocular vision (from the area temporalis) to create a nearly 360-degree image of the world4. Wouldn’t you sit and stare at things forever if everything looked like the world from Avatar in IMAX 360?
Now looping back to my original questions about Humphrey and our early morning rendezvous. After analyzing what we know about hummingbird eyes, it appears that Humphrey has all the necessary visual elements, and then some, to compete in a staring contest with me. But the more I think about it, the more certain I am that the early morning sun reflects quite intensely off my kitchen window. So, while each and every morning I have stared longingly through my clear window, I shiver at the thought that, all this time, we have both been staring at the same thing: Humphrey, the Hummingbird who broke my heart.
Kasey Rose is a third-year Neuroscience PhD student and is currently studying the underlying mechanisms that cause genetic retinal diseases, focusing specifically on what genes/proteins/signaling molecules are involved in rod photoreceptor degeneration. She is deeply interested in understanding how the retina processes visual information and how we can use that information to support people suffering from vision loss. She also loves eating avocados covered in sriracha, playing volleyball, hiking/skiing, and traveling to Japan.
Follow her on Twitter (@kaseyvrose) and connect with her on LinkedIn (https://www.linkedin.com/in/kaseyvrose/)
Original Artwork by Nicole Barakat (on Twitter, @foumiedye)
(By Rita Barakat, NGP 2016 Cohort)
This semester, I’m participating in a professional development workshop through the Center for Excellence in Teaching (CET) at USC, known as the “Future Faculty Teaching Workshop Series”. As part of a discussion on ways to establish an authoritative presence in the classroom, we were asked to read an article discussing the various “competence types” (essentially personality traits) that are most likely to manifest into Imposter Syndrome.
For those who may not be familiar, Imposter Syndrome refers to the feeling that one’s accomplishments are not attributable to one’s own merit and hard work, but rather are the result of luck or chance. As a result, the individual may believe that they do not measure up to the academic and/or professional accomplishments of their peers, and may feel woefully out of place when surrounded by their colleagues. I would imagine that this description resonates closely with many of you reading this right now. Perhaps by nature of the academic rigor we face, graduate students are one of the most vulnerable populations when it comes to experiencing Imposter Syndrome, and the vast majority of students fall within one or more of the competence types outlined in the original Fast Company article (based on the research of Dr. Valerie Young and her book, The Secret Thoughts of Successful Women: Why Capable People Suffer from the Impostor Syndrome and How to Thrive in Spite of It). The publication opens with a definition of Imposter Syndrome, and then proceeds to outline the following competence types from Dr. Young’s book that pose the greatest potential for developing Imposter Syndrome:
The Perfectionist, who sets unrealistically high standards that lead to inevitable disappointment at being unable to measure up to them.
The Superwoman/man/person, who feels they must work the hardest and spend the most time dedicated to their work in order to be perceived as accomplished.
The Natural Genius, who believes that if a concept or task does not come naturally, they are somehow a failure amongst their peers.
The Soloist, who perceives asking for help with a task as a sign of incompetence or inadequacy.
The Expert, who feels they must know every aspect of a subject/field in order to be deemed competent in that subject/field.
While the article does a thorough job outlining each of these competence types, and even provides leading questions to aid the reader in beginning to sort themselves among them, it does not seem to offer much hope in the way of effectively managing the negative emotions associated with Imposter Syndrome. I propose that one valuable coping strategy for dealing with these negative emotions is to acknowledge their universal impact and dissipate the negative stigma that surrounds feelings of incompetence and inadequacy by talking openly with one’s peers. As someone who has suffered from Imposter Syndrome throughout my graduate career, it brings me immense relief to be able to share my feelings with my peers and realize that I am in fact not alone in experiencing these thoughts: almost all of us, if not all of us, will have moments where we doubt our competency and our ability to be successful in our field. But it is essential to remember the lessons learned on the way to where we are today, and to recognize that these experiences are valuable and, statistically speaking, could not be the result of mere chance. By exposing the more debilitating and ugly side of Imposter Syndrome through healthy discussions with each other, we can remind ourselves that our successes are valid and our accolades are deserved. Ultimately, open discussions about the emotions associated with Imposter Syndrome can help usher in a sea change within our own graduate program, and within graduate programs and departments more broadly.
Rita Barakat is a fourth-year Ph.D. Candidate and an NSF Graduate Research Fellow studying the reading network in children with dyslexia, using structural and functional neuroimaging techniques. She is also a Program Assistant for the Young Scientists Program (YSP), and is particularly passionate about STEM Education and pedagogical techniques.
To connect with Rita, reach out to her on LinkedIn!