Search Index
- Neuroimaging and spatial resolution | Scientia News
Peering into the mind

Introduction

Neuroimaging has been at the forefront of brain discovery ever since the first images of the brain were recorded in 1919 by Walter Dandy, using a technique called pneumoencephalography. Fast-forward over a century and neuroimaging is far more than blurry single images. Modern techniques allow us to observe real-time changes in brain activity with millisecond resolution, leading to breakthroughs in scientific discovery that would not be possible without it. Memory is a great example: with functional magnetic resonance imaging (fMRI) we have been able to demonstrate that more recent long-term memories are stored and retrieved via brain activity in the hippocampus, but as memories become more remote, they are transferred to the medial temporal lobe.

While neuroimaging techniques keep the doors open for new and exciting discoveries, spatial limitations leave many questions unanswered, especially at the cellular and circuit level. For example: within the hippocampus, is each memory encoded by a completely distinct neural circuit, or do similar memories share similar neural pathways? Within just a cubic millimetre of brain tissue there can be around 57,000 cells (a mix of neurons and glia), all of which may have different properties, belong to different circuits, and produce different outputs. Numbers like these can make even revolutionary techniques such as fMRI, with near-unparalleled image quality, seem almost pointless. To truly understand how neural circuits work, we have to dig as deep as possible and record from the smallest regions possible. That raises the question: how small can we actually record in the human brain?

EEG

2024 marks a century since the first recorded electroencephalography (EEG) scan, by Hans Berger in Germany. This technique involves placing electrodes around the scalp to record activity across the outer surface of the brain (Figure 1). Unlike the methods discussed later, EEG provides a direct measure of brain activity, recording the electrical signals produced when the brain is active. However, because the electrodes sit only on the scalp, EEG can only pick up activity from the outer cortex, missing important activity in deeper parts of the brain. In our memory example, this means it would completely miss any activity in the hippocampus. EEG's spatial resolution is also underwhelming, typically resolving activity only to within a few centimetres - not great for mapping behaviours to specific structures in the brain. Clinically, EEG scans are used to measure overall activity levels, assisting with epilepsy diagnosis. Let's look at what we can use to dig deeper into the brain and locate signals of activity…

PET

Positron emission tomography (PET) scans offer a chance to record activity throughout the whole brain after ingestion of a radioactive tracer, typically glucose labelled with a mildly radioactive substance. The tracer is tracked, and its uptake in specific parts of the brain is a sign of greater metabolic activity, indicating a higher signalling rate. PET already offers resolution far beyond the capacities of EEG, distinguishing activity between areas as little as 4mm apart.
With the use of different radioactive labels, we can also detect the activity of specific populations of neurons, such as dopamine neurons, to diagnose Parkinson's disease. In fact, many studies have reliably demonstrated the ability of PET scans to detect the root cause of Parkinson's disease (a reduced number of dopamine neurons in the basal ganglia) before symptoms become severe. As impressive as that sounds, a 4mm resolution can locate activity in large areas of the cortex but is limited in its resolving power for discrete cortical layers. Take the human motor cortex: all six layers together have an average width of only 2.79mm. A PET scan is not powerful enough to determine which layer is most active, so we need to dig a little deeper…

fMRI

Since its inception in the early 1990s, fMRI has gained a reputation as the gold standard for human neuroimaging, thanks to its non-invasiveness, relative freedom from artefacts, and reliable signal. fMRI uses nuclear magnetic resonance to measure changes in blood oxygenation, which correlate with neural activity - the so-called BOLD signal. Compared with EEG, measuring blood oxygen levels cannot achieve impressive temporal resolution, and it is not a direct measure of neural activity. fMRI makes up for this with superior spatial resolution, resolving points as little as 1mm apart. In our motor cortex example, this would allow us to resolve activity between every 2-3 layers - not a bad return considering it doesn't even leave a scar. PET, and especially EEG, pale in comparison to the capabilities of fMRI, which has since been used for a wide range of neuroimaging research. Most notably, structural MRI has been used to support the idea of hippocampal involvement in spatial navigation from memory tasks (Figure 2). Its resolving power and highly precise images also make it suitable for planning surgical procedures.

Conclusion

With a resolution of up to 1mm, fMRI takes the crown as the human neuroimaging technique with the best spatial resolution! Table 1 shows a brief summary of each neuroimaging method, and the short sketch below works through the layer arithmetic. Unfortunately, though, there is still so much more we need to do to look at individual circuits and connections. As mentioned before, even within a cubic millimetre of brain we have five figures' worth of cells, making the number of neurons in the whole brain impossible to comprehend. To observe the activity of a single neuron, we would need an imaging technique capable of resolving cells at the tens-of-micrometres scale. So what can we do to reach the resolution we desire while remaining suitable for humans? Maybe there isn't a solution. Instead, if we want to record single-neuron activity, we may have to take inspiration from invasive animal techniques such as microelectrode recordings. Typically used in rats and mice, these can achieve single-cell resolution, letting us study neuroscience from its smallest components. It would be unethical to stick an electrode into a healthy human's brain and record activity, but perhaps in the future a non-invasive form of electrode recording could be developed? The current neuroscience field is foggy and shrouded in mystery. Most of these mysteries simply cannot be solved with the research techniques currently at our disposal. But this is what makes neuroscience exciting - there is still so much to explore!
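To make the resolution comparison concrete, here is a back-of-the-envelope sketch in Python. The cortical figures (2.79mm across six layers) come from the article; the EEG, PET and fMRI resolutions plugged in (20mm, 4mm, 1mm) are illustrative round numbers based on the values quoted above, not instrument specifications.

```python
# Illustrative arithmetic only: how many cortical layers fit inside one
# "voxel" of each method's nominal resolution?

CORTEX_THICKNESS_MM = 2.79  # all six motor cortex layers together (article figure)
N_LAYERS = 6
layer_mm = CORTEX_THICKNESS_MM / N_LAYERS  # ~0.47 mm per layer

# Assumed, rounded resolutions: EEG "a few centimetres", PET 4mm, fMRI 1mm.
resolutions_mm = {"EEG": 20.0, "PET": 4.0, "fMRI": 1.0}

for method, res in resolutions_mm.items():
    layers_spanned = res / layer_mm
    print(f"{method:5s} {res:>4.0f} mm resolution spans ~{layers_spanned:.1f} layers")

# fMRI's 1 mm spans ~2.2 layers, matching the "every 2-3 layers" claim above;
# PET's 4 mm spans ~8.6 layers, i.e. more than the whole cortical sheet.
```

On these numbers, only fMRI gets anywhere near layer-level questions, and even it remains two orders of magnitude away from the tens-of-micrometres scale of single cells.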
Who knows when we will be able to map behaviours to neural circuits with single-cell precision, but with how quickly imaging techniques are being enhanced and fine-tuned, I wouldn't be surprised if it's sooner than we think.

Written by Ramim Rahman

Related article: Neuromyelitis optica

REFERENCES

Hoeffner, E.G. et al. (2011) 'Neuroradiology back to the future: brain imaging', American Journal of Neuroradiology, 33(1), pp. 5–11. doi:10.3174/ajnr.a2936.
Maguire, E.A. and Frith, C.D. (2003) 'Lateral asymmetry in the hippocampal response to the remoteness of autobiographical memories', The Journal of Neuroscience, 23(12), pp. 5302–5307. doi:10.1523/jneurosci.23-12-05302.2003.
Wong, C. (2024) 'Cubic millimetre of brain mapped in spectacular detail', Nature, 629(8013), pp. 739–740. doi:10.1038/d41586-024-01387-9.
Butman, J.A. and Floeter, M.K. (2007) 'Decreased thickness of primary motor cortex in primary lateral sclerosis', American Journal of Neuroradiology, 28(1), pp. 87–91.
Loane, C. and Politis, M. (2011) 'Positron emission tomography neuroimaging in Parkinson's disease', American Journal of Translational Research, 3(4), pp. 323–341.
Maguire, E.A. et al. (2000) 'Navigation-related structural change in the hippocampi of taxi drivers', Proceedings of the National Academy of Sciences, 97(8), pp. 4398–4403. doi:10.1073/pnas.070039597.
[Figure 1] EEG (electroencephalogram) (2024) Mayo Clinic. Available at: https://www.mayoclinic.org/tests-procedures/eeg/about/pac-20393875 (Accessed: 18 October 2024).
[Figure 2] Boccia, M. et al. (2016) 'Direct and indirect parieto-medial temporal pathways for spatial navigation in humans: evidence from resting-state functional connectivity', Brain Structure and Function, 222(4), pp. 1945–1957. doi:10.1007/s00429-016-1318-6.
- Origins of COVID-19 | Scientia News
Uncovering the truth behind the origins of the virus

The quest for the crime of the century begins now!

Suspicion of the Wuhan Institute of Virology

Since the early epidemic reports in Wuhan, the origin of COVID-19 has been a matter of contention. Was SARS-CoV-2 the outcome of spontaneous transmission from animals to humans, or of scientific experimentation? Although most of the recorded initial cases occurred near a seafood market, Western intelligence agencies knew that the Wuhan Institute of Virology (WIV) was situated nine miles to the south. Researchers at the biosafety centre combed Yunnan caves for bats harbouring SARS-like viruses, extracting genetic material from the animals' saliva, urine, and faeces. Notably, the bat coronavirus RaTG13 (BatCoV RaTG13) shares 96% of its genome with SARS-CoV-2. Suspicion increased when it was discovered that WIV researchers worked with chimeric versions of SARS-like viruses capable of infecting human cells. However, similar "gain-of-function" studies in Western biosecurity institutions have shown that such gradual increases in virulence may also occur naturally. The coincidence that the pandemic began in the same city as the WIV was too obvious to ignore. According to two Chinese specialists, "the likelihood of bats flying to the market was quite remote".

Chan and Ridley's "Quest for the Origin of COVID-19"

Chan and Ridley have created a viral whodunit titled "Quest for the Origin of COVID-19" to excite the curiosity of armchair detectives and scientific sceptics. Both question why a virus of unknown origin was detected in Wuhan and not in Yunnan, 900 kilometres to the south. The stakes could not be more significant: if the virus were deliberately developed and spread by a Chinese laboratory, it would be the crime of the century. They are prudent in not going that far. They are, however, within their rights to cast doubt on the official account: at the time, numerous coronavirus experts openly discounted the possibility of a non-natural origin and declared that the virus displayed no evidence of design. Is this the impartial and fair probe the world has been waiting for? They present no evidence that SARS-CoV-2 was engineered. For example, Chan asserts that it seemed pre-adapted to human transmission "to an extent comparable to the late SARS-CoV outbreak". This statement is based on a single spike protein mutation that appears to "substantially enhance" its ability to bind human receptor cells, meaning it had "apparently stabilised genetically" when identified in Wuhan. Nonetheless, this is a staggeringly misleading claim. As the alphabet soup of variants shows, the coronavirus has undergone multiple alterations that have consistently increased its fitness. Additionally, viruses isolated from pangolins attach to human receptor cells more efficiently than SARS-CoV-2 does, indicating that further adaptation remained possible. According to two virologists, although SARS-CoV-2 was not wholly adapted to humans, it was "merely enough".

Evidence for design of SARS-CoV-2 and possible natural origins of the virus

Another concerning feature of SARS-CoV-2 is its furin cleavage site, which helps the virus infect human cells by allowing the spike protein to be cut by the host enzyme furin, priming it for entry.
The identical sequence is present in highly pathogenic influenza viruses and had previously been used in laboratory work to modify the spike proteins of SARS-like viruses. Chan and Ridley explain that this is the kind of insertion that would occur in a laboratory-modified bat virus. In response, 21 leading experts concluded that the furin sequence is insufficient evidence of engineering: coronaviruses have been shown to possess "near identical" genomes that can often infect both humans and animals. Although the furin cleavage site is not seen in known bat coronaviruses, it is possible that it evolved naturally. Surprisingly, Chan and Ridley do not suggest that the virus's high human infectivity was inserted on purpose, since "there is no way to determine". There is also no way to determine whether RaTG13 is the pandemic virus's progenitor, and history is replete with pandemics that began with zoonotic jumps. The RaTG13 argument rests on the strange fact that WIV researchers retrieved the bat isolate in 2013 from a decommissioned mine shaft in Yunnan. Six people removing bat guano from the cave that year suffered an unexplained respiratory ailment; half of them died. The 4% genetic divergence between RaTG13 and SARS-CoV-2, on the other hand, corresponds to roughly 40 years of evolutionary change (a rough version of this calculation is sketched below). While exploring caves in northern Laos, researchers discovered three more closely related bat coronaviruses, which bind human cells with higher affinity than the early SARS-CoV-2 strains. This points to a natural origin, either through another animal host or directly from a bat, perhaps when a farmer entered a cave. This is arguably the most reasonable explanation, since it is consistent with forensic and epidemiological data: isolates from samples collected at the Wuhan seafood market are similar to human isolates, and the majority of the original human cases had a history of market exposure, in contrast to the absence of any epidemiological connection to the WIV or any other Wuhan research institution.

Lack of evidence for a laboratory origin

If scientists could demonstrate prior infection at the Wuhan market or other Chinese wildlife markets selling the most likely intermediary species, including pangolins, civet cats, and raccoon dogs, the case for a natural origin would be strengthened. Although multiple animals tested positive for sister viruses of the human strains during the SARS epidemic, scientists have yet to find evidence of earlier infections in animals in the case of SARS-CoV-2. Nonetheless, absence of evidence does not confirm absence of infection, and may simply indicate that samples were not taken from the right animals. The same may be said of the lab leak argument's lack of evidence. However, even though history is littered with pandemics, no significant pandemic has ever been traced back to a laboratory. In other words, the null hypothesis is a zoonotic event; Chan and Ridley must demonstrate otherwise. The irony is that in their drive to construct a compelling case for a laboratory accident, they overlook the much more pressing story of how commerce in wild animals, global warming, and habitat degradation increases the likelihood of pandemic viral emergence. This is the origin story that should most concern us.
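As an aside on the numbers above: the "4% is similar to 40 years" comparison is a standard molecular-clock estimate. Here is a minimal sketch of that arithmetic in Python; the 4% figure comes from the article, while the substitution rate is an assumed order-of-magnitude value (published coronavirus clock rates vary), so the result is illustrative only.

```python
# Molecular-clock back-of-the-envelope: two lineages both accumulate
# mutations after splitting from a common ancestor, so their pairwise
# divergence d ~= 2 * rate * time, giving time ~= d / (2 * rate).

divergence = 0.04  # ~4% of sites differ between RaTG13 and SARS-CoV-2 (article)
rate = 5e-4        # ASSUMED substitutions per site per year, per lineage

years = divergence / (2 * rate)
print(f"~{years:.0f} years since a shared ancestor")  # -> ~40 years
```

Halving or doubling the assumed rate moves the estimate to roughly 80 or 20 years, which is why published divergence-time estimates span decades.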
Summary

Although Chan and Ridley's "Quest for the Origin of COVID-19" has cast suspicion on the Wuhan Institute of Virology, there is still insufficient evidence to support the lab leak theory. There is, however, growing evidence for a natural origin of SARS-CoV-2, with multiple animals having tested positive for sister viruses of the human strains during the SARS epidemic and the discovery of more closely related bat coronaviruses in northern Laos. As such, we should be more concerned about the increasing likelihood of pandemic viral emergence driven by commerce in wild animals, global warming, and habitat degradation.

Written by Sara Maria Majernikova
- Mechanisms of pathogen evasion | Scientia News
Ways in which pathogens avoid being detected by the immune system

Introduction

Pathogens such as bacteria and viruses have evolved strategies to deceive and outsmart the immune system's defences. From hiding within cells to escape detection, to blocking signals crucial for immune function, pathogens have developed an array of tactics to stay one step ahead of the immune system. This article introduces some key strategies pathogens employ to evade it.

Antigenic variation

The influenza virus is a persistent and challenging pathogen to treat because it employs a clever strategy known as antigenic variation to evade the immune system. Antigenic variation is the pathogen's ability to alter the proteins on its surface (antigens), particularly haemagglutinin (HA) and neuraminidase (NA), which are the primary targets of the immune system. By changing its surface in this way, the virus is no longer recognised and attacked by the host's defences. But how do the surface antigens change? This occurs through two primary mechanisms: antigenic drift and antigenic shift. Antigenic drift involves gradual changes in the virus's surface proteins through the progressive accumulation of genetic mutations. Antigenic shift, by contrast, is abrupt: it occurs when two influenza strains infect the same host cell and exchange genetic material, which can produce a new hybrid strain carrying a novel combination of surface proteins. Because the immune system lacks prior exposure to these new proteins, it fails to clear the viral pathogen. Antigenic shifts can therefore lead to the emergence of strains to which the population has little to no pre-existing immunity; examples include the 1968 Hong Kong flu and the 2009 swine flu pandemic. (A toy simulation of the drift/shift distinction appears below.)

Variable serotypes: Streptococcus pneumoniae

When the host encounters a pathogen, the body creates antibodies against specific molecules on the pathogen's surface, ensuring long-term immunity. However, some pathogen species evade this protection by evolving multiple serotypes, each defined by distinct variations in the structure of their capsular polysaccharides. This variability allows them to infect the same host repeatedly, as immunity to one serotype does not confer protection against the others. A perfect example of such a pathogen is the pneumonia-causing bacterium Streptococcus pneumoniae, which has more than 90 serotypes. After successful infection with a particular S. pneumoniae serotype, a person will have developed antibodies that prevent reinfection with that specific serotype. However, these antibodies do not prevent an initial infection with another serotype, as illustrated in Figure 1. Because existing immunity is evaded, a new primary immune response is required to clear the infection.
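To make the drift/shift distinction concrete, here is a toy simulation in Python. It is purely illustrative: the eight-segment layout mirrors the real influenza genome, but the strain labels and mutation marking are invented for this sketch, and real antigenic change is far richer than tagging strings.

```python
import random

# Toy model of influenza antigenic variation.
# Drift: point mutations accumulate gradually within one strain.
# Shift: co-infection of one cell lets whole segments reassort between strains.

SEGMENTS = ["HA", "NA", "PB1", "PB2", "PA", "NP", "M", "NS"]  # 8 influenza segments

def antigenic_drift(strain, n_mutations=1):
    """Return a copy of the strain with a few random point mutations marked."""
    drifted = dict(strain)
    for _ in range(n_mutations):
        seg = random.choice(SEGMENTS)
        drifted[seg] += "*"  # '*' marks one accumulated mutation
    return drifted

def antigenic_shift(strain_a, strain_b):
    """Return a hybrid strain: each segment drawn whole from either parent."""
    return {seg: random.choice((strain_a[seg], strain_b[seg])) for seg in SEGMENTS}

human = {seg: f"human-{seg}" for seg in SEGMENTS}
avian = {seg: f"avian-{seg}" for seg in SEGMENTS}

print(antigenic_drift(human, n_mutations=2))  # familiar strain, slightly altered
print(antigenic_shift(human, avian))          # possibly a novel HA/NA combination
```

Drift leaves most of the surface recognisable, which is why seasonal vaccines only need updating; a shift can hand the virus a human-naive HA or NA in one step, which is the pandemic scenario described above.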
Latency: chickenpox and Human Immunodeficiency Virus (HIV)

Pathogens can also persist in the host by entering a dormant state in which they are metabolically inactive. In this state, they are invisible to the immune system. HIV is well known for its latent reservoirs: populations of metabolically inactive T-cells infected with HIV that can persist for years on end. When the host becomes immunocompromised at any stage in life, the T-cells in these reservoirs are suddenly activated to renew HIV production.

The Varicella-Zoster Virus (VZV), responsible for varicella (chickenpox) and zoster (shingles), similarly remains latent in the host to evade immune detection. VZV establishes latency in sensory ganglia, particularly in neurons. Since neurons are relatively immune-privileged sites, less accessible to immune surveillance mechanisms, they provide a safe haven from immune detection. When the host is immunocompromised, the virus reactivates, producing viral particles that travel along sensory nerve fibres towards the skin and mucous membranes. When the virus reaches the skin, it triggers an inflammatory response, resulting in the painful vesicular skin lesions commonly known as shingles (herpes zoster).

Conclusion

Pathogens employ diverse mechanisms to evade the host immune system, ensuring their survival and propagation through host cells. These evasion mechanisms can hinder the development of treatments for certain infectious diseases. For instance, the diversity of S. pneumoniae serotypes challenges vaccine development, because immunity to one serotype may not confer protection against another. Additionally, the influenza virus constantly evolves via antigenic variation, staying one step ahead of the immune system. The strategies pathogens employ to evade the immune system are as diverse as they are sophisticated. Scientists continue to study these mechanisms, paving the way for more effective vaccines, treatments, and public health strategies to out-manoeuvre these organisms. We can better protect human health by staying one step ahead of pathogen evolution.

Written by Fozia Hassan

Related article: Allergies

REFERENCES

Abendroth, Allison, et al. "Varicella Zoster Virus Immune Evasion Strategies." Current Topics in Microbiology and Immunology, 2010, pp. 155–171, www.ncbi.nlm.nih.gov/pmc/articles/PMC3936337/, https://doi.org/10.1007/82_2010_41. Accessed 24 July 2024.
Gougeon, M-L. "To Kill or Be Killed: How HIV Exhausts the Immune System." Cell Death & Differentiation, vol. 12, no. S1, 15 Apr. 2005, pp. 845–854, www.nature.com/articles/4401616, https://doi.org/10.1038/sj.cdd.4401616. Accessed 24 July 2024.
Parham, Peter. The Immune System. 5th ed., New York, Garland Science, 2015, read.kortext.com/reader/epub/1743564. Accessed 24 July 2024.
Shaffer, Catherine. "How HIV Evades the Immune System." News-Medical.net, 21 Feb. 2018, www.news-medical.net/life-sciences/How-HIV-Evades-the-Immune-System.aspx. Accessed 24 July 2024.
- Teaching maths like it matters | Scientia News
The importance of implementing Maths into our lives

…But I'm never going to use Algebra in my life!

The above is a typical response from students across the country when walking into a Maths class. I did not understand others' disdain, because I love Maths. I got satisfaction from solving numerical problems, stimulation from equations, and excitement from learning new variables like alpha, or constants like Pi. The abstract nature of Maths was like art to me. Later, I realised that not all my peers felt the same way; somehow, I was the anomaly and they were the norm.

Many maths teachers are the same. They get lost in the subject they love and teach it in the way that makes sense to them, without thinking about how equations and processes, stripped of context, mean nothing to disengaged students. As teachers, our job is to show our students, on an individual basis, how applicable Maths can be. Rather than using real-life questions as extensions after the core activity, we must use them from the beginning when introducing topics, showing students how the methods they learn can be applied to some use beyond a pass mark in their exams.

I am not talking about examples of ladders leaning against walls when teaching Pythagoras' theorem and SOHCAHTOA, or taking counters from a bag to explain probability. These examples are forced; no student will connect with them, because they are not lived examples or likely scenarios in most of their lives. We need to build strong relationships with our students, understand their demographic and interests, then introduce topics based on this. For example: if I know that my class enjoys football, I will begin with a video of Messi playing, pause the video and split the pitch into segments, which can lead into a conversation about areas of segments and circles; or I can discuss the trajectory of the ball after a kick, to talk about quadratic equations. In another class, we can ask what students are budgeting for, perhaps concert tickets or new clothes, and use that to open a discussion of arithmetic series. Another great example is asking students to find an event happening somewhere in the country that they would like to go to and, as a class, planning for it. We would use research skills, calculate speed, distance and time if going by car, or pull up a train timetable, from which we can teach two-way tables and time conversions.

Creating meaningful connections to Maths topics will take time, effort, and research, and the difficulty is that not every application will be relatable to every cohort. We will need to build a portfolio of contextual examples for each topic; however, with buy-in from others in our departments, it is an achievable target.

In conclusion, we must teach Maths to students in meaningful ways that apply to their lives, to keep up engagement and motivation as well as providing opportunities to deepen understanding. Maths should be based around conversation and interests, rather than an exercise in memorising processes. It should make sense to students; it should matter.

Written by Sara Altaf

Related article: The game of life
- Help with personal statements | Scientia News
We check UCAS personal statements for free!

What are UCAS personal statements?
For UK-based universities, UCAS personal statements are a chance for students to show a university why they should be offered a place to study a particular subject there.

Academics or more?
Whilst academics are important to talk about, it is just as necessary to talk about who you are beyond your grades. We can advise you on what this may look like.

Page limit
It is critical to note that statements must not be longer than 1 page: anything beyond this will not be read. You can visit UCAS for more information.

Deadline!
All statements must be submitted through UCAS by 31st January 2024 at 18:00 (UK time). However, the earlier the better, as universities accept students on a rolling basis.

The process of submitting a personal statement:
1. Research university courses you are interested in
2. Pick a course and write a statement on why you want to study this subject
3. Check and edit the statement until you are happy with it
4. Submit to your top 5 university choices

Note for those considering medicine or dentistry: you will (normally) have to choose 1 university out of the 5 where you will do a back-up course, i.e. something that is not medicine or dentistry.

What we offer to you:
- Proofreading: to catch any remaining errors or inconsistencies in draft statements
- Expert advisors: graduates or current university students will provide personalised advice to highlight your unique qualities and align your statement with your chosen field of study
- Goals: we'll assist in articulating your passion and long-term goals effectively
- Feedback: get detailed feedback reports with specific improvement suggestions
- Guidance: example guideline questions for you to answer and include in your statement, helping to create flow and make adjustments easier
- Structure: advice on approaching your introduction, main body paragraphs and ending

Example universities that some of our volunteers currently attend or have graduated from: Queen Mary University of London, Imperial College London, King's College London, University of Liverpool and so on.

Fill out the form below and we will contact you.* Alternatively, you can email us at scientianewsorg@gmail.com. Please keep the subject as 'Personal Statement'.

*Disclaimer: there must be no plagiarism in any statements submitted - we will assume there has been no copying. Scientia News will not be responsible for any plagiarism detection by UCAS, as we only give advice.
- Can you erase your memory? | Scientia News
The concept of memory erasure is huge and complex

What is memory?

Our brain is a wiggly structure in our skull, made up of roughly 100 billion neurones. It is a wondrous organ, capable of processing 34 gigabytes of digital data per day, yet also able to retain information and form memories, something that, many would argue, defines who we are. So, what is memory? And how does our brain form it?

Loosely defined, memory is the capacity to store and retrieve information. There are three types of memory: short-term, working, and long-term memory (LTM). Today, we will be focusing on LTM. To form LTM, we need to learn and store information, following the process of encoding, storage, retrieval, and consolidation. To understand the biochemical attributes of memory in the brain, the psychologist Karl Lashley conducted extensive experiments on rats, investigating whether there were specific pathways in the brain that could be damaged to prevent memories from being recalled. His results showed that despite large areas of the brain being removed, the rats were still able to perform simple tasks (Figures 1-2). Lashley's experiments transformed our understanding of memory, leading to the concept of "engrams". Takamiya et al. (2020) define "memory engrams" as traces of LTM consolidated in the brain by experience. According to Lashley, engrams were not localised in specific pathways; rather, they were distributed across the whole of the brain.

Can memory be erased?

The concept of memory erasure is huge and complex. To simplify it, let's divide erasure into two categories: unintentional and intentional. Take amnesia, a form of unintentional memory 'erasure'. There are two types: retrograde amnesia, the loss of memories formed before the amnesia was acquired, and anterograde amnesia, the inability to make new memories after acquiring it. Typically, a person with amnesia exhibits both retrograde and anterograde amnesia, to different degrees of severity (Figure 3).

Can we 'erase' our memory intentionally? And how would this be of use to us? This is where things get really interesting. Currently, intentional memory 'erasure' is being investigated for the treatment of post-traumatic stress disorder (PTSD). In clinical trials, patients with PTSD are given drugs that block traumatic memories. For example, propranolol, a beta-adrenergic receptor blocker, impairs the acquisition, retrieval, and reconsolidation of such memories. Incredible, isn't it? Although this is not the current standard treatment for PTSD, we can only imagine what a relief it would be for those who suffer from PTSD if their traumatic memories could be 'erased'. However, with every step forward, we must be extremely cautious. What if things go wrong? We are dealing with our brain, arguably one of the most important organs in our body, after all. Regardless, the potential of memory 'erasure' for treating PTSD is both promising and intriguing, and the complexities and ethical considerations surrounding such advances underscore the need for careful and responsible exploration in neuroscience and medicine.

Written by Joecelyn Kiran Tan

Related articles: Synaptic plasticity / Boom, and you're back! (intrusive memories)
- Blood | Scientia News
A vital fluid

A comprehensive guide to the human blood system and alternatives

Human blood

Blood is a vital fluid for humans and other vertebrates. It transports nutrients, including oxygen, to cells and tissues. Blood is made of different components: red blood cells, white blood cells, platelets and plasma.

Red blood cells (RBCs, also called erythrocytes) contain haemoglobin, which gives blood its red colour. Haemoglobin helps carry oxygen from the lungs to the body. White blood cells (also called leukocytes) defend the body against infections. Lymphocytes are a type of white blood cell; the two main types are T lymphocytes and B lymphocytes. T lymphocytes target infected cells and regulate the function of other immune cells. B lymphocytes create antibodies, proteins that can destroy foreign substances like bacteria and viruses. Platelets (also called thrombocytes) are small cell fragments. They are essential in blood clotting, a process known as coagulation; they also help wounds heal and contribute to the immune response. Plasma is the liquid component of blood, made of water, ions, proteins, nutrients, wastes and gases. Its main role is transporting substances such as blood cells and nutrients throughout the body.

Artificial blood

There are two main types of artificial blood: haemoglobin-based oxygen carriers (HBOCs) and perfluorocarbons (PFCs). HBOCs are synthetic solutions designed to carry oxygen, usually smaller in size than RBCs. The haemoglobin is modified and covered with carriers to ensure the HBOCs do not break down inside the body. They can be used for transfusions that need to happen immediately, or when there is too much blood loss. PFCs are derived from chemicals containing fluorine and carbon, and have a high capacity for carrying and delivering oxygen.

Advantages and disadvantages of artificial blood

Artificial blood can be beneficial because, if the substitute has the universal O blood group, it can be used for any patient who needs a transfusion, regardless of their blood type. There is also less chance of diseases being passed to patients through artificial blood. However, artificial blood has been shown to have adverse side effects, including high blood pressure and a higher chance of heart attacks.

The future of artificial blood

As of 2022, the NHS has been experimenting with laboratory-grown RBCs in the RESTORE randomised controlled clinical trial. With further research, artificial blood can be refined and used more widely, especially when blood availability for transfusions is low, or for people with blood-related diseases.

Written by Naoshin Haque

Related article: Sideroblastic anaemia
- A-level resources | Scientia News
A-levels

Are you a student currently studying A-levels, or looking to choose them in the near future? Read below for tips and guidance! You may also like: IB resources, University prep and Extra resources.

What are A-levels?

A-levels, short for Advanced Level qualifications, are a widely recognised and highly regarded educational programme typically taken by students in the United Kingdom (UK) and some other countries. They are usually studied in the final two years of secondary education, typically between the ages of 16 and 18. A-levels offer students the opportunity to specialise in specific subjects of their choice. Students typically choose three or four subjects to study, although this may vary depending on the educational institution. The subjects available can be diverse, covering areas such as sciences, humanities, social sciences, languages, and arts.

How are A-levels graded?

The A-level grading system in the UK is based on a letter grade scale:
- A* (pronounced "A-star"): the highest grade achievable, demonstrating exceptional performance
- A: excellent performance, indicating a strong understanding of the subject
- B: very good performance, showing a solid grasp of the subject
- C: good performance, representing a satisfactory level of understanding
- D: fair performance, indicating a basic understanding of the subject
- E: marginal performance, showing a limited understanding of the subject
- U: ungraded, indicating that the student did not meet the minimum requirements to receive a grade

What are the benefits of studying A-levels?

A-levels provide students with a variety of advantages, such as a solid academic foundation for further education, the chance to focus on areas of interest, and flexibility in planning their course of study. Transferable abilities like critical thinking, problem-solving, and independent research are developed at A-level, improving both prospects for university entrance and future employment opportunities. These widely respected credentials encourage intellectual vigour, curiosity, and a love of lifelong study. With their academic rigour and global renown, A-levels give students a strong foundation for success in higher education and a variety of career pathways.

Resources for revision

Websites:
- Maths: Maths / Maths and Further Maths
- Chemistry: Chemistry / Chemrevise / Chemguide
- Biology: Biology / Quizzes
- Physics: Physics
- Computer Science: topic-by-topic / Teach Computer Science
- All subjects: All subjects / Seneca Learning / Save My Exams

YouTube channels:
- Chemistry: Allery Chemistry and Eliot Rintoul

Past papers: Biology, Chemistry, Physics, Maths

Textbooks (depend on exam board): CGP range for Bio, Chem, Phys, and Maths (exam practice workbooks)
- Electricity in the body | Scientia News
Electricity in the body: Luigi Galvani

Luigi Galvani (1737-1798)

Luigi Galvani was an Italian physician and biologist, known for his work on bioelectricity and for laying the foundations of electrophysiology, the branch of science focusing on electricity in the body. He was born in 1737 in Bologna, Italy, and died in 1798, as the age of electricity was approaching. Galvani began his career as a doctor after graduating with a thesis in 1762 at the University of Bologna. The same year, he became a Reader in Anatomy at the university. He was then given the Chair of Obstetrics at the Institute of Sciences, owing to his surgical skills, and became its president in 1772. He held his chair for 33 years, until he was dismissed in 1797 when Napoleon's army invaded; he was reinstated some time later.

Galvani's discovery

Galvani was performing experiments on frog legs at the University of Bologna when his assistant touched a scalpel to the crural nerves of a frog while drawing a spark from the brass conductor of an electrostatic machine, and the frog leg twitched. Due to the current, muscular spasms were generated throughout the body. Galvani was intrigued and performed more experiments to see if he would get the same result. He did; the experiment was reproducible. Galvani used a Leyden jar (a device which stores static electricity, an early form of capacitor) and an electrostatic machine to produce this electricity. He knew that metals transmitted something called electricity, and a form of this electricity was presumably generated in the frog tissue to allow muscular contraction; he named this 'animal electricity'. He believed this 'animal electricity' was different from static and natural electricity, e.g. lightning. Indeed, in 1786, during a lightning storm, he touched some frog nerves with a pair of scissors and the muscle contracted. Galvani thought of 'animal electricity' as a fluid secreted by the brain, which flows through the nerves and activates the muscles. This is how his experiments helped pave the way for electrophysiology in neuroscience.

Galvani's progress in the field

Galvani's work was accepted by all his colleagues except Volta, the professor of physics at the University of Pavia. Though Volta could reproduce Galvani's experiments, he did not accept Galvani's explanation of 'animal electricity'. Volta believed it was the two dissimilar metals that produced the electricity (he named this 'metallic electricity') and that there was no current running inside the frogs: there was no 'animal electricity'. Galvani argued that there were electric forces inside organisms, and in 1794 he published an anonymous book, Dell'uso e dell'attivita dell'arco conduttore nella contrazione dei muscoli ("On the Use and Activity of the Conductive Arch in the Contraction of Muscles"), in which he described how he obtained electricity inside the frog without the use of any metal.
It was reported that he did this by touching the exposed muscle of one frog with a nerve of another, and the muscle contracted (Dibner 2020). This seems doubtful, as Galvani's forceps must have been in contact with a spark for there to be movement. Still, it was the first attempt to demonstrate the existence of bioelectric forces.

Outside of neuroscience

The term 'animal fluid' Galvani used is reminiscent of the 'animal spirits' invoked by the French philosopher René Descartes in the 1600s. Descartes described 'animal spirits' as a fluid flowing through the brain and the body, and Galvani unwittingly built on this belief with his findings on bioelectricity; the spirits 'became' electricity. There was a paradigm shift: Descartes had thought of nerves as water pipes, but they turned out to be electrical conductors. This illustrates how Galvani was able to build on existing ideas in science.

Limitations

Even with the rigorous experiments and support, there was one limitation. To establish a direct correlation between frog muscle contraction and electricity generation, Galvani needed to measure the electrical currents generated in the muscle quantitatively. This was difficult at the time, since the technology to measure such currents did not exist; the currents were too small. Eventually, in the mid-1800s, when there were major advances in instrumentation, the German physiologists Müller, du Bois-Reymond, and Helmholtz managed to measure the conduction of electrical activity along the nerve axon. This breakthrough furthered the branch of electrophysiology which Galvani had started.

Summary

In conclusion, Luigi Galvani was an influential physician and biologist who founded the branch of electrophysiology with his experiments on frogs and metals. His results were crucial to the development of neuroscience, particularly the beginning of our understanding of electrical activity along the axon.

Written by Manisha Halkhoree

Related article: Nikola Tesla and wireless electricity
- 'The Emperor of All Maladies' by Siddhartha Mukherjee | Scientia News
Book review

Spanning nearly 4,000 years of history, Pulitzer Prize winner Siddhartha Mukherjee sets out on a journey to document the biography of cancer in 'The Emperor of All Maladies'. Drawing from a vast array of books, studies, interviews, and case studies, Mukherjee crafts a narrative that is as comprehensive as it is compelling. Driven by curiosity and a desire to understand the origins of cancer, Mukherjee sets the tone by reflecting on his experiences as an oncology trainee, drawing insightful parallels to contemporary perspectives on the fight against this relentless disease.

Mukherjee pays homage to Ancient Egyptian and Greek physicians for their early observations on cancer, from the work of Imhotep to that of Claudius Galen. He then introduces Sidney Farber, whose monumental contributions to modern chemotherapy are brought to life through Mukherjee's exceptional storytelling, tracing Farber's journey from his initial observations to his unprecedented success in treating children with leukaemia. As you progress through each chapter of this six-part book, your appreciation deepens for how far cancer treatments have advanced - and how much further they can go. Mukherjee's unparalleled skill as a science communicator shines through, seamlessly weaving together groundbreaking scientific discoveries with the historical contexts in which they emerged, making for an immersive reading experience.

From 'The Emperor of All Maladies':

"In 2005, a man diagnosed with multiple myeloma asked me if he would be alive to watch his daughter graduate from high school in a few months. In 2009, bound to a wheelchair, he watched his daughter graduate from college. The wheelchair had nothing to do with his cancer. The man had fallen down while coaching his youngest son's baseball team."

Mukherjee also makes an effort to highlight the critical role of raising awareness in shaping public health outcomes. 'Jimmy', whose real name was Einar Gustafson, was a patient who came to represent children with cancer, and his individual story galvanised large-scale support. As the face of the 'Jimmy Fund', he helped raise $231,485.51 for what became the Dana-Farber Institute, and the fund subsequently became the official charity of the Boston Red Sox. Mukherjee underscores how storytelling can serve as a catalyst for change, not just in raising money but also in enacting larger societal and governmental shifts. In 1971, President Richard Nixon signed the National Cancer Act, the first of its kind, through which federal funding went directly into advancing cancer research. What struck me most was how Mukherjee connects this historical event to the broader need for advocacy, as science doesn't just happen in the lab. It is a collective effort, driven by awareness, to push funding and influence policy. The ability to link individual stories to broader missions, as Mukherjee illustrates, continues to be one of the most effective strategies for keeping cancer research in the public eye.

Mukherjee delves into the pivotal role of genetics in cancer research, tracing its evolution from the discovery of DNA's structure by Francis Crick, James Watson, and Rosalind Franklin to Robert Weinberg's groundbreaking work on how proto-oncogenes and tumour suppressors drive cancer progression. These discoveries ushered in a new era in cancer drug development.
Mukherjee also emphasises the importance of collaboration and the rise of the internet, which gave birth to The Cancer Genome Atlas, a landmark program that unites various research disciplines to diagnose, treat, and prevent cancer. In concluding the book, Mukherjee looks ahead to the future of cancer treatment, seamlessly connecting this discussion to his second book, 'The Gene'.

This book takes readers on a remarkable journey through the history of cancer, from the earliest recorded cases to groundbreaking discoveries in genetics. It weaves together compelling personal stories as well as pivotal moments in governmental policy. The storytelling is rich and immersive, drawing you in with its detail and depth. By the time you finish, you'll find yourself returning to its pages, eager to revisit the knowledge and insights it offers.

Written by Saharla Wasarme

Related book review: Intern Blues