Search Index
- Hubble Tension | Scientia News
Hubble Tension

Why the fuss over a couple of km/s/Mpc?

You have probably heard that the universe is expanding, and perhaps even that this expansion is accelerating. A consequence of this is that distant objects such as galaxies appear to recede from Earth faster the further away they are. Here is a helpful analogy: imagine a loaf of raisin bread rising as it is baked. A pair of raisins on opposite sides of the loaf will move away from one another at a greater rate than a pair of raisins near the center. The more dough (universe) there is between a pair of raisins (galaxies), the faster they recede from one another (see Figure 1). This phenomenon is encapsulated in Hubble's Law, which relates specifically to the recessional velocity due to the expansion of space. Hubble's Law is given by the equation v = H0 D, where v is the recessional velocity, D is the distance to the receding object, and H0 is the Hubble constant. It is worth noting that distant objects will often have velocities of their own due to gravitational forces, the so-called 'peculiar velocities'.

In order to clarify the meaning of the title of this article, we must explore the unit in which the Hubble constant H0 is most often quoted: km/s/Mpc. This describes the speed (in kilometers per second) at which a distant object, such as a galaxy, is receding for every megaparsec of distance between that galaxy and Earth. Edwin Hubble is the name most often associated with this cosmological paradigm shift; however, physicists Alexander Friedmann and Georges Lemaître worked independently on the notion of an expanding universe, deriving similar results before Hubble verified them observationally in 1929 at the Mount Wilson Observatory, California.

What is the Hubble Tension?

Hopefully the above discussion of units and raisin bread convinced you that the Hubble constant H0 is linked to the expansion rate of the universe. The larger H0 is, the faster galaxies are receding at a given distance, indicating a more quickly expanding universe. Cosmologists therefore wish to measure H0 accurately in order to draw conclusions about the age and size of the universe. The Hubble Tension arises from the conflicting measurements of H0 obtained from different experiments. (Figure 2 shows Edwin Hubble.)

CMB measurement

One of these experiments uses the Cosmic Microwave Background (CMB), which can be thought of as an afterglow of light from near the time of the Big Bang. The wavelength of this light has expanded with the universe ever since the period of recombination, which I mentioned in my previous article on the DESI instrument. Our current best model of the universe, called ΛCDM, can describe how the universe evolved from a hot, dense state to the universe we see today, subject to a specifically balanced energy budget between ordinary matter, dark matter, and dark energy. By fitting this ΛCDM model to CMB data from missions such as ESA's Planck Mission, one can derive a value for the expansion rate of the universe, i.e., a value for H0. The Planck Mission measured temperature variations (anisotropies) across the CMB with unprecedented angular resolution and sensitivity. The most recent estimate for the Hubble constant using this method gave H0 = 67.4 ± 0.5 km/s/Mpc.

Local Distance Ladder measurement

Another technique to determine the value of H0 uses the distance-redshift relation. This is a wholly observational approach.
It relies on the fact that the faster an object recedes from Earth, the more its light is shifted towards longer wavelengths (redshifted). Hubble's Law relates this recessional velocity to a distance; therefore, one can expect a similar relation between distance and redshift. A 'ladder' is invoked because astronomers wish to use objects that are visible over a vast range of distances; the rungs of the ladder represent greater and greater distances to the astronomical light source. Each rung of the ladder contains a different kind of 'standard candle': sources with reliable, well-constrained luminosities that translate into an accurate distance from Earth. I encourage you to look into these different types; some examples are Cepheid variables, Type Ia supernovae, and RR Lyrae variables. When this method was employed using the Hubble Space Telescope and SH0ES (Supernova H0 for the Equation of State), a value of H0 = 73.04 ± 1.04 km/s/Mpc was obtained.

The disagreement

Clearly, these two values for the Hubble constant do not agree, nor do their uncertainty ranges overlap. Figure 3 shows some of the 21st-century measurements of H0: an excellent illustration of how the uncertainty has decreased for both methods, making their disagreement more statistically significant. Much public-facing science communication cites this disagreement as the 'Crisis in Cosmology!'. In the author's opinion, this is unnecessarily hyperbolic and plays on the human instinct to pick a side between two opposing viewpoints. In fact, new methods to measure H0 have been implemented using the tip of the red-giant branch (TRGB) as a standard candle, and these demonstrate closer agreement with the value derived from the CMB. Some cosmologists believe that this Hubble Tension will eventually dissipate as our calibration of astronomical distances improves with the next generation of telescopes. Constraining the value of the Hubble constant is by no means low-hanging fruit for cosmologists, nor is the field in crisis. To see the progress we have made, one has only to look back to 1929, when Edwin Hubble's first estimate, using a trend line and 46 galaxies, gave H0 = 500 km/s/Mpc! We must remain hopeful that the future holds a consistent estimate for the expansion rate and, with it, the age of our universe.

Written by Joseph Brennan
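A quick numerical postscript to the article above: the sketch below (Python, for illustration only) applies Hubble's Law v = H0 D to a galaxy at an arbitrary distance of 100 Mpc using the two central values quoted, and combines the quoted uncertainties in quadrature to show roughly how many standard deviations separate the two measurements. The distance is an assumed value chosen purely to make the comparison concrete.

```python
# Hubble's Law: v = H0 * D; with H0 in km/s/Mpc and D in Mpc, v comes out in km/s.
H0_CMB, SIGMA_CMB = 67.4, 0.5          # km/s/Mpc, Planck CMB value quoted above
H0_LADDER, SIGMA_LADDER = 73.04, 1.04  # km/s/Mpc, SH0ES distance-ladder value quoted above

def recession_velocity(distance_mpc: float, h0: float) -> float:
    """Recessional velocity (km/s) predicted by Hubble's Law."""
    return h0 * distance_mpc

D = 100.0  # Mpc, an arbitrary illustrative distance
v_cmb = recession_velocity(D, H0_CMB)
v_ladder = recession_velocity(D, H0_LADDER)
print(f"At {D:.0f} Mpc: v = {v_cmb:.0f} km/s (CMB) vs {v_ladder:.0f} km/s (distance ladder)")
print(f"Velocity difference for the same galaxy: {v_ladder - v_cmb:.0f} km/s")

# A simple quadrature combination of the quoted uncertainties gives the size of the tension.
sigma = (SIGMA_CMB**2 + SIGMA_LADDER**2) ** 0.5
print(f"Disagreement: {(H0_LADDER - H0_CMB) / sigma:.1f} standard deviations")
```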
- The physics of the world’s largest gravitational-wave observatory: LIGO | Scientia News
The physics of the world's largest gravitational-wave observatory: LIGO

Laser Interferometer Gravitational-Wave Observatory (LIGO)

Since the first confirmed detection, talk of gravitational waves has increased drastically in the public forum. In February 2016, the Laser Interferometer Gravitational-Wave Observatory (LIGO) Collaboration announced that it had sensed gravitational waves, or ripples in spacetime, caused by the collision of two black holes approximately 1.3 billion light years away. Such an amazing feat quickly became globalized news, with many asking how it could be physically possible to detect an event occurring at such an unimaginable distance. For some, the entire situation feels incomprehensible.

Although named an observatory, LIGO looks quite different from observatories such as the late Arecibo Observatory in Puerto Rico, the Very Large Array (VLA) in New Mexico, or the Lowell Observatory in Arizona. Rather than being a traditional telescope, LIGO comprises two interferometers, one in Hanford, Washington and the other in Livingston, Louisiana, that use lasers to detect vibrations in the fabric of spacetime. An interferometer is an L-shaped apparatus in which the incoming light, in this case a laser beam, is split down two perpendicular arms; mirrors at the end of each arm reflect the light back so that the two beams recombine into an interference pattern. This pattern is detected by a device called a photodetector, which converts it into carefully recorded data. When an incredibly violent event occurs, two black holes colliding, for instance, the massive release of energy ripples across the fabric of spacetime. A passing ripple minutely stretches and squeezes the interferometer's arms, causing a change in the recorded light pattern. This change is recorded by the photodetector and stored as data, which scientists can collect and analyze as needed.

Because the LIGO detector is so sensitive, there are a number of systems in place to maintain its functionality and reliability. The apparatus comprises four main systems: 1) seismic isolation, which removes non-gravitational-wave signals (also called 'noise'); 2) optics, which regulate the laser; 3) a vacuum system, which preserves the continuity of the laser by removing dust from the components; and 4) computing infrastructure, which manages the collected scientific data. Together, these systems help to minimize the number of false detections. False detections are also kept to a minimum by effective communication between the Washington and Louisiana LIGO sites. It took months for the official announcement of the 2015 gravitational-wave detection because both locations had to compare data to ensure that a signal detected by one apparatus had also been detected by the other. Because of human activity on Earth, there can be a number of vibrations similar to gravitational-wave ripples that ultimately turn out to be terrestrial events rather than celestial ones. So, while LIGO physics itself is fairly straightforward, the interpretation of the gathered data tends to be tricky.

Written by Amber Elinsky

Related articles: the DESI instrument / the JWST
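As a postscript, here is a minimal, idealised sketch (Python) of the interference measurement described above. It models a plain Michelson interferometer rather than LIGO's actual optical layout (which adds Fabry-Perot arm cavities, power recycling, and operation near a dark fringe), and the laser wavelength, operating point, and strain amplitude are representative values assumed purely for illustration.

```python
import math

# Idealised Michelson interferometer: the photodetector sees an intensity that
# depends on the arm-length difference via the round-trip phase difference.
WAVELENGTH = 1064e-9   # m, infrared laser wavelength (assumed, typical for LIGO-style lasers)
ARM_LENGTH = 4000.0    # m, length of each LIGO arm

def output_intensity(delta_l: float, i0: float = 1.0) -> float:
    """Relative detector intensity for an arm-length difference delta_l (metres)."""
    phase = 4 * math.pi * delta_l / WAVELENGTH   # round-trip phase difference
    return i0 * math.cos(phase / 2) ** 2

# A passing gravitational wave with strain h stretches one arm and squeezes the
# other, changing the arm-length difference by roughly h * ARM_LENGTH.
h = 1e-21                       # illustrative strain amplitude
delta_l_wave = h * ARM_LENGTH   # ~4e-18 m, far smaller than a proton

operating_point = WAVELENGTH / 8   # sit on the steep slope of a fringe (assumed operating point)
baseline = output_intensity(operating_point)
shifted = output_intensity(operating_point + delta_l_wave)

print(f"arm-length change from the wave: {delta_l_wave:.2e} m")
print(f"fractional change in detected intensity: {abs(shifted - baseline) / baseline:.2e}")
```

The tiny fractional change printed at the end is why the seismic isolation, vacuum system, and cross-site checks described above are so essential.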
- The search for a room-temperature superconductor | Scientia News
The search for a room-temperature superconductor

A (possibly) new class of semiconductors

In early August, the scientific community was buzzing with excitement over the reported discovery of the first room-temperature superconductor. As some rushed to prove the existence of superconductivity in the material known as LK-99, others were sceptical of the validity of the claims. After weeks of investigation, experts concluded that LK-99 was likely not the elusive room-temperature superconductor but rather a different type of magnetic material with interesting properties. But what if we did stumble upon a room-temperature superconductor? What could this mean for the future of technology?

Superconductivity is a property of some materials at extremely low temperatures that allows them to conduct electricity with no resistance. Classical physics cannot explain this phenomenon; instead, we have to turn to quantum mechanics for a description of superconductors. Inside superconductors, electrons pair up and can move through the structure of the material without experiencing any friction. These electron pairs are broken up by thermal energy, so they only exist at low temperatures. This theory, known as BCS theory after the physicists who formulated it, therefore does not explain the existence of high-temperature superconductors. Describing high-temperature superconductors, let alone one operating at room temperature, requires more complicated theories.

The magic of superconductors lies in their property of zero resistance. Resistance causes energy to be wasted in circuits through heating, which leads to the unwanted loss of power and makes for inefficient operation. Physically, resistance is caused by electrons colliding with atoms in the structure of a material, losing energy in the process. Because electrons can move through superconductors without any such collisions, there is no resistance. Superconductors are useful as components in circuits as they waste no power through heating effects and are completely energy-efficient in this respect.

Normally, using superconductors requires complex methods of cooling them down to typical superconducting temperatures. For example, the first copper-oxide superconductor becomes superconducting at around 35 K, in other words roughly 240 °C colder than the temperature at which water freezes. These cooling methods are expensive, which prevents superconductors from being deployed on a wide scale. A room-temperature superconductor, however, would give access to the beneficial properties of the material, such as its zero resistance, without the need for extreme cooling.

The current record holders for highest-temperature superconductors are the cuprate superconductors, at around −135 °C. These are a family of materials made up of layers of copper oxides alternating with layers of other metal oxides. As the mechanism for their superconductivity is yet to be revealed, scientists are still scratching their heads over how these materials can exhibit superconducting properties. Once this mechanism is discovered, it may become easier to predict and find high-temperature superconducting materials, and it may lead to the first room-temperature superconductor.
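To put the temperatures and the zero-resistance claim above into numbers, here is a minimal sketch (Python). The cable length, cross-section, and current are assumed values chosen only to show the scale of resistive losses; the resistivity of copper at room temperature is an approximate textbook figure.

```python
# Joule heating in a normal copper cable versus a superconducting one:
# P = I^2 * R, with R = rho * L / A.
RHO_COPPER = 1.68e-8   # ohm*m, approximate resistivity of copper at room temperature
LENGTH = 1_000.0       # m, assumed cable length
AREA = 1e-4            # m^2 (1 cm^2 cross-section), assumed
CURRENT = 1_000.0      # A, assumed current

def power_dissipated(resistivity: float) -> float:
    resistance = resistivity * LENGTH / AREA
    return CURRENT**2 * resistance

print(f"copper cable loss:          {power_dissipated(RHO_COPPER) / 1e3:.0f} kW")
print(f"superconducting cable loss: {power_dissipated(0.0):.0f} W")  # zero resistance, no Joule heating

# The transition temperatures quoted above, placed on a common scale:
for label, kelvin in [("first cuprate superconductor", 35.0),
                      ("record-holding cuprates", 273.15 - 135.0),
                      ("room temperature", 293.15)]:
    print(f"{label:29s} {kelvin:7.2f} K = {kelvin - 273.15:8.2f} °C")
```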
Until then, the search continues to unlock the next frontier in low-temperature physics…

For more information on superconductors: [1] Theory behind superconductivity; [2] Video demonstration

Written by Madeleine Hales

Related articles: Semiconductor manufacturing / Semiconductor laser technology
- How epigenetic modification gives the queen bee her crown | Scientia News
How epigenetic modification gives the queen bee her crown

It's in the diet

Honey bee colonies are made up of three kinds of adult bees: workers, drones and a single queen. While all drones are male, the queen and the worker bees are female. Within the female population, only the queen bee is fertile and is thus responsible for laying eggs, which are fertilised by drones. Additionally, a queen bee is larger than worker bees and produces pheromones that allow the colony to function. However, worker and queen bees are genetically identical, so how is it possible that they are so fundamentally different? (See Figure 1.)

The answer lies in epigenetic modification, defined as the alteration of gene function without a change in the DNA sequence. Types of epigenetic regulation include histone modification, DNA methylation and the action of non-coding RNA. The honey bee Apis mellifera is among the many species that can produce organisms with different characteristics from the same genome. The mechanism by which honey bees do this derives from epigenetic modification resulting from differences in diet during larval development. All larvae feed on royal jelly during the first three days of their development (Figure 2). However, worker larvae then switch to a diet of honey and pollen, which constitutes worker jelly. In comparison, the queen larva maintains a diet of royal jelly, a complex mixture produced by nurse bees that contains water, crude protein, monosaccharides, and fatty acids. The difference in dietary intake thus provides the information needed to establish the correct epigenome, which in turn allows the correct transcription.

Key studies have investigated the effect of epigenetic marks on the development of bees. The DNA methyltransferase DNMT3 is responsible for DNA methylation, a repressive mark; one study found that silencing DNMT3 resulted in worker larvae developing into queens with developed ovaries. This shows that royal jelly gives larvae destined to be queens information that can be interpreted to apply the correct epigenome. Additionally, certain histone deacetylase inhibitors (HDACi) have been observed in royal jelly, including the compound 10-HDA and phenylbutyrate. Histone acetylation within regions of the genome results in chromatin opening; acetylation is associated with active regions. HDACi activity inhibits the removal of such acetylation and maintains open regions of DNA. However, note that worker bees are not just a repressed version of queen bees, as they have overexpressed genes of their own that facilitate their specific behaviours. On examination of the methylome (see Figure 3), different genes were identified as being hypo- or hyper-methylated in worker versus queen bees. See the table below for a detailed analysis of worker and queen bees on days 3-5 of development.

Exactly how the specificity of epigenetic modifications is accomplished is not completely understood. For example, DNMTs do not themselves have sequence specificity, so there must be an interplay between chromatin modifiers and other cellular components to accomplish the correct recruitment of the enzymes involved in epigenetic modification. However, it is clear that the epigenomes of worker and queen bees are decidedly different and are thus the cause of their different physiological and behavioural characteristics.

Written by Isobel Cunningham

Related article: An introduction to epigenetics
- The Importance of Emojis in Healthcare | Scientia News
The Importance of Emojis in Healthcare

Their applications and usefulness

The evolution of emojis

Emojis are widely used visual symbols representing people, animals, objects and more. They can convey a writer's tone and emotions, which can help clarify the meaning of messages. This allows the writer to build a connection with the person receiving the message. Emojis originated from smileys, which evolved into emoticons and, finally, emojis. Japanese originator Shigetaka Kurita released the first set of emojis in 1999, with the word "emoji" being a transliteration of a Japanese word in which "e" means "picture", "mo" means "write" and "ji" means "character".

Emojis in healthcare

Emojis can play a significant role in healthcare by improving the communication of complex health concepts and offering patients greater access to healthcare. Patients with limited health literacy would benefit from health reports containing emojis, which would help them understand and interpret information better. This was demonstrated in a study by Stonbraker, Porras and Schnall (2019), which found that 94% of patients preferred reports with emojis because they aided their understanding. For example, emojis can be helpful in the field of dermatology, where they can complement information about lesions, colours, and symptoms, allowing doctors to communicate additional information to patients alongside primary concerns.

In addition, emojis can be used in public health, for instance to convey information about hand hygiene and infection prevention and control. By using emojis related to these fields, health professionals can communicate information and remind the public (especially patients with low levels of health literacy) to protect themselves against infections and the spread of disease. Some existing emojis can already illustrate aspects of hand hygiene, such as touching (🤝), patient (🤒), clean (✨), procedure (💉), body fluid (🗣 💦), and exposure risk (❗). Using emojis in healthcare systems, especially in infection prevention and control, can improve communication between healthcare providers and receivers, and therefore improve health.

The future of emojis in healthcare

One limitation of incorporating emojis into healthcare is that they can be ambiguous; in a healthcare context, this could lead to misunderstandings and misinterpretations. Therefore, healthcare professionals must be cautious when using emojis in patient communication. Nevertheless, with clear guidelines and communication, emojis could play a very important role in healthcare communication, particularly in improving health literacy and access to healthcare for vulnerable patients. As technology and communication evolve, healthcare professionals also need to adapt, and using emojis could be one way this happens. Emojis have been evolving rapidly, with new diverse and inclusive emojis continuously being introduced, such as anatomical emojis and skin tone customisations. With roughly 30 emojis relevant to medicine, excluding generic body parts such as the ear (👂), hand (🖐), leg (🦵), and foot (🦶), there is potential to create more emojis related to medicine and healthcare.
Researchers Debbie Lai and Shuhan He have already proposed an additional 15 medical emojis: intestines, leg cast, stomach, spine, liver, kidney, pill pack, blood bag, IV bag, CT scan, weight scale, pill box, ECG, crutches, and a white blood cell. Despite this, there is still a need for more diverse health-related emojis. This gap can be filled by the upcoming generation of health science students, who can use their medical and digital knowledge to create emojis that communicate aspects of healthcare not currently represented, such as CPR, drawing blood, and more. It is also important to acknowledge the limitations and potential barriers to using emojis in healthcare. For example, they could be ambiguous, leading to misunderstandings and misinterpretations. Therefore, healthcare professionals should be careful when using them in patient communication and follow any guidelines to minimise this risk.

Conclusion

Overall, emojis can have significant benefits: they have proven to be a powerful tool in healthcare by enhancing health literacy and improving the communication of complicated health concepts to patients. It is therefore important to have clear guidelines on how and when to use emojis in a healthcare setting to increase their effectiveness. Health science students can contribute meaningfully to this field by proposing and creating new emojis.

Written by Naoshin Haque
- Artificial intelligence: the good, the bad, and the future | Scientia News
Artificial intelligence: the good, the bad, and the future

A Scientia News Biology collaboration

Introduction

Artificial intelligence (AI) shows great promise in education and research, providing flexibility, curriculum improvements, and knowledge gains for students. However, concerns remain about its impact on critical thinking and long-term learning. For researchers, AI accelerates data processing but may reduce originality and replace human roles. This article explores the debates around AI in academia, underscoring the need for guidelines to harness its potential while mitigating risks.

Benefits of AI for students and researchers

Students

Within education, AI has created a buzz for its usefulness in helping students complete daily and complex tasks. Specifically, students have used this technology to enhance their decision-making, improve their workflow and have a more personalised learning experience. A study by Krive et al. (2023) demonstrated this by having medical students take an elective module to learn about using AI to enhance their learning and understand its benefits in healthcare. Traditionally, medical studies have been inflexible, with difficulty integrating pre-clinical theory and clinical application. The module created by Krive et al. introduced a curriculum with assignments featuring online clinical simulations to apply preclinical theory to patient safety. Students scored a 97% average on knowledge exams and 89% on practical exams, showing AI's benefits for flexible, efficient learning. Thus, AI can help enhance student learning experiences while saving time and providing flexibility.

Additionally, we gathered testimonials from current STEM graduates and students to better understand the implications of AI. In Figure 1, we can see that the students use AI to benefit their exam learning, get to grips with difficult topics, and summarise long texts to save time, while exercising caution, knowing that AI has limitations. This shows that AI has the potential to become a personalised learning assistant that improves comprehension and retention and helps organise thoughts, all of which allow students to enhance skills through support rather than reliance on the software. Despite the mainstream uptake of AI, one student has chosen not to use it for fear of becoming less self-sufficient, and we will explore this dynamic in the next section.

Researchers

AI can be very useful for academic researchers, for example by speeding up the writing and editing of papers based on new scientific discoveries, or even facilitating the process altogether. As a result, society may gain innovative ways to treat diseases and add to the current knowledge of different academic disciplines. AI can also be used for data analysis, interpreting large amounts of information, which saves not only time but also much of the money required to complete this process accurately. The statistics and graphical findings could be used to influence public policy or help businesses achieve their objectives. Another quality of AI is that it can be tailored to the researcher's needs in any field, from STEM to subject areas outside it, indicating that its uses are wide-ranging. For academic fields requiring researchers to look at things in greater detail, such as molecular biology or immunology, AI can help generate models to adequately understand the molecules and cells involved in such mechanisms.
This can be through genome analysis and possibly next-generation sequencing. Within education, researchers working as lecturers can use AI to deliver concepts and ideas to students and even make the marking process more robust. In turn, this can decrease the burnout educators experience in their daily working lives and may help establish a work-life balance, as a way to feel more at ease over the long term.

Risks of AI for students and researchers

Students

With great power comes great responsibility, and with the advent of AI in schools and learning there is increasing concern about the quality of learners schools produce, and whether their attitude to learning and critical-thinking skills are hindered or lacking. This concern is echoed in the results of a study by Ahmad et al. (2023), which examined how AI affects laziness and distorts decision-making in university students. The results showed that, among 285 students across Pakistani and Chinese institutes, the use of AI in education was associated with 68.9% of laziness and a 27.7% loss in decision-making ability. This confirms some of the worries shared in the testimonials in Figure 1 and suggests that students may become more passive learners rather than develop key life skills. This may even lead to a reluctance to learn new things and to seeking out 'the easy way' rather than enjoying obtaining new facts.

Researchers

Although AI can be great for researchers, it carries its own disadvantages. For example, it could lead to reduced originality in writing, and this type of misconduct jeopardises the reputation of the people working in research. Also, the software is only as effective as the data it is specialised in, so a given AI system could misinterpret data. This has downstream consequences that can affect how research institutions are run and, beyond that, hinder scientific inquiry. Therefore, if severely misused, AI can undermine the integrity of academic research, which could hinder the discovery of life-saving therapies. Furthermore, there is the potential for AI to replace researchers, suggesting that there may be fewer opportunities to employ aspiring scientists. When given insufficient information, AI can be biased, which can be detrimental; one article found that AI use in a dermatology clinic could put certain patients at risk of having skin cancer missed and suggested that more diverse demographic data are needed for the AI to work effectively. Thus, it needs to be applied strategically to ensure it works as intended and does not cause harm.

Conclusion

Considering the uses of AI for students and researchers, it is advantageous to both: it supports knowledge gaps, aids data analysis, boosts general productivity, can be used to engage with the public, and much more. Its possibilities for enhancing fields such as education and drug development, and with them societal progress, are vast. Nevertheless, the drawbacks of AI cannot be ignored, such as the chance of it replacing people in jobs or producing inaccurate output. Therefore, guidelines must be defined for its use as a tool to ensure a healthy relationship between AI and students and researchers. According to the European Network for Academic Integrity (ENAI), using AI for proofreading, spell checking, and as a thesaurus is admissible. However, it should not be listed as a co-author because, unlike people, it cannot be held liable for reported findings.
As such, depending on how AI is used, it can be a tool that helps society or one that harms it; it is not inherently good or bad for students, researchers, or society in general.

Written by Sam Jarada and Irha Khalid

Introduction and 'Student' arguments by Irha; Conclusion and 'Researcher' arguments by Sam
- Sideroblastic anaemia | Scientia News
Sideroblastic anaemia

A problem synthesising haem

This is the fourth and final article in a series about anaemia. First article: anaemia

Sideroblastic anaemia (SA) is like haemochromatosis in that there is too much iron. Due to an absence of protoporphyrin, iron transport is inhibited. SAs include hereditary and acquired conditions; these can be due to alcohol, toxins, congenital defects, malignancies, or mutations. This haem-synthesis defect can be caused by X-linked mutations or by lead-poisoning-induced mutations; these are the main mutations that interrupt the eight enzymatic steps in the biosynthesis of protoporphyrin, leading to defective haemoglobin (Hb) as well as iron accumulation in the mitochondria.

X-linked protoporphyria is due to a germline mutation in the gene that produces δ-aminolaevulinic acid (δ-ALA) synthase; this interrupts the first step of haem synthesis (see Figure 1). Lead poisoning can interrupt two stages of haem synthesis: δ-ALA dehydratase (δ-ALA dehydratase porphyria) and ferrochelatase (erythropoietic protoporphyria). Interruption of the first step devastates the production of haem; the chromosomal abnormality that stops the production of δ-ALA synthase underlies the X-linked porphyria. The second and final steps are associated with lead poisoning, which is more common in children. Ferrochelatase catalyses the incorporation of iron into haem in the final stage of haemoglobin synthesis; its disruption causes ferrochelatase erythropoietic protoporphyria (FECH-EPP).

SA clinical presentation

Common features of SA are general to microcytic anaemias, such as teardrop and hypochromic cells; dimorphism is common, as are Pappenheimer bodies and the mitochondrial iron clusters found in bone marrow smears, where iron accumulates around two-thirds of the nucleus of erythroblasts. Without knowing the aetiology of the anaemia, standard FBCs and iron studies would be run to make an initial diagnosis. In SA the iron cannot be transported, so transferrin will be reduced, alongside mean cell volume (MCV), haemoglobin and haematocrit (HCT); there will also be an increase in ferritin, % saturation and serum Fe. Microcytic anaemia presents in 20-60% of patients with FECH-EPP. The morphology will be microcytic and hypochromic, possibly with Pappenheimer bodies, ringed sideroblasts and dimorphism; basophilic stippling may be present in the blood of children with suspected lead levels >5 µg/dL. Lead poisoning can be misdiagnosed as porphyria, as lead is shed from the body slowly; this allows approximately 80% of the lead to be absorbed. Although lead exits the blood rather quickly, once it is in the bone it can have a half-life of 30 years.

Written by Lauren Kelly
- The role of dopamine in the movement and the reward pathway | Scientia News
The role of dopamine in the movement and the reward pathway

What is it and what does it do?

Dopamine is a neurotransmitter produced mainly in the ventral tegmental area (VTA) and the substantia nigra pars compacta (SNPC) of the brain, exhibiting both excitatory and inhibitory effects in different brain pathways. Dopamine is important in mediating the mesolimbic and nigrostriatal pathways for reward and movement, respectively. Therefore, damage to dopaminergic neurones affects dopamine levels in the brain and can consequently result in diseases associated with abnormal dopamine levels.

Movement

The role of dopamine is vital in modulating the initiation of movement through both the direct and indirect pathways of the basal ganglia (Figure 1). In the direct pathway, dopamine produced in the SNPC binds to D1 Gs-coupled receptors in the striatum, resulting in the activation of an intracellular signalling cascade. Activation of these receptors increases intracellular cyclic adenosine monophosphate (cAMP) and protein kinase A (PKA) levels, which control the modulation of ion channels, including calcium channels, for further depolarisation of the striatal cells. The excitation of the striatum results in GABAergic inhibition of the globus pallidus internal segment (GPi) and the substantia nigra pars reticulata (SNPR). Hence, this results in the disinhibition of the thalamus, allowing excitatory glutamatergic transmission to the motor cortex to facilitate movement. The activation of the striatum via D1 receptor stimulation is supported by a study by Gerfen et al. (2012), which concluded that PKA activates CaV1 L-type calcium channels, depolarising striatal cells and thereby enabling movement via the direct pathway.

However, in the indirect pathway, dopamine binds to D2 Gi-coupled receptors with a higher affinity than to D1 receptors, causing inhibition of these receptors' intracellular signalling cascades. Consequently, there is decreased inhibition of potassium channels by the second messengers, resulting in hyperpolarisation due to potassium efflux from the striatal cells. As the striatum is inactivated, the overall inhibitory effect of the indirect pathway on the thalamus is reduced, allowing for movement. Therefore, dopamine is critical for the normal functioning of humans, allowing them to control their movements for survival, for example by pushing a ball away when it is about to hit them.

Reward pathway

The mesolimbic dopaminergic pathway (Figure 2) is the most recognised reward pathway in the brain. This pathway contains the VTA, located in the midbrain, and the nucleus accumbens (NA) and tuberculum olfactorium (TO), located in the basal forebrain. The lateral regions of the VTA are the most abundant in A10 dopaminergic neurones in comparison to other regions of the VTA. These A10 neurones are activated in association with reward anticipation, for example after exercising. The medial VTA dopaminergic neurones project to the core and medial shell regions of the NA, and the lateral VTA projects towards the lateral shell region of the NA (Figure 3), thus increasing dopamine levels in the NA and inducing the processing of the reward. Moreover, dopaminergic inputs from the VTA to the TO allow the individual to develop an odour preference for a specific stimulus through motivation-oriented behaviour.
Hence, this could be a reason why the anticipation of eating one's favourite food, by evoking the memory of its smell, is associated with the feeling of reward. Experiments conducted by FitzGerald et al. (2014) support my points regarding the role of the TO in the mesolimbic pathway. In their study, mice were given a choice between two different odours. The team noted c-Fos expression in forebrain neurones, indicating neuronal activity in this region, which is involved in reward-motivation behaviour. This allowed them to support the importance of the TO in odour processing and reward behaviour when the mice chose the more pleasurable odour. Eventually, projections from the TO and NA converge at the ventral pallidum, where reward-related learning is enriched. Therefore, dopamine is essential for the initiation of the reward pathway, ensuring the continuation of reward behaviour on exposure to a specific stimulus, and for survival, given the association of reproduction with reward.

Conclusion

In conclusion, dopamine is essential for the initiation of movement and for the reward pathway, and hence for normal human functioning and survival. Studies of aldehyde dehydrogenase 1 in the SNPC have found that it protects dopaminergic neurones against neurodegeneration. Further studies will aid in understanding the mechanisms by which this enzyme is regulated and the actions by which it protects dopaminergic neurones in the SNPC.

Written by Maria Z Kahloon

Related article: The dopamine connection between the gut and the brain
- Anticancer Metal Compounds | Scientia News
Anticancer Metal Compounds

How metal compounds can be used as anti-cancer agents

Compounds of metals such as platinum, cobalt and ruthenium are used as anticancer agents. Research into anticancer metal compounds is important because conventional chemotherapy is not selective and is very toxic to patients, damaging the DNA of normal cells. These metal compounds act as anticancer agents because the metals can adopt different oxidation states; their selectivity for cancer cells arises from this redox behaviour. Because tumours exist in hypoxic environments, the oxidation state of the metal can change there, releasing the drug only in the tumour environment. For example, prodrugs are relatively inert metal complexes in relatively high oxidation states: Pt(IV) and Co(III) are selective carriers that undergo reduction in hypoxic tumour environments, releasing the anticancer drug. Co(III) is reduced to Co(II), and Pt(IV) is reduced to Pt(II), in hypoxic environments.

Cobalt has two useful oxidation states: cobalt(III) is kinetically inert, with a low-spin 3d6 configuration, whereas cobalt(II) is labile (high-spin 3d7). When Co(III) is reduced to Co(II) in hypoxic environments, the active molecule is released and restored to its active form, killing cancer cells. Cobalt can also bind ligands such as nitrogen mustards and curcumin, exhibiting redox reactivity useful for cancer therapy. Nitrogen mustards are highly toxic owing to their DNA-alkylation and cross-linking activity. In vivo they are not selective for tumour tissue; however, they can be deactivated by coordination to Co(III) and then released on reduction to Co(II) in hypoxic tumour tissue. This reduces systemic toxicity, giving an efficient anticancer drug.

Platinum anticancer compounds treat ovarian, cervical and neck cancers. Cisplatin, a Pt(II) complex, is highly effective against tumours but causes severe side effects for patients, so Pt(IV) prodrugs are used instead, which are selectively reduced at tumour sites. Ruthenium is used for cancer therapy as a less toxic metal than platinum. Ruthenium-targeted therapy selectively disrupts specific cellular pathways on which cancer cells rely for growth and metastasis. Reduction of Ru(III) to Ru(II) occurs selectively in hypoxic, reducing environments, and tumours overexpress transferrin receptors to which transferrin-bound ruthenium can bind.

Overall, metal compounds for cancer treatment have attracted great interest because of their redox activity. They are selective for cancer cells, limiting patients' side effects, and such therapy shows how important inorganic chemistry is to medicine.

By Alice Davey

Related article: MOFs in cancer drug delivery
- Which fuel will be used for the colonisation of Mars? | Scientia News
Which fuel will be used for the colonisation of Mars?

Speculating on the prospect of inhabiting Mars

The creation of a "Planet B" is an idea that has been circulating for decades; however, we have yet to find a planet similar enough to our Earth to be viable to live on without major modifications. Mars has been the most widely talked-about planet in the media, and is commonly thought to be the planet we know the most about. So, could it be habitable? If we were to move to Mars, how would society thrive?

The dangers of living on Mars

As a neighbour to Earth, Mars might be assumed to be habitable; unfortunately, it is quite the opposite. On Earth, humans have access to air with an oxygen content of 21%, whereas the Martian atmosphere contains only around 0.13% oxygen. The difference in the air alone suggests an uninhabitable planet. Another essential factor of human life is food. There have indeed been attempts to grow crops, including tomatoes, in simulated Martian soil, with great success. Unfortunately, the soil is toxic, so ingesting these crops could cause significant side effects in the long term. It could be possible to introduce a laboratory in which crops could be grown, modelling Earth's soil and atmospheric conditions; however, this would be difficult. Air and food are two essential resources that would not be readily available in a move to Mars. Food could be grown in laboratory-style greenhouses and the air could be processed, but it is important to note that these solutions would demand a large and reliable supply of energy.

The Mars Oxygen ISRU Experiment

The Mars Oxygen ISRU Experiment (MOXIE) was a component of the NASA Perseverance rover sent to Mars in 2020. Solid oxide electrolysis converts carbon dioxide, readily available in the atmosphere of Mars, into carbon monoxide and oxygen. MOXIE supports the idea that, in a move to Mars, oxygen would have to be 'made' rather than being readily available. The MOXIE experiment used nuclear energy to do this, and it showed that oxygen could be produced at all times of day and in multiple weather conditions. It is possible to obtain oxygen on Mars, but a great deal of energy is required to do so.

What kind of energy would be better?

For producing oxygen especially, the energy source on Mars would need to be extremely reliable in order to ensure the population is safe. It is true that fossil fuels are reliable; however, it is increasingly obvious that the reason a move to Mars might become necessary is our lack of care for Earth, so polluting resources are to be especially avoided. A combination of resources is likely to be used: wind power during the massive dust storms that regularly occur on Mars, and solar power in clear weather, when dust has not settled over the surface. One resource that would be essential is nuclear power. Public perception of it is mixed, yet it is certainly reliable, and that is the main requirement. After all, a human can only survive for around 5 minutes without oxygen; time lost to energy failures would be deadly.

By Megan Martin
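As a closing postscript to the MOXIE discussion above, here is a minimal sketch (Python) of the mass balance for the solid oxide electrolysis reaction 2 CO2 -> 2 CO + O2. The molar masses are standard values; the daily oxygen requirement per person is an approximate figure assumed purely for illustration.

```python
# Mass balance for MOXIE-style solid oxide electrolysis: 2 CO2 -> 2 CO + O2.
M_CO2 = 44.01  # g/mol
M_O2 = 32.00   # g/mol

def co2_needed_for_o2(o2_mass_kg: float) -> float:
    """Mass of CO2 (kg) that must be split to yield a given mass of O2,
    using the 2:1 molar ratio of CO2 to O2 in the reaction."""
    moles_o2 = o2_mass_kg * 1000.0 / M_O2
    moles_co2 = 2.0 * moles_o2
    return moles_co2 * M_CO2 / 1000.0

DAILY_O2_PER_PERSON_KG = 0.84  # kg/day, an assumed illustrative figure
co2_kg = co2_needed_for_o2(DAILY_O2_PER_PERSON_KG)
print(f"O2 needed per person per day: ~{DAILY_O2_PER_PERSON_KG:.2f} kg")
print(f"CO2 processed to supply it:   ~{co2_kg:.2f} kg")
```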