
  • The secret to disarming plant pathogens revealed | Scientia News

    Last updated: 27/03/25, 15:06 Published: 27/03/25, 08:00

    Channel-blocking nanoparticles as a potential solution to plant diseases

    Bacterial proteins play a central role in plant disease, and scientists may have found a means to neutralise them, chipping away at the $220 billion in yearly crop losses.

    The impact of plant diseases on global food production

    Bacteria have long been known to wreak havoc on crops, threatening our food supply and causing substantial economic losses. For over two decades, biologist Sheng-Yang He and his team have been delving into the world of bacterial proteins, seeking to unravel their role in the plant diseases that plague countless crops worldwide. After years of research and collaboration, a breakthrough has finally been achieved: in a study published in the journal Nature, He and his colleagues uncovered the mechanism by which these proteins induce disease in plants and devised a method to neutralise their harmful effects.

    Understanding the mechanism of harmful proteins

    The investigation focused on a group of injected proteins, the AvrE/DspE family, responsible for diseases ranging from brown spot in beans to fire blight in fruit trees. Despite their significance, the exact workings of these proteins had long remained elusive. Through advances in artificial intelligence and innovative experimental techniques, the researchers discovered that these proteins adopt a distinctive 3D structure resembling a tiny mushroom with a cylindrical stem. Intriguingly, this structure resembles a straw, leading the team to hypothesise that the proteins create channels in plant cells, enabling the bacteria to extract water from the host during infection.
    Further investigation of the 3D model of the fire blight protein revealed a hollow inner core, a feature shared across many proteins of the AvrE/DspE family. These proteins were found to suppress the plant's immune system and induce dark, water-soaked spots on leaves, the telltale signs of infection. Armed with this newfound knowledge, the researchers sought a strategy to disarm the proteins and halt their destructive effects. They turned to poly(amidoamine) (PAMAM) dendrimers, tiny spherical nanoparticles whose diameters can be precisely tailored in the lab. By experimenting with different sizes, they identified a nanoparticle that effectively blocked the water channels formed by the bacterial proteins.

    Application of nanoparticles in blocking water channels

    In a remarkable series of experiments, the researchers treated frog eggs engineered to produce the water-channel protein with these channel-blocking nanoparticles. The results were striking: the eggs no longer swelled with water and remained unaffected. Similarly, treating infected Arabidopsis plants with the nanoparticles significantly reduced pathogen concentrations, effectively preventing disease development. This discovery offers a glimmer of hope in the battle against plant diseases, which cause immense losses in global food production. Plants supply 80% of the world's food, and protecting them from pathogens and pests is crucial for food security. The implications of the findings extend far beyond a single crop or disease: by understanding how bacterial proteins such as AvrE and DspE cause disease, researchers can now explore strategies to disarm these proteins and prevent their harmful effects across a wide range of plant diseases.
    The team discovered that these proteins act as water channels, allowing bacteria to invade plant tissue and create a saturated environment that promotes their growth. This insight led to the development of channel-blocking nanoparticles that prevent the bacteria from infecting plants and causing disease symptoms. Using precisely sized nanoparticles, such as PAMAM dendrimers, to block the pathogens' water channels represents a promising avenue for crop protection.

    Figure 1: PAMAM dendrimers are highly branched, very small polymers with a low polydispersity index and numerous active amine groups. Their multiple modifiable surface functionalities facilitate the conjugation of ligands for cancer targeting, imaging, and therapy; their solubilisation, high drug encapsulation, and passive targeting abilities contribute to their therapeutic success, and cancer researchers are exploring them as drug carriers, non-viral gene vectors, and diagnostic imaging agents.

    These nanoparticles can be tailored to specific diameters, allowing targeted disruption of the bacterial proteins' channels. By interfering with the proteins' ability to create a moist environment within plant cells, the nanoparticles effectively render the bacteria harmless. This approach has shown success against diseases caused by pathogens such as Pseudomonas syringae and Erwinia amylovora.

    Implications for global food production and food security

    The potential impact of this research on global food production is immense. Plant diseases destroy over 10% of global food production annually, translating to a staggering $220 billion economic loss worldwide. Developing strategies to disarm harmful proteins and protect crops from disease can mitigate these losses and enhance food security.
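The two loss figures quoted above can be cross-checked with simple arithmetic; a minimal sketch (the linkage between the two numbers is our assumption, not a figure from the article):

```python
# Back-of-envelope check of the crop-loss figures quoted in the article.
# Assumption (ours, not the article's): the $220 billion loss corresponds
# to the ~10% of global food production destroyed by plant disease annually.
annual_loss_usd = 220e9   # reported yearly economic loss worldwide
loss_fraction = 0.10      # reported share of production lost annually

# Implied total value of global food production consistent with both figures
implied_production_usd = annual_loss_usd / loss_fraction
print(f"Implied production value: ${implied_production_usd / 1e12:.1f} trillion")
```

On these assumptions, the figures imply a global food production value of roughly $2.2 trillion per year, which is the scale at stake in crop protection.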
    Furthermore, the team's findings highlight the critical role of plant biology research in addressing global challenges. Plants provide 80% of our food, making their health and protection crucial for sustaining our growing population. By understanding how pathogens infect plants and developing innovative solutions, we can safeguard our food supply and reduce the economic impact of crop diseases.

    Experimental results and a promising outlook

    The researchers aim to further investigate the interaction between channel-blocking nanoparticles and bacterial proteins. By visualising the structures and mechanisms involved, they hope to refine their designs and develop even more effective strategies for crop protection. Artificial intelligence, such as the AlphaFold2 programme, has proven instrumental in predicting the 3D structures of complex proteins, and continued advances in AI will undoubtedly contribute to further breakthroughs in understanding and combating plant diseases.

    Conclusion

    The research conducted by biologist Sheng-Yang He and his team offers hope in the fight against plant diseases. By revealing the mechanisms by which harmful proteins cause disease in plants and developing innovative strategies to disarm them, the team has paved the way for novel approaches to combat a wide range of plant diseases. This enhances food security and protects global food production, reducing economic losses and supporting a sustainable future.
    With continued advances in artificial intelligence and the development of precise nanoparticles, there is ample scope for further breakthroughs in understanding and combating plant diseases. By safeguarding our agricultural systems, we can secure the health of our crops and, ultimately, the well-being of our growing population.

    Figure 2: a working model for the molecular actions of AvrE-family effectors in plants. AvrE-family effectors are water- and solute-permeable channels that alter the osmotic and water potential of infected plant tissue, creating a water- and nutrient-rich apoplast in which bacteria can grow. They can also engage host proteins to modulate AvrE-family channel properties or optimise pathogenic outcomes.

    Written by Sara Maria Majernikova

    Related articles: Digital innovation in rural farming / Nanomedicine / Mechanisms of pathogen evasion

    REFERENCE

    Nomura, K., Andreazza, F., Cheng, J., Dong, K., Zhou, P. and He, S.Y., 2023. Bacterial pathogens deliver water- and solute-permeable channels to plant cells. Nature. DOI: 10.1038/s41586-023-06531-5

  • The spread of digital disinformation | Scientia News

    Last updated: 18/11/24, 12:41 Published: 05/08/23, 10:06

    IT cells and their impact on public opinion

    As of January 2023, the internet boasts an estimated 4.72 billion social media accounts, with 3% year-on-year growth (+137 million users) and further expansion projected throughout the year. The average person now spends a substantial 6 hours and 58 minutes daily connected to online screens, underscoring the significant role the internet plays in our lives. It is therefore no surprise that governments worldwide have recognised its potential as a critical tool to advance their agendas, policies, and achievements. Through diverse digital channels, governments aim to reach a vast audience and shape public perception, striving to build transparency, trust, and legitimacy while maintaining a powerful digital presence. However, this approach also raises concerns about bias, propaganda, and information manipulation, which can influence public perceptions in questionable ways.

    One such phenomenon is the emergence of IT (Information Technology) cells: organised groups, typically affiliated with political parties, organisations, or interest groups, dedicated to managing and amplifying their organisation's online presence, predominantly on social media platforms and other digital avenues. During contentious political events or national issues, IT cells inundate social media platforms with coordinated messaging in support of government policies and leaders. Unfortunately, dissenting voices and critics may face orchestrated attacks from these IT cells, aimed at discrediting and silencing them.
    While some IT cells may operate with genuine intentions, they have faced criticism for spreading misinformation, disinformation, and targeted propaganda to sway public sentiment in favour of their affiliated organisations. In such instances, IT cells strategically amplify positive news and government achievements while downplaying or deflecting negative information. Social media influencers and online campaigns have become tools to project a positive image of the government and maintain public support.

    One striking example of how governments can exploit such operations was the infamous Cambridge Analytica scandal. In 2018, revelations exposed how the political consulting firm Cambridge Analytica had acquired personal data from millions of Facebook users without consent. The firm then weaponised this data to construct highly targeted and manipulative political campaigns, including during the 2016 United States presidential election and the Brexit referendum.

    In India, the ruling BJP has come under scrutiny for orchestrated online campaigns run through its social media cell. The cell allegedly intimidates individuals perceived as government critics and actively disseminates misogyny, Islamophobia, and animosity. According to Sadhavi Khosla, a cyber-volunteer associated with the BJP IT Cell, the organisation promotes divisive content and employs trolling tactics against users critical of the BJP. Journalists and Indian film actors have also been targeted by these campaigns.

    As technology continues to evolve, it is imperative to strike a balance between leveraging the internet for transparency and legitimacy and safeguarding against misuse that could erode trust in digital governance and public discourse. Monitoring and addressing the activities of IT cells would be a significant step towards ensuring responsible and ethical use of digital platforms in the political arena.
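As a quick sanity check, the quoted +137 million new users is consistent with the quoted ~3% year-on-year growth; a rough calculation (assuming growth is measured against the previous year's base):

```python
# Cross-check: does +137 million new users on ~4.72 billion accounts
# match the quoted ~3% year-on-year growth?
accounts_now = 4.72e9   # estimated social media accounts, January 2023
new_users = 137e6       # users added over the preceding year

prior_base = accounts_now - new_users
growth_rate = new_users / prior_base
print(f"Year-on-year growth: {growth_rate:.1%}")
```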
    Written by Jaspreet Mann

    Related articles: COVID-19 misconceptions / Fake science websites

  • Secondary bone cancer | Scientia News

    Last updated: 24/09/24, 13:05 Published: 13/12/23, 17:27

    Pathology and promising therapeutics

    Introduction: what is secondary bone cancer?

    Secondary bone cancer occurs when cancer cells spread to the bones from a tumour that started somewhere else in the body. The site where the tumour first develops is called the primary cancer. Cancer cells can break away from the primary cancer, travel through the bloodstream or lymphatic system, and establish secondary cancers, a process known as metastasis. Bones are among the most common sites to which cancer spreads. Most types of cancer have the potential to metastasise to the bones, with the most frequent occurrences seen in prostate, breast, lung, thyroid and kidney cancers, and in myeloma. Throughout the literature, secondary cancer in the bones is referred to as bone secondaries or bone metastases. The most common sites of secondary bone cancer are the spine, ribs, pelvis, humerus (upper bone of the arm), femur (upper bone of the leg) and skull.

    There are two main types of bone metastasis, osteolytic and osteoblastic. In osteolytic metastases, cancer cells break down the bone, leading to significant weakening; this type is more common than osteoblastic metastasis and often occurs when breast cancer spreads to the bone. In osteoblastic metastases, cancer cells invade the bone and stimulate excessive bone cell formation, making the bone very dense (sclerotic). Osteoblastic metastases frequently occur when prostate cancer spreads to the bone. Although new bone forms, it grows abnormally, which weakens the overall bone structure.

    Hormone therapy

    Like primary bone cancer, secondary bone cancer is treated with surgical excision, chemotherapy, and radiation therapy. Treatment aims to control the cancer's growth and symptoms.
    Treatment depends on several factors, including the type of primary cancer, previous treatment, the number of bones affected, whether the cancer has spread to other body parts, overall health, and symptoms. Breast and prostate cancers rely on hormones for their growth, so reducing hormone levels in the body can be effective in managing the proliferation of secondary cancer. Hormone therapy, also known as endocrine therapy, uses synthetic hormones to inhibit the effects of the body's own hormones. Typical side effects include hot flushes, mood fluctuations, changes in weight, and sweating.

    Bisphosphonates

    Bone is a dynamic tissue undergoing a continuous cycle of formation and resorption. Osteoclasts are the cells responsible for breaking down bone tissue. In secondary bone cancer, cancer cells often produce substances that stimulate osteoclast activity. This leads to elevated levels of calcium in the blood (hypercalcaemia), resulting in nausea and excessive thirst. Treating secondary bone cancer therefore involves strengthening bones, alleviating bone pain and managing hypercalcaemia.

    One option for bone strengthening is bisphosphonates, which can be administered orally or intravenously. They have been in clinical practice for over 50 years and are used to treat metabolic bone diseases, osteoporosis, osteolytic metastases, and hypercalcaemia. These compounds selectively target osteoclasts and inhibit their function. Bisphosphonates fall into two pharmacologic categories based on their mechanism of action. Nitrogen-containing bisphosphonates, the more potent class, suppress the activity of farnesyl pyrophosphate synthase, a key factor in the binding of osteoclasts to bone. This interference detaches osteoclasts from the bone surface, effectively impeding bone resorption. Examples include alendronate and zoledronate.
    Bisphosphonates without nitrogen in their chemical structure are metabolised intracellularly to form a non-functional analogue of adenosine triphosphate (ATP) known as ApppI. ApppI disrupts cellular energy metabolism, leading to osteoclast cell death (apoptosis) and, consequently, reduced bone resorption. Examples include etidronate and clodronate. Non-nitrogen-containing bisphosphonates can inhibit bone mineralisation and cause osteomalacia, a condition in which bones become soft and weak; for this reason, they are not widely used.

    Denosumab

    Denosumab is another option for bone strengthening. It is administered as an injection under the skin (subcutaneously). Denosumab is a human monoclonal antibody that inhibits RANKL to prevent osteoclast-mediated bone resorption. In contrast to bisphosphonates, which bind to bone minerals and are absorbed by mature osteoclasts, denosumab-mediated RANKL inhibition hinders osteoclast maturation, function, and survival. In some studies, denosumab demonstrated equal or superior efficacy compared with bisphosphonates in preventing the skeletal-related events (SREs) associated with bone metastasis. Denosumab's mechanism of action provides a targeted approach that may benefit specific populations, such as patients with renal impairment. Bisphosphonates are excreted by the kidneys, and a study by Robinson and colleagues showed that bisphosphonate users had a 14% higher risk of chronic kidney disease (CKD) stage progression (including dialysis and transplant) than non-users. Denosumab, by contrast, is cleared independently of renal function and is less likely to promote deterioration in kidney function.

    Take-home message

    Secondary bone cancer, resulting from the spread of cancer cells to the bones, poses challenges across many cancers.
    Two main types, osteolytic and osteoblastic metastases, affect bone structure differently. Hormone therapy, bisphosphonates, and denosumab have shown promising results and offer effective management of secondary bone cancer. Ultimately, the choice between treatments should be made in consultation with a healthcare professional who can evaluate the specific clinical situation and individual patient factors, tailoring therapy to the patient's needs and treatment goals.

    Written by Favour Felix-Ilemhenbhio

    Related article: Bone cancer

  • Medicinal Manuka | Scientia News

    Last updated: 08/02/25, 13:23 Published: 11/05/24, 10:57

    It's produced by European honeybees

    Manuka honey has received considerable attention recently for its impressive antimicrobial activity and its potential for future clinical use. It is produced by European honeybees (Apis mellifera) that visit the Manuka tree (Leptospermum scoparium) in New Zealand. It is most commonly sold as monofloral honey (produced by bees that have visited predominantly one plant species, in this case the Manuka bush), although it can also be sold as multifloral.

    The Manuka tree has a long history of medicinal use. The Māori (the indigenous Polynesian people of mainland New Zealand) valued it for its wide variety of uses, referring to the plant as 'taonga' ('treasure'). Leaves from the tree were infused to reduce fevers, and the tree's gum was used to moisturise burns and soothe coughs. In the 18th century, European settlers in contact with the Māori became aware of the tree and its healing properties, using the leaves as a medicinal tea to treat scurvy. In 1839, an English beekeeper, Mary Bumby, introduced honeybees to New Zealand; by 1860 the bee population had grown extensively, with colonies present throughout the forests. The Māori learnt to harvest the honey these bees produced and promoted the production of Manuka honey, using it for the same benefits as the Manuka tree itself. In the 1980s, the biochemist Peter Molan launched the first scientific research on the antimicrobial properties of Manuka honey, evaluating its ability to kill microbes. Research has since demonstrated that Manuka honey is an effective bactericide (killer of bacteria).
    Dr Jonathan Cox and his colleagues at Aston University showed that administering Manuka honey can be effective against Mycobacterium abscessus, infection with which can be fatal without treatment. Using an artificial lung model, Dr Cox found that adding Manuka honey reduced the required dosage of the highly potent antibiotic amikacin 8-fold, an extremely significant difference to patients' quality of life, since a common consequence of the 13-month amikacin treatment is permanent hearing loss. Alternative remedies for bacterial infections are needed to combat the growing concern of antibiotic resistance.

    Several components of Manuka honey are responsible for its antimicrobial activity, including its methylglyoxal (MGO) content. MGO can interfere with the lipid bilayer of the bacterial membrane, leading to leakage of cellular contents and cell death; it can also impair the function of enzymes involved in energy production and macromolecule synthesis within bacteria. Additionally, Manuka honey can produce hydrogen peroxide, which generates highly reactive oxygen species (ROS) within bacterial cells. These ROS, such as hydroxyl radicals, cause oxidative damage to biomolecules, including proteins, lipids, and DNA, leading to bacterial cell death. Together, these mechanisms enable Manuka honey to disrupt bacterial growth and proliferation.

    Manuka honey is currently used as a medical product for professional wound care in European hospitals. Its main advantage is that the mechanisms behind its antibacterial activity are diverse, making it effective against resistant strains of bacteria, including methicillin-resistant Staphylococcus aureus (MRSA). A systematic review co-authored by Cox states that certain commercially available varieties of Manuka honey are effective against organisms with a high degree of antibiotic resistance.
    This leads to the promising preliminary conclusion that Manuka honey could serve as an effective antimicrobial alternative to antibiotics.

    Written by Harvey Wilkes

    Related article: Natural substances as treatment to infection

    REFERENCES

    Nolan, V.C., Harrison, J. and Cox, J.A., 2022. In vitro synergy between manuka honey and amikacin against Mycobacterium abscessus complex shows potential for nebulisation therapy. Microbiology, 168(9), p.001237.

    Nolan, V.C., Harrison, J., Wright, J.E. and Cox, J.A., 2020. Clinical significance of manuka and medical-grade honey for antibiotic-resistant infections: a systematic review. Antibiotics, 9(11), p.766.

  • The genesis of life | Scientia News

    Last updated: 14/02/25, 13:54 Published: 23/11/23, 11:22

    Life's origins

    Did the egg or the chicken come first? This question is often pondered regarding life's origin and how biological systems came into play. How did chemistry give way to biology capable of supporting life? And how have we evolved into such complex organisms? The ingredients, conditions and thermodynamically favoured reactions hold the answer, but understanding the inner workings of life's beginnings poses a challenge for scientists: under an empirical approach, how can we address these questions when the events occurred 3.7 billion years ago?

    The early atmosphere of the Earth

    To approach these questions, it helps to understand the atmospheric contents of the primordial Earth. Lacking oxygen, the atmosphere was dominated by CO2, NH3 and H2, creating a reducing environment that drove chemical reactions. When the Earth cooled and atmospheric water condensed, pools of chemicals formed: the so-called 'primordial soup'. Reactants colliding in this 'soup' are thought to have synthesised nucleotides by forming nitrogenous bases and bonds such as glycosidic or hydrogen bonds. These nucleotide monomers were perhaps polymerised into long chains, that is, RNA, via this abiotic synthesis. With nucleic acids available, genetic information could be stored and passed on, allowing for our eventual evolution.

    Conditions for nucleic acid synthesis

    The environment supported the formation of monomers for this polymerisation. For example, hydrothermal vents could have provided reducing power via protons, allowing the protonation of structures and providing the free energy for bond formation.
    Biology, of course, relies on protons for the proton gradient in ATP synthesis at the mitochondrial membrane and, more generally, for acid-base catalysis in enzymatic reactions, so it is safe to say protons played a vital role in life's emergence. The eventual formation of structures by protonation and deprotonation underpins the enzymatic theory of life's origins: some self-catalytic ability for replication in a closed system, followed by the evolution of complex biological units. This is the 'RNA World' theory, discussed later.

    Another theory is wet-dry cycling at the edge of hydrothermal pools. Proposed by David Deamer, it suggests that nucleic acid monomers in acidic (pH 3) and hot (70-90 degrees Celsius) pools could undergo condensation reactions to form ester bonds. It highlights the need for low water activity and a 'kinetic trap' in which the condensation reaction rate exceeds the hydrolysis rate. The heat of the pool overcomes the high activation energy, allowing localised generation of polymers without the need for a membrane-like compartment.

    But even if nucleic acids could be synthesised this way, how could they be kept safe? This issue is addressed by the theory of 'protocells' formed from fatty acid vesicles. Jack Szostak suggests that a phase transition (a decrease in pH) allowed bilayer membranes to assemble from fatty acid monomers, analogous to what we see in modern cells. The fatty acids in these vesicles can 'flip-flop' to exchange nutrients or nucleotides in and out of the vesicles; it is suggested that clay-encapsulated nucleotide monomers were brought into the protocell by this flip-flop action. Vesicles could grow by competing with smaller surrounding vesicles.
    Larger vesicles are thought to be those harbouring long polyanionic molecules (that is, RNA), which create immense osmotic pressure pushing outward on the protocell and drive the absorption of smaller vesicles. This represents Darwinian 'survival of the fittest', in which cells with more RNA are favoured.

    The RNA World hypothesis

    DNA is often seen as the 'saint' of all things biology, given its ability to store genetic information and pass it to mRNA, which then uses that information to synthesise polypeptides: the central dogma, of course. However, the RNA World hypothesis suggests that RNA arose first, owing to its ability both to form catalytic 3D structures and to store genetic information, which could later have allowed the synthesis of DNA. This makes sense when you consider that the primer for DNA replication is made of RNA; if RNA did not come first, how could DNA replication be possible? Many other lines of evidence suggest that RNA evolution preceded that of DNA.

    So, if RNA arose as a simple polymer, its ability to form 3D structures could have produced ribozymes (RNA with enzymatic function) within these protocells. Ribozymes such as RNA ligase and RNA polymerase could have allowed self-replication, and mutation in the primary structure could then have allowed evolution to occur. If we have a catalyst in a closed system with nutrient exchange, why would life's formation not be possible?

    But how can we show that RNA can arise in this way? The answer is SELEX: systematic evolution of ligands by exponential enrichment. This system was developed by Jack Szostak, who wanted to show that the evolution of complex RNAs, ribozymes, in a test tube was possible. A pool of random, fragmented RNA molecules is added to a chamber and run through a column packed with beads. The beads harbour some sequence or attraction to the RNA molecules the column is selecting for; RNAs that attach can be eluted, and those that do not are discarded.
    The bound RNA can be rerun through SELEX under increasingly stringent column conditions, so that only the most complementary RNAs bind. This approach allowed the development of RNA ligase and RNA polymerase ribozymes; thus, self-replication of RNA is possible. SELEX helps us understand how the evolution of RNA on the primordial Earth could have been possible.

    Abiotic synthesis is also supported by meteorites, such as carbonaceous chondrites that burnt up in the Earth's atmosphere, encapsulating organic material at their centre. Chondrites found in Antarctica have been found to contain more than 80 amino acids (some of which are not compatible with life), as well as nucleobases. If such monomers can be synthesised in the hostile environment of outer space and our atmosphere, the theory of abiotic synthesis is supported.

    Furthermore, it is relevant to address the abiotic synthesis of amino acids, since the evolution of catalytic RNA could have some complementarity for polypeptide synthesis. Miller and Urey (1953) set up a simple experiment containing gases representing the early primordial Earth (methane, hydrogen, ammonia, water). They used electrodes to provide an electrical discharge (simulating lightning or volcanic eruption) to the gases and then condensed them. The water in the other chamber turned pink-brown, and following chromatography they identified amino acids in the mixture. Such simple reactions could have been the precursors of early life.

    Conclusion

    The abiotic synthesis of nucleotides and amino acids, and their later polymerisation, supports the theories of chemistry moving toward biological life. Protocells containing such polymers could have been selected on their 'fitness', and mutation could have allowed the evolution of catalytic RNA. The experiments mentioned represent a small fraction of those carried out to probe life's origins.
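The select-elute-rerun loop described above is, at heart, an exponential enrichment process. A toy numerical sketch (with invented binding probabilities, purely for illustration) shows how a few rounds drive the pool toward the tightest binders:

```python
# Toy model of SELEX enrichment. The binding probabilities below are
# invented for illustration; real SELEX starts from ~10^14 random sequences.
pool = {"strong_binder": 0.9, "weak_binder": 0.3, "non_binder": 0.01}

# Start with an even mixture of the three species
fractions = {name: 1 / len(pool) for name in pool}

for _ in range(5):  # five rounds of selection followed by amplification
    # Selection: each species is retained in proportion to its binding probability
    retained = {name: frac * pool[name] for name, frac in fractions.items()}
    # Amplification: renormalise so the pool is back to full size
    total = sum(retained.values())
    fractions = {name: r / total for name, r in retained.items()}

print({name: round(frac, 4) for name, frac in fractions.items()})
```

After five rounds, the strong binder dominates the pool almost completely, which is why repeated, increasingly stringent rounds can pull rare functional sequences out of a random library.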
    The evidence provides firm grounds for the emergence of life and its development into the complexity we know today.

    Written by Holly Kitley

  • Allergies | Scientia News

    Last updated: 06/01/25, 13:55 Published: 13/05/24, 14:27

    Deconstructing allergies: mechanisms, treatments, and prevention

    Modern populations have witnessed a dramatic surge in the number of people grappling with allergies, a condition that can lead to health issues such as eczema, asthma, hives, and, in severe cases, anaphylaxis. In allergic individuals, normally harmless substances (allergens) can trigger life-threatening reactions due to an abnormal immune response. Common allergens include antibiotics such as penicillin, as well as animals, insects, dust, and various foods. The need for strict dietary restrictions and the constant fear of accidental encounters with allergens often weigh on patients and their families; negligent business practices and mislabelled food have even led to multiple reported deaths, underscoring the gravity of allergies and their alarming rise in prevalence.

    The primary reason for the global increase in allergies is believed to be reduced exposure to microorganisms during early childhood. The human microbiome, the collection of microorganisms that live in and on our bodies, is a key player in our immune system, and the rise in sanitation practices is thought to reduce its diversity, potentially affecting immune function. This lack of exposure to infections may cause the immune system to overreact to normally harmless substances such as allergens. There is also speculation about the impact of vitamin D deficiency, which is becoming more common with increased time spent indoors; vitamin D supports a healthy immune response, and its deficiency could worsen allergic reactions.

    Immune response

    Allergic responses occur when specific proteins within an allergen are encountered, triggering an immune response of the kind typically used to fight infections.
The allergen's proteins bind to complementary receptors on macrophages, causing these cells to engulf the foreign substance. Peptide fragments from the allergen are then presented on the cell surface via major histocompatibility complexes (MHCs), activating receptors on T helper cells. These activated T cells stimulate B cells to produce immunoglobulin E (IgE) antibodies against the allergen. This sensitises the immune system to the allergen, making the individual hypersensitive. Upon re-exposure, IgE antibodies bind to allergen peptides, activating receptors on mast cells and triggering the release of histamines. Histamines cause vasodilation and increase vascular permeability, leading to inflammation and erythema. In milder cases, patients may experience itching, hives and a runny nose; in severe reactions, intense swelling can constrict the airway, potentially leading to respiratory compromise or even arrest. At this critical point, conventional antihistamine therapy may not be enough, necessitating the immediate use of an EpiPen to alleviate symptoms and prevent further deterioration.

EpiPens deliver a dose of epinephrine, also known as adrenaline, by intramuscular injection when an individual experiences anaphylactic shock, which is typically characterised by breathing difficulties. The epinephrine relaxes the muscles of the airway, facilitating easier breathing, and counteracts the fall in blood pressure associated with anaphylaxis by constricting blood vessels, which helps prevent symptoms such as weakness or fainting. EpiPens are the primary treatment for severe allergic reactions leading to anaphylaxis and have been proven effective. However, the reliance on EpiPens underscores the need for additional preventative measures for individuals with allergies before a reaction occurs.
Preventative treatment

Young individuals may have a genetic predisposition to developing allergies, a condition referred to as atopy. Many atopic individuals develop multiple hypersensitivities during childhood, but some may outgrow these allergies by adulthood. For high-risk atopic children, however, preventative measures may offer a promising way to reduce the risk of developing severe allergies. Clinical trials in atopic infants have explored immunotherapy treatments involving continuous exposure to small doses of peanut allergens to prevent the onset of a full-blown allergy. Initially, skin prick tests for peanut allergens were performed, and only children exhibiting negative or mild reactions were selected for the trial; those with severe reactions were excluded because of the high risk of anaphylactic shock with continued exposure. The remaining participants were randomly assigned either to consume peanuts or to follow a peanut-free diet. Following these infants as they grew revealed that continuous exposure to peanuts reduced the prevalence of peanut allergy by the age of 5: only 3% of atopic children exposed to peanuts developed an allergy, compared with 17% of those in the peanut-free group.

The rise in severe allergies poses a growing concern for global health. Once an atopic individual develops an allergy, mitigating their hypersensitivity can be challenging. Current approaches often involve waiting for children to outgrow their allergies, overlooking the ongoing challenges faced by adults who remain highly sensitive to allergens. Implementing preventative measures, such as early exposure through immunotherapy, could enhance the quality of life of future generations and prevent sudden deaths in at-risk individuals.

In conclusion, the dramatic surge in the prevalence of allergies in modern populations requires more attention from researchers and healthcare providers.
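The trial's headline figures (3% of exposed children versus 17% of the peanut-free group) can be converted into standard epidemiological summary measures with a short calculation. The two percentages come from the article; the code itself is purely illustrative:

```python
# Risk figures from the trial described above: 3% of atopic children who
# ate peanut developed an allergy, versus 17% in the avoidance group.
exposed_risk = 0.03
avoidance_risk = 0.17

# Standard summary measures for a preventative intervention
absolute_risk_reduction = avoidance_risk - exposed_risk
relative_risk_reduction = absolute_risk_reduction / avoidance_risk
number_needed_to_treat = 1 / absolute_risk_reduction

print(f"Absolute risk reduction: {absolute_risk_reduction:.0%}")
print(f"Relative risk reduction: {relative_risk_reduction:.0%}")
print(f"Children treated per allergy prevented: {number_needed_to_treat:.1f}")
```

On these figures, early exposure cut the absolute risk by 14 percentage points, a relative reduction of roughly 82%, meaning about seven children would need the intervention to prevent one allergy.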
Living with allergies brings many complexities into a person's life even before they have a serious reaction. Current treatments focus on post-reaction emergency care, but there remains a pressing need for preventative strategies. With cases of allergies predicted to rise further, research into this global health issue will become increasingly important. Early trials of immunotherapy treatments have already shown promising results, and with further research and implementation these treatments could improve the quality of life of future generations. Written by Charlotte Jones Related article: Mechanisms of pathogen evasion

  • Iron deficiency anaemia | Scientia News

    Iron deficiency anaemia Last updated: 07/02/25, 16:23 Published: 27/06/23, 17:10 A type of anaemia

This article is no. 2 of the anaemia series. Next article: anaemia of chronic disease. Previous article: anaemia.

Aetiology

Iron deficiency anaemia (IDA) is most frequent in children, owing to rapid growth (adolescence) and poor diets (infancy), and in pre-, peri- and post-menopausal women, owing to pregnancy and underlying conditions. Anaemia typically presents, in around 50% of cases, with headache, lethargy and pallor, depending on severity. Less common features include organomegaly and pica, the eating of items with little or no nutritional value, which occurs in patients with zinc and iron deficiency.

Pathophysiology

Iron is primarily sourced through the diet, as haem iron (Fe2+) and non-haem iron (Fe3+). Fe2+ comes from meat, fish and other animal-based products and can be absorbed directly into the enterocyte via haem carrier protein 1 (HCP1). Fe3+ is less easily absorbed and is mostly found in plant-based products; it must first be reduced by the enzyme duodenal cytochrome B (DcytB) and then transported across the duodenal enterocyte by divalent metal transporter 1 (DMT1).

Diagnosis

As with any anaemia, the first test to run is a full blood count. In suspected anaemia, haemoglobin (Hb) would be lower than 130 g/L in males and 120 g/L in females. The mean cell volume (MCV) is a starting point for pinpointing the type of anaemia; in microcytic anaemias you would expect an MCV < 80 fL. Iron studies are best for diagnosing anaemias, and in IDA you would expect most of the results to be low. A patient with IDA has little to no available iron, so the body halts its mechanisms for storing iron. Because ferritin is directly related to iron storage, a low ferritin can alone be diagnostic of IDA.
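The screening logic above can be sketched as a simple triage function. The Hb and MCV thresholds follow the article (here expressed in g/L and fL); the ferritin cut-off is an illustrative assumption, not a clinical standard, and none of this is medical advice:

```python
def screen_ida(hb_g_per_l, mcv_fl, ferritin_ug_per_l, male=True):
    """Toy triage of a full blood count for iron deficiency anaemia.

    Hb < 130 g/L (males) or < 120 g/L (females) suggests anaemia;
    MCV < 80 fL suggests microcytosis. The ferritin cut-off of
    30 ug/L is an assumed, illustrative value.
    """
    anaemic = hb_g_per_l < (130 if male else 120)
    microcytic = mcv_fl < 80
    low_ferritin = ferritin_ug_per_l < 30  # assumed cut-off
    if anaemic and microcytic and low_ferritin:
        return "consistent with IDA - confirm with full iron studies"
    if anaemic:
        return "anaemia - further classification needed"
    return "no anaemia on these values"

print(screen_ida(105, 72, 8, male=False))
```

A real work-up would, as the article notes, combine iron studies and red cell morphology with the clinical presentation rather than rely on thresholds alone.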
Total iron-binding capacity (TIBC) would be expected to be raised: transferrin transports iron throughout the body, and the more of it there is, the more iron it is capable of binding. Elliptocytes are elongated red blood cells, often described as pencil-like, and are regularly seen in IDA and other anaemias. Typically, one would also see hypochromic red cells, which contain less Hb than normal cells; Hb is what gives red cells their pigment. It is not uncommon to see other changes, such as target cells, named for their bullseye appearance; target cells are frequently seen in cases with blood loss.

Summary

IDA is the most frequent anaemia, affecting patients of all ages, and usually presents with lethargy and headaches. Dietary iron from animal derivatives is the most efficiently absorbed source of iron. IDA is diagnosed through iron studies and red cell morphology alongside the clinical presentation, to rule out other causes. Written by Lauren Kelly

  • The new age of forensic neurology | Scientia News

    The new age of forensic neurology Last updated: 17/02/25, 14:43 Published: 23/08/23, 16:16 Explaining and predicting the behaviour of serial killers

Background

There is no denying that true crime has taken the media by storm in recent years. In 2021, the search for Gabby Petito inflamed social media, with the r/gabbypetito subreddit reaching 120,000 members at its peak. TikTok 'psychics' amassed millions of views by attempting to predict how the case would progress, with predictably terrible results. A small solace remains, however: increased media coverage of murder cases increases the rate at which research into murderers is published. The growth in both research and media attention continued through 2022, invigorated by the release of Monster: The Dahmer Story, which was watched for over 1 billion hours by Netflix's user base. It could be argued that the popularity of this show and others depicting serial killers also increased the publication of research on the neurology of serial killers.

The neurological basis of the serial killer refractory period

Dilly (2021) presents correlational research into the neurological factors at play in the serial killer refractory period. Following analysis of the refractory periods of ten American serial killers, a meta-analysis of prior research was performed to establish which existing theory most thoroughly explained the patterns observed. The ten American serial killers examined were: the Golden State Killer, Joseph James DeAngelo; Jeffrey Dahmer; Ted Bundy; John Wayne Gacy; the Night Stalker, Richard Ramirez; the BTK Killer, Dennis Rader; the I-5 Killer, Randall Woodfield; Son of Sam, David Berkowitz; the Green River Killer, Gary Ridgway; and the Co-Ed Killer, Edmund Kemper III.

Theory no. 1

While this research is necessarily speculative, given the lack of real-time neurological imaging of the killers both during refractory periods and during their crimes, it was shown to lend credence to a prior theory proposed by Simkin and Roychowdhury (2014). Their paper, Stochastic Modelling of a Serial Killer, theorised from their own collated data that the refractory period of serial killers functions analogously to the refractory period of neurons. The theory holds that murder precipitates the release of a powerful barrage of neurotransmitters, culminating in widespread neurological activation; in line with neuronal refractory periods, this extreme activation is believed to be followed by a period during which another global activation event cannot occur.

Theory no. 2

Hamdi et al. (2022) delineates the extent to which a case-study subject's murderous impulses derived from Fregoli syndrome rather than from his comorbid schizophrenia. This research elucidated how schizophrenic symptoms can synergise with symptoms of delusional identification syndromes (DIS) to create distinct behaviours and thought patterns that drive sufferers toward homicidal impulses. DIS comprise a range of disorders wherein sufferers have difficulty identifying objects, people, places or events; Fregoli syndrome is a DIS characterised by the delusional belief that people around the sufferer are familiar figures in disguise. The subject's Fregoli syndrome eroded his trust in those around him, which quickly led to an increase in aggressive behaviours. The killer attacked each member of his family multiple times before committing his first homicide, excluding his father, who reportedly 'scared him very much'. Unsurprisingly, then, his victim cohort of choice was older men.
The neurobiological explanation of Fregoli syndrome holds that facial identification is impaired: cerebrocortical hyperactivity catalyses the delusional identification of unfamiliar faces as familiar ones.

Conclusion

Forensic neurology has been a key element in expanding our understanding of serial killers, with the research of Raine et al. (1997) popularising the use of neurology to answer the many questions posed by their existence. Since Raine, Buchsbaum and LaCasse first used brain-scanning techniques to study serial killers in that 1997 study, such techniques have matured considerably, becoming an increasingly valid option both for understanding and for predicting serial killer behaviour. In all likelihood, future innovations in forensic neurology will continue to bring about positive change, reducing homicidal crime through methods and systems that predict and stop these crimes before they happen. Summarised from a full investigation. Written by Aimee Wilson Related article: serial killers in healthcare

  • Why blue whales don't get cancer | Scientia News

    Why blue whales don't get cancer Last updated: 21/02/25, 12:28 Published: 16/10/23, 21:22 Discussing Peto's Paradox

Introduction: What is Peto's Paradox?

Cancer is a disease that occurs when cells divide uncontrollably, owing to genetic and epigenetic factors. Theoretically, the more cells an organism possesses, the higher its probability of developing cancer should be. Imagine one tiny organism, a mouse, and one huge organism, an elephant. Since an elephant has more cells than a mouse, it should have a higher chance of developing cancer, right? This is where things get mysterious. In reality, animals with 1,000 times more cells than humans are no more likely to develop cancer. Notably, blue whales, the largest mammals, hardly ever develop cancer. Why? To understand this phenomenon, we must dive deep into Peto's paradox: the lack of correlation between body size and cancer risk. In other words, the number of cells an organism possesses does not dictate how likely it is to develop cancer. Furthermore, research has shown that body mass and life expectancy are unlikely to impact the risk of death from cancer (see Figure 1).

Peto's Paradox: Protective Mechanisms

Mutations, alterations in the deoxyribonucleic acid (DNA) sequence, play a role in cancer and ageing. Researchers have analysed mutations in the intestines of several mammalian species, ranging from mice, monkeys, cats, dogs, humans and giraffes to tigers and lions. Their results reveal that these mutations mostly come from processes that occur inside the body, such as chemical damage to DNA, and that these processes were similar in all the animals studied, with slight differences. Interestingly, animals with longer lifespans were found to accumulate fewer mutations in their cells per year (see Figure 2).
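The naive scaling that makes this a paradox can be illustrated with a toy calculation. The per-cell probability below is a made-up illustrative number and the cell counts are rough orders of magnitude; nothing here is measured data:

```python
import math

# Naive scaling behind Peto's paradox: if each cell independently has a
# tiny lifetime probability p of turning cancerous, an organism with N
# cells has risk 1 - (1 - p)^N of developing cancer at least once.
P_CELL = 1e-15  # made-up illustrative per-cell probability

def naive_lifetime_risk(n_cells, p=P_CELL):
    # log1p/expm1 keep the arithmetic accurate when p is tiny
    return -math.expm1(n_cells * math.log1p(-p))

# Rough order-of-magnitude cell counts, for illustration only
for name, n in [("mouse", 3e9), ("human", 3e13), ("blue whale", 1e17)]:
    print(f"{name:>10}: naive lifetime risk ~ {naive_lifetime_risk(n):.2%}")
```

Under this naive model a blue whale should be all but guaranteed to develop cancer, which is exactly what the comparative mutation data above contradict.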
These findings suggest that the rate of mutation is associated with how long an animal lives and may have something to do with why animals age. Furthermore, even though these animals have very different lifespans and sizes, the number of mutations in their cells at the end of their lives was not significantly different; this end-of-life total is known as the cancer burden. Since larger or longer-lived animals have more cells (and hence more DNA) that could undergo mutation, and a longer time of exposure to mutagenic processes, how is it possible that they do not have a higher cancer burden? Evolution has produced mechanisms in such organisms that suppress the development of cancerous cells. Animals possessing 1,000 times as many cells as humans display no higher susceptibility to cancer, indicating that their natural mechanisms suppress cancer roughly 1,000 times more efficiently than those of human cells. Does this mean larger animals have more efficient protective mechanisms against cancer?

A tumour is an abnormal lump formed by cells that grow and multiply uncontrollably. Tumour suppressor genes act like bodyguards in our cells, helping to prevent the uncontrolled division that could form tumours. Previous analyses have shown that the addition of one or two tumour suppressor gene mutations would be sufficient to reduce the cancer risk of a whale to that of a human. However, the evidence does not suggest that the number of tumour suppressor genes increases with body mass and longevity. Although a study by Caulin et al. identified biomarkers in large animals that may explain Peto's paradox, more experiments are needed to confirm the biological mechanisms involved. Recently, an investigation of the existing evidence on such mechanisms produced a list of factors that may contribute to Peto's paradox.
These include replicative immortality, cell senescence, genome instability and mutations, proliferative signalling, evasion of growth suppression and cell resistance to death. As far as we know, species with larger sizes or longer lifespans have followed different strategies to prevent cancer, but more studies are needed to truly explain Peto's paradox.

Peto's Paradox: Other Theories

Several theories attempt to explain Peto's paradox. One proposes that large organisms have a lower basal metabolic rate, producing fewer reactive oxygen species; cells in larger organisms therefore incur less oxidative damage, yielding a lower mutation rate and a lower risk of developing cancer. Another popular theory concerns the formation of hypertumours. As cells divide uncontrollably in a tumour, "cheaters" can emerge: cells, known as hypertumours, that grow on and feed off their original tumour, ultimately damaging or destroying it. In large organisms, tumours need more time to reach a lethal size, so hypertumours have more time to evolve and destroy the original tumours. Hence, in large organisms, cancer may be more common but less lethal.

Clinical Implications

Curing cancer has posed significant challenges; consequently, the focus has been shifting towards cancer prevention. Extensive research is under way into how cancer cells behave and respond to treatment, through a multifaceted approach investigating the tumour microenvironment and diagnostic or prognostic biomarkers. Going forward, a deeper understanding of these fields will enable the development of prognostic models as well as targeted treatments. One exciting example concerns the tumour suppressor gene TP53.
In elephants, this tumour suppressor gene plays a role in making cells more responsive to DNA damage and in triggering apoptosis via the TP53 signalling pathway. These findings imply that having more copies of TP53 may have directly contributed to the evolution of extremely large body size in elephants, helping to resolve Peto's paradox: elephants carry 20 copies of the TP53 gene, whereas humans carry only one (see Figure 3). Through more robust studies and translational medicine, it will be fascinating to see how such discoveries could be applied to human medicine (see Figure 4).

Conclusion

The complete mechanism by which evolution has enabled organisms larger and longer-lived than humans to resist cancer is still a mystery. A multitude of hypotheses remain to be investigated with large-scale experiments. By unravelling the mysteries of Peto's paradox, such studies could provide invaluable insights into cancer resistance and potentially transform cancer prevention strategies for humans. Written by Joecelyn Kirani Tan Related articles: Biochemistry of cancer / Orcinus orca (killer whale) / Canine friends and cancer

  • How to prevent tooth decay | Scientia News

    How to prevent tooth decay Last updated: 07/02/25, 16:17 Published: 03/02/24, 11:24 The science behind tooth decay

Dental caries, commonly referred to as tooth decay, is a gradual, progressive disease of the dental hard tissues, resulting in the breakdown of tooth structure and the potential for pain and infection within the oral cavity. Understanding the mechanisms behind tooth decay is crucial for adopting effective preventative measures to stop or reverse the carious process and prevent cavity formation.

Several factors contribute to dental caries: bacteria, time, fermentable carbohydrates and a susceptible tooth surface. In the absence of regular toothbrushing, a plaque biofilm forms on the tooth surface, a sticky, colourless film that serves as a breeding ground for bacteria such as Streptococcus mutans and Lactobacillus species. When these bacteria encounter fermentable carbohydrates and sugars from our diet, they metabolise them, producing acids as a by-product and creating an acidic environment in the mouth. When enamel, the outermost layer of the tooth, is exposed to an acidic pH below 5.5, its mineral structure weakens and its hydroxyapatite crystals begin to dissolve. Frequent acid attacks from dietary sugars result in a net mineral loss in teeth, leading to cavity formation, dental pain and potential infection.

The initial stage of decay involves the demineralisation of enamel. At this point, the damage is reversible with good oral hygiene practices and remineralising agents: saliva can remineralise initial carious lesions, and fluoride applied through fluoridated toothpaste can also aid in reversing the initial stages of dental caries.
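The critical-pH rule described above can be sketched as a tiny classifier. The 5.5 threshold comes from the article; the function is an illustrative toy, not dental guidance:

```python
CRITICAL_PH = 5.5  # enamel begins to demineralise below this pH (from the article)

def enamel_state(ph):
    """Toy classifier for what happens to enamel mineral at a given oral pH."""
    if ph < CRITICAL_PH:
        return "net demineralisation (acid attack)"
    return "remineralisation possible (saliva and fluoride can redeposit mineral)"

print(enamel_state(4.5))  # acidic mouth after sugary food or drink
print(enamel_state(7.0))  # neutral resting saliva
```

The real process is a running balance of acid attacks and remineralisation over time, which is why the frequency of sugar exposure matters as much as the amount.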
However, if left untreated, the decay can progress further into the tooth, reaching the softer dentine beneath the enamel. Dentine decays more rapidly than enamel and can cause increased sensitivity and discomfort. As the decay advances, it may reach the dental pulp, the nerve of the tooth. Infection of the pulp can trigger severe pain and may necessitate root canal treatment in an attempt to save the tooth. Persistent infection can lead to abscess formation, a pocket of pus causing swelling, pain and, should the infection spread through the body, systemic health issues.

Tooth decay can be prevented through regular brushing with a fluoride toothpaste. Consistently disturbing and brushing away the plaque biofilm stops the caries process from continuing and hence prevents cavity formation, while the fluoride strengthens the enamel and remineralises the mineral loss of early lesions; this can halt and even reverse the carious process. A healthy diet with limited consumption of sugary foods and drinks also significantly reduces the risk of tooth decay: with less sugar in the oral environment, bacteria produce less of the acid that drives the decay process. Regular dental check-up appointments enable early detection and intervention for initial lesions, preventing decay from progressing to an irreversible state.

Tooth decay is a preventable yet prevalent oral health issue. Instigated by oral bacteria metabolising sugars in the mouth, our natural tooth structure can be destroyed if the plaque biofilm is not controlled. By understanding the causes and progression of tooth decay, individuals can adopt proactive measures to maintain good oral hygiene, preserve enamel, and safeguard their smiles for a lifetime.
Regular dental check-ups and a commitment to a healthy lifestyle play pivotal roles in preventing the onset and progression of tooth decay. Written by Isha Parmar Reference: Banerjee, A. and Watson, T.F. (2015) Pickard's Guide to Minimally Invasive Operative Dentistry. King's College London.
